THE CHANGE-MAKING PROBLEM FOR SIX COIN VALUES AND BEYOND
Cornelia A. Van, Qiyu Zhang
Abstract. The change-making problem asks: given a positive integer v and a collection C of integer coin values c_1 = 1 < c_2 < c_3 < · · · < c_n, what is the minimum number of coins needed to represent v with coin values from C? For some coin systems C, the greedy algorithm finds a representation with a minimum number of coins for all v. We call such coin systems orderly. However, there are coin systems where the greedy algorithm fails to always produce a minimal representation. Over the past fifty years, progress has been made on the change-making problem, including finding a characterization of all orderly coin systems with 3, 4, and 5 coin values. We characterize orderly coin systems with 6 coin values, and we make generalizations to orderly coin systems with n coin values.
arXiv:2303.00078 · https://export.arxiv.org/pdf/2303.00078v1.pdf
28 Feb 2023

Introduction

As customers, we trust store cashiers to give us our change using the minimum possible number of coins. Most of the time, cashiers get this done in a heartbeat. The secret to their success is that they use the so-called greedy algorithm to make change. Given a positive integer v, this algorithm works as follows. Find the largest coin value less than or equal to v. Take as many coins of that value as possible without the sum of the coin values exceeding v. Then move to the next largest coin denomination and repeat the process until the entire collection's value sums to v. For both the United States' coin system (1, 5, 10, 25) and the EU coin system (1, 2, 5, 10, 20, 50, 100, 200), the greedy algorithm always produces the minimum number of coins. Since the greedy algorithm serves us so well with these familiar coin systems, it often comes as a surprise that the greedy algorithm fails to be optimal in other settings.
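The greedy procedure just described can be sketched in a few lines of Python (an illustrative sketch, not part of the paper; the name grd anticipates the notation introduced below):

```python
def grd(C, v):
    """Number of coins the greedy algorithm uses to represent v
    with the coin system C, taking the largest coin first."""
    count = 0
    for c in sorted(C, reverse=True):
        count += v // c   # take as many coins of value c as possible
        v %= c            # remainder still to be represented
    return count

# US coins: 41 cents -> 25 + 10 + 5 + 1, i.e. 4 coins.
assert grd((1, 5, 10, 25), 41) == 4
```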
Suppose, for example, that the United States replaced the dime and quarter with 15 and 20 cent coins, making the coin system (1, 5, 15, 20). If we needed 30 cents change, the greedy algorithm would produce a solution with three coins: 20 + 5 + 5. But we can get the job done with just two coins: 15 + 15. What algorithm should we use in general to get a representation with a minimum number of coins? This is the so-called change-making problem. Stated formally, the change-making problem is as follows. Given a positive integer v and a coin system C = (c_1, c_2, c_3, ..., c_n) with c_1 = 1 < c_2 < c_3 < · · · < c_n, what is the minimum number of coins needed to represent v with coins from C? Denote this minimum number by opt_C(v). We assume that there is an unlimited supply of each type of coin available. The fact that the smallest coin value is always c_1 = 1 ensures that each positive integer v has at least one representation. A collection of coins with total value v is called an optimal solution if the collection has a minimum number of coins. An optimal solution need not be unique. For instance, suppose the coin system is C = (1, 3, 4). Then there are two optimal solutions for v = 9 as follows: 3 + 3 + 3 and 4 + 4 + 1. Let grd_C(v) denote the number of coins used when creating v using the greedy algorithm with the coin system C. For some coin systems, the change-making problem is solved by the greedy algorithm. We give a term to these coin systems. Definition 1.1. A coin system C = (1, c_2, c_3, ..., c_n) is orderly if for all v > 0, grd_C(v) = opt_C(v). The United States' coin system and the EU coin system are both orderly (we will justify this later), while the coin system (1, 5, 15, 20) is not orderly since the greedy algorithm failed to be optimal for v = 30. For a non-orderly coin system C, any value v such that grd_C(v) ≠ opt_C(v) is called a counterexample. The change-making problem is a special case of the knapsack problem.
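The quantity opt_C(v) is computable by standard dynamic programming, which makes it easy to confirm counterexamples such as v = 30 for (1, 5, 15, 20). A sketch for illustration (not part of the paper), with grd as before:

```python
def grd(C, v):
    """Greedy coin count for v over the coin system C."""
    count = 0
    for c in sorted(C, reverse=True):
        count += v // c
        v %= c
    return count

def opt(C, v):
    """Minimum coin count for v over C, by dynamic programming.
    best[x] = fewest coins summing to x; c_1 = 1 guarantees a solution."""
    best = [0] + [None] * v
    for x in range(1, v + 1):
        best[x] = 1 + min(best[x - c] for c in C if c <= x)
    return best[v]

C = (1, 5, 15, 20)
assert grd(C, 30) == 3   # 20 + 5 + 5
assert opt(C, 30) == 2   # 15 + 15, so v = 30 is a counterexample
```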
In this more general setting, suppose you have n types of objects available to pack into a knapsack, and the monetary value of each object is denoted by c_1, c_2, ..., c_n, respectively. These objects have weights denoted by w_1, w_2, ..., w_n, respectively. The knapsack problem asks how we can pack the knapsack in such a way that the monetary value of the contents equals some value v and the total weight of the packed knapsack is minimized. The change-making problem, then, is the special case in which the items have weight w_i = 1 for all i. For a discussion of the knapsack problem, see survey articles [2,3] and books [12,15]. In Section 2, we review the progress made on the change-making problem over the last several decades and the primary tools used to study coin systems in general. Notably, characterizations of orderly coin systems with 3, 4, and 5 coin values have been established (Theorems 2.8, 2.9, and 2.10). One goal of our work here is to give a characterization of orderly coin systems with 6 coin values. A useful way to study coin systems is through considering their prefix coin systems. Given any coin system C = (1, c_2, c_3, ..., c_n), a prefix coin system of C is defined as C′ = (1, c_2, c_3, ..., c_i) where 1 ≤ i ≤ n. The system C′ for 1 ≤ i < n may or may not be orderly. We can associate to any n-value coin system a string of n symbols where the ith symbol is + if the length-i prefix coin system is orderly and is − otherwise. So, for instance, the coin system (1, 2, 5, 6, 10) has pattern (+ + + − +), because all the prefix coin systems are orderly except for the fourth, (1, 2, 5, 6), which has a counterexample of 10. Every orderly coin system with 4 or more coin values has a pattern that begins with (+ + +) (Theorem 2.5). Therefore, orderly coin systems with 6 coin values have one of four possible +/− patterns. Our result is as follows. Theorem 1.2.
A coin system C = (1, c_2, c_3, c_4, c_5, c_6) is orderly if and only if C is one of the following:
(1) Coin systems with pattern (+ + + + − +):
(a) (1, 2, 3, a, a + 1, 2a), where a ≥ 5;
(b) (1, a, 2a, b, b + a, 2b), where a ≥ 2, b ≥ 3a − 1, b ≠ 3a, and grd_C(2ma) ≤ m where m = ⌈b/(2a)⌉;
(c) (1, a, 2a − 1, b, b + a − 1, 2b − 1), where a ≥ 2, b ≥ 3a − 1, and grd_C(m′(2a − 1)) ≤ m′ where m′ = ⌈b/(2a − 1)⌉.
(2) Coin systems with pattern (+ + + − − +):
(a) (1, a, 2a − 1, m(2a − 1) − (a − 1), m(2a − 1), (2m − 1)(2a − 1)), where 1 < m < a;
(b) (1, a, 2a, m(2a − 1) − (a − 1), m(2a − 1) + 1, (2m − 1)(2a − 1) + 1), where 1 < m ≤ a.
(3) Coin systems with pattern (+ + + + + +) or (+ + + − + +): coin systems such that (1, c_2, c_3, c_4, c_5) is orderly and grd_C(mc_5) ≤ m where m = ⌈c_6/c_5⌉.
While orderly coin systems of size 3, 4, and 5 provide a foundation for understanding the structure of orderly coin systems, they yield limited intuition about larger coin systems, because they are so constrained. Thus the result here for coin systems with 6 coin values is critical in understanding the general structure of orderly coin systems. In Section 4, we generalize to coin systems with n values with pattern (+ + + − · · · − +). All coin systems we observed with this pattern have a particular structure; we call them fixed gap coin systems (Definition 4.1). We give three infinite families of fixed gap coin systems D, E, and F (see Theorems 4.3, 4.5, and 4.7). Coin systems in D have 3r + 2 coin types where r ≥ 1, while E and F both have 3r coin types where r ≥ 2. Every coin system with pattern (+ + + − · · · − +) that we observed belonged to one of these three infinite families, so we make the following conjecture. Conjecture 1.3. A coin system has pattern (+ + + − · · · − +) only if it is a member of one of the families D, E, or F (defined in Theorems 4.3, 4.5, and 4.7). In particular, there are no coin systems with pattern (+ + + − · · · − +) that have 3r + 1 coin values where r ≥ 2.
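The +/− pattern of a coin system can be computed mechanically: Theorem 2.1 (quoted in the next section) bounds the location of the smallest counterexample, so orderliness of each prefix is a finite check. A sketch for illustration (with grd and opt as in the earlier snippets; not part of the paper's proofs):

```python
def grd(C, v):
    count = 0
    for c in sorted(C, reverse=True):
        count += v // c
        v %= c
    return count

def opt(C, v):
    best = [0] + [None] * v
    for x in range(1, v + 1):
        best[x] = 1 + min(best[x - c] for c in C if c <= x)
    return best[v]

def is_orderly(C):
    """Orderliness check: the smallest counterexample, if any, satisfies
    c_3 < v < c_{n-1} + c_n (Theorem 2.1), so only that range is tested."""
    if len(C) < 3:
        return True  # (1,) and (1, c_2) are always orderly
    return all(grd(C, v) == opt(C, v) for v in range(C[2] + 1, C[-2] + C[-1]))

def pattern(C):
    """The string of +/- symbols for the prefix coin systems of C."""
    return "".join("+" if is_orderly(C[:i]) else "-" for i in range(1, len(C) + 1))

assert pattern((1, 2, 5, 6, 10)) == "+++-+"      # the example from the text
assert pattern((1, 2, 3, 7, 8, 14)) == "++++-+"  # family (1)(a) with a = 7
```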
In the final stages of our work, we were made aware of another paper recently posted that also characterizes orderly coin systems with 6 coin values (see Proposition 1 in [17]). The two characterizations (Theorem 1.2 in this paper and Proposition 1 of [17]) have different perspectives and use different tools. Thus we believe that our result here stands side by side with the work of [17], each shedding a complementary light on the change-making problem.

Background

A priori, checking whether an arbitrary coin system C is orderly is an infinite task, requiring us to check that grd_C(v) = opt_C(v) for every v > 0. The following result narrows down the location of the smallest counterexample of a coin system, if it exists. Theorem 2.1. [13] If the coin system C = (1, c_2, c_3, ..., c_n) is not orderly, then the smallest counterexample v lies in the range c_3 < v < c_{n−1} + c_n. The following result improves on the previous one and enables us to determine whether a coin system is orderly in O(n^3) time. We first set up some notation and terminology. Say we have a coin system C = (1, c_2, c_3, ..., c_n), and suppose we found a way to represent the value v via coins as follows: v = x_1 + x_2 c_2 + · · · + x_n c_n, where the x_i are nonnegative integers. We express this representation of v by coins in C via a vector of the x_i's as follows: (x_1, x_2, ..., x_n)_C. Such a vector representing v need not be unique, and we can order all such representations as follows. Definition 2.2. Let v be a positive integer and suppose that x = (x_1, x_2, ..., x_n)_C and x′ = (x′_1, x′_2, ..., x′_n)_C are two representations of the value v with respect to the coin system C = (1, c_2, c_3, ..., c_n). The representation x is lexicographically smaller than x′ if there is some 0 ≤ k < n such that x_i = x′_i for 1 ≤ i ≤ k and x_{k+1} < x′_{k+1}. Theorem 2.3. [18] Let C = (1, c_2, c_3, ..., c_n) be a non-orderly coin system.
Let w be the minimum counterexample for C. Suppose that the lexicographically smallest optimal solution for w in C is (0, 0, ..., 0, x_i, x_{i+1}, ..., x_j, 0, 0, ..., 0)_C where 1 ≤ i ≤ j < n and x_i, x_j > 0. Then the greedy representation of c_{j+1} − 1 in C is (y_1, y_2, ..., y_{i−1}, x_i − 1, x_{i+1}, ..., x_j, 0, 0, ..., 0)_C where each y_l is a nonnegative integer. Now we discuss properties and results related to prefix coin systems. We have a term for orderly coin systems where all prefix coin systems are orderly. Definition 2.4. A coin system C = (1, c_2, c_3, ..., c_n) is totally orderly if the coin system (1, c_2, c_3, ..., c_i) is orderly for each i = 1, 2, ..., n. Sometimes knowing information about a prefix coin system yields information about the coin system as a whole. Theorem 2.5. [1] If C = (1, c_2, c_3, ..., c_n) is orderly, then the coin system (1, c_2, c_k) is orderly for all 3 ≤ k ≤ n. In particular, (1, c_2, c_3) is orderly. For example, consider the coin system (1, 3, 4). The value 6 is a counterexample for this coin system (the greedy solution is 4 + 1 + 1, while an optimal solution is 3 + 3). Thus the pattern for this coin system is (+ + −). One might think that adding new coin values larger than 4 to the coin system could "fix" it and make it orderly. But Theorem 2.5 tells us that this is impossible: there is no collection of higher coin values which one could add to the coin system (1, 3, 4) which would cause the resulting coin system to be orderly. A more involved example, involving 7 coin values, is as follows. Theorem 2.6. [1] There are no coin systems with the pattern (+ + + − + − +). The following theorem (sometimes called the One Point Theorem) gives a necessary and sufficient condition to determine whether a coin system (1, c_2, ..., c_{n−1}, c_n) is orderly, if one already knows that the prefix coin system (1, c_2, ..., c_{n−1}) is orderly. Theorem 2.7 (One Point Theorem).
[1,10,14] Suppose that C′ = (1, c_2, ..., c_{n−1}) is orderly and c_{n−1} < c_n. Let m = ⌈c_n/c_{n−1}⌉. The following are equivalent:
(1) The coin system C = (1, c_2, ..., c_{n−1}, c_n) is orderly.
(2) grd_C(mc_{n−1}) = opt_C(mc_{n−1}).
(3) grd_C(mc_{n−1}) ≤ m.
The known results about orderly coin systems with only a few coin values are as follows. Any coin system with two coin values C = (1, c) where c > 1 is orderly. The orderly 3-value coin systems are then characterized by the One Point Theorem (Theorem 2.7). An equivalent characterization is as follows. Theorem 2.8. For a coin system C = (1, c_2, c_3), the following are equivalent: (1) C is orderly, (2) C is totally orderly, (3) c_3 − c_2 ∈ A, where A is defined as follows:
A = {c_2 − 1, c_2} ∪ {2c_2 − 2, 2c_2 − 1, 2c_2} ∪ · · · ∪ {mc_2 − m, ..., mc_2} ∪ · · ·
Notice that Theorems 2.7 and 2.8 imply that the United States' coin system (1, 5, 10, 25) is totally orderly. For the EU coin system (1, 2, 5, 10, 20, 50, 100, 200), Theorem 2.8 together with repeated use of Theorem 2.7 shows that it is also totally orderly. Next, we state constraints on the differences between consecutive coin values for orderly coin systems. The first is simple. Theorem 2.11. [1] If C = (1, c_2, c_3, ..., c_n) is orderly, then for all i = 2, 3, ..., n, c_i − c_{i−1} ≥ c_2 − 1. The next result is more involved. Roughly speaking, if two consecutive differences between adjacent coin values are big, then all subsequent differences between adjacent coins are also big. Now we give properties of non-orderly coin systems. Consider again the coin system A = (1, 3, 4). The smallest counterexample for this coin system was w = 6. The greedy solution is 6 = 4 + 1 + 1, while an optimal solution is 6 = 3 + 3. Observe that the set of coin denominations used in creating the greedy solution, {1, 4}, is disjoint from the set of coin denominations used to create the optimal solution, {3}. This is true in general, as the result below states. Proposition 2.13. [6] Suppose a coin system C = (1, c_2, c_3, ..., c_n) is not orderly.
Let w be the smallest counterexample, and suppose that the greedy solution for w is given by w = x_1 · 1 + x_2 · c_2 + x_3 · c_3 + · · · + x_n · c_n. Moreover, suppose the following is any optimal solution for w: w = y_1 · 1 + y_2 · c_2 + y_3 · c_3 + · · · + y_n · c_n. Then x_i y_i = 0 for all i. The final result considers the situation where a non-orderly coin system has another prefix coin system that is also non-orderly. This result requires us to define a new term. Definition 2.14. A coin system C = (1, c_2, c_3, ..., c_n) is tight if it has no counterexample smaller than c_n. Theorem 2.15. [4] Let C_1 = (1, c_2, c_3), C_2 = (1, c_2, c_3, ..., c_m) and C_3 = (1, c_2, c_3, ..., c_m, c_{m+1}) be three tight coin systems such that C_1 is orderly but C_2 is not. If C_3 is not orderly, then there is a counterexample x = c_i + c_j > c_{m+1} of C_3 with 1 < c_i ≤ c_j ≤ c_m. There are several perspectives and extensions of the change-making problem which we leave untouched, since they do not pertain to our goals. Interested readers might like to seek them out [5,7,9,19]. We note that [11] contains apparently elegant necessary and sufficient conditions for a coin system to be orderly, but the result was later shown false in [16].

Orderly coin systems with six coin values

We are ready to prove Theorem 1.2, which gives a characterization of orderly coin systems with 6 coin values. In light of Theorem 2.5, each such coin system has one of the following four patterns: (1) (+ + + + − +), (2) (+ + + − − +), (3) (+ + + + + +), (4) (+ + + − + +). Part (1) of the theorem considers coin systems with pattern (+ + + + − +) and is proved in Section 3.1. Part (2) of the theorem considers the pattern (+ + + − − +) and is proved in Section 3.2. Part (3), which considers the remaining two patterns, directly follows from the One Point Theorem (Theorem 2.7).

3.1. Coin systems with pattern (+ + + + − +). In this section, we prove part (1) of Theorem 1.2.
We restate the result here for clarity. Theorem 3.1. A coin system has pattern (+ + + + − +) if and only if it is one of the following:
a. (1, 2, 3, a, a + 1, 2a), where a ≥ 5.
b. (1, a, 2a, b, b + a, 2b), where a ≥ 2, b ≥ 3a − 1, b ≠ 3a, and grd(2ma) ≤ m where m = ⌈b/(2a)⌉.
c. (1, a, 2a − 1, b, b + a − 1, 2b − 1), where a ≥ 2, b ≥ 3a − 1, and grd(m′(2a − 1)) ≤ m′ where m′ = ⌈b/(2a − 1)⌉.
Proof. Suppose a coin system has the pattern (+ + + + − +). Since (1, c_2, c_3, c_4) is orderly and (1, c_2, c_3, c_4, c_5) is not orderly, we know by the One Point Theorem that mc_4 is a counterexample for (1, c_2, c_3, c_4, c_5) where m = ⌈c_5/c_4⌉. Moreover, it must be the case that c_6 ≤ mc_4, for otherwise C would not be orderly. Now observe that
c_6 ≤ mc_4 = ⌈c_5/c_4⌉c_4 < (c_5/c_4 + 1)c_4 = c_5 + c_4 < c_5 + c_5.
Therefore c_6 < c_5 + c_5 and c_6 < c_5 + c_4. Now since (1, c_2, c_3, c_4, c_5, c_6) is orderly, there must exist coins c_i and c_j where 1 ≤ i < j ≤ 4 such that
c_6 + c_j = c_5 + c_5,   (1)
c_6 + c_i = c_5 + c_4.   (2)
Subtracting these two equations, we have c_5 − c_4 = c_j − c_i < c_j ≤ c_4. Therefore c_5 ≤ 2c_4. We conclude that 1 < c_5/c_4 ≤ 2, which implies that m = ⌈c_5/c_4⌉ = 2. We now have three main cases to consider for the possible values of j (and within each case, we will consider the values i < j): Case (1) j = 4, Case (2) j = 3, and Case (3) j = 2. Case 1: j = 4. Plugging in and combining Equations (1) and (2) above, we have c_5 + c_i = 2c_4. This shows that 2c_4 is not a counterexample of (1, c_2, c_3, c_4, c_5), which is a contradiction. Therefore, j = 4 is impossible. Case 2a: j = 3 and i = 1. Equations (1) and (2) above become: c_6 + c_3 = c_5 + c_5 and c_6 + 1 = c_5 + c_4. Observe that c_5 + c_4 = c_6 + 1 > 2c_4 ≥ c_6, which implies that 2c_4 = c_6. Setting c_4 = a, it follows that c_6 = 2a, c_5 = a + 1, and c_3 = 2. However, since 1 < c_2 < c_3, we are left without any possible value for c_2, which is a contradiction.
Therefore it is impossible to have j = 3 and i = 1. Case 2b: j = 3 and i = 2. Equations (1) and (2) become: c_6 + c_3 = c_5 + c_5 and c_6 + c_2 = c_5 + c_4. Subtracting these two equations, we have c_5 − c_4 = c_3 − c_2 = d, where d is the common gap between these pairs of coins. Simplifying the equations, we conclude that
c_3 = c_2 + d
c_5 = c_4 + d
c_6 = 2c_4 − c_2 + d
So then, the coin system is as follows:
(1, c_2, c_3, c_4, c_5, c_6) = (1, c_2, c_2 + d, c_4, c_4 + d, 2c_4 − c_2 + d).
By Theorem 2.11, we know d ≥ c_2 − 1. On the other hand, because c_6 ≤ 2c_4, it follows that d ≤ c_2. Therefore, there are only two possibilities: d = c_2 or d = c_2 − 1. So the coin system has one of two possible forms: C = (1, c_2, 2c_2, c_4, c_4 + c_2, 2c_4) or C′ = (1, c_2, 2c_2 − 1, c_4, c_4 + c_2 − 1, 2c_4 − 1). There are further constraints on c_4. First, Theorem 2.11 tells us that the gap between the third and fourth coins is at least c_2 − 1. Therefore the coin system C satisfies c_4 ≥ 3c_2 − 1, and the coin system C′ satisfies c_4 ≥ 3c_2 − 2. Because 2c_4 is a counterexample for the 5-value coin system, the difference 2c_4 − c_5 is not a coin value in the coin system. Therefore for C we know c_4 ≠ 3c_2, and C′ satisfies c_4 ≠ 3c_2 − 2. Finally, since the 4-value coin system is orderly, we know by the One Point Theorem that the coin system C satisfies grd_C(2mc_2) ≤ m where m = ⌈c_4/(2c_2)⌉, and the coin system C′ satisfies grd_{C′}(m′(2c_2 − 1)) ≤ m′ where m′ = ⌈c_4/(2c_2 − 1)⌉. Setting c_2 = a and c_4 = b, the coin systems C and C′ coincide with parts (b) and (c) in the statement of the theorem, respectively. Case 3: j = 2. In this case, we must have i = 1. Equations (1) and (2) become c_6 + c_2 = c_5 + c_5 and c_6 + 1 = c_5 + c_4. Now observe that c_5 + c_4 = c_6 + 1 > 2c_4 ≥ c_6. This forces 2c_4 = c_6.
Setting c_4 = a and simplifying all these equations, we conclude that c_2 = 2, c_5 = a + 1, and c_6 = 2a. So then, gathering everything together, we conclude that (1, c_2, c_3, c_4, c_5, c_6) = (1, 2, b, a, a + 1, 2a) for some integers a and b with b < a. We now use Theorem 2.12 to show that the values of b and a cannot both be large. Suppose that b > 4 and a > 2b. Then Theorem 2.12 implies that (a + 1) − a ≥ a − b, which simplifies to a ≤ b + 1, but this contradicts a > 2b. Therefore we cannot have both b > 4 and a > 2b, so it must be the case that b ≤ 4 or a ≤ 2b. Suppose that b > 4 and b < a ≤ 2b. We can rule out this possibility by observing that the resulting coin system (1, 2, b, a, a + 1, 2a) has v = a + b as a counterexample, showing the coin system is not orderly. Finally, suppose that b ≤ 4, which implies that b = 3 or b = 4. We can immediately rule out the possibility that b = 4, since in that case our coin system is (1, 2, 4, a, a + 1, 2a) and the value v = a + 4 is a counterexample, making our coin system non-orderly. If b = 3, we obtain the coin system (1, 2, 3, a, a + 1, 2a). If a = 4 the system is totally orderly, so we must have a ≥ 5. Conversely, suppose that we have a coin system in one of the three forms stated in the theorem. We check that these coin systems have the pattern (+ + + + − +). We first show the coin system (1, 2, 3, a, a + 1, 2a) where a ≥ 5 has pattern (+ + + + − +). The coin system (1, 2, 3) is orderly by Theorem 2.8. The fact that (1, 2, 3, a) is orderly for a ≥ 5 follows from Theorem 2.1 and the fact that (1, 2, 3) is orderly. The 5-value prefix coin system (1, 2, 3, a, a + 1) for a ≥ 5 is not orderly because 2a is a counterexample (this is where we need the assumption that a ≥ 5). All that remains is to show that the 6-value coin system is orderly. Theorem 2.3 enables us to find the minimum counterexample (if it exists) as follows. First, set C = (1, c_2, c_3, c_4, c_5, c_6) = (1, 2, 3, a, a + 1, 2a).
We find the greedy representation of c_k − 1 for 1 < k ≤ 6 and express it as a vector with respect to C. By Theorem 2.3, the minimum counterexample (if it exists) is found as follows. For each vector corresponding to c_k − 1 and for each p where 1 ≤ p < k − 1, set the first p entries of the vector to 0, and add 1 to the (p + 1)th entry. The resulting values are the candidates for the minimal counterexample. Doing this process to the vector associated to c_2 − 1 is not possible, since we would need 1 ≤ p < 1 (indeed, this is always the case). Doing this process for c_3 − 1 = (0, 1, 0, 0, 0, 0)_C with p = 1 yields the vector (0, 2, 0, 0, 0, 0)_C, which corresponds to the value 4. We know 4 is not a counterexample, since the greedy solution for 4 uses only 2 coins. Next we consider c_4 − 1 = a − 1. The greedy solution for this value can be expressed in vector form as (x_1, x_2, x_3, 0, 0, 0)_C where x_1 + 2x_2 + 3x_3 = a − 1, x_1, x_2 ∈ {0, 1}, and x_1 x_2 = 0. With p = 1, we have (0, x_2 + 1, x_3, 0, 0, 0)_C, which corresponds to the value
2(x_2 + 1) + 3x_3 = { a if x_1 = 1; a + 1 if x_1 = 0 }
For p = 2, we have (0, 0, x_3 + 1, 0, 0, 0)_C, which corresponds to the value
3(x_3 + 1) = { a if x_1 = 0, x_2 = 1; a + 1 if x_1 = 1, x_2 = 0; a + 2 if x_1 = 0, x_2 = 0 }
None of the values a, a + 1, or a + 2 is a counterexample, because each has a greedy solution with 1 or 2 coins. Next up, the value c_5 − 1 = a has greedy representation (0, 0, 0, 1, 0, 0)_C and yields three candidates for counterexample (corresponding to p = 1, 2, 3):
(0, 1, 0, 1, 0, 0)_C = a + 2
(0, 0, 1, 1, 0, 0)_C = a + 3
(0, 0, 0, 2, 0, 0)_C = 2a
Again, the values a + 2, a + 3, and 2a are not counterexamples, since each has a greedy solution with 1 or 2 coins. Finally, we consider the vector associated to c_6 − 1 = 2a − 1. The greedy solution has vector representation (y_1, y_2, y_3, 0, 1, 0)_C where y_1 + 2y_2 + 3y_3 = a − 2, y_1, y_2 ∈ {0, 1}, and y_1 y_2 = 0.
With p = 1, we have the vector (0, y_2 + 1, y_3, 0, 1, 0)_C, which corresponds to the value
2(y_2 + 1) + 3y_3 + a + 1 = { 2a if y_1 = 1; 2a + 1 if y_1 = 0 }
For p = 2, we have the vector (0, 0, y_3 + 1, 0, 1, 0)_C, which corresponds to the value
3(y_3 + 1) + a + 1 = { 2a if y_1 = 0, y_2 = 1; 2a + 1 if y_1 = 1, y_2 = 0; 2a + 2 if y_1 = 0, y_2 = 0 }
For p = 3 and p = 4, we have:
(0, 0, 0, 1, 1, 0)_C = 2a + 1
(0, 0, 0, 0, 2, 0)_C = 2a + 2
Altogether from these four cases, we generate three possible candidates for counterexample: 2a, 2a + 1, and 2a + 2. None of these is a counterexample, since each has a greedy solution with 1 or 2 coins. Because the process found no minimum counterexample, it must be the case that (1, 2, 3, a, a + 1, 2a) is orderly for a ≥ 5, as claimed. Now let B and B′ denote coin systems in the form of (b) and (c) in the statement of the theorem, respectively. Theorems 2.7 and 2.8 imply that the 4-value prefix coin systems of B and B′ are both totally orderly. Furthermore, the 5-value prefix coin systems of B and of B′ have counterexample 2b, so they are not orderly. Hence, all that remains is to show that the 6-value coin systems B and B′ are orderly. Using Theorem 2.3, it follows that 2b is the minimum counterexample of the 5-value prefix coin systems of both B and B′. Thus the 5-value prefix coin systems are both tight. Now suppose that B or B′ is not orderly. Then by Theorem 2.15, it follows that the coin system has a counterexample of the form c_i + c_j for some coin values c_i and c_j in B or in B′, where 1 ≤ i ≤ j ≤ 5. However, a straightforward check of these values reveals that c_i + c_j is never a counterexample. Hence it must be the case that the coin systems B and B′ are orderly, as claimed.

3.2. Coin systems with pattern (+ + + − − +). In this section, we prove part (2) of Theorem 1.2. We first prove a lemma. Lemma 3.2.
If C = (1, c_2, c_3, c_4, c_5, c_6) has pattern (+ + + − − +), then the coin system has one of the following forms:
(1) (1, a, 2a − 1, b, a + b − 1, 2b − 1) where a ≥ 3 and b > 2a − 1.
(2) (1, a, 2a, b, a + b, 2b) where a ≥ 2 and b > 2a.
Proof. Because the 6-value coin system is orderly, both the 4-value and 5-value prefix coin systems are tight. By Theorem 2.15, there must exist a counterexample for the 5-value coin system (1, c_2, c_3, c_4, c_5) of the form c_s + c_t for some 1 ≤ s, t ≤ 4. And because the 6-value coin system is orderly, it follows that c_6 ≤ c_s + c_t ≤ c_4 + c_4. Thus there exist 0 ≤ i < j < k ≤ 4 such that
c_4 + c_4 = c_6 + c_i
c_4 + c_5 = c_6 + c_j
c_5 + c_5 = c_6 + c_k
(We set c_0 = 0.) Subtracting adjacent equations, we find c_5 − c_4 = c_k − c_j = c_j − c_i. The possible values for j are 1, 2, and 3. We will rule out the possibilities j = 1 and j = 3 first, and then we will study the case j = 2. Case 1: j = 1. If j = 1, it follows that i = 0 and hence c_k = 2c_1 − c_0 = 2. Thus it must be the case that k = 2. Substituting the values (i, j, k) = (0, 1, 2) into the above system of equations, we find the following relationships: c_5 = c_4 + 1 and c_6 = 2c_4. So the coin system is (1, 2, c_3, c_4, c_4 + 1, 2c_4). Since the coin system is orderly, the value c_3 + c_4 should have a greedy solution with just 2 coins. Hence (c_3 + c_4) − (c_4 + 1) must be a value in the coin system. This forces either c_3 = 2 or c_3 = 3. The former option leaves no possible value for c_2. For the latter option, the first three coin values are (1, 2, 3). Thus, no matter the value of c_4, the 4-value coin system (1, 2, 3, c_4) is orderly, which is a contradiction. Hence j = 1 is not possible. Case 2: j = 3. Here k = 4 and c_4 − c_3 = c_3 − c_i, so c_4 = 2c_3 − c_i where c_i ∈ {0, 1, c_2}. This forces the value of c_4 to be one of the following: 2c_3, 2c_3 − 1, or 2c_3 − c_2.
However, using the One Point Theorem, if the 3-value coin system (1, c_2, c_3) is orderly, then the resulting coin system (1, c_2, c_3, c_4) is orderly as well when c_4 is any of the options above. Therefore it is impossible to have j = 3. Case 3: j = 2. In this case i ∈ {0, 1} and k ∈ {3, 4}, giving four options: (1) (i, k) = (1, 3), which yields the first form in the statement of the lemma; (2) (i, k) = (0, 3), which yields the second form; (3) (i, k) = (0, 4); and (4) (i, k) = (1, 4). Options (3) and (4) cannot occur. For, (3) and (4) imply that c_4 = 2c_2 or c_4 = 2c_2 − 1, respectively. On the other hand, we know from Theorem 2.11 that c_4 ≥ 3c_2 − 2. Applying this inequality, we conclude that c_2 ≤ 2 and c_2 ≤ 1, respectively. The former option implies c_2 = 2 and c_4 = 4, which forces c_3 = 3, and the 4-value coin system is necessarily orderly, which is a contradiction. The latter option (c_2 ≤ 1) is impossible, since c_2 > 1. Thus neither (3) nor (4) above is possible. This proves the lemma. Now we are ready to prove part (2) of Theorem 1.2. We restate the result here for clarity. Theorem 3.3. A coin system has pattern (+ + + − − +) if and only if it is one of the following:
(1) C = (1, a, 2a − 1, m(2a − 1) − a + 1, m(2a − 1), (2m − 1)(2a − 1)) where 1 < m < a.
(2) C′ = (1, a, 2a, m(2a − 1) − a + 1, m(2a − 1) + 1, (2m − 1)(2a − 1) + 1) where 1 < m ≤ a.
Proof. First we prove that any coin system with the pattern (+ + + − − +) is in the form of C or C′ as stated in the theorem. From the previous lemma, we know that the coin system can be in two different forms. First consider the form B = (1, a, 2a − 1, b, a + b − 1, 2b − 1) where a ≥ 3 and b > 2a − 1. From the One Point Theorem, we know that the coin system (1, a, 2a − 1, b) has a counterexample of the form m(2a − 1) where m = ⌈b/(2a − 1)⌉. We claim that this number m(2a − 1) is not a counterexample of the 5-value prefix coin system of B. Suppose for contradiction that m(2a − 1) is such a counterexample. Because the 6-value coin system is orderly, we know that
c_6 = 2b − 1 ≤ m(2a − 1) = ⌈b/(2a − 1)⌉(2a − 1) ≤ (b/(2a − 1) + 1)(2a − 1) = b + 2a − 1.
This implies that b ≤ 2a. Since we already know b > 2a − 1, we conclude b = 2a.
By Theorem 2.11, we know that the difference between adjacent coin values is at least a − 1, and because the third and fourth coin values differ by 1, we have a = 2, and the coin system is (1, 2, 3, 4, 5, 7). However, this coin system is totally orderly, which is a contradiction. Hence m(2a − 1) is not a counterexample for the 5-value prefix coin system of B, as claimed. Because m(2a − 1) is a counterexample for the 4-value prefix coin system, if we subtract the value of the 4th coin (which has value b) from m(2a − 1), the remaining value must use more than m − 1 coins in its greedy solution. Whereas, because m(2a − 1) is not a counterexample for the 5-value prefix coin system, if we subtract the value of the 5th coin (which has value a + b − 1) from m(2a − 1), the remaining value uses at most m − 1 coins in its greedy solution. Letting z = m(2a − 1) − (a + b − 1) and y = m(2a − 1) − b, we can express the above statements as an inequality:
grd_B(z) ≤ m − 1 < grd_B(y).
Moreover, observe that the value of z is small:
z = m(2a − 1) − (a + b − 1) ≤ (b/(2a − 1) + 1)(2a − 1) − (a + b − 1) = a.
Also, notice that y = z + a − 1. Collecting these statements together, we are looking for a value z with two properties: (1) z ∈ [0, a] and (2) grd_B(z) ≤ m − 1 < grd_B(z + a − 1). Let's take a look at the function grd_B(x) at values from x = 0 up to x = 2a − 1:
grd_B(x) = { x if 0 ≤ x ≤ a − 1; x − (a − 1) if a ≤ x ≤ 2a − 2; 1 if x = 2a − 1 }
Observe that for all 1 ≤ x ≤ a, the function satisfies grd_B(x) = grd_B(x + a − 1). So z cannot be any integer from 1 to a, because inequality (2) above would not be satisfied. As z ∈ [0, a], the only possible value for z is z = 0. Replacing z = 0 in the original expressions for y and z, we find the values of c_4, c_5, and c_6.
c 4 = b = m(2a − 1) − a + 1 c 5 = a + b − 1 = m(2a − 1) c 6 = 2b − 1 = (2m − 1)(2a − 1) Keep in mind that by the definition of m, we know m > 1, and because m − 1 < grd B (z + a − 1) = grd B (a − 1) = a − 1, we also have that m < a. This proves that the coin system B has the form of C in the statement of the theorem. Now consider the second coin system form in Lemma 3.2: a, 2a, b, a + b, 2b) where a ≥ 2 and b > 2a. We will show that B ′ has the form of C ′ given in the statement of the theorem. Initially, the argument for B ′ mirrors what we just did for B, but then it diverges in subtle ways, so we start from the beginning. From the One Point Theorem, we know that the coin system (1, a, 2a, b) has a counterexample of the form 2am where m = ⌈ b 2a ⌉. Suppose that this value 2am is also a counterexample of the 5-value prefix coin system of B ′ . Since the 6-value coin system B ′ is orderly, we know that B ′ = (1,c 6 = 2b ≤ 2am = (2a)⌈ b 2a ⌉ ≤ (2a)( b 2a + 1) = b + 2a. This implies that b ≤ 2a, a contradiction. We conclude that 2am is not a counterexample for the 5-value prefix coin system of B ′ . Therefore, starting with the value 2am, if we subtract the value of the 4th coin, the remaining value must require more than m−1 coins using the greedy algorithm. Whereas, starting with the value 2am, if we subtract the value of the 5th coin, the remaining value must require at most m − 1 coins using the greedy algorithm. Letting z = 2am − (a + b) and y = 2am − b, we can express the above thought as an inequality: grd B ′ (z) ≤ m − 1 < grd B ′ (y). Also observe that z ∈ [0, a] because: 0 ≤ z = 2am − (a + b) ≤ ( b 2a + 1)(2a) − (a + b) = a. Moreover, observe that y = z + a. Collecting these statements together, we are looking an integer z with two relationships: (1) z ∈ [0, a] and (2) grd B ′ (z) ≤ m − 1 < grd B ′ (z + a). 
Let us take a look at the function grd B ′ (x) at values from x = 0 to x = 2a, which the same as the previous case except at the last two values (x = 2a − 1 and x = 2a). grd B ′ (x) =      x if 0 ≤ x ≤ a − 1 x − (a − 1) if a ≤ x ≤ 2a − 1 1 if x = 2a Observe that grd B ′ (x) = grd B ′ (x + a) − 1 for all x ∈ [0, a − 1] . This implies that if z is any integer in the interval [0, a − 1], the inequality (2) above will be satisfied. In addition, if z is any value on this interval, the inequality from (2) above simplifies to z ≤ m − 1 < z + 1. Thus z = m − 1. Replacing z = m−1 in the original expressions for y and z, we find the expression for c 4 , c 5 , and c 6 as follows. c 4 = b = 2am − (z + a) = m(2a − 1) − a + 1 c 5 = a + b = m(2a − 1) + 1 c 6 = 2b = 2(m(2a − 1) − a + 1) = (2m − 1)(2a − 1) + 1 Notice that by the construction of m, we know that m > 1, and since m − 1 = grd B ′ (z) ≤ a − 1, it follows that m ≤ a. So then B ′ has the same form as C ′ , as desired. Conversely, we must prove that the coin systems C and C ′ in the theorem's statement have the pattern (+ + + − −+). We prove a stronger result, in fact. In Theorem 4.5, we prove that an infinite family of coin systems denoted E (for which C is a special case) has the pattern (+ + + − · · ·− +). In Theorem 4.7, we prove that an infinite family of coin system F (for which C ′ is a special case) has the pattern (+ + + − · · · − +). We thus delay this argument to the next section. Orderly coin systems with n coin values We now study orderly coin systems with n coin values. The only +/− coin system pattern which cannot be tackled via first characterizing an orderly coin system for a value of i with 3 < i < n is the pattern where every subsystem of size i with 3 < i < n fails to be orderly. Such a coin system has the pattern (+ + + − · · · − +). We focus on these coin systems. 
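The six-coin characterization above lends itself to a direct computational check. In the Python sketch below (the helper names are ours), a prefix coin system is tested for orderliness by brute force; we search for counterexamples only up to c_{k−1} + c_k, relying on the bound of Kozen and Zaks that a smallest counterexample, when one exists, is less than c_{n−1} + c_n. For sample parameters, the families C and C′ of Theorem 3.3 indeed show the pattern (+ + + − −+).

```python
def greedy_count(v, coins):
    """Number of coins the greedy algorithm uses to represent v."""
    used = 0
    for c in reversed(coins):  # coins listed in increasing order
        used += v // c
        v %= c
    return used

def min_counts(coins, limit):
    """Dynamic program: opt[v] = minimum number of coins representing v."""
    opt = [0] * (limit + 1)
    for v in range(1, limit + 1):
        opt[v] = 1 + min(opt[v - c] for c in coins if c <= v)
    return opt

def pattern(coins):
    """One '+'/'-' per prefix (1, c2, ..., ck): '+' iff the prefix is orderly.

    Counterexamples are sought only below c_{k-1} + c_k (Kozen-Zaks bound).
    """
    signs = "++"  # prefixes of length 1 and 2 are always orderly
    for k in range(3, len(coins) + 1):
        prefix = coins[:k]
        limit = prefix[-2] + prefix[-1]
        opt = min_counts(prefix, limit)
        orderly = all(greedy_count(v, prefix) == opt[v]
                      for v in range(1, limit + 1))
        signs += "+" if orderly else "-"
    return signs

a, m = 5, 3  # any 1 < m < a for C; 1 < m <= a for C'
C = [1, a, 2*a - 1, m*(2*a - 1) - a + 1, m*(2*a - 1), (2*m - 1)*(2*a - 1)]
Cp = [1, a, 2*a, m*(2*a - 1) - a + 1, m*(2*a - 1) + 1, (2*m - 1)*(2*a - 1) + 1]
print(pattern(C), pattern(Cp))
```

Running this for other admissible pairs (a, m) gives the same pattern, in line with Theorem 3.3.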
In [1], the authors found a single coin system with pattern (+ + + − · · · − +) and with n types of coins for each n = 3r where r ≥ 2 and for each n = 3r + 2 where r ≥ 1. Interestingly, the authors suspected that there are no coin systems with 3r + 1 coin values with this pattern where r ≥ 2, but they were unable to prove their conjecture. We searched extensively for coin systems with pattern (+ + + − · · · − +) using a computer program. All observed examples had a common structure. We call these coin systems fixed gap coin systems, which we define below. Furthermore, the examples we found divided into three distinct infinite families D, E, and F. The coin systems in family D had 3r + 2 types of coins where r ≥ 1, while the coin systems in E and F had 3r types of coins where r ≥ 2. In line with the conjecture of [1], we found no coin systems with 3r + 1 coins where r ≥ 2. We prove that these three infinite families of coin systems D, E, and F have the pattern (+ + + − · · · − +). We conjecture that they are the only such coin systems (Conjecture 1.3). The conjecture holds true for n = 5 and n = 6. A fixed-gap coin system with n types of coins starts off with two initial coin values (c 1 , c 2 ) = (1, x). Then all the remaining coin values are generated in two stages. First choose two positive integers ∆ 1 and ∆ 2 where ∆ 1 ≠ ∆ 2 . • Stage 1: Alternately add ∆ 1 and ∆ 2 to the previous coin value to produce the next coin value. We continue this until we have ℓ coin values for some 3 < ℓ < n. • Stage 2: For coin values c ℓ+1 through c n , add ∆ 1 + ∆ 2 to the previous coin value to produce the next coin value. We make this process precise as follows. Definition 4.1. Let n, ℓ, x, ∆ 1 , and ∆ 2 be positive integers with 3 < ℓ < n, x ≥ 2, and ∆ 1 ≠ ∆ 2 . The coin system C = (1, c 2 , c 3 , . . . , c n ) is a fixed gap coin system if the c i are defined as follows.
(3) c i =                1 if i = 1 x if i = 2 c i−1 + ∆ 1 if i is odd and 3 ≤ i ≤ ℓ c i−1 + ∆ 2 if i is even and 3 < i ≤ ℓ c i−1 + ∆ 1 + ∆ 2 if ℓ < i ≤ n For most fixed-gap coin system C as above, the coin system has several consecutive prefix coin systems that fail to be orderly. We prove this in general in the following lemma, before focusing on the three specific families of fixed gap coin systems D, E, and F . Lemma 4.2. Let C be a fixed-gap coin system. If ℓ is odd, then each prefix coin system of C of length k with 5 ≤ k ≤ 1 2 (3ℓ − 5) is not orderly. If ℓ is even and x+∆ 1 > ∆ 2 +1, then each prefix coin system of C of length k with 5 ≤ k ≤ 1 2 (3ℓ−4) is not orderly. Proof. Throughout this proof, we use the same strategy multiple times to show that a given coin system (1, x, c 3 , . . . , c k ) is not orderly. The outline is as follows. We prove: (1) 2c k−1 − c k > 0, and (2) 2c k−1 − c k = c i for any i < k. This implies that 2c k−1 is a counterexample because on one hand the greedy solution has more than 2 coins, while the optimal solution has 2 coins. To begin, we prove that the prefix coin system (1, x, c 3 , . . . , c k ) is not orderly for all k such that 5 ≤ k ≤ ℓ. Suppose that k is even. We show 2c k−1 − c k is positive: 2c k−1 −c k = 2c k−1 −(c k−1 +∆ 2 ) = c k−1 −∆ 2 = (c k−3 +∆ 1 +∆ 2 )−∆ 2 = c k−3 +∆ 1 > 0 Now we show that 2c k−1 − c k is not a coin value c i for any i. Observe that if ∆ 1 < ∆ 2 , then the value 2c k−1 − c k = c k−3 + ∆ 1 falls between consecutive coin values: (4) c k−3 < c k−3 + ∆ 1 < c k−3 + ∆ 2 c k−2 And if ∆ 2 < ∆ 1 , we similarly have: (5) c k−3 + ∆ 2 c k−2 < c k−3 + ∆ 1 < c 2j−3 + ∆ 1 + ∆ 2 c k−1 . So no matter the relative sizes of ∆ 1 and ∆ 2 , the value 2c k−1 − c k falls between consecutive coin values, and therefore the value is not itself a coin value. We conclude that 2c k−1 is a counterexample for the prefix coin system (1, c 2 , c 3 , . . . , c k ) where k is even and 5 < k ≤ ℓ. 
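Definition 4.1 can be transcribed directly into code. The sketch below (the function name `fixed_gap` and its argument names are ours) generates a fixed-gap coin system; the three calls reproduce sample members of the families D, E, and F from the theorems of this section, computed from their recurrences.

```python
def fixed_gap(n, l, x, delta1, delta2):
    """Fixed-gap coin system of Definition 4.1.

    Coin values start at (1, x); the gaps alternate between delta1 (odd
    index) and delta2 (even index) up to index l, after which every gap
    is delta1 + delta2.
    """
    assert 3 < l < n and x >= 2 and delta1 != delta2
    c = [1, x]
    for i in range(3, n + 1):
        if i <= l:
            c.append(c[-1] + (delta1 if i % 2 == 1 else delta2))
        else:
            c.append(c[-1] + delta1 + delta2)
    return c

# Family D with r = 3, a = 3 (Theorem 4.3): x = 2, gaps a and 1, l = 2r + 2.
print(fixed_gap(11, 8, 2, 3, 1))
# Family E with r = 4, m = 3, a = 4 (Theorem 4.5):
# x = a, gaps a - 1 and (m - 1)(2a - 1) - (a - 1), l = 2r + 1.
print(fixed_gap(12, 9, 4, 3, 11))
# Family F with r = 4, m = 3, a = 4 (Theorem 4.7):
# x = a, gaps a and (m - 1)(2a - 1) - a, l = 2r + 1.
print(fixed_gap(12, 9, 4, 4, 10))
```

The second and third calls match the instances of E and F listed in Examples 4.6 and 4.8.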
Now suppose that k is odd and 5 ≤ k ≤ ℓ. Similar to before, we show that 2c k−1 is a counterexample for the prefix coin system (1, c 2 , c 3 , . . . , c k ). First, we simplify the expression 2c k−1 − c k and conclude that it must be positive: 2c k−1 −c k = 2c k−1 −(c k−1 +∆ 1 ) = c k−1 −∆ 1 = (c k−3 +∆ 1 +∆ 2 )−∆ 1 = c k−3 +∆ 2 > 0 Similar to the previous case, observe that if ∆ 1 < ∆ 2 , then the value 2c k−1 − c k = c k−3 + ∆ 2 falls between consecutive coin values: (6) c k−3 + ∆ 1 c k−2 < c k−3 + ∆ 2 < c k−3 + ∆ 1 + ∆ 2 c k−1 And if ∆ 2 < ∆ 1 , we have: (7) c k−3 < c k−3 + ∆ 2 < c k−3 + ∆ 1 c k−2 . So in either case, the value 2c k−1 − c k = c k−3 + ∆ 2 falls between consecutive coin values, and therefore the value is not itself a coin value. We conclude that 2c k−1 is a counterexample for the prefix coin system (1, c 2 , c 3 , . . . , c k ) where k is odd and 5 ≤ k ≤ ℓ. It remains to show that all prefix subcurrencies (1, c 2 , c 3 , . . . , c k ) where ℓ < k ≤ t also fail to be orderly, where t = 1 2 (3ℓ − 5) if ℓ is odd and t = 1 2 (3ℓ − 4) if ℓ is even. Suppose ℓ = 2r + 1 for some r ≥ 2. We show that (1, c 2 , c 3 , . . . , c k ) where ℓ < k ≤ 3r − 1 fail to be orderly by showing 2c 2r is a counterexample. 2c 2r − c k = 2c 2r − (c 2r+1 + (k − 2r − 1)(∆ 1 + ∆ 2 )) = 2c 2r − (c 2r + ∆ 1 + (k − 2r − 1)(∆ 1 + ∆ 2 )) = c 2r − ∆ 1 − (k − 2r − 1)(∆ 1 + ∆ 2 ) = c 2r−2(k−2r) + (k − 2r)(∆ 1 + ∆ 2 ) − ∆ 1 − (k − 2r − 1)(∆ 1 + ∆ 2 ) = c 6r−2k + ∆ 2 We previously proved that a value of form c 2i +∆ 2 where 1 ≤ i ≤ r −1 falls between adjacent coin values (Inequalities 6 and 7). Therefore, 2c 2r is a counterexample for the prefix coin system (1, c 2 , c 3 , . . . , c k ) where ℓ < k ≤ 3r − 1 and ℓ is odd. Finally, suppose ℓ is even. That is, ℓ = 2r + 2 for some r ≥ 1. And suppose x + ∆ 1 > ∆ 2 + 1. We show that (1, c 2 , c 3 , . . . , c k ) where ℓ < k ≤ 3r + 1 fails to be orderly by showing 2c 2r+1 is a counterexample. First we do the special case that k = 3r + 1. 
2c 2r+1 − c 3r+1 = 2c 2r+1 − (c 2r+2 + (r − 1)(∆ 1 + ∆ 2 )) = 2c 2r+1 − (c 2r+1 + ∆ 2 + (r − 1)(∆ 1 + ∆ 2 ) = c 2r+1 − ∆ 2 − (r − 1)(∆ 1 + ∆ 2 ) = c 2r+1−2(r−1) + (r − 1)(∆ 1 + ∆ 2 ) − ∆ 2 − (r − 1)(∆ 1 + ∆ 2 ) = c 3 − ∆ 2 = x + ∆ 1 − ∆ 2 By our assumption, x + ∆ 1 > ∆ 2 + 1. Thus 1 < x + ∆ 1 − ∆ 2 < c 3 and moreover x + ∆ 1 − ∆ 2 cannot equal c 2 = x. Therefore, this value x + ∆ 1 − ∆ 2 is not a coin value and 2c 2r+1 is a counterexample for the prefix coin system (1, x, c 3 , . . . , c 3r+1 ). Now suppose ℓ < k < 3r + 1. We prove that this prefix coin system is not orderly by showing 2c 2r+1 is a counterexample. 2c 2r+1 − c k = 2c 2r+1 − (c 2r+2 + (k − 2r − 2)(∆ 1 + ∆ 2 )) = 2c 2r+1 − (c 2r+1 + ∆ 2 + (k − 2r − 2)(∆ 1 + ∆ 2 ) = c 2r+1 − ∆ 2 − (k − 2r − 2)(∆ 1 + ∆ 2 ) = c 2r+1−2(k−2r−2) + (k − 2r − 2)(∆ 1 + ∆ 2 ) − ∆ 2 − (k − 2r − 2)(∆ 1 + ∆ 2 ) = c 6r−2k+5 − ∆ 2 = c 6r−2k+3 + ∆ 1 We previously proved that a value of the form c 2i+1 + ∆ 1 where 1 ≤ i < r − 1 falls between adjacent coin values (Inequalities 4 and 5.). Therefore, 2c 2r+1 is a counterexample for the coin system (1, c 2 , c 3 , . . . , c k ) where ℓ < k < 3r + 2. Now we define the three infinite families of fixed-gap coin systems (denoted by D, E, and F , respectively) and prove that they have pattern (+ + + − · · · − +). d k =                1 if k = 1 2 if k = 2 d k−1 + a if k is odd and 3 ≤ k < 2r + 2 d k−1 + 1 if k is even and 3 < k ≤ 2r + 2 d k−1 + a + 1 if 2r + 2 < k ≤ 3r + 2 The coin system D has pattern (+ + + − · · · − +). It remains is to show the full coin system D is orderly. Suppose that the coin system is not orderly. We use Theorem 2.3 to find the smallest counterexample. First, we find the greedy solution for d k − 1 for all k ≥ 2 and express the value as a vector with respect to the coin system D. In the vectors below, an asterisk (*) denotes that the value is in the k − 1 position. For k even and 2 ≤ k ≤ 2r + 2, d k − 1 = d k−1 = (0, 0, . . . , 1 * , . . . , 0) D . 
For k odd and 2 < k < 2r + 2, d k − 1 = d k−1 + a − 1 = (x 1 , x 2 , 0, . . . , 1 * , . . . , 0) D where x 1 and x 2 are integers such that a − 1 = 2x 2 + x 1 , x 2 ≥ 0 and x 1 ∈ {0, 1}. For 2r + 1 < k ≤ 3r + 2, d k − 1 = d k−1 + a = (y 1 , y 2 , 0, . . . , 1 * , 0, . . . , 0) D where y 1 and y 2 are integers such that a = 2y 2 + y 1 , y 2 ≥ 1, and y 1 ∈ {0, 1}. Now for each 1 ≤ p < k −1, we set the first p entries of the vectors above to 0 and add 1 to the p + 1 entry. The set of the resulting vectors gives all the candidates for the smallest counterexample. When p = 1, the resulting vector in each case represents one of the following values: d i , d i + 1, or d i + 2. None of these values are counterexamples, since their greedy solutions have either 1 or 2 coins. When p ≥ 2, the resulting vector in each case represents a value of the form d i + d j where 2 ≤ i, j < 3r + 2. We compute the greedy solution for d i + d j for all i, j. First we consider the sum d i + d j where both of the coin values are belong to Stage 1 of the construction. That is, 3 ≤ i, j ≤ 2r + 2. d i + d j =                    d 2 + d i+j−2 if i and j are even and i + j ≤ 2r + 4 d 2 + d (i+j+2r)/2 if i and j are even and i + j > 2r + 4 d i+j if i and j are odd and i + j ≤ 2r + 2 d r+1+(i+j)/2 if i and j are odd and i + j > 2r + 2 d 1 + d i+j−1 if i is odd and j is even and i + j ≤ 2r + 2 d 1 + d (i+j+2r+1)/2 if i is odd and j is even and i + j > 2r + 2 Next we compute the greedy solution for d i + d j where at least one coin value (say, d j ) belongs to Stage 2 of the construction. That is, 2r + 2 < j < 3r + 2. Notice that we need not consider the case j = 3r + 2 because in that case, the greedy solution of d i + d 3r+2 is simply itself. 
d i +d j =                    d 3r+2 + d i+2j−6r−4 if 2 < i < 2r + 2 is odd and 1 2 (i − 1) + j ≥ 3r + 2 d 1 + d j+(i−1)/2 if 2 < i < 2r + 2 is odd and 1 2 (i − 1) + j < 3r + 2 d 3r+2 + d i+2j−8r−8 if 2 ≤ i ≤ 2r + 2 is even and 1 2 (i − 2) + j ≥ 3r + 2 d 2 + d j+(i−2)/2 if 2 ≤ i ≤ 2r + 2 is even and 1 2 (i − 2) + j < 3r + 2 d 3r+2 + d 2(i+j−4r−3) if 2r + 2 < i < 3r + 2 and i + j ≤ 5r + 4 d 3r+2 + d i+j−3r−2 if 2r + 2 < i < 3r + 2 and i + j > 5r + 4 None of these values d i + d j are counterexamples because each greedy solution has only 1 or 2 coins. Therefore the coin system D does not have a minimum counterexample, and the coin system is orderly. Theorem 4.5. Let r ≥ 2 and 1 < m < a. Define E = (1, e 2 , . . . , e 3r ) as follows: e k =                1 if k = 1 a if k = 2 e k−1 + a − 1 if k is odd and 3 ≤ k ≤ 2r + 1 e k−1 + (m − 1)(2a − 1) − (a − 1) if k is even and 3 < k < 2r + 1 e k−1 + (m − 1)(2a − 1) if 2r + 1 < k ≤ 3r The coin system E has pattern (+ + + − · · · − +). Now we show that the full coin system E is orderly using Theorem 2.3. First, we find the greedy solution for e k − 1 for all k ≥ 2 and express the value as a vector with respect to the coin system E. In the vectors below, the asterisk (*) denotes that the value is in the k − 1 st entry. (8) e k − 1 =                (a − 1, 0, . . . , 0) E if k = 2 (a − 2, 0, . . . , 1 * , . . . , 0) E if k is odd, 3 ≤ k ≤ 2r + 1 (a − 1, 0, m − 1, 0, . . . , 0) E if k = 4 (a − 1, 0, m − 2, 0 . . . , 1 * , . . . , 0) E if k is even, 6 ≤ k < 2r + 1 (a − 2, 1, m − 2, 0, . . . , 1 * , . . . , 0) E if 2r + 1 < k ≤ 3r Now for each 1 ≤ p < k − 1, we set the first p entries of the vectors above to 0 and add 1 to the p + 1 entry. In the first vector (when k = 2), there is nothing to check since in this case 1 ≤ p but also p < k − 1 = 1. 
In most cases, the resulting vector corresponds to the value e i + e j where 2 ≤ i, j < 3r (specifically, this is the case for the second vector and all values of p, and for the final two vectors in the case that 3 ≤ p < k − 1). So we must check whether a value of this form is a counterexample. Accordingly, we find the greedy solution for e i + e j for all 2 ≤ i, j < 3r. First, suppose 2 ≤ i, j ≤ 2r + 1 and m > 2.

e i + e j =
  e i+j−1 + e 1 if i and j are even and i + j ≤ 2r + 2
  e (i+j)/2+r + e 1 if i and j are even and i + j > 2r + 2
  e i+j−3 + e 3 if i and j are odd and i + j ≤ 2r + 4
  e (i+j)/2+r−1 + e 3 if i and j are odd and i + j > 2r + 4
  e i+j−2 + e 2 if i is odd and j is even and i + j ≤ 2r + 4
  e r+(i+j−1)/2 + e 2 if i is odd and j is even and i + j > 2r + 4

Next suppose 2r + 1 < j < 3r and m > 2. Then

e i + e j =
  e j+(i−3)/2 + e 3 if 2 < i ≤ 2r + 1 is odd and i + 2j ≤ 6r + 3
  e 3r + e i+2j−6r if 2 < i ≤ 2r + 1 is odd and i + 2j > 6r + 3
  e j−1+i/2 + e 2 if 2 ≤ i ≤ 2r + 1 is even and i + 2j ≤ 6r + 2
  e 3r + e i+2j−6r if 2 ≤ i ≤ 2r + 1 is even and i + 2j > 6r + 2
  e 3r + e 2i+2j−8r−1 if 2r + 1 < i < 3r and i + j ≤ 5r + 1
  e 3r + e i+j−3r if 2r + 1 < i < 3r and i + j > 5r + 1

In the special case m = 2, the above equations require three adjustments. If m = 2, then e 3 + e k = e k+2 for 2 ≤ k < 2r and e 3 + e k = e k+1 for 2r + 1 ≤ k < 3r + 1. Thus if m = 2, anytime we see e k + e 3 above where k ≥ 2 and k ≠ 2r, we eliminate the sum and replace it with a single coin value as the greedy solution. Observe that in all cases, the greedy solution for e i + e j contains either 1 or 2 coins. Therefore e i + e j is not a counterexample. Now we consider one by one the last three vectors from Equation 8 where p = 1 and p = 2. Begin with e 4 − 1. For p = 1 and p = 2, respectively, we have the following two vectors, associated candidates for counterexample, and greedy solutions.
Vector representation Candidate for counterexample Greedy solution (0, 1, m − 1, 0, . . . , 0) E 2am − m − a + 1 e 4 (0, 0, m, 0, . . . , 0) E m(2a − 1) e 5 The greedy solutions above both use 1 coin, so neither of these values are counterexamples. Now consider e k − 1 where k is even, 6 ≤ k < 2r + 1 (see Equation 8 for the vector representation). For p = 1 and p = 2, respectively, we have the following results. (Recall that an asterisk (*) denotes that the value is in the k − 1 st entry.) Vector representation Candidate for counterexample Greedy solution (0, 1, m − 2, 0 . . . , 1 * , . . . , 0) E e k−1 + 2am − 3a − m + 2 e k (0, 0, m − 1, 0 . . . , 1 * , . . . , 0) E e k−1 + 2am − 2a − m + 1 e k+1 Again, the greedy solutions above both use 1 coin, so neither of these values are counterexamples. Finally, consider e k − 1 where 2r + 1 < k ≤ 3r (see Equation 8). As before, for p = 1 and p = 2, respectively, we give the following two vectors, canidates for counterexample, and associated greedy solutions: Vector representation Candidate for counterexample Greedy solution (0, 2, m − 2, 0, . . . , 1 * , . . . , 0) E e k−1 + 2a(m − 1) − (m − 2) e k + e 1 (0, 0, m − 1, 0, . . . , 1 * , . . . , 0) E e k−1 + (2a − 1)(m − 1) e k Once more, the greedy solutions above use either 1 or 2 coins, so neither of these values are counterexamples. Thus the coin system E is orderly. Theorem 4.7. Let r ≥ 2 and 1 < m ≤ a. Define F = (1, f 2 , . . . , f 3r ) as follows: f k =                1 if k = 1 a if k = 2 f k−1 + a if k is odd and 3 ≤ k ≤ 2r + 1 f k−1 + (m − 1)(2a − 1) − a if k is even and 3 < k < 2r + 1 f k−1 + (m − 1)(2a − 1) if 2r + 1 < k ≤ 3r The coin system F has pattern (+ + + − · · · − +). Finally, we show that the full coin system F is orderly. Suppose that the coin system is not orderly. We use Theorem 2.3 to find the smallest counterexample. We find the greedy solution for f k − 1 for all k ≥ 2 and express it as a vector with respect to the coin system F . 
In the vectors below, the asterisk (*) denotes that the value is in the k − 1 st entry. 1, 0, . . . , 0) F if k = 2 (a − 1, 0, 0, . . . , 1 * , . . . , 0) F if k is odd, 3 ≤ k ≤ 2r + 1 (a − m, 0, m − 1, 0 . . . , 0) F if k = 4 (a − m, 0, m − 2, 0 . . . , 1 * , . . . , 0) F if k is even, 6 ≤ k < 2r + 1 (a − m, 1, m − 2, 0, . . . , 1 * , . . . , 0) F if 2r + 1 < k ≤ 3r (9) f k − 1 =                (a − As before, for each 1 ≤ p ≤ k − 1, we set the first p entries of the above vectors to 0 and add 1 to the p + 1 entry. Similar to the previous theorem, the case k = 2 yields nothing. For the second vector, the resulting vector has only two 1's for all such p, which corresponds to the value f i + f j . The same is true for the last three vectors when p ≥ 3. To settle these cases, we find the greedy solution for f i + f j for all 2 ≤ i, j < 3r. Suppose 2 ≤ i, j ≤ 2r + 1 and m > 2. The greedy solution for f i + f j is: (10) f i + f j =                    f i+j−1 if i and j even and i + j ≤ 2r + 2 f r+(i+j)/2 if i and j even and i + j > 2r + 2 f i+j−3 + f 3 if i and j odd and i + j ≤ 2r + 4 f r−1+(i+j)/2 + f 3 if i and j odd and i + j > 2r + 4 f i+j−2 + f 2 if i is even and j is odd and i + j ≤ 2r + 3 f r+(i+j−1)/2 + f 2 if i is odd and j is even and i + j > 2r + 3 We now show that none of the values given above as candidates for the minimum counterexample are, in fact, counterexamples. If m = 2, then the greedy solutions in each of the three cases are optimal, since each solution has only 1 or 2 coins. For m > 2, a total of m coins are in the greedy solution, and we must work carefully to show that the solution is optimal. Observe that all the greedy solutions above are of the form f k + (m − 1)f 1 for some value 4 ≤ k < 3r. So it remains to show that an optimal solution for f k + (m − 1)f 1 has m coins. Suppose that there exists a k such that an optimal solution f k + (m − 1)f 1 has fewer than m coins. 
Take the smallest such value, which we denote by v = f k + (m − 1)f 1 . By Theorem 2.13, an optimal solution for v does not use f 1 or f k . We know that an optimal solution for v uses at most one coin f j where 2r + 1 < j < 3r because the sum of any two such coins is larger than f 3r + m − 1 and v ≤ f 3r + m − 1. An optimal solution for v has at most one even-indexed coin f j where 2 ≤ j < 2r + 1, because if there were two such coins f i and f j , then they could be replaced with a single coin (either f i+j−1 or f r+(i+j)/2 , see Equation 10), which contradicts the optimality of the solution. If an optimal solution for v contains an even indexed coin f j where 3 < j < 2r+1, then by combining that coin with any other coin in the solution, we can trade the coin f j and the other coin f i for f 2 and a larger indexed coin (see Equations 10 and 11). Therefore, without loss of generality, the optimal solution for v either has no even-indexed coins f j where 2 ≤ j < 2r + 1 or else contains only f 2 . Next, if an optimal solution contains two coins f i and f j where i and j are odd and 3 < i, j ≤ 2r + 1, then we can trade them in for f 3 and another coin f r (again, see Equations 10 and 11). Therefore without loss of generality, the optimal solution has at most one coin f i with odd index 3 < i ≤ 2r + 1. If an optimal solution for v has both a coin of value f i where i is odd 3 ≤ i ≤ 2r+1 and a coin of value f j where 2r + 1 < j ≤ 3r, we can exchange the pair of coins for a different pair of coins f q and f 3 where 2r + 1 < q ≤ 3r (Equations 11). Therefore without loss of generality, the optimal solution for v has at most one coin with index 3 < j ≤ 3r. 
Putting all of these observations together, we conclude that any optimal solution for v can be exchanged for another optimal solution fitting one of the following six scenarios:
(1) v = xf 3 , where 0 < x ≤ m − 1
(2) v = f 2 + yf 3 , where 0 < y ≤ m − 2
(3) v = f 2 + xf 3 + f j , where j is odd, 3 ≤ j ≤ 2r + 1, and 0 < x ≤ m − 3
(4) v = yf 3 + f j , where j is odd, 3 ≤ j ≤ 2r + 1, and 0 < y ≤ m − 2
(5) v = f 2 + xf 3 + f j , where 2r + 1 < j ≤ 3r, and 0 < x ≤ m − 3
(6) v = yf 3 + f j , where 2r + 1 < j ≤ 3r, and 0 < y ≤ m − 2
Now for (1)-(2), we compute: f 3 < v ≤ f 3 (m − 1) ≤ f j + (2a − 1)(m − 1) + a + m − 1 − a = f 4 + m − 1 − a < f 4 . And for (3)-(6), we compute: f j < v ≤ f j + 2a(m − 2) ≤ f j + (2a − 1)(m − 1) − 2a + m − 1 < f j+1 . In all cases, v falls between consecutive coin values, so we conclude that f 3 (respectively, f j ) is a coin value in the greedy solution for v. This is a contradiction: on one hand, v was the minimum counterexample (and as such, the optimal solutions for v use different coin values from the greedy solution), but on the other hand, we showed that a coin in an optimal solution for v is also in the greedy solution. Hence there is no minimal counterexample. Therefore the coin system F is orderly.

The coin system C = (1, c 2 , c 3 ) is orderly if and only if The coin system C = (1, c 2 , c 3 , c 4 ) is orderly if and only if it is totally orderly.

Theorem 2.10. [1, 8] The coin system C = (1, c 2 , c 3 , c 4 , c 5 ) is orderly if and only if one of the following holds: (1) (1, c 2 , c 3 , c 4 , c 5 ) = (1, 2, a, a+1, 2a) for some a ≥ 4, in which case (1, c 2 , c 3 , c 4 )

Theorem 2.12. [1] Suppose C = (1, c 2 , c 3 , . . . , c n ) is orderly. If m ≥ 3 and c m−1 > 2c m−2 and c m > 2c m−1 , then for every t ≥ m, we have c t+1 − c t ≥ c m − c m−1 .

Case 2: j = 3. In this case, we must have k = 4.
Considering the possibilities i = 0, 1, 2, we know c 4 − c 3 equals one of the elements in the set {c 3 , c 3 − 1, c 3 − c 2 }. Case 3 : j = 2 . 32In this case, the possibilities for i and k and the subsequent equationc 2 − c i = c k − c 2 are: (1) i = 0, k = 3, and c 2 = c 3 − c 2 (2) i = 1, k = 3, and c 2 − 1 = c 3 − c 2 (3) i = 0, k = 4, and c 2 = c 4 − c 2 (4) i = 1, k = 4, and c 2 − 1 = c 4 − c 2Option (1) above implies that c 3 = 2c 2 . Plugging the values for i, j, k into the system of equations, we also have c 5 = c 4 + c 2 and c 6 = 2c 4 . Thus the coin system is in the form of the second possibility in the statement of the lemma.Option(2)above implies that c 3 = 2c 2 − 1. Moreover, observe that c 3 = 2c 2 − 1 is possible only if c 2 ≥ 3. For, if c 2 = 2, then there is no value of c 4 for which the 4-value coin system is not orderly. Plugging the values for i, j, k into the system of equations, we also have c 5 = c 4 + c 2 − 1 and c 6 = 2c 4 − 1. Thus the coin system is in the form of the first possibility in the statement of the lemma. Theorem 4 . 3 . 43Let r ≥ 1 and a ≥ 2. Define D = (1, d 2 , . . . , d 3r+2 ) as follows: Example 4 . 4 . 44For r = 3 and a = 3, the coin system is Proof. The 3-coin prefix coin system (d 1 , d 2 , d 3 ) = (1, 2, a + 2) is totally orderly for all values of a ≥ 2 by Theorem 2.8 where d 3 − d 2 = a. The 4-coin prefix coin system (1, 2, a + 2, a + 3) is not orderly because 2a + 4 is a counterexample. From Lemma 4.2, we know all prefix coin systems of length i where 5 ≤ i ≤ 3r + 1 are not orderly. Example 4. 6 . 6For r = 4, m = 3, and a = 4, the coin system is E = (1, 4, 7, 18, 21, 32, 35, 46, 49, 63, 77, 91). Proof. The 3-value prefix coin system (e 1 , e 2 , e 3 ) = (1, a, 2a − 1) is totally orderly by Theorem 2.8 where e 3 − e 2 = a − 1. The 4-coin prefix coin system (1, a, 2a − 1, m(2a − 1) − (a − 1)) where 1 < m < a is not orderly because m(2a − 1) is a counterexample. 
From Lemma 4.2, we know all prefix coin systems of length i where 5 ≤ i ≤ 3r − 1 are not orderly. Example 4. 8 . 8For r = 4, m = 3, and a = 4, the coin system is F = (1, 4, 8, 18, 22, 32, 36, 46, 50, 64, 78, 92). Proof. The 3-coin prefix coin system (f 1 , f 2 , f 3 ) = (1, a, 2a) is totally orderly by Theorem 2.8 where f 3 − f 2 = a. The 4-coin prefix coin system (1, a, 2a, (m − 1)(2a − 1) + a) where 1 < m ≤ a is not orderly because 2am is a counterexample. From Lemma 4.2, we know all prefix coin systems of length i where 5 ≤ i ≤ 3r − 1 are not orderly. Other terms used in place of orderly in the literature are canonical, greedy, and standard. Suppose 2r + 1 < j < 3r and m > 2. Then we have:(11)if 2 ≤ i ≤ 2r + 1 is odd and i + 2j ≤ 6r + 3 f 3r + f i+2j−6r if 2 ≤ i ≤ 2r + 1 is odd and i + 2j > 6r + 3if 2r + 1 < i < 3r and i + j > 5r + 1From this, we see that if m > 2, then f i + f j is not a counterexample no matter the values of 3 ≤ i, j < 3r, because the greedy solutions given above always has 1 or 2 coins.In the special case m = 2, the above equations require three adjustments. Theand (c) f j+(i−3)/2 +f 3 above need appropriate adjustments. Everything else remains as is, and again we conclude that f i + f j is not a counterexample. Now we consider the remaining special cases. Namely, we consider the last three vectors for f k − 1 listed inEquation 9in the case that p = 1 and p = 2. First we simply work out what candidates for mimimum counterexample are generated from them. In fact, almost all the candidates will a greedy solution in the format of f k + (m − 1)f 1 for some k.We Next, consider f k − 1 where k is even and 6 ≤ k < 2r + 1.(Again, see Equation 9for the vector representation of f k − 1.) The vectors associated to p = 1 and p = 2 and the resulting candidates for counterexamples are below. An asterisk (*) denotes that the value is in the k − 1 st entry.Vector representationCandidate for Greedy solutionNow consider f k − 1 where 2r + 1 < k < 3r. 
The vectors associated to p = 1 and p = 2 and the resulting candidates for counterexamples are below.

Vector representation | Candidate for counterexample | Greedy solution
(0, 2, m − 2, 0, . . . , 1 * , . . . , 0) F | f k−1 + 2am − 2a | f k + (m − 1)f 1
(0, 0, m − 1, 0, . . . , 1 * , . . . , 0) F | f k−1 + 2am − 2a | f k + (m − 1)f 1

References

[1] Anna Adamaszek and Michal Adamaszek. Combinatorics of the change-making problem. European J. Combin., 31(1):47-63, 2010.
[2] Valentina Cacchiani, Manuel Iori, Alberto Locatelli, and Silvano Martello. Knapsack problems - an overview of recent advances. Part I: Single knapsack problems. Comput. Oper. Res., 143: Paper No. 105692, 13 pages, 2022.
[3] Valentina Cacchiani, Manuel Iori, Alberto Locatelli, and Silvano Martello. Knapsack problems - an overview of recent advances. Part II: Multiple, multidimensional, and quadratic knapsack problems. Comput. Oper. Res., 143:105693, 2022.
[4] Xuan Cai. Canonical coin systems for change-making problems. In 2009 Ninth International Conference on Hybrid Intelligent Systems, volume 1, pages 499-504, 2009.
[5] Timothy M. Chan and Qizheng He. More on change-making and related problems. Journal of Computer and System Sciences, 124:159-169, 2022.
[6] Lena Chang and James F. Korsh. Canonical coin changing and greedy solutions. J. ACM, 23(3):418-422, 1976.
[7] S. K. Chang and A. Gill. Algorithmic solution of the change-making problem. J. ACM, 17(1):113-122, 1970.
[8] L. J. Cowen, Robert Cowen, and Arthur Steinberg. Totally greedy coin sets and greedy obstructions. Electron. J. Combin., 15(1): Research Paper 90, 13 pages, 2008.
[9] Steffen Goebbels, Frank Gurski, Jochen Rethmann, and Eda Yilmaz. Change-making problems revisited: a parameterized point of view. J. Comb. Optim., 34(4):1218-1236, 2017.
[10] T. C. Hu and M. L. Lenard. Optimality of a heuristic solution for a class of knapsack problems. Operations Res., 24(1):193-196, 1976.
[11] John Dewey Jones. Orderly currencies. Amer. Math. Monthly, 101(1):36-38, 1994.
[12] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack problems. Springer-Verlag, Berlin, 2004.
[13] Dexter Kozen and Shmuel Zaks. Optimal bounds for the change-making problem. Theoret. Comput. Sci., 123(2):377-388, 1994.
[14] M. J. Magazine, G. L. Nemhauser, and L. E. Trotter, Jr. When the greedy solution solves a class of knapsack problems. Operations Res., 23(2):207-217, 1975.
[15] Silvano Martello and Paolo Toth. Knapsack problems: algorithms and computer implementations. John Wiley & Sons, Inc., 1990.
[16] Stephen B. Maurer. Comment: "Orderly currencies" [Amer. Math. Monthly 101 (1994), no. 1, 36-38] by J. D. Jones. Amer. Math. Monthly, 101(5):419, 1994.
[17] Ryuhei Miyashiro and Yuma Suzuki. Characterization of canonical systems with six types of coins for the change-making problem. URL: https://arxiv.org/abs/2111.12392, 2021.
[18] David Pearson. A polynomial-time algorithm for the change-making problem. Oper. Res. Lett., 33(3):231-234, 2005.
[19] B. N. Tien and T. C. Hu. Error bounds and the applicability of the greedy solution to the coin-changing problem. Operations Res., 25(3):404-418, 1977.
DOI: 10.1609/aaai.v32i1.11332
arXiv: 1801.10314 (PDF: https://arxiv.org/pdf/1801.10314v2.pdf)
Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph

Amrita Saha (IBM Research AI; I.I.T. Madras, India) [email protected]
Vardaan Pahuja (MILA, Université de Montréal) [email protected]
Mitesh M. Khapra (I.I.T. Madras, India) [email protected]
Karthik Sankaranarayanan (IBM Research AI)
Sarath Chandar (MILA, Université de Montréal) [email protected]

Abstract

While conversing with chatbots, humans typically tend to ask many questions, a significant portion of which can be answered by referring to large-scale knowledge graphs (KG). While Question Answering (QA) and dialog systems have been studied independently, there is a need to study them closely to evaluate such real-world scenarios faced by bots involving both these tasks. Towards this end, we introduce the task of Complex Sequential QA which combines the two tasks of (i) answering factual questions through complex inferencing over a realistic-sized KG of millions of entities, and (ii) learning to converse through a series of coherently linked QA pairs. Through a labor intensive semi-automatic process, involving in-house and crowdsourced workers, we created a dataset containing around 200K dialogs with a total of 1.6M turns. Further, unlike existing large scale QA datasets which contain simple questions that can be answered from a single tuple, the questions in our dialogs require a larger subgraph of the KG. Specifically, our dataset has questions which require logical, quantitative, and comparative reasoning as well as their combinations. This calls for models which can: (i) parse complex natural language questions, (ii) use conversation context to resolve coreferences and ellipsis in utterances, (iii) ask for clarifications for ambiguous queries, and finally (iv) retrieve relevant subgraphs of the KG to answer such questions.
However, our experiments with a combination of state of the art dialog and QA models show that they clearly do not achieve the above objectives and are inadequate for dealing with such complex real world settings. We believe that this new dataset coupled with the limitations of existing models as reported in this paper should encourage further research in Complex Sequential QA.

Introduction

In recent years there has been an increased demand for AI driven personal assistants which are capable of conversing coherently with humans. Such personal assistants could benefit from large scale knowledge graphs which contain millions of facts stored as tuples of the form {predicate, subject, object} (for example, {director, Titanic, James Cameron}). Such knowledge graphs can indeed be handy when the bot is used in specific domains such as education, entertainment, sports, etc., where it is often required to answer factual questions while being aware of the context of the conversation.

While Question Answering (Voorhees and Tice 2000; Wang, Smith, and Mitamura 2007; Yang, Yih, and Meek 2015; Berant et al. 2013; Bordes et al. 2015; Rajpurkar et al. 2016; Nguyen et al. 2016; Onishi et al. 2016; Richardson, Burges, and Renshaw 2013) and Conversation Systems (Ritter, Cherry, and Dolan 2010; Lowe et al. 2015; Banchs 2012; Bordes and Weston 2016) have received a lot of attention in the recent past, we would like to focus on the real life settings encountered by chatbots which involve a combination of QA and dialog. Specifically, we are interested in building systems which can learn to converse over a series of coherently linked questions that can be answered from a large scale knowledge graph. We refer to this task as Complex Sequential Question Answering (CSQA). Needless to say, CSQA is very different from the kind of conversations found in existing dialog datasets such as the Twitter (Ritter, Cherry, and Dolan 2010), Ubuntu (Lowe et al. 2015) and Movie Subtitles (Banchs 2012) datasets.
Table 1 shows an example of one such conversation from our dataset containing a series of questions. Note that to answer the question in Turn 11, the bot needs to remember that the question involves the same predicate ('diplomatically related') as the previous question, but with a different subject ('Australia'). In other words, it is difficult to answer this question without retaining the context of the conversation. Further, in a natural conversation, some of the questions may require co-reference resolution (as in Turn 2), ellipsis resolution (as in Turn 11), etc. Finally, in some cases the question could be ambiguous (as in Turn 2), in which case the bot needs to ask for clarifications keeping in mind other entities and relations which were previously mentioned in the conversation.

While the example in Table 1 already highlights some of the challenges involved in CSQA, we now discuss an orthogonal set of challenges which arise from the complexity of the questions. Existing datasets for Factual QA (Berant et al. 2013; Bordes et al. 2015; Yang, Yih, and Meek 2015) contain simple questions which can be answered from a single tuple in the KG. However, in a real-life setting, a bot could encounter more complex questions requiring logical, quantitative and comparative reasoning. Table 2 shows some examples of such questions. It should be clear that unlike simple questions, which can be answered from a single tuple, these questions require a larger subgraph of the KG. For example, to answer the question "Which rivers flow through India and China ?" one needs to find (i) the set of rivers flowing through India, (ii) the set of rivers flowing through China and finally (iii) the intersection between these two sets. Answering such questions requires models which can parse complex natural language questions, retrieve relevant subgraphs of the KG and then perform some logical, comparative and/or quantitative operations on this subgraph.
Also, the Knowledge Graph used in our work is orders of magnitude larger than those used in some existing works (Bordes et al. 2015; Dodge et al. 2015; Fader, Soderland, and Etzioni 2011) which lie at the intersection of QA and dialog.

Having motivated the task of CSQA and highlighted its differences from existing work on dialog and QA, we now briefly describe the process used for creating our dataset. As mentioned earlier, a KG contains tuples of the form {predicate, subject, object}. For each of the 330 predicates in our Knowledge Graph, we first asked workers on Amazon Mechanical Turk to create questions containing the predicate and the subject (or object) such that the answer to that question is the object (or subject). These questions are complete in the sense that they do not have any ambiguity and can be answered in isolation without requiring any additional context. We then asked in-house annotators to create multiple templates for generating a conversation comprising connected question answer pairs. Two question answer pairs are said to be connected if they contain the same predicate, subject or object. We then asked workers to make modifications to these questions so as to introduce challenges like co-references, ellipsis, incompleteness (or under-specification) and contextual dependence. We also solicited templates and modifications to add logical, comparative and quantitative operators to the questions obtained above. This results in a dataset which contains conversations of the form shown in Table 1.

The objective of this work is twofold: (i) to introduce the task of Complex Sequential QA and (ii) to show the inadequacy of current state of the art QA and dialog methods to deal with such tasks. Towards the second objective, we propose a model for CSQA which is a cross between a state of the art hierarchical conversation model (Serban et al. 2016a) and a key value based memory network model for QA (Miller et al. 2016).
Through our experiments, we demonstrate the inadequacy of these models and highlight specific challenges that need to be addressed. It is also worth mentioning that the unambiguous (context independent) questions which appear in our dataset (typically, at the start of the conversation) can also be used for studying Complex Question Answering (as opposed to Simple Question Answering) in isolation, ignoring the dialog context. This will help to independently push the state of the art in Complex QA.

Related Work

Our work lies at the intersection of Question Answering and Dialog Systems. Question Answering has always been of interest to the research community, starting from early TREC evaluations (Voorhees and Tice 2000). Over the years various datasets and tasks have been introduced to advance the state of the art in QA. These datasets can be divided into 5 main types: (i) TREC style Open Domain QA (Voorhees and Tice 2000; Wang, Smith, and Mitamura 2007; Yang, Yih, and Meek 2015), where the aim is to answer a question from a collection of documents, (ii) factoid QA over structured knowledge graphs (Berant et al. 2013; Bordes et al. 2015; Serban et al. 2016b), (iii) reading comprehension style QA (Rajpurkar et al. 2016; Nguyen et al. 2016), (iv) cloze style QA (Mostafazadeh et al. 2016; Onishi et al. 2016) and (v) multiple choice QA (Richardson, Burges, and Renshaw 2013). Of the above QA tasks, factoid QA is most relevant to us as the questions in our CSQA dataset are factoid questions. Existing factoid QA datasets contain Simple Questions which can be answered from a single tuple in the knowledge graph. Specifically, unlike our dataset, none of the existing datasets contain complex questions requiring logical, quantitative and comparative reasoning involving larger subgraphs of the KG as opposed to a single tuple.
Solutions to the simple QA task range from semantic parsing based methods (Berant and Liang 2014; Fader, Zettlemoyer, and Etzioni 2014) to embedding based methods (Bordes, Chopra, and Weston 2014; Yang et al. 2014) and state of the art Memory Network based architectures (Miller et al. 2016; Kumar et al. 2016). In this work, we experiment with Memory Network based architectures and make a case for the need of better architectures when going beyond simple questions.

Since we are interested in CSQA, which contains a series of QA pairs over a coherent conversation, we also review some related work on dialog systems. Over the past few years three large scale dialog datasets, viz., Twitter-Conversations (Ritter, Cherry, and Dolan 2010), the Ubuntu Dialogue Corpus (Lowe et al. 2015) and the Movie-Dic Corpus (Banchs 2012), have become very popular. However, none of these datasets have the flavor of CSQA and there is no explicit Knowledge Graph associated with the conversations. Here again, neural network based (hierarchical) sequence to sequence methods (Luong et al. 2015; Serban et al. 2016a; Serban et al. 2017) have become the de facto choice. Recently, (Bordes and Weston 2016) proposed a dataset which contains knowledge graph driven goal oriented dialogs for the task of restaurant reservation. However, the size of the KG here is very small (<10 cuisines, locations, ambience, etc.) and the dialog contains very few states. (Dodge et al. 2015) also use a dataset for QA and recommendation but, unlike our dataset, their dataset does not contain coherently linked question answer pairs. Further, the KG is again much smaller (75K entities and 11 relations). Recently, (Neelakantan et al. 2016) have explored complex question answering over around 18.5K queries from the WikiTableQuestions dataset, but their tables have fewer than 100 rows and a handful of columns, whereas our complex QAs are grounded in a KB of over 20 million tuples. Further, their dataset does not have a conversational aspect.
Dataset Creation

Our aim is to create a dataset which contains a series of linked QA pairs forming a coherent conversation. Further, these questions should be answerable from a Knowledge Graph using logical, comparative and/or quantitative reasoning. We started by asking pairs of in-house annotators to converse with each other. One annotator in the pair acted as a user whose job was to ask questions, and the other annotator acted as the system whose job was to answer the questions or ask for clarifications if required. Note that these annotators were Computer Science graduates who understood the concepts of knowledge graph, subgraph, tuples, subject, object, relation, etc. The idea was to use the in-house annotators to understand the types of simple and complex questions that can be asked over a knowledge graph. These could then be abstracted to templates and used to instantiate more questions involving different relations, subjects and objects. Similarly, we also wanted to understand the types of coreferences, ellipses, etc. used by users when asking linked questions over a coherent conversation. These could again be abstracted to templates and used to link individual QA pairs to form a coherent dialog. In the remainder of this section we describe (i) the knowledge graph supporting our CSQA, (ii) the simple question templates suggested by the in-house annotators, (iii) the complex question templates and finally (iv) the linked conversation templates and the process used to instantiate around 200K dialogs containing 1.6 million linked QA pairs.

Knowledge Graph

As our KG, we used Wikidata, which stores facts in the form of tuples containing a relation, a subject and an object. For example, (rel: capital, subj: India, obj: New Delhi) is a tuple in Wikidata. Each entity (subject or object) is associated with an entity type. For example, in the above tuple, India is an entity of type country and New Delhi is an entity of type city.
We use the Wikidata dump of 14-Nov-2016, which contains 5.2K relations, 12.8M entities and 52.3M facts. Of these 5.2K relations, we retain only 330 meaningful ones. Specifically, we ignore relations such as "ISO 3166-1 alpha-2 code", "NDL Auth ID", etc., as we do not expect users to ask questions about such obscure relations. Similarly, of the 30.8K unique entity types in Wikidata, we selected 642 types (considering only immediate parents of entities) which appeared in the top 90 percentile of the tuples associated with at least one of the retained meaningful relations. In effect, there were around 21.2M such tuples containing only the filtered relations and entity types. The total number of unique entities in these filtered tuples is 12.8M, out of which 3.8M appear in at least 3 tuples.

Simple Questions

For discovering simple question templates, we asked the annotators to come up with questions which can be answered from a single tuple in the knowledge graph. The annotators suggested that for a given tuple (say, rel: CEO, subj: Google, obj: Sundar Pichai) there are mainly 3 types of simple questions that can be generated:

1. Object based questions: Here the question contains the relation and the subject from a tuple and the answer is the tuple's object. For example, "Q: Who is the CEO (relation) of Google (subject) ? A: Sundar Pichai (object)".

2. Subject based questions: Here the question contains the relation and the object from a tuple and the answer is the tuple's subject. For example, "Q: Which company is Sundar Pichai (object) the CEO of (relation) ? A: Google (subject)".

3. Relation based questions: Here the question contains the subject and the object from a tuple and the answer is the tuple's relation. For example, "Q: How is Sundar Pichai (object) related to Google (subject) ? A: CEO (relation)".

During our discussions, we found that in many cases, relation based questions do not make a lot of sense.
For example, it is unnatural for someone to ask the question "Q: How is Himalayas related to India? A: located in". Hence, in this work we focus only on object based and subject based questions. Note that in some cases the question could have multiple correct answers, i.e., there are multiple tuples related to the question. For example, "Q: Which rivers flow through India ? A: Ganga, Yamuna, Narmada, ...". Note that even though these questions can be answered from multiple tuples, they are still simple questions because they do not require any joint reasoning over multiple tuples.

Crowdsourced question generation: Based on this initial pilot with in-house annotators, we then requested workers on AMT to create subject based and object based questions for each of the 330 relations in our KG. For creating subject based questions the annotators were shown (i) the object, (ii) the relation, (iii) the type of the subject associated with that tuple and (iv) a few sample tuples. Note that the subject type is important, as the annotator needs to look at the subject type (city) to form a question such as "Which city is the capital of India ?". This matters because some relations (for example, the relation tributary) can have multiple subject and object types, as shown below:

1. subj: Spring Creek (type: river), obj: Lake Ilsanjo (type: lake)
2. subj: Spring Creek (type: river), obj: Matanzas Creek (type: stream)

It should be obvious that even for the same relation, different combinations of subjects and objects should result in different questions. For example, "Which lake is a tributary of Spring Creek?" vs. "Which river is a tributary of Spring Creek?". Note that, on average, each relation in our KG was associated with 5 subject types and 6 object types. We first asked a set of workers to create one subject based and one object based question for each relation. We then asked a separate set of annotators to create paraphrases of these questions.
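As a rough illustration (the tuples, type map and template wording below are invented for this sketch, not drawn from the released templates), instantiating a subject based template from typed KG tuples might look like:

```python
# Each KG fact is (relation, subject, object); entity_type maps an
# entity to its type, which fills the <subject-type> slot of a template.
facts = [
    ("capital", "India", "New Delhi"),
    ("capital", "France", "Paris"),
]
entity_type = {"India": "country", "France": "country"}

# A subject based template mentions the relation and the object;
# the answer is the tuple's subject.
template = "Which {subject_type} has {obj} as its capital ?"

def instantiate(facts):
    qa_pairs = []
    for relation, subj, obj in facts:
        if relation != "capital":  # this sketch handles one relation only
            continue
        question = template.format(subject_type=entity_type[subj], obj=obj)
        qa_pairs.append((question, subj))  # the subject is the answer
    return qa_pairs

for q, a in instantiate(facts):
    print(q, "->", a)
# Which country has New Delhi as its capital ? -> India
# Which country has Paris as its capital ? -> France
```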
In all, we collected 1531 subject based and 1450 object based question templates (including paraphrases) through this process. Once we get a template, we can instantiate it with different entity types and entities to create many questions. For example, given the template "Which <water course> is located in <country> ?" we can instantiate it by replacing water course by its sub-types (river, lake, etc.) and by replacing country by entities of that type (U.S., India, etc.). This gives us a semi-automatic way of creating many questions from the collected templates. Note that the question templates also contain paraphrases, so we have different ways of asking the same question.

Complex Questions

Next we wanted the annotators to help us identify types of questions which require logical, comparative and quantitative reasoning over a larger subgraph of the KG.

Logical Reasoning: These are questions which require some logical inferencing over multiple tuples in the KG. For example, consider the question "Which rivers flow through India and China ?" To answer this question we first need to create two sets: (i) a set A containing rivers appearing in tuples of the form (flows through, India, river) and (ii) a set B containing rivers appearing in tuples of the form (flows through, China, river). The final answer to the question is then the intersection of these two sets. It should be obvious that answering such questions is more difficult than the Simple Questions studied in literature so far (and as described in the previous section). The annotators came up with questions involving different logical operators such as AND, OR, NOT, etc. (see Table 2). They also suggested some templates for creating such logical reasoning questions from the simple questions that we had already collected (as described in Section 3.1).
For example, one such template was to take a simple object based question such as "Which rivers flow through India" and augment it with another subject such as "and China". Similar templates and paraphrases were suggested for other operators such as OR, NOT, etc., for both subject based and object based questions. This allowed us to semi-automatically create many questions requiring logical reasoning. This process is semi-automatic because once a template is created, we instantiate it for multiple tuples (as explained earlier) and then manually verify a subset of these questions to check whether they are syntactically and semantically correct. Note that some of the logical reasoning questions suggested by the annotators contained multiple relations. For example, the question "Which river flows through India and has its source in Himalayas?" requires a logical operation over two relations, viz., flows through and source.

Quantitative Reasoning: These questions require some quantitative reasoning involving standard aggregation functions like max, min, count, at least / at most / approximately / equal to N, etc. We refer the reader to Table 2 for examples of different types of quantitative questions. Once again, with the help of in-house annotators we identified several templates for modifying the simple questions that we had already collected and creating quantitative reasoning questions involving different aggregation operators. For example, one such template was to take the object based question "Which rivers flow through India" and replace "Which" by "How many". In fact, we found this particular template to be so convenient that for every relation, we asked the workers to give us at least one simple question which starts with "Which subject-type ... ". Some of these simple questions starting with "Which subject-type ... " look a bit unnatural, but we made a conscious choice to allow this because it simplifies the process of creating complex questions. We also created questions which require quantitative reasoning on top of logical reasoning. For example, "How many rivers flow through India but not through China ?".
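The set semantics sketched above (intersection for AND, union for OR, difference for NOT, and counting on top of a logical operation) can be illustrated over a toy KG; the facts below are a small hand-made sample, not the actual Wikidata tuples:

```python
# Toy KG: (relation, subject, object) triples.
facts = [
    ("flows_through", "India", "Ganga"),
    ("flows_through", "India", "Brahmaputra"),
    ("flows_through", "China", "Brahmaputra"),
    ("flows_through", "China", "Yangtze"),
]

def objects(relation, subject):
    """All objects o with a tuple (relation, subject, o) in the KG."""
    return {o for r, s, o in facts if r == relation and s == subject}

india = objects("flows_through", "India")
china = objects("flows_through", "China")

print(india & china)       # AND: rivers flowing through both countries
print(india | china)       # OR: rivers flowing through either country
print(india - china)       # NOT: through India but not through China
print(len(india - china))  # quantitative on top of logical: a count
```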
We also created questions which require quantitative reasoning on top of logical reasoning. For example, "How many rivers flow through India but not through China ?". Comparative Reasoning: These are questions which require a comparison between entities based on certain relations (predicates). For example, consider the question "Which countries have more number of rivers than India ?". This requires inference over multiple tuples in the KG. The model here essentially needs to learn the count, sort and more/less operations. Such questions could also involve multiple entity types. For example the question "Which countries have more lakes and rivers than India ?" involves two entity types (lakes, rivers). Finally, we could have questions which require a counting type quantitative reasoning on top of comparative reasoning. For example, "How many countries have more rivers than India ?" requires counting after comparing. These questions were created by modifying the simple questions, using the rules of transformation given by our annotators. Note that in all of the above cases, once the annotator suggests a modification, we can apply that modification and its paraphrases to multiple tuples to get many questions. Further, after instantiating we retain only those Qs which have less than 1000 answers. Linked Sequential QA So far we have described the process of collecting individual QA pairs containing various types of questions. We are now interested in creating coherent conversations involving such QA pairs. We can think of such a conversation as a walk over the Knowledge Graph using QA pairs such that subsequent questions refer to subjects, objects or relations which have appeared previously in the conversation. More specifically, such conversations should have the following properties (i) subsequent QA pairs should be linked and (ii) the conversation should contain typical elements of a dialog such as coreferences, ellipses, clarifications, confirmation, etc. 
The process of connecting linked QA pairs in a coherent conversation can be thought of as performing a systematic walk over a Knowledge Graph. Simply stated, two questions can be placed next to each other in a conversation if they share a relation or an entity. However, bringing factors such as ambiguity, underspecification or coreference into the conversation requires manual effort. For this, we again requested in-house annotators to create templates for converting the simple or complex questions described above into conversational questions. For example, one such template was to take a simple question such as "Which rivers flow through India ?" and replace the subject by "that subject-type", or "that country" in this case. Multiple such templates were created and refined for the different question types described in the earlier sections. This was a labor intensive, tedious process requiring several iterations. Some templates were also collected using crowdsourcing on AMT. We refer to such questions as Indirect questions, as opposed to Direct questions which are fully specified and do not indirectly refer to some entity or relation from the earlier conversation. The in-house annotators also suggested some clarification templates which involved asking questions containing coreferences that could resolve to more than one of the previously mentioned entities. Turn 2 in Table 1 shows one such example. The information in this question is not enough to answer it, and hence the system needs to ask for a clarification. Note that whenever we use linking, we only link consecutive questions and not arbitrary questions in the sequence (i.e., the i-th question can be linked to the next or previous question but not to arbitrary questions appearing before or after it). Through the above process, involving a mix of manual work (crowdsourced and in-house) and semi-automatic instantiation, we created a dataset containing 200K dialogs and a total of 1.6M turns.
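One of the transformation templates described above, replacing an entity already mentioned in the dialog with "that <entity-type>", can be sketched as follows (a deliberately simplified, hypothetical rule; the actual coreference insertion in the dataset was template- and annotator-driven):

```python
def make_indirect(question, prev_entities, entity_type):
    """Replace an entity already mentioned earlier in the dialog with
    a coreference of the form 'that <entity-type>'."""
    for entity in prev_entities:
        if entity in question:
            return question.replace(entity, "that " + entity_type[entity])
    return question  # no shared entity: keep the direct form

entity_type = {"India": "country"}
history_entities = ["India"]  # entities mentioned earlier in the dialog
follow_up = "Which rivers flow through India ?"
print(make_indirect(follow_up, history_entities, entity_type))
# Which rivers flow through that country ?
```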
Table 2 shows the number of templates for each question type and some sample types. Table 3 shows various statistics about the dataset, including the Train, Validation and Test splits. Note that we constructed the train, valid and test splits in such a way that the dialogs in the validation and test sets do not contain questions corresponding to tuples for which questions were seen at train time.

Some peculiar characteristics of Wikidata

We found that Wikidata has some typical characteristics, predicates, subject types and object types which often lead to very unnatural questions. We list some of these issues below:

• Very generic predicates: Consider the relation lake outflow, for which the annotators suggested the question "Which <object type> outflows from the lake YYY ?". This seems like a valid template, but it turns out that Wikidata also contains predicates of the form lake outflow(Dal Lake, evaporation), where evaporation is an outflow from the lake. Similarly, the relation fabrication method allows for methods used to grow, cook, weave, build, assemble or manufacture an item. Due to the presence of such very generic relations (which allow a wide range of object types), sometimes the questions instantiated from these templates may look very unnatural. In many cases, we manually tried to filter out such questions, but given the scale of the KB it was not always possible to do this. We expect some such noisy questions to be a part of the final dataset.

• Overlapping predicate and subject types: The word religion is both a predicate and a subject type in Wikidata. Similarly, sport is both a predicate and a subject type in Wikidata. This often leads to some questions containing repetitions (for example, "Which religion (subject type) is the religion (predicate) practised by YYY ?"). Again, we filtered out many such cases by applying some rule based post-processing after instantiating the templates, but we still expect a few of these to be present in the dataset.
• Long tail of subject types and relations: There are a few subject types in Wikidata which are very dominant. For example, a large number of entities in Wikidata belong to the sub-classes person and location. These subject types in turn are associated with a few dominant relations. For example, part-of is the predominant relation associated with almost all entities of type location. Similarly, citizen of, birth date and birth place are common relations associated with almost all entities of type person. Other relations such as named-after are a bit rare. Hence, when creating complex or linked questions connecting multiple entities and relations, some of the rarer relations do not show up frequently. Such long tail behavior, wherein some relations and predicates dominate, will be observed in any KB of a reasonable size and cannot really be avoided.

• Unnatural peer subject types: As per Wikidata, the subject types religion and social group are peers, as they are both sub-classes of belief system. As a consequence of this we have logical questions of the form "Which religions and social groups does YYY belong to?". We found this a bit odd, and we are not sure if an average user would consider these to be peers. These are special cases and are expected in any such large scale KB.

Proposed Model

Since CSQA involves a combination of dialog and QA, we propose a model which is a cross between (i) the HRED model (Serban et al. 2016a), which is one of the state of the art models for dialog systems, and (ii) the key value memory network model (Miller et al. 2016), which is a state of the art QA system. Our model has the following components:

1. Hierarchical Encoder: The model contains a lower level RNN encoder which goes over the words in an utterance and computes a representation for each utterance. This is followed by a higher level encoder which goes over these utterance representations and computes a representation q_1 for the context (current state of the dialog).

2.
Handling Large Vocabulary: As input to the above encoder, we provide pre-trained GloVe embeddings (Pennington, Socher, and Manning 2014) of the words in the question. However, our questions contain many entities (names, locations, etc.) for which pre-trained word embeddings are not available. Since these entities are crucial for answering the questions, we cannot treat them as unknown words. One option is to randomly initialize the embeddings of these entity words and then train them along with the other parameters of the model. This would effectively lead to a very large vocabulary and blow up the number of parameters. To avoid this, we use the state-of-the-art TransE method (Bordes et al. 2013) for learning embeddings of KG entities offline. More specifically, for entities such as India, China, Ganga, Himalayas, etc. which are present in the KG, we learn embeddings using the TransE model. We refer to these embeddings as KG embeddings. The final embedding of every question word is then a concatenation of the GloVe embedding (if available, 0s otherwise) and the KG embedding (if available, 0s otherwise).

3. Candidate generation: State-of-the-art memory network based methods (Miller et al. 2016) learn to compute an attention function over the tuples in the KG based on the given question (or dialog context, in our case). For large KGs, it is infeasible to compute the attention over the entire KG. Instead, following (Miller et al. 2016), we filter out tuples from the KG using longest-possible n-gram matching. We essentially consider only the longest n-gram which corresponds to the name of a KG entity and retain only those tuples where the entity appears as subject/object. We observed that even with this filtering, the average number of candidate tuples for a given question in our dataset can sometimes be very large. We return to this issue in the Discussions section.
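The candidate-generation step just described (take the longest question n-gram that names a KG entity, then keep only tuples in which that entity appears as subject or object) can be sketched roughly as below. The data structures here, a name-to-id map and a list of (subject, relation, object) triples, are hypothetical stand-ins and not the paper's actual implementation.

```python
def generate_candidates(question_tokens, entity_names, kg_tuples):
    """Return KG tuples whose subject or object matches the longest
    question n-gram that is the name of a KG entity."""
    n = len(question_tokens)
    # Scan n-grams from longest to shortest; stop at the first entity hit.
    for length in range(n, 0, -1):
        for start in range(n - length + 1):
            ngram = " ".join(question_tokens[start:start + length])
            if ngram in entity_names:
                entity = entity_names[ngram]
                return [t for t in kg_tuples
                        if t[0] == entity or t[2] == entity]
    return []

# Toy example with a hypothetical mini-KG.
entities = {"ganga": "Q5089", "india": "Q668"}
tuples = [("Q5089", "flows_through", "Q668"),
          ("Q668", "capital", "Q987")]
cands = generate_candidates("which rivers flow through india".split(),
                            entities, tuples)
```

Note that this simple exact-match scheme is precisely what the Discussions section later criticizes: it has no notion of entity or relation paraphrases.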
4. Key Value Memory Network: A key-value memory network stores each of the N candidate tuples (as selected above) as a key-value pair, where the key contains the concatenated embedding of the relation and the subject (denoted by φ_K(k_{h_i}) ∈ R^D for the i-th memory entry) whereas the value contains the embedding of the object (denoted by φ_V(v_{h_i}) ∈ R^D for the i-th memory entry). Here, the subject, object and relation embeddings are the TransE KG embeddings, as described above. The model makes multiple passes over the memory, computing new attention weights over the keys of the memory at each pass and updating the contextual question representation (q), whose initial representation q_1 ∈ R^d is computed by the hierarchical encoder. The rationale behind making multiple passes over the question is that the model may learn to focus on different aspects of the question in each pass. This is especially important in the case of complex questions. The following equation shows how the query representation gets updated in the j-th pass:

q_{j+1} = R_j ( q_j + Σ_{i=1}^{N} Softmax( q_j A φ_K(k_{h_i}) ) A φ_V(v_{h_i}) )    (1)

A ∈ R^{d×D} and R_1, ..., R_H ∈ R^{d×d} are the parameters of the key-value memory network and N is the number of candidate tuples.

The decoder must handle several answer types, including answers to clarification questions and lists of KG entities satisfying the query (e.g., Ganga, Narmada, Yamuna, ...), and so on. At a high level, we can say that the model always produces sequences, and in most cases the sequences will contain KG entities, whereas in some cases the sequences may contain counts, entity types (rivers, lakes, etc.) and non-KG words. We thus model the decoder as an RNN-based sequence generator which takes as input the modified query representation. At each time step it gives a softmax over a shortlisted vocabulary containing counts, yes/no and KG entity types, amounting to approximately 1500 words. Note that even though the model has to produce KG entities, we cannot include all KG entities in this vocabulary (as it will blow up the number of parameters).
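One attention pass of the update in equation (1) can be sketched in plain Python as below. The tiny dimensions are purely illustrative, and the matrices A and R_j (a single R here) are fixed to the identity for readability even though in the model they are trained.

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def kv_attention_pass(q, keys, values, A, R):
    """One memory pass: scores are q . A phi_K(k_i), alpha = softmax(scores),
    and the query is updated as R(q + sum_i alpha_i * A phi_V(v_i))."""
    scores = [sum(qc * ac for qc, ac in zip(q, matvec(A, k))) for k in keys]
    alphas = softmax(scores)
    ctx = [0.0] * len(q)
    for a, v in zip(alphas, values):
        Av = matvec(A, v)
        ctx = [c + a * x for c, x in zip(ctx, Av)]
    return matvec(R, [qi + ci for qi, ci in zip(q, ctx)]), alphas

# Toy example: two memory slots, identity A and R.
I = [[1.0, 0.0], [0.0, 1.0]]
q1 = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0]]
q2, alphas = kv_attention_pass(q1, keys, values, I, I)
```

With H passes, the output of one call is fed back in as the next query, which is how the model can shift its focus across passes.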
Instead, we train the decoder to produce the token KG_WORD whenever a KG entity needs to be produced in the output. We then use a copy mechanism to replace the KG_WORD with relevant entities. For example, if the decoder produces n KG_WORD tokens, then we use q_{H+1} to give a distribution over the entities in the candidate tuples and then replace each KG_WORD token in the output by the top n entities having the highest probability. The distribution over the candidate entities is computed as Softmax( q_{H+1} B φ_V(v_{h_i}) ), where B ∈ R^{d×D} is a parameter. The training loss is a sum of the cross-entropy loss over the tokens and the KG entities.

Results

We used Adam as the optimization algorithm and tuned the following hyperparameters using the validation set. We used Precision and Recall as the evaluation metrics, which capture the percentage of entities in the final decoder output that were correct and the percentage of actual entities that were retrieved by the system, respectively. For verification and count based questions, which produce a sequence of Yes and/or No or counts, we use accuracy as the evaluation metric (i.e., whether the count or boolean answer was exact or not). Finally, for questions which need clarification, the system has to generate a natural language response which is usually a sequence of KG entities and non-KG words; hence for that we separately report both Precision/Recall over the predicted KG entities and BLEU for the overall utterance similarity. The results of our experiments are summarized in Table 4.

Discussions

Based on the results in Table 4, we discuss some shortcomings of existing methods and suggest areas for future research.

1. Simple v/s Complex Questions: It is obvious that the model performs very poorly on complex questions as compared to simple questions. There are multiple reasons for this.
First, existing models do not really model an aggregate or logical function for handling quantitative, comparative and logical reasoning. Designing such aggregation functions for an end-to-end solution is non-trivial and needs further exploration. This dataset should provide a good benchmark for exploring such solutions for complex QA. Second, it is not clear if the existing encoders (HRED + KV-mem, in this case) are capable of effectively parsing complex questions and feeding a good representation to the decoder. For example, the encoder ideally needs to learn to break down the question "Which rivers flow through India and China?" into two parts, (i) "Which rivers flow through India?" and (ii) "Which rivers flow through China?", and then compute an attention over relevant tuples in the memory. Such parsing is not explicitly modeled by existing encoders. There is clearly a need for revisiting some of the traditional parsing based methods for QA in the light of this dataset.

2. Direct v/s Indirect Questions: Comparing the third and fourth rows of Table 4 with the second row, we see that the performance of the model drops when dealing with indirect or incomplete questions which rely on the context for resolving ellipsis, coreferences, etc. Even though current dialog systems (HRED, in this case) do learn to capture the context, one key challenge w.r.t. our dataset is that here named entities and relations matter more than other words in the context. We need better models which can explicitly learn to give importance to relations and entities (for example, using an explicit supervised attention mechanism).

3. Candidate Generation: This step is required to prune the size of the KG and store only relevant tuples in the memory. This step is a bit ad hoc, as it relies on n-gram matching, and we saw specific issues while using this on our dataset. We had explicitly asked the annotators to create paraphrases of the same question.
As a result, simple n-gram matching does not work well, resulting in low recall of the actual answer entity in the filtered candidate tuples. A better candidate matching algorithm which takes care of entity paraphrases (Leo, Leonardo, etc.) and relation paraphrases (director, directed by, direct, etc.) is needed. In some cases, we also have the reverse problem. For example, if the entity being referred to in the question is extremely popular, then it will be involved in over 100K tuples in the KB (for example, an entity like U.S.A.). This causes the KV memory to blow up, leading to poor and inefficient training and inference.

4. Better organization of the memory: It is inevitable that for some questions, especially complex questions involving logical operators over multiple entities and relations, the number of tuples required to be stored in the memory will be large. For example, around 15% of the questions in our data require more than 100K candidate tuples. Current key-value memory networks, which are flat in their organization, are not suitable for this for two reasons. First, the amount of memory required by the model increases and can go beyond the capacity of existing GPUs. Second, the attention weights computed using equation (1) need a prohibitively expensive softmax computation, which increases both training and test time. Better ways of organizing the memory, along with approximate methods for computing the softmax function, are needed to handle such complex questions.

We hope that the dataset, results and discussions on the resources presented in this paper will convince the reader that CSQA has several challenges which are not encountered in previous datasets for dialog and QA. Some of them are listed above, and there are a few more which we do not list due to space constraints. Addressing/solving all of these challenges is clearly beyond the scope of a single paper.
The purpose of this paper was to introduce the task and propose a model based on existing state-of-the-art models, and thereby highlight the need for further research to address the inadequacies of these models. To facilitate research, this dataset will be made available at https://github.com/iitm-nlp-miteshk/AmritaSaha/tree/master/CSQA (please copy-paste the URL in a browser instead of clicking on it). This URL will contain the following resources:

• the train/valid/test splits used in our experiments
• the processed version of the WikiData dump of 14-Nov-2016 that was used to construct the dataset
• scripts to extract the train/valid/test set for each of the different question types listed in Table 2
• scripts to evaluate the performance of the model

Conclusion

In this paper, we introduced the task of Complex Sequential Question Answering (CSQA) with a large scale dataset consisting of conversations over linked QA pairs. The dataset contains 200K dialogs with 1.6M turns and was collected through a manually intensive semi-automated process. To the best of our knowledge, this is the first dataset of its kind which contains complex questions which require logical, quantitative and/or comparative reasoning over a large Knowledge Graph containing millions of tuples. We propose a model for CSQA which is a cross between state-of-the-art models for dialog and QA and highlight the inadequacies of this model in dealing with the task of CSQA. It should be obvious that CSQA has several challenges, and addressing/solving all of them is beyond the scope of a single paper. We hope that the introduction of this task and dataset will excite the research community to develop models for Complex Sequential Question Answering.

Figure 1: Proposed Model consisting of (i) a Hierarchical Encoder, (ii) a Key-Value Memory Network and (iii) a Decoder.
5. Decoder: For a truly end-to-end solution, the decoder should be generic enough to produce multiple types of answers. For example, here are some of the answer sequences that the decoder is expected to generate: (i) "5 rivers and 4 lakes" (for count questions); (ii) "Yes/No/Yes and No respectively", etc. (for verification questions); (iii) "Did you mean ..." (for clarification questions).

deal with Simple Questions, each of which can be answered from

arXiv:1801.10314v2 [cs.CL] 4 Oct 2018

T12  Comparative Q.    USER: Which countries have lesser number of political relations with others than Australia ?
     Response          SYSTEM: Denmark, Canada, Grenada

Turn  State             Utterance
T1    Simple Q.         USER: Can you tell me which cities border Verderio Inferiore ?
      Response          SYSTEM: Cornate d'Adda, Bernareggio, Robbiate
T2    Coreference Q.    USER: And which cities flank that one ?
      Clarification Q.  SYSTEM: Did you mean Robbiate ?
T3    Clarification     USER: No, I meant Cornate d'Adda.
      Response          SYSTEM: Bottanuco, Busnago, Trezzo sull'Adda
T4    Logical Q.        USER: Which ones share border with Robbiate and are not part of Azerbaijan ?
      Response          SYSTEM: Trezzo sull'Adda, Busnago, Bottanuco
T5    Quantitative Q.   USER: Which city is the capital of min number of french administrative divisions ?
      Response          SYSTEM: Riga
T6    Boolean Q.        USER: Is Rot am See located adjacent to that one ?
      Response          SYSTEM: No
T7    Quantitative Q.   USER: How many political territories have strategic relation with around 22 other countries ?
      Response          SYSTEM: 11
T8    Logical Q.        USER: Which are the nations connected with Robbiate and are not located in Italy ?
      Response          SYSTEM: The answer count is 11. Do you want to see all possibilities?
T9    Clarification     USER: No, show only a few of them
      Response          SYSTEM: Papua New Guinea, Jamaica, Antigua and Barbuda, Austria, New Zealand, The Bahamas, Gabon
T10   Quantitative Q.   USER: How many countries are diplomatically related to Italy ?
      Response          SYSTEM: 74
T11   Quantitative Q.   USER: And how many of them also with Australia ?
      Response          SYSTEM: 56

Table 1: A sample dialog from the dataset (more examples of generated dialogs are provided in the supplementary material).

Reasoning     Containing                                        Example
Logical       Union, Single Relation                            Which rivers flow through India or China?
              Intersection                                      Which rivers flow through India and China?
              Difference                                        Which rivers flow through India but not China?
              Any of the above, Multiple Relations              Which river flows through India but does not originate in Himalayas?
Verification  Boolean, Single/Multiple entities                 Does Ganga flow through India ?
Quantitative  Count, Single entity type                         How many rivers flow through India ?
              Count, Mult. entity type                          How many rivers and lakes does India have ?
              Count, Logical operators                          How many rivers flow through India and/or/but not China?
              Min/Max, Single entity type                       Which river flows through maximum number of countries ?
              Min/Max, Mult. entity type                        Which country has maximum number of rivers and lakes combined ?
              Atleast/Atmost/Approx./Equal, Single entity type  Which rivers flow through at least N countries ?
              Atleast/Atmost/Approx./Equal, Mult. entity type   Which country has at least N rivers and lakes combined ?
              Count over Atleast/Atmost/Approx./Equal, Single   How many rivers flow through at least N countries?
              Count over Atleast/Atmost/Approx./Equal, Mult.    How many countries have at least N rivers and lakes combined ?
Comparative   More/Less, Single entity type                     Which countries have more number of rivers than India ?
              More/Less, Mult. entity type                      Which countries have more rivers and lakes than India ?
              Count over More/Less, Single entity type          How many countries have more number of rivers than India ?
              Count over More/Less, Mult. entity type           How many countries have more rivers and lakes than India ?

Table 2: Types of questions in the dataset.

Table 3: Overall Dataset Statistics.

Table 4: Performance of the proposed model on different types of questions in the dialog.

The tuned hyperparameters and their ranges were: learning rate ∈ {1e-3, 4e-4}; RNN hidden unit size, word embedding size and KG embedding size ∈ {256, 512}; batch size ∈ {32, 64}; and dialog context size 2. The bracketed numbers indicate the values of each hyperparameter considered. On average, we found that the candidate generation step produces 10K candidate tuples, hence we kept up to 10K key-value pairs in the memory network. Following (Miller et al. 2016), we set H = 2.

Banchs, R. E. 2012. Movie-dic: a movie dialogue corpus for research and development. In ACL 2012, 203-207.
Berant, J., and Liang, P. 2014. Semantic parsing via paraphrasing. In ACL (1), 1415-1425.
Berant, J.; Chou, A.; Frostig, R.; and Liang, P. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP, volume 2, 6.
Berant, J.; Srikumar, V.; Chen, P.; Linden, A. V.; Harding, B.; Huang, B.; Clark, P.; and Manning, C. D. 2014. Modeling biological processes for reading comprehension. In EMNLP 2014.
Bordes, A., and Weston, J. 2016. Learning end-to-end goal-oriented dialog. CoRR abs/1605.07683.
Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems 2013, 2787-2795.
Bordes, A.; Usunier, N.; Chopra, S.; and Weston, J. 2015.
Large-scale simple question answering with memory networks. CoRR abs/1506.02075.
Bordes, A.; Chopra, S.; and Weston, J. 2014. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676.
Bordes, A.; Weston, J.; and Usunier, N. 2014. Open question answering with weakly supervised embedding models. In ECML PKDD 2014, Proceedings, Part I, 165-180.
Dodge, J.; Gane, A.; Zhang, X.; Bordes, A.; Chopra, S.; Miller, A. H.; Szlam, A.; and Weston, J. 2015. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR abs/1511.06931.
Fader, A.; Soderland, S.; and Etzioni, O. 2011. Identifying relations for open information extraction. In EMNLP 2011, 1535-1545.
Fader, A.; Zettlemoyer, L.; and Etzioni, O. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1156-1165. ACM.
Kumar, A.; Irsoy, O.; Ondruska, P.; Iyyer, M.; Bradbury, J.; Gulrajani, I.; Zhong, V.; Paulus, R.; and Socher, R. 2016. Ask me anything: Dynamic memory networks for natural language processing.
In ICML 2016, 1378-1387.
Lowe, R.; Pow, N.; Serban, I.; and Pineau, J. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In SIGDIAL 2015, 285-294.
Lowe, R. T.; Pow, N.; Serban, I. V.; Charlin, L.; Liu, C.; and Pineau, J. 2017. Training end-to-end dialogue systems with the ubuntu dialogue corpus. D&D 8(1):31-65.
Luong, M.-T.; Le, Q. V.; Sutskever, I.; Vinyals, O.; and Kaiser, L. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.
Miller, A. H.; Fisch, A.; Dodge, J.; Karimi, A.; Bordes, A.; and Weston, J. 2016. Key-value memory networks for directly reading documents. CoRR abs/1606.03126.
Mostafazadeh, N.; Chambers, N.; He, X.; Parikh, D.; Batra, D.; Vanderwende, L.; Kohli, P.; and Allen, J. F. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. CoRR abs/1604.01696.
Neelakantan, A.; Le, Q. V.; Abadi, M.; McCallum, A.; and Amodei, D. 2016. Learning a natural language interface with neural programmer. CoRR abs/1611.08945.
Nguyen, T.; Rosenberg, M.; Song, X.; Gao, J.; Tiwary, S.; Majumder, R.; and Deng, L. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268.
Onishi, T.; Wang, H.; Bansal, M.; Gimpel, K.; and McAllester, D. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457.
Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), 1532-1543.
Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Richardson, M.; Burges, C. J. C.; and Renshaw, E. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP 2013, 193-203.
Ritter, A.; Cherry, C.; and Dolan, B. 2010. Unsupervised modeling of twitter conversations. In NAACL 2010, 172-180.
Serban, I. V.; Sordoni, A.; Bengio, Y.; Courville, A.; and Pineau, J. 2016a. Building end-to-end dialogue systems using generative hierarchical neural network models.
In AAAI'16, 3776-3783. AAAI Press.
Serban, I. V.; García-Durán, A.; Gülçehre, Ç.; Ahn, S.; Chandar, S.; Courville, A. C.; and Bengio, Y. 2016b. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
Serban, I. V.; Sordoni, A.; Lowe, R.; Charlin, L.; Pineau, J.; Courville, A. C.; and Bengio, Y. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, 3295-3301.
Voorhees, E. M., and Tice, D. M. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, 200-207. ACM.
Wang, M.; Smith, N. A.; and Mitamura, T. 2007. What is the jeopardy model? A quasi-synchronous grammar for QA. In EMNLP-CoNLL, volume 7, 22-32.
Yang, M.-C.; Duan, N.; Zhou, M.; and Rim, H.-C. 2014. Joint relational embeddings for knowledge-based question answering. In EMNLP, volume 14, 645-650.
Yang, Y.; Yih, W.; and Meek, C. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, 2013-2018.
[ "https://github.com/iitm-nlp-miteshk/" ]
[ "FREE TRANSPORT FOR FINITE DEPTH SUBFACTOR PLANAR ALGEBRAS", "FREE TRANSPORT FOR FINITE DEPTH SUBFACTOR PLANAR ALGEBRAS" ]
[ "Brent Nelson " ]
[]
[]
Given a finite depth subfactor planar algebra P endowed with the graded * -algebra structures {Gr + k P} k∈N of Guionnet, Jones, and Shlyakhtenko, there is a sequence of canonical traces T r k,+ on Gr + k P induced by the Temperley-Lieb diagrams. Moreover, with a trace-preserving embedding into the bounded operators on a Hilbert space one can generate with {Gr + k P} k∈N a tower of non-commutative probability spaces {M k,+ } k∈N whose inclusions recover P as its standard invariant. We show that traces T r (v) k,+ induced by certain small perturbations of the Temperley-Lieb diagrams yield trace-preserving embeddings of Gr + k P that generate the same tower {M k,+ } k∈N .
10.1016/j.jfa.2014.12.018
[ "https://arxiv.org/pdf/1406.4766v2.pdf" ]
119,329,016
1406.4766
63e44444e657d5a140ac4da6b2691daf86a60a43
FREE TRANSPORT FOR FINITE DEPTH SUBFACTOR PLANAR ALGEBRAS 18 Jun 2014 Brent Nelson arXiv:1406.4766v1 [math.OA] Given a finite depth subfactor planar algebra P endowed with the graded *-algebra structures {Gr_k^+ P}_{k∈N} of Guionnet, Jones, and Shlyakhtenko, there is a sequence of canonical traces Tr_{k,+} on Gr_k^+ P induced by the Temperley-Lieb diagrams. Moreover, with a trace-preserving embedding into the bounded operators on a Hilbert space one can generate with {Gr_k^+ P}_{k∈N} a tower of non-commutative probability spaces {M_{k,+}}_{k∈N} whose inclusions recover P as its standard invariant. We show that traces Tr_{k,+}^{(v)} induced by certain small perturbations of the Temperley-Lieb diagrams yield trace-preserving embeddings of Gr_k^+ P that generate the same tower {M_{k,+}}_{k∈N}.

Introduction

Despite the relatively innocuous definition of a subfactor, Jones showed in [5], [6], and [7] that there is in fact an incredibly rich structure underlying the inclusion of one II_1 factor in another. In particular, one can associate to a subfactor N ⊂ M its standard invariant: a planar algebra. It was later shown by Popa in [9] that in fact every subfactor planar algebra can be realized through this association. In [1] Guionnet, Jones, and Shlyakhtenko produced an alternate proof of this fact by constructing the subfactors via free probabilistic methods. Given a subfactor planar algebra P, for each k ≥ 0 one can turn Gr_k^+ P = ⊕_{n≥k} P_{n,+} into a *-algebra with a trace Tr_{k,+} defined by a particular pairing with Temperley-Lieb diagrams. Then each Gr_k^+ P embeds into the bounded operators on a Hilbert space and generates a II_1 factor M_k. Moreover, one can define inclusion maps i_{k-1}^k : M_{k-1} → M_k so that the subfactor inclusion i_{k-1}^k(M_{k-1}) ⊂ M_k (for any k ≥ 1) recovers P as its standard invariant.
The embedding relies on the fact that a subfactor planar algebra P always embeds into the planar algebra of a bipartite graph P_Γ (cf. [7]). It turns out that Gr_0^+ P embeds as a subalgebra of a free Araki-Woods factor. Free Araki-Woods factors and their associated free quasi-free states, studied by Shlyakhtenko in [10], provide examples of type III factors and can be thought of as the non-tracial analogues of the free group factors. They are constructed starting from a strongly continuous one-parameter group of orthogonal transformations {U_t}_{t∈R} on some real Hilbert space. When U_t = 1 for all t, this construction simply yields the free group factor. Stone's theorem guarantees the existence of a positive, non-singular generator A satisfying A^{it} = U_t for all t ∈ R. It was shown in [10] that the type classification of the free Araki-Woods factor depends on the spectrum of the generator A. Moreover, the action of the modular automorphism group is well known and also depends explicitly on A. In [8], by adapting the free transport methods of Guionnet and Shlyakhtenko (cf. [3]), it was shown that non-commutative random variables whose joint law is "close" to a free quasi-free state in fact generate a free Araki-Woods factor. In particular, the finitely generated q-deformed Araki-Woods algebras were shown to be isomorphic to the free Araki-Woods factor for small |q|. Let P be a finite depth subfactor planar algebra and Tr : P → C be the state induced by the Temperley-Lieb diagrams via duality. By utilizing the transport construction methods of [8], we show that we can perturb the embedding constructed in [1] to make it state-preserving for states on P which are "close" to Tr. Moreover, the von Neumann algebra generated by the subfactor planar algebra via this embedding is unchanged.
In this context, if P embeds into P_Γ and µ is the Perron-Frobenius eigenvector for the bipartite graph Γ, then the generator A associated to the free Araki-Woods algebra will be determined by µ. The free transport methods in [3] and [8] apply only to joint laws of finitely many non-commutative random variables. Since each edge in the graph Γ will correspond to a non-commutative random variable, we can only consider finite depth subfactor planar algebras with these methods.

Research supported by NSF grants DMS-1161411 and DMS-0838680.

Acknowledgments. I would like to thank Arnaud Brothier and Michael Hartglass for many useful discussions and Dimitri Shlyakhtenko for the initial idea of the paper, many helpful suggestions, and his general guidance.

Planar algebras

We briefly recall the definitions of a planar algebra and planar tangle. For additional details, see [6], [1], [2], and [4].

Definition 2.1. A planar algebra is a collection of graded vector spaces P = {P_{n,+}, P_{n,-}}_{n≥0} possessing a conjugate linear involution *. For each k ≥ 0 we call P_k := P_{k,+} ⊕ P_{k,-} the k-box space of P. A planar algebra also admits an action by planar tangles. A planar tangle consists of an output disc D_0 ⊂ R^2 and several input discs D_1, ..., D_r ⊂ D_0, each disc D_j, 0 ≤ j ≤ r, having 2k_j boundary points (k_j ≥ 0). These boundary points divide the boundaries of the discs into separate intervals, and the distinguished interval is marked with a "⋆". Each boundary point is paired with another boundary point (potentially from a distinct disc) and connected via non-crossing strings in D_0 \ (D_1 ∪ · · · ∪ D_r). The strings divide D_0 \ (D_1 ∪ · · · ∪ D_r) into several adjacent regions which are then shaded black or white so that adjacent regions have different shades. Let T be a planar tangle whose output disc D_0 has 2k_0 boundary points and whose input discs D_1, ..., D_r have 2k_1, ..., 2k_r boundary points, and for each j = 0, . . .
, r we define s j ∈ {+, −} to be + if the distinguished interval of D j borders a white region and − otherwise. Then T corresponds to a multilinear map Z T : P k1,s1 × · · · × P kr ,sr → P k0,s0 . These maps satisfy the following conditions. (1) Isotopy invariance: if F is an orientation preserving diffeomorphism of R 2 then Z T = Z F (T ) . (2) Naturality: gluing planar tangles into one another corresponds to composing the multilinear maps. (3) Involutive: if G is an orientation reversing diffeomorphism of R 2 then Z T (x 1 , . . . , x r ) * = Z G(T ) (x * 1 , . . . , x * r ). Furthermore, there is a canonical scalar δ associated with P with the property that a tangle with a closed loop is equivalent to δ times the tangle with the closed loop removed. In light of the isotopy invariance of the planar tangles, we will usually depict the input discs as rectangles with all strings emanating from the top side and the distinguished interval being formed by the other sides. For example: D 1 D 2 corresponds to a multilinear map P 2,− × P 1,− → P 1,− . We shall usually omit drawing the output disc and the shading. Given a planar algebra P we define Gr ± k P = ⊕ n≥k P n,± and Gr k P = Gr + k P ⊕ Gr − k P for each k ≥ 0. An element x ∈ Gr k P can be visually represented as x where the thick lines on the left and right each represent k strings, the thick line on top is an arbitrary (possibly zero) number of strings, and the shading of the region bordered by the distinguished interval varies according to the components of x. Gr k P is endowed with the multiplication x ∧ k y = x y , and the involution x † = x * . Now let TL ⊂ P be the canonical copy of the Temperley-Lieb planar algebra, and T L n the sum of all the Temperley-Lieb diagrams with 2n boundary points (both shadings). Then we consider the P 0,+ ⊕ P 0,− valued map T r k on Gr k P defined for x ∈ P n+k,+ ⊕ P n+k,− by T r k (x) = 1 δ k x T L n .
Let Gr 0 [[P]] denote the family of formal power series on elements in Gr 0 P. As a vector space, this is equivalent to ±,n≥0 P n,± . Then if T L ∞ := n≥0 T L n ∈ Gr 0 [[P]], we can define T r k (x) for a general x ∈ Gr 0 P simply by T r k (x) = 1 δ k x T L ∞ , since the only components of T L ∞ which will contribute non-zero terms are those matching the components of x, of which there are a finite number. In fact, given any f ∈ Gr 0 [[P]] we can define a P 0,+ ⊕ P 0,− valued map with x f .(1) 2.1. Subfactor planar algebras. Definition 2.2. A subfactor planar algebra P is a planar algebra satisfying: (1) dim(P n,± ) < ∞ for all (n, ±); (2) dim(P 0,± ) = 1; (3) for each (n, ±) the sesquilinear form b, a = b * a (where the thick string denotes n strings, shaded according to ±), a, b ∈ P n,± , is positive definite; and (4) the equality x = x holds for any x ∈ P 1,± . Remark 2.3. As in condition (3) above, all inner products in this paper will be complex linear in the second coordinate. The condition dim(P 0,± ) = 1 implies that each P 0,± is isomorphic to C as C * -algebras with the multiplication ab = a b . Because of property (2), the maps T r k defined above are in fact C 2 -valued and we think of them as scalar valued when restricted to either Gr + k P or Gr − k P. Write T r k (x) = (T r k,+ (x), T r k,− (x)) for the two components, and let T L + ∞ (resp. T L − ∞ ) be the formal sum of all Temperley-Lieb diagrams whose distinguished interval borders an unshaded (resp. shaded) region. Then T r ± k are equivalently defined using the same tangle as T r k but replacing T L ∞ with T L + ∞ or T L − ∞ . We extend the inner product from property (3) to all of Gr 0 P with the convention that P n,s is orthogonal to P m,t when (n, s) ≠ (m, t) ∈ N × {+, −}. Then T r ± k (x) = T L ± ∞ , x . More generally, if φ 0 : Gr 0 P → C 2 is a linear functional then there exists an element f ∈ Gr 0 [[P]] so that φ 0 (x) = f * , x . Hence we can define φ k : Gr k P → C for each k via (1).
We also note that if φ 0 is positive then f = f * . 2.2. Planar algebra of a bipartite graph. For a more thorough treatment of the following section, please see Sections 2 and 4 of [1] (specifically subsections 2.4, 2.5, 4.1, 4.2, and 4.3). Let Γ = (V, E) be an oriented bipartite graph with positive vertices V + ⊂ V and negative vertices V − = V \ V + . Given an edge e ∈ E, we let s(e), t(e) ∈ V denote its beginning and ending vertex, respectively, and let e • denote the edge with the opposite orientation (i.e. s(e • ) = t(e) and t(e • ) = s(e)). Then E + = {e ∈ E : s(e) ∈ V + } is the set of edges starting on a positive vertex, and E − = {e ∈ E : s(e) ∈ V − } = {e • : e ∈ E + }. Let L denote the set of loops in Γ where a loop traveling along edges e 1 , e 2 , . . . , e n (in that order) is written as e 1 e 2 · · · e n . Since Γ is bipartite, any loop will consist of an even number of edges and so we let L n for n ≥ 0 denote the loops of length 2n (with L 0 = V ). We further sort the loops according to whether they start with a positive or negative vertex and denote these by L n,+ and L n,− , respectively. Then for each n ≥ 0, we consider the vector space P Γ n,+ (resp. P Γ n,− ) of bounded functions on L n,+ (resp. L n,− ). When |E| < ∞ (and consequently |L n,± | < ∞ for each n), the vector spaces P Γ n,± are finite dimensional and spanned by the delta functions supported on individual loops in L n,± . Letting u ∈ L n,± serve as notation for both the loop and the delta function supported on said loop, we write w = u∈Ln,± β(u)u for elements w ∈ P Γ n,± , where β(u) ∈ C. We define the following involution on P Γ n,± : w * := u∈Ln,± β(u)u op , where u op = e • n · · · e • 1 when u = e 1 · · · e n . Let A Γ be the adjacency matrix for the graph Γ. Then by the Perron-Frobenius theorem it has a unique largest eigenvalue δ > 0 with eigenvector µ satisfying µ(v) > 0 for all v ∈ V .
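The Perron-Frobenius data (δ, µ) of a small bipartite graph can be illustrated numerically. The sketch below uses power iteration on A Γ + I, since plain power iteration oscillates with period two on a bipartite graph; the path graph A 3 and all numerical values are assumed sample data, not taken from the text.

```python
# Power iteration for the Perron-Frobenius eigenpair (delta, mu) of a small
# bipartite graph.  The graph below (the path A_3: v1 - v2 - v3) is an
# illustrative assumption, not an example from the text.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

def perron_frobenius(adj, iters=500):
    """Return (delta, mu) with adj * mu = delta * mu and mu > 0 entrywise.

    Iterates adj + I instead of adj: on a bipartite graph plain power
    iteration oscillates with period two, while the shift keeps the same
    eigenvector and moves the top eigenvalue to delta + 1."""
    n = len(adj)
    mu = [1.0] * n
    top = 1.0
    for _ in range(iters):
        w = [mu[i] + sum(adj[i][j] * mu[j] for j in range(n)) for i in range(n)]
        top = max(w)
        mu = [x / top for x in w]
    return top - 1.0, mu

delta, mu = perron_frobenius(A)
# For A_3 one expects delta = sqrt(2) and mu proportional to (1, sqrt(2), 1).
```

For A 3 this converges to δ = √2 with µ a strictly positive eigenvector, matching the statement of the theorem.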
We note that the eigenvalue condition A Γ µ = δµ guarantees µ(v)/µ(w) < δ for all adjacent vertices v, w ∈ V . The map Z T associated to a planar tangle is defined as follows. Replace T with an isotopically equivalent tangle whose input and output discs are rectangles with boundary points along the top edges and distinguished interval forming the side and bottom edges. Assume D 0 and D 1 , . . . , D r are the output and input discs, respectively, that D j has 2k j boundary points, and that the distinguished interval of D j has the shading s j ∈ {+, −}, 0 ≤ j ≤ r. Let u j ∈ L kj ,sj for each j, and assign each edge in u j to a boundary point on D j . The edges are assigned in order with the leftmost boundary point corresponding to the first edge and the rightmost boundary point corresponding to the last edge. We set Z T (u 1 , . . . , u r ) ≡ 0 unless every boundary point, say corresponding to an edge e, is connected to a boundary point of D 0 or is connected to a boundary point of another input disc corresponding to the edge e • . When the latter holds, each string is labeled by a single edge (and its opposite) and consequently the regions in D 0 \ (D 1 ∪ · · · ∪ D r ∪ {strings}) can be labeled by vertices: traversing the regions adjacent to D j clockwise corresponds to traveling along the vertices in the loop u j . In this case, Z T (u 1 , . . . , u r ) is supported on the loop f 1 · · · f 2k0 , where f l = e • if the lth boundary point of D 0 is connected to the boundary point of an input disc corresponding to the edge e. The value of this function is [Z T (u 1 , . . . , u r )](f 1 · · · f 2k0 ) = δ p s∈{strings in T } (µ(t(e s ))/µ(s(e s ))) −θs/(2π) , where p is the number of closed loops in T , e s is the edge corresponding to the boundary point at the start of the string s, and θ s is the total winding angle of the string s (counter-clockwise being the direction of positive angles). We then extend Z T multilinearly to P k1,s1 × · · · × P kr ,sr .
When the output disc has zero boundary points there is one region of D 0 \ (D 1 ∪ · · · ∪ D r ∪ {strings}) bordered by the boundary of D 0 . If the above procedure labels this region v 0 , then Z T (u 1 , . . . , u r ) is supported on v 0 with the same value as above. We have the following fact due to Jones (cf. [7]): Proposition 2.4. Let P be a subfactor planar algebra. Then there exists a bipartite graph Γ and a planar algebra embedding i : P → P Γ . Definition 2.5. A subfactor planar algebra is said to have finite depth if in the above proposition Γ can be taken to be a finite graph. For the remainder of the paper we fix a finite depth subfactor planar algebra P, along with a finite graph Γ and inclusion i : P → P Γ . We will use the notations b, a P or b, a P Γ to distinguish between the pairings b * a occurring in P or P Γ . Define the maps {T r k } k≥0 for both P and P Γ as above. As a planar algebra embedding, i preserves the actions of tangles. Hence T r k • i(x) = T r k (x) for all x ∈ Gr k P and all k ≥ 0. However, the 0-box space of P Γ is ℓ ∞ (V ), so T r k • i(x) is a function on V satisfying [T r k • i(x)](v) = T r k,+ (x) if v ∈ V + T r k,− (x) if v ∈ V − .(2) With this in mind we extend i to an embedding i : Gr k P → Gr k P Γ . As the * -algebra structure of Gr k P was defined using planar tangles, i is a * -algebra embedding. The Guionnet-Jones-Shlyakhtenko construction. We let H denote the complex Hilbert space with the edges E of Γ as an orthogonal basis, with norms defined by e 2 = (µ(s(e))/µ(t(e))) 1/2 , and use the notation σ(e) = (µ(t(e))/µ(s(e))) 1/2 = e −2 . Let A = ℓ ∞ (V ), then we define left and right actions of A on H by v · e · v ′ = δ v=s(e) δ v ′ =t(e) e, where v denotes both the vertex and the delta function supported at that vertex. Thus H is an A-bimodule. We define an A-valued inner product by e, f A = e, f t(e) = e, f t(f ).
Let F A = A ⊕ n≥1 H ⊗An , and observe that because the tensor product is relative to A non-zero elements e 1 ⊗ · · · ⊗ e n ∈ F A correspond to paths e 1 · · · e n in Γ. Indeed: e ⊗ f = (e · t(e)) ⊗ f = e ⊗ (t(e) · f ) = δ t(e)=s(f ) e ⊗ f. For each e ∈ E we define ℓ(e) ∈ B(F A ) by ℓ(e)v = δ t(e)=v e ℓ(e)e 1 ⊗ · · · ⊗ e n = e ⊗ e 1 ⊗ · · · ⊗ e n , and then its adjoint is given by ℓ(e) * v = 0 ℓ(e) * e 1 ⊗ · · · ⊗ e n = e, e 1 A e 2 ⊗ · · · ⊗ e n . Notice that in the above formula e, e 1 A = e, e 1 t(e 1 ) and that t(e 1 )e 2 = e 2 if this element is a path. The norm of this operator is given by ℓ(e) = ℓ(e) * ℓ(e) 1/2 = e . For each e ∈ E we define the non-commutative random variable c(e) = ℓ(e) + ℓ(e • ) * ∈ B(F A ), and consider the conditional expectation E : B(F A ) → A given by E(x) = 1 A , x1 A A , where 1 A = v∈V v is the multiplicative identity in A. It is known that (Gr + 0 P Γ , T r 0 ) embeds via e 1 · · · e 2n → c(e 1 ) · · · c(e 2n ) into the von Neumann algebra (W * (c(e) : e ∈ E + ), E) in a trace-preserving manner (cf. Theorem 3 in [1]). In fact, all of Gr 0 P Γ embeds into W * (c(e) : e ∈ E) in a trace-preserving manner. Denote M := W * (c(e) : e ∈ E). For each v ∈ V , we can define a state φ v = δ v • E and a weight φ = v∈V φ v . Then for x ∈ Gr 0 P, using (2) we see that φ • c • i(x) = |V + |T r 0,+ (x) + |V − |T r 0,− (x) = |V + |T L + ∞ + |V − |T L − ∞ , x . Consequently we define T L ∞ := |V + |T L + ∞ + |V − |T L − ∞ ∈ Gr 0 [[P]] and T r 0 (x) = T L ∞ , x so that T r 0 (x) = φ • c • i(x).(3) Consider the Fock space F = CΩ ⊕ n≥1 H ⊗n (ignoring the A-bimodule structure of H). Let ϕ be the vacuum state on B(F ). For each e ∈ E we define ℓ̂(e) ∈ B(F ) as above and let ĉ(e) = ℓ̂(e) + ℓ̂(e • ) * . Extending ĉ to loops by e 1 · · · e 2n → ĉ(e 1 ) · · · ĉ(e 2n ), it follows that φ • c = ϕ • ĉ.
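The action of ℓ(e) and ℓ(e) * on paths admits a direct finite toy model. The following sketch (the graph, edge name, and weight are illustrative assumptions, not data from the text) represents vectors in F A as dictionaries keyed by vertices and edge-paths, and checks that ℓ(e) * ℓ(e) acts as e 2 times a projection, consistent with ℓ(e) = e .

```python
from collections import defaultdict

# Toy model of the operators ell(e), ell(e)* on (a finite truncation of) the
# path space F_A.  The edge and the weight below are illustrative assumptions.
# An edge is a tuple (name, source, target); norm2[name] plays the role of
# ||e||^2, the value of <e, e>_A at the vertex t(e).
e1 = ("e1", "v1", "v2")
norm2 = {"e1": 2 ** 0.5}

def ell(e, vec):
    """Creation operator: on a vertex v it gives delta_{t(e)=v} e, and it
    prepends e to paths starting at t(e)."""
    out = defaultdict(float)
    for basis, coef in vec.items():
        if isinstance(basis, str):          # a vertex
            if basis == e[2]:
                out[(e,)] += coef
        elif basis[0][1] == e[2]:           # a path starting at t(e)
            out[(e,) + basis] += coef
    return dict(out)

def ell_star(e, vec):
    """Annihilation operator: <e, first edge>_A times the remaining path; the
    'remainder' of a length-one path is the vertex t(e)."""
    out = defaultdict(float)
    for basis, coef in vec.items():
        if isinstance(basis, str):
            continue                        # ell(e)* kills vertices
        if basis[0] == e:
            rest = basis[1:] if len(basis) > 1 else e[2]
            out[rest] += norm2[e[0]] * coef
    return dict(out)

# ell(e)* ell(e) is ||e||^2 times the projection onto vectors "at" t(e):
v = {"v2": 1.0}
assert ell_star(e1, ell(e1, v)) == {"v2": norm2["e1"]}
```

The final assertion mirrors the norm computation in the text: composing the annihilation operator with the creation operator returns the original vector scaled by e 2 .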
Indeed, the GNS vector space associated to φ v is isomorphic to the subspace of F spanned by elements of the form e 1 ⊗ · · · ⊗ e 2n where e 1 · · · e 2n ∈ L and s(e 1 ) = t(e 2n ) = v. Consequently, φ v (c(e 1 · · · e 2n )) = ϕ(ĉ(e 1 · · · e 2n )). Since this holds for each v, φ(c(x)) = ϕ(ĉ(x)) by summing over the support of x according to which vertex it starts at. Consequently, using (3) we have T r 0 (x) = ϕ(ĉ • i(x)) for x ∈ Gr 0 P. (4) From now on, we will suppress the embedding notation i and consider Gr 0 P as a subalgebra of Gr 0 P Γ , although the traces of such elements will still be thought of as scalars so that (4) makes sense. We will use the notation C e = ĉ(e) for e ∈ E, and M = W * (C e : e ∈ E) ⊂ B(F ). It turns out M is a free Araki-Woods factor, which we demonstrate below, and thus this embedding lies in the scope of the transport results obtained in [8]. Free Araki-Woods algebras Each C e is a generalized circular element (cf. [10]). Indeed, let h = e/ e and g = e • / e • be normalized opposite edges. Then C e = e ℓ̂(h) + e −1 ℓ̂(g) * = e (ℓ̂(h) + σ(e)ℓ̂(g) * ), so letting λ(e) = σ(e) 2 = e −4 we see that C e / e is a generalized circular element of precisely the form discussed in [10]. Consequently the C e will be linearly related to certain semicircular random variables, and the von Neumann algebra they generate will be a free Araki-Woods factor. We describe these semicircular elements presently. For e ∈ E define u(e) = (σ(e)+σ(e • )) −1/2 (e + e • ) if e ∈ E + , i(σ(e)+σ(e • )) −1/2 (e − e • ) if e ∈ E − , so that u(e), u(e • ) are unit vectors. For each e ∈ E let X e = ℓ̂(u(e)) + ℓ̂(u(e)) * , then it is easy to check that C e = ((σ(e)+σ(e • )) 1/2 /2)(X e − iX e • ) if e ∈ E + , ((σ(e)+σ(e • )) 1/2 /2)(X e + iX e • ) if e ∈ E − (5) For each pair e, f ∈ E let α ef = ϕ(X f X e ) = u(f ), u(e) . Then take A ∈ M |E| (C) to be the matrix defined by [2(1 + A) −1 ] ef = α ef . It follows that A is a block-diagonal matrix in the sense that [A] ef = 0 unless f ∈ {e, e • }.
As this will be the case for all matrices considered in this paper, we adopt the following notation for B ∈ M |E| (C) and e ∈ E + : B(e) := [B] ee [B] ee • [B] e • e [B] e • e • ∈ M 2 (C) In particular, we have A(e) = 1 2 λ(e) + λ(e) −1 − i 2 λ(e) − λ(e) −1 i 2 λ(e) − λ(e) −1 1 2 λ(e) + λ(e) −1 . Moreover, A is positive with spectrum(A) = {λ(e)} e∈E and consequently, A = max e∈E λ(e) = max e∈E µ(t(e))/µ(s(e)) < δ. (6) Setting U t = A it for t ∈ R gives a one-parameter orthogonal group with [U t ] ef = 0 when f ∉ {e, e • } and U t (e) = cos(t log λ(e)) − sin(t log λ(e)) sin(t log λ(e)) cos(t log λ(e)) e ∈ E + . It follows that H is isomorphic to the closure of C |E| with respect to the inner product x, y U = 2(1 + A) −1 x, y , x, y ∈ C |E| , and this isomorphism is implemented by sending the standard basis of C |E| to {u(e)} e∈E in the obvious way. Moreover, M = W * (C e : e ∈ E) = W * (X e : e ∈ E) ∼ = Γ(R |E| , U t ) ′′ , where the latter von Neumann algebra is the free Araki-Woods factor. 3.1. The differential operators. Since M is a free Araki-Woods factor, all the machinery developed in [8] carries over and we proceed by translating it to the context of the generalized circular system C = (C e : e ∈ E). Let X = (X e : e ∈ E), then the linear relation in (5) can be stated succinctly as C = U X, (7) where U is the matrix with [U ] ef = 0 for f ∉ {e, e • } and U (e) = ((σ(e) + σ(e • )) 1/2 /2) 1 −i 1 i . Because of this linear relation, if we denote P = C⟨X e : e ∈ E⟩ then these can be thought of as noncommutative polynomials in either the X e or in the C e . As elements of P, the distinction is trivial; however, for the purposes of composition with elements of P |E| it is necessary to indicate whether an element is being thought of as a function on the C e or the X e . Let {δ e } e∈E be the free difference quotients defined on P by δ e (X f ) = δ e=f 1 ⊗ 1 and the Leibniz rule. We use the same conventions on P ⊗ P op as those in [8].
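The claim that spectrum(A) = {λ(e)} e∈E reduces to the individual 2 × 2 blocks A(e). The following sketch (λ = 0.4 is an assumed sample value, not data from the text) checks that A(e) has determinant 1 and eigenvalues λ(e) and λ(e) −1 .

```python
import cmath

# Sanity check that the 2x2 block A(e) has determinant 1 and spectrum
# {lam, 1/lam}, so that ||A|| = max_e lambda(e).  The value of lam below is
# an arbitrary sample assumption.
def block(lam):
    a = 0.5 * (lam + 1 / lam)       # diagonal entries (lam + 1/lam)/2
    b = -0.5j * (lam - 1 / lam)     # upper-right entry -i(lam - 1/lam)/2
    return [[a, b], [-b, a]]        # lower-left entry is +i(lam - 1/lam)/2

def eigvals2(m):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

lam = 0.4
x1, x2 = eigvals2(block(lam))
assert abs(x1 * x2 - 1) < 1e-12                       # det A(e) = 1
assert min(abs(x1 - lam), abs(x1 - 1 / lam)) < 1e-12  # spectrum {lam, 1/lam}
assert min(abs(x2 - lam), abs(x2 - 1 / lam)) < 1e-12
```

Since every block has eigenvalues λ(e) ±1 and these ratios of Perron-Frobenius weights are bounded by δ, the estimate A < δ in (6) follows.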
The σ-difference quotients of [8] are given by ∂ u(e) = f ∈E α f e δ f , and these generate a new collection of derivations {∂ e } e∈E via the linear relation in (7): ∂ e = [U ] ee ∂ u(e) + [U ] ee • ∂ u(e • ) . These can also be independently defined on P by ∂ e (C f ) = δ f =e • σ(e)1 ⊗ 1 and the Leibniz rule. We shall refer to the derivations {∂ e } e∈E as c-difference quotients. For Q ∈ P |E| we define J c Q ∈ M |E| (P ⊗ P op ) by [J c Q] ef = ∂ f Q e . In particular, (J c C)(e) = 0 σ(e • )1 ⊗ 1 σ(e)1 ⊗ 1 0 e ∈ E + .(8) Letting J σ be the operator considered in [8] we have [J σ Q] ef = ∂ u(f ) Q e and J c Q = J σ Q#U T .(9) Let {D u(e) } e∈E be the σ-cyclic derivatives of [8]: D u(e) (X e1 · · · X en ) = n k=1 α u(e)e k σ ϕ −i (X e k+1 · · · X en )X e1 · · · X e k−1 and for Q ∈ P we let DQ be the σ-cyclic gradient of Q: DQ = (D u(e) Q : e ∈ E). We then define the c-cyclic derivatives D e = [U ] ee D u(e) + [U ] ee • D u(e • ) for each e ∈ E. That is, D e (C e1 · · · C en ) = σ(e • ) n k=1 δ e k =e • σ ϕ −i (C e k+1 · · · C en )C e1 · · · C e k−1 = σ(e • ) n k=1 δ e k =e • n l=k+1 σ(e l ) 2 C e k+1 · · · C en C e1 · · · C e k−1 , where we have used the action of the modular automorphism group σ ϕ t on C e discussed in Lemma 5.(ii) in [1]. For Q ∈ P we define D c Q = (D e Q : e ∈ E) as the c-cyclic gradient. It then follows that D c Q = U #DQ.(10) It is clear that the c-difference quotients and c-cyclic derivatives induce derivations on Gr 0 P Γ through c, and we denote these by ∂ e and D e as well. Suppose eu is a loop (so that u is a path from t(e) to s(e)). Then ∂ e u is zero unless e • is one of the edges traversed by u in which case ∂ e u is a tensor product u ℓ ⊗ u r of two loops such that u ℓ starts at t(e) and u r starts at s(e). If u itself is a loop, then D e u is zero unless e • is traversed by u in which case D e u is a path starting at s(e) and ending at t(e). We next encode the action of these differential operators on Gr 0 P via planar tangles.
Lemma 3.1. For g ∈ Gr 0 P, x ∈ P n,± , and 1 ≤ i ≤ 2n, consider the tangle g x where the ith boundary point of x is connected with g and we sum over all choices of boundary points of g. Then the image of the output of this tangle under ĉ is the same as ĉ(x) except with each monomial C e1 · · · C e2n changed to C e1 · · · (D ei ĉ(g)) · · · C e2n . Proof. We prove this result for the corresponding tangle on Gr 0 P Γ , so that it then holds via our embedding Gr 0 P ֒→ Gr 0 P Γ . Suppose w = e 1 · · · e 2n and u = f 1 · · · f 2m are loops. Then u w = 2m j=1 δ fj =e • i σ(e • i )e 1 · · · e i−1 [f j+1 · · · f 2m f 1 · · · f j−1 ] e i+1 · · · e 2n . The image of this under ĉ is precisely C e1 · · · C ei−1 [D ei ĉ(u)] C ei+1 · · · C e2n . Using the multilinearity of this and the tangle with respect to u and w, we obtain the result for general g and x. This lemma tells us that g can be thought of as a vector whose components are indexed by how we label the bottom string, and the image of this vector under ĉ is the c-cyclic gradient of g, D c g. Identify Gr 0 P Γ with its dual via the pairings v∈V [ f * , · P Γ ] (v) : Gr 0 P Γ → C, f ∈ Gr 0 [[P Γ ]].(11) Given a linear functional ψ on M , ψ • ĉ is a linear functional on Gr 0 P Γ and so by duality there is an element f ∈ Gr 0 [[P Γ ]] so that ψ • ĉ(x) = v∈V [ f * , x P Γ ] (v). Lemma 3.2. For x ∈ Gr 0 P embedding as u∈L σ x (u)u ∈ Gr 0 P Γ we have x f f = ψ ⊗ ψ op eu∈L 1 V (e) σ x (eu)∂ e ĉ(u) (12) where on the left we sum over the choices of the right-most endpoint of the string connecting x to itself and V (e) ∈ N is |V + | if e ∈ E + and |V − | otherwise. Proof. We first claim that x f embeds as (ψ ⊗ 1)( eu∈L σ x (eu)∂ e ĉ(u)) under ĉ. Indeed, let e 1 u = e 1 e 2 · · · e 2n ∈ L. Then this tangle evaluated at e 1 u instead of x yields 2n j=2 δ ej =e • 1 σ(e 1 )[ f * , e 2 · · · e j−1 P Γ ](t(e 1 ))e j+1 · · · e 2n . (We note that if e j = e • 1 then e 2 · · · e j−1 and e j+1 · · · e 2n are indeed loops).
Now, since f * , e 2 · · · e j−1 P Γ is supported only on t(e 1 ) = s(e 2 ), we have [ f * , e 2 · · · e j−1 P Γ ](t(e 1 )) = ψ(ĉ(e 2 · · · e j−1 )). Consequently the image of the above expression under ĉ is 2n j=2 δ ej =e • 1 σ(e 1 )ψ(ĉ(e 2 · · · e j−1 ))ĉ(e j+1 · · · e 2n ) = (ψ ⊗ 1)(∂ e1 ĉ(e 2 · · · e 2n )) = (ψ ⊗ 1)(∂ e1 u). Summing over general eu ∈ L yields the claim for x. Now, for a ∈ Gr + 0 P we have that f * , a P Γ is the function supported on V + with constant value f * , a P . Hence ψ(ĉ(a)) = |V + | f * , a P , or a f = 1 |V + | ψ(ĉ(a)), where the planar tangle is occurring in P. Similarly for a ∈ Gr − 0 P. Applying this to the output of the tangle in the first claim yields (12) once we note that the components of x in Gr ± 0 P embed as eu∈L± σ x (eu)eu ∈ Gr ± 0 P Γ . Remark 3.3. The element associated to the free quasi-free state ϕ by (11) is the element T L ∞ defined before (3), which we note is distinct from T L ∞ = n≥0 T L n , the element associated to it via the pairing f * , · P on Gr 0 P. This difference is simply a consequence of the relationship between these two pairings for elements of Gr 0 P: we defined v∈V [ f * , x P Γ ](v) = |V + | f * + , x + P + |V − | f * − , x − P . For Q = coef(e 1 , . . . , e n )X e1 · · · X en ∈ P we defined Q R := n≥0 e1,...,en∈E |coef(e 1 , . . . , e n )|R n , and we denote the closure of P with respect to · R by P (R) . Writing Q = n≥0 π n (Q) where π n is the projection onto monomials of degree n, we also defined Q R,σ := n≥0 sup kn∈Z ρ kn (π n (Q)) R , where ρ : P → P is defined by ρ(X e1 · · · X en ) := σ ϕ −i (X en )X e1 · · · X en−1 and ρ(a) = a for a ∈ C. ρ(Q) is called a σ-cyclic rearrangement of Q. The tangle induced by ρ on Gr 0 P Γ is the identity tangle but with the last string rotated clockwise around to the leftmost boundary point of the output disc. Equivalently, the tangle shifts the distinguished interval to the adjacent interval in the counter-clockwise direction.
Let P f inite = {Q ∈ P : Q R,σ < ∞}; then it is easy to see that P ∩ M ϕ ⊂ P f inite and we let P (R,σ) = P f inite · R,σ . We also denote by P (R,σ) c.s. the elements in P (R,σ) which are fixed under ρ. Such elements are called σ-cyclically symmetric and have the same norm with respect to · R and · R,σ . Provided R is larger than all the operator norms X e , e ∈ E, both norms · R and · R,σ dominate the operator norm on M . Hence P (R,σ) ⊂ P (R) ⊂ M for such R. Since X e = ℓ̂(u(e)) + ℓ̂(u(e)) * ≤ (2/(σ(e) + σ(e • )) 1/2 )( e + e • ) < 2(1 + δ 1/4 ), we will usually consider only R ≥ 2(1 + δ 1/4 ). In fact, due to the hypotheses of the transport theorems (cf. Theorem 3.17 in [8] for example) we will usually restrict ourselves to R ≥ 4δ 1/2 > 4 A , where we have used (6). Via the embedding ĉ, the norms · , · R , and · R,σ induce norms on Gr 0 P Γ , which we denote in the same way, and the maps σ ϕ z , z ∈ C, and ρ induce maps on Gr 0 P Γ , again denoted in the same way. Let (Gr 0 P Γ ) (R) := Gr 0 P Γ · R and (Gr 0 P Γ ) (R,σ) := Gr 0 P Γ · R,σ (we will see below that w R,σ < ∞ for all w ∈ Gr 0 P Γ ). We similarly define (Gr 0 P) (R) and (Gr 0 P) (R,σ) . (Gr 0 P Γ ) (R) may be thought of as the subalgebra of Gr 0 [[P Γ ]] of absolutely convergent power series on loops with a radius of convergence of at least R, where a loop of length 2n is given degree 2n (modulo the constants involved in translating from X to C). Similarly, (Gr 0 P Γ ) (R,σ) may be thought of as the subalgebra of Gr 0 [[P Γ ]] of absolutely convergent power series on the loops so that every rotation of its support loops has radius of convergence at least R. We also use the subscripts ϕ and c.s. to denote the corresponding subspaces.
We make the following observations for a loop e 1 · · · e 2n ∈ L n,± : σ ϕ −i (e 1 · · · e 2n ) = 2n l=1 µ(t(e l )) µ(s(e l )) e 1 · · · e 2n = e 1 · · · e 2n , and for 1 ≤ k < 2n ρ k (e 1 · · · e 2n ) = 2n l=2n−k+1 µ(t(e l )) µ(s(e l )) e 2n−k+1 · · · e 2n e 1 · · · e 2n−k = µ(t(e 2n )) µ(s(e 2n−k+1 )) e 2n−k+1 · · · e 2n e 1 · · · e 2n−k . Let ∆ = max v,v ′ ∈V µ(v) µ(v ′ ) < ∞ and note that for e ∈ E C e R = σ(e) + σ(e • ) 2 (X e ± iX e • ) R ≤ 1 + δ 1/2 R,(13) where we used the bound µ(v) µ(v ′ ) < δ for adjacent vertices v, v ′ ∈ V . Thus for w = u∈Ln,± β(u)u ∈ P Γ n,± we have the bound w R ≤ u∈Ln,± |β(u)|(1 + δ 1/2 ) n R 2n , and using (13) we obtain w R,σ ≤ ∆ u∈Ln,± |β(u)|(1 + δ 1/2 ) n R 2n . In particular, for any w ∈ Gr 0 P Γ , w R,σ < ∞. Using (9) and (10) this is equivalent to ψ(D c V #Q) = ψ ⊗ ψ op ⊗ Tr(J c Q) Q ∈ P N .(14) The solution ψ is the free Gibbs state with potential V and is often denoted ϕ V . v x f = x f f ,(15) where on the left we sum over the endpoints of v which are connected to x and on the right we sum over the positions of the right endpoint of the string. Proof. Let y = v x , and suppose x embeds as u∈L σ x (u)u ∈ Gr 0 P Γ . Then by Lemma 3.1 f * , y P = v∈V+ 1 |V + | [ f * , y + P Γ ](v) + v∈V− 1 |V − | [ f * , y − P Γ ](v) = ψ •ĉ 1 |V + | y + + 1 |V − | y − = eu∈L σ x (eu) V (e) ψ(D eĉ (v) ·ĉ(u)), where V (e) = |V + | if e ∈ E + and V (e) = |V − | otherwise. Next applying (14) yields f * , y P = eu∈L σ x (eu) V (e) ψ ⊗ ψ op (∂ eĉ (u)), which is equivalent to the right-hand side of (15) by Lemma 3.2. Recall that in [8] we considered the following potential V 0 = 1 2 e,f ∈E 1 + A 2 ef X f X e , which satisfied DV 0 = X. The free Gibbs state with potential V 0 is the vacuum state ϕ. Rewriting V 0 in terms of the C e via (7) yields ] which satisfy the Schwinger-Dyson planar tangle for potential v close to v 0 with respect to the · R,σ norm. Our convention will be to denote the difference by w = v − v 0 . 
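The arithmetic behind the bound (13) is that σ(e) and σ(e • ) = σ(e) −1 both lie below δ 1/2 , and since one of the two is at most 1 their sum satisfies σ(e) + σ(e) −1 ≤ 1 + δ 1/2 . The following numerical sweep over the allowed range (δ = 2 is an assumed sample value, not data from the text) checks this inequality:

```python
# Check the key inequality behind the bound (13): for sigma in the range
# (delta**-0.5, delta**0.5) permitted by mu(t)/mu(s) < delta for adjacent
# vertices, one has sigma + 1/sigma <= 1 + delta**0.5.
# The value of delta below is an assumed sample value.
delta = 2.0

def sigma_sum(sigma):
    # sigma(e) + sigma(e*) with sigma(e*) = 1/sigma(e)
    return sigma + 1 / sigma

lo, hi = delta ** -0.5, delta ** 0.5
samples = [lo + k * (hi - lo) / 1000 for k in range(1001)]
assert all(sigma_sum(s) <= 1 + delta ** 0.5 + 1e-9 for s in samples)
```

The maximum of σ + 1/σ on this interval occurs at the endpoints, where one of the two terms is below 1 and the other is below δ 1/2 , which is exactly how the estimate is used in the text.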
We will also construct an embedding of Gr 0 P Γ into M taking the edges e ∈ E to noncommutative random variables whose joint law with respect to ϕ is the free Gibbs state with potential V =ĉ(v). V 0 = 1 2 e∈E σ(e)C e C e • , and D c V 0 = U #DV 0 = C. Observe that V 0 =ĉ(v 0 ) where v 0 ∈ Gr 0 P is Transport For the remainder of the paper we fix R ′ > R ≥ 4δ 1 2 , so that the operator norm is dominated by · S for any S ≥ R. The constants obtained in the following will depend only on R, R ′ , and |E|. 4.1. Constructing the transport element. The main theorem of [8] showed that if Z is an N -tuple of random variables in some non-commutative probability (L, ψ) whose joint law ψ Z is the free Gibbs state with potential V , and V − V 0 R,σ is sufficiently small then (W * (Z), ψ) ∼ = (W * (X), ϕ) and the isomorphism is state-preserving. Stated more succinctly, the theorem gives W * (ϕ V ) ∼ = W * (ϕ V0 ) for V − V 0 R,σ sufficiently small. In this section we will show that if v ∈ (Gr 0 P) (R,σ) c.s. with v − v 0 R,σ is sufficiently small, then there is an element satisfying the Schwinger-Dyson planar tangle with potential v. Recall that the map N : P → P is defined by multiplying a monomial of degree n by n, and Σ is its inverse on monomials of degree one or higher. These induce maps on Gr 0 P Γ , which we also denote N and Σ: N (e 1 · · · e 2n ) = 2ne 1 · · · e 2n Σ(e 1 · · · e 2n ) = 1 2n e 1 · · · e 2n , (for n > 0), or for x ∈ P n ⊂ P Γ n N (x) = 2nx Σ(x) = 1 2n x. Lemma 4.1. Let w ∈ (Gr 0 P) (R ′ +1,σ) c.s. and denote W :=ĉ(w). Consider the following map defined on {G ∈ P (R ′ ,σ) c.s. : G R ′ ,σ ≤ 1}: F (G) = − W (C + D c ΣG) − 1 2 e∈E σ(e) (D e ΣG) (D e • ΣG) + m≥1 (−1) m+1 m (1 ⊗ ϕ) • Tr U 2A −1 1 + A U T −1 J c D c ΣG# (J c C#J c D c ΣG) m−1 + m≥1 (−1) m+1 m (ϕ ⊗ 1) • Tr U 2A 1 + A U T −1 J c D c ΣG# (J c C#J c D c ΣG) m−1 . 
Consider the following planar tangles on Gr 0 P Γ : T 1 (g) = v 0 + Σg · · · v 0 + Σg w where the number of discs containing v 0 + Σg varies according to the components of w, T 2 (g) = Σg Σg T 3,m (g) = Σg Σg · · · Σg Σg T L ∞ where there are exactly m discs containing Σg, and T 4,m (g) = T L ∞ Σg Σg · · · Σg Σg where again there are exactly m discs containing Σg. Then on Gr 0 P Γ , F • ĉ = ĉ • T where T is the planar tangle T = −T 1 − 1 2 T 2 + m≥1 (−1) m+1 m (T 3,m + T 4,m ), and convergence is with respect to the · R ′ -norm. Proof. We will prove this equivalence term by term. For w ∈ Gr 0 P and W = ĉ(w), we have that ĉ • T 1 (g) = W (C + D c Σĉ(g)) immediately by Lemma 3.1. For w ∈ (Gr 0 P) (R ′ +1,σ) c.s. , we can sum over the support of w to obtain the equality since convergence is guaranteed by W (C + D c Σĉ(g)) R ′ ≤ W R ′ +1 (cf. Lemma 2.5 in [8]). Let T̃ 2 denote the two-input tangle underlying T 2 . We will show ĉ • T̃ 2 (u 1 , u 2 ) = e∈E σ(e)(D e ĉ(u 1 ))(D e • ĉ(u 2 )). First assume u l , l ∈ {1, 2}, is a delta function supported on the loop e l,1 · · · e l,n l . Then T̃ 2 (u 1 , u 2 ) = n1 j1=1 n2 j2=1 δ e2,j 2 =e • 1,j 1 σ(e 2,j2 )σ(e 1,j1+1 ) 2 · · · σ(e 1,n1 ) 2 σ(e 2,j2+1 ) 2 · · · σ(e 2,n2 ) 2 × e 1,j1+1 · · · e 1,n1 e 1,1 · · · e 1,j1−1 e 2,j2+1 · · · e 2,n2 e 2,1 · · · e 2,j2−1 = e∈E n1 j1=1 δ e1,j 1 =e • σ(e)σ(e • )σ(e 1,j1+1 ) 2 · · · σ(e 1,n1 ) 2 e 1,j1+1 · · · e 1,n1 e 1,1 · · · e 1,j1−1 × n2 j2=1 δ e2,j 2 =e σ(e)σ(e 2,j2+1 ) 2 · · · σ(e 2,n2 ) 2 e 2,j2+1 · · · e 2,n2 e 2,1 · · · e 2,j2−1 Applying ĉ yields ĉ • T̃ 2 (u 1 , u 2 ) = e∈E σ(e)[D e (C e1,1 · · · C e1,n 1 )][D e • (C e2,1 · · · C e2,n 2 )] = e∈E σ(e)(D e ĉ(u 1 ))(D e • ĉ(u 2 )). Using the multilinearity of each side we have for arbitrary g ∈ Gr 0 P Γ ĉ • T̃ 2 (Σg, Σg) = e∈E σ(e)(D e Σĉ(g))(D e • Σĉ(g)), and we note that the left-hand side is ĉ • T 2 (g). Let T̃ 3,m (u 1 , . . . , u m ) = u 1 u 2 · · · u m−1 u m T L ∞ We claim that ĉ • T̃ 3,m (u 1 , . . .
, u m ) = (1 ⊗ ϕ) • Tr U 2A −1 1 + A U T −1 J c D cĉ (u 1 )#J c C#J c D cĉ (u 2 )# · · · #J c C#J c D cĉ (u m ) . First note that because of (8), for each l = 1, . . . , m − 1 and e, f ∈ E we have σ(e l,k ) 2   ×ĉ(e l,j l +1 · · · e l,i l −1 ) ⊗ĉ(e l,i l +1 · · · e l,j l −1 ) = 1≤j l ,i l ≤n j l =i l δ e=e l,j l σ(e l,j l +1 ) 2 · · · σ(e l,n l ) 2 σ(e • l,i l )δ e l,i l =f • [J c C#J c D cĉ (u l )] ef =σ(e • )[J c D cĉ (u l )] e • f = σ(e • )∂ f D e •ĉ(u l ) =σ(e • ) ×ĉ(e l,j l +1 · · · e l,i l −1 ) ⊗ĉ(e l,i l +1 · · · e l,j l −1 ). Also it follows from a simple computation that U 2A −1 1 + A U T −1 (e) = 0 σ(e • ) 3 σ(e) 3 0 e ∈ E + , so that U 2A −1 1 + A U T −1 #J c D cĉ (u 1 ) ef =σ(e • ) 3 [J c D cĉ (u 1 )] e • f = σ(e)∂ f D e •ĉ(u 1 ) = 1≤j1,i1≤n j1 =i1 σ(e • 1,j1 ) 2 δ e=e1,j 1 σ(e 1,j1+1 ) 2 · · · σ(e 1,n1 ) 2 σ(e • 1,i1 )δ e1,i 1 =f •(17) ×ĉ(e 1,j1+1 · · · e 1,i1−1 ) ⊗ĉ(e 1,i1+1 · · · e 1,j1−1 ). Assume each u l is the delta function supported on the loop e l,1 · · · e l,n l . Theñ σ(e l,j l +1 ) 2 · · · σ(e l,n l ) 2 σ(e • l,i l )δ e l,i l =e • l+1,j l+1 × σ(e m,jm+1 ) 2 · · · σ(e m,nm ) 2 σ(e m,im )δ em,i m =e • 1,j 1 × e 1,j1+1 · · · e 1,i1−1 · · · e m,jm+1 · · · e m,im−1 × [T r 0 (e m,im+1 · · · e m,jm−1 · · · e 1,i1+1 · · · e 1,j1−1 )] (s(e m,im+1 )). Make the substitution σ(e m,im )δ em,i m =e • 1,j 1 = σ(e • m,im )δ em,i m =e • 1,j 1 σ(e • 1,j1 ) 2 , and then group the factors σ(e • 1,j1 ) 2 δ e • m,im =e1,j 1 with the factor corresponding to l = 1 in the scalar product in the above equation. Also, group the factor δ e l,i l =e • l+1,j l+1 = δ e • l,i l =e l+1,j l+1 with the factor corresponding to l + 1 rather than l. Finally, recall that if u starts at v then [T r 0 (u)](v) = φ v (c(u)) = ϕ(ĉ(u)). With these changes we havẽ T 3,m (u 1 , . . . 
, u m ) = (1 ⊗ [ϕ •ĉ])     1≤j1,i1≤n j1 =i1 σ(e • 1,j1 ) 2 δ e • m,im =e1,j 1 σ(e 1,j1+1 ) 2 · · · σ(e 1,n1 ) 2 σ(e • 1,i1 ) ×(e 1,j1+1 · · · e 1,i1−1 ⊗ e 1,i1+1 · · · e 1,j1−1 # m l=2     1≤j l ,i l ≤n j l =i l δ e • l−1,i l−1 =e l,j l σ(e l,j l +1 ) 2 · · · σ(e l,n l ) 2 σ(e • l,i l ) × e l,j l +1 · · · e l,i l −1 ⊗ e l,i l +1 · · · e l,j l −1         . Applyingĉ and comparing this to (16) and (17) demonstrates the claimed equivalence. Then using the multilinearity of each side to replace u l with Σg for each l = 1, . . . , m then showŝ c • T 3,m (g) = (1 ⊗ ϕ) • Tr U 2A −1 1 + A U T −1 J c D cĉ (g)# (J σ C#J σ D c Σĉ(g)) m−1 . A similar argument demonstrateŝ c • T 4,m (g) = (ϕ ⊗ 1) • Tr U 2A 1 + A U T −1 J c D Σĉ (g)# (J σ C#J c D c Σĉ(g)) m−1 . Finally, a term by term comparison then yields the equivalence F •ĉ =ĉ • T on Gr 0 P Γ . Using (9) and (10) it is not hard to see that the map F defined in Lemma 4.1 is equivalent to the map considered in Corollary 3.14 of [8] where the W in this latter map is being thought of as a polynomial on the X e (for the purposes of composing with X + DG). Corollary 3.18 of [8] (with N = |E|) then says that there is constant ǫ > 0 so that if W =ĉ(w) for w ∈ (Gr 0 P) (R ′ +1,σ) c.s. with w R ′ +1,σ < ǫ then there exists G ∈ P (R ′ ,σ) c.s. so that the joint law of the N -tuple Y = X + DG is the free Gibbs state with potential V 0 + W . By (14) this is equivalent to joint law of the N -tuple C + D c G satisfying the Schwinger-Dyson equation with potential V 0 + W but with the differential operators D c and J σ . That is, ϕ ((C e + D e G) · Q(C + D c G)) =ϕ ⊗ ϕ op ([∂ e Q](C + D c G)) − ϕ ([D e W ](C + D c G) · Q(C + D c G)) ,(18) where here [Q](P ) for Q ∈ P (R) and P ∈ (P (R) ) |E| means Q evaluated as a power series in the C e at C e = P e . This G = ΣĜ whereĜ is the · R ′ ,σ -norm limit of the sequence G k = (S ΠF ) k (W ). 
Thus if we define g k = (S ΠT ) k (w), then G k =ĉ(g k ) by Lemma 4.1 and hence the · R ′ ,σ -norm limitĝ of {g k } satisfieŝ c(ĝ) =Ĝ. Let g = Σĝ. Definition 4.2. The element g ∈ (Gr 0 P) (R ′ ,σ) c.s. is called the transport element from v 0 to v. Define η : Gr 0 P → (Gr 0 P) (R ′ ) by η(x) = v 0 + g · · · v 0 + g x . From Lemma 3.1 it follows thatĉ • η(x) = [ĉ(x)](C + D c G). Moreover, we claim η(x) ∈ (Gr 0 P) (R) for each x ∈ Gr 0 P. Fix x ∈ Gr 0 P. Since g ∈ (Gr 0 P) (R ′ ) , there is a sequence {h n } n∈N ⊂ Gr 0 P so that g − h n R ′ → 0. Let x n = v 0 + h n · · · v 0 + h n x , then x k ∈ Gr 0 P and η(x) is the · R -limit of the x n by Lemma 2.5 in [8]. It is clear that the element associated to ϕ •ĉ • η via the duality in (11) is T L (v) ∞ = T L ∞ v 0 + g · · · v 0 + g ∈ Gr 0 [[P]], where we sum over the number of input discs with g in them. Define T r (v) 0 (x) := T L (v) ∞ , x P (we note T L (v) ∞ = (T L (v) ∞ ) * since v 0 , g,(R ′ +1,σ) c.s. satisfies v − v 0 R ′ +1,σ < ǫ, there is g ∈ (Gr 0 P) (R ′ ,σ) c.s. so that T L (v) ∞ ∈ Gr 0 [[P] ] defined above satisfies the Schwinger-Dyson planar tangle. Moreover, the mapĉ • η sends Gr 0 P Γ to a subalgebra of W * (C e + D eĝ : e ∈ E), whose joint law with respect to the free quasi-free state ϕ is the free Gibbs state with potential [ĉ(v)](C + D cĉ (g)) =ĉ • η(v). , the requirement that w = v − v 0 be invariant under ρ prevents us from fully generalizing the result in [2]. For example, consider the Temperley-Lieb diagram B = which embeds as u∈L σ B (u)u = e∈E−, f ∈E+ t(e)=s(f ) σ(e)σ(f )ef f • e • ∈ P Γ 2,− . Proposition 2 of [2] can be applied to the perturbation v 0 + t u∈L σ B (u)u, but as this element is not invariant under ρ the above proposition does not apply. 
In fact, symmetrizing t u∈L σ B (u)u with respect to ρ yields t e∈E−, f ∈E+ t(e)=s(f ) σ(e)σ(f )ef f • e • + t e,f ∈E+ s(e)=s(f ) σ(e)σ(f )ee • f f • , The later summation is t times the embedding of the Temperley-Lieb diagram B ′ = , which is the diagram obtained from B by shifting the distinguished interval one interval counter-clockwise, i.e. applying ρ to B. 4.2. Equality of non-commutative probability spaces. Usingĉ to realize Gr 0 P as a subalgebra of M , we let M 0 = W * (ĉ(Gr 0 P)) ⊂ M and M 0,± = W * (ĉ(Gr ± 0 P)). Note that by our choice of R ≥ 4δ 1 2 , · S dominates the operator norm for any S ≥ R and therefore (Gr 0 P) (S) ⊂ M 0 for every S ≥ R. Thus, for v ∈ (Gr 0 P) (R ′ +1,σ) c.s. with v − v 0 R ′ ,σ < ǫ (ǫ as in Proposition 4.3) we have η(x) ∈ (Gr 0 P) (R) ⊂ M 0 for each x ∈ Gr 0 P. Consider M (v) 0 = W * (ĉ • η(Gr 0 P)) ⊂ M 0 and M (v) 0,± = W * (ĉ • η(Gr ± 0 P)) . In this section we show that by making ǫ smaller if necessary we have M 0 = M H e = ue • ∈L σ(e • )σ h (ue • )ĉ(u). Then L(H) =ĉ(h) and for u 1 eu 2 ∈ L (e ∈ E) we have e u 1 u 2 hĉ −→ĉ (u 1 )H eĉ (u 2 ). Proof. The assertion L(H) =ĉ(h) follows immediately from the definition of H and L(H). To see that the output of the planar tangle embeds as stated, one simply notes that the string connecting h to e must have e • as its endpoint in h and contributes a factor of σ(e • ) to the tangle. Theorem 4.6. There exists a constant ǫ > 0 so that for v ∈ (Gr 0 P) (R ′ +1,σ) c.s. with v − v 0 R ′ +1,σ < ǫ, M 0 = Mh k+1 = v 0 −ĝ h k · · · h k , whereĝ = N g ∈ (Gr 0 P) (R ′ ,σ) c.s. and g is the transport element from v 0 to v. We claim that h k ∈ (Gr P 0 ) (R) and if x k = x h k · · · h k ∈ (Gr 0 P) (R) , then η(x k ) ∈ (Gr 0 P) (R) and η(x k ) → x in the · R -norm. Indeed, supposeĜ =ĉ(ĝ) = e∈E ue∈L σ g (ue)ĉ(x)C e . From Equation (15) in Lemma 2.5 of [8] it follows that if f = D cĉ (g) = D c ΣĜ, then f e = ue • ∈L σ(e • )σ g (ue • )ĉ(u). It is easy to see that L(f ) =Ĝ =ĉ(ĝ). 
For each k, define an |E|-tuple of (a priori formal) power series H k in the C e , so that L(H k ) =ĉ(h k ). In particular, H 0 = C since L(C) =ĉ(v 0 ). Then these H k satisfy the recursive relationship H k+1 = C − f (H k ) since by Lemma 4.5, L(H k+1 ) =ĉ(h k+1 ) = V 0 − e∈E e1···er e∈L σ g (e 1 · · · e r e)[H k ] e1 · · · [H k ] er C e = V 0 − e∈E σ(e • )[f (H k )] e • C e = L(C − f (H k )), and the map L is injective. The sequence {U −1 #H k } k∈N (now thought of as power series in X), is precisely the sequence considered in Lemma 2.8 of [8]. In particular (after shrinking ǫ if necessary for f to satify the hypotheses of this lemma) U −1 #H k (Y ) ∈ P (R) |E| so it follows that H k (C + D cĝ ) ∈ P (R) |E| . Lemma 2.8 of [8] also gives that U −1 #H k ∈ P (R) |E| so that H k ∈ P (R) |E| and hence h k R = L(H k ) R ≤ |E| √ δ max e∈E [H k ] e R < ∞. Consequently x k ∈ (Gr 0 P) (R) since P (R) is a Banach algebra. Now, if x embeds as u∈L σ x (u)u ∈ Gr 0 P Γ , then by Lemma 4.5 c(x k ) = e1···er ∈L σ x (e 1 · · · e r )[H k ] e1 · · · [H k ] er thusĉ • η(x k ) = e1···er ∈L σ x (e 1 · · · e r )[H k (C + D cĉ (g))] e1 · · · [H k (C + D cĉ (g))] er ∈ (P) (R)(19) since P (R) is a Banach algebra. Thus η(x k ) ∈ (Gr 0 P) (R) as claimed. Lemma 2.8 of [8] also showed that U −1 #H k (Y ) ∈ P (R) N and U −1 #H k (Y ) → X (here we are evaluating U −1 #H k as power series in X) with respect to the · R -norm. Consequently H k (C + D cĉ (g)) → C (evaluating H k as a power series in C) with respect to the · R -norm. Considering (19) we then havê c • η(x k ) → e1···er ∈L σ x (e 1 · · · e r )C e1 · · · C er =ĉ(x), implying η(x k ) → x as claimed. Finally, let π n : Gr 0 [[P]] → P n,+ ⊕ P n,− be the projection onto the nth component. Write each x k = n≥0 π n (x k ), where this sum converges with respect to the · R -norm. Then η(x k ) = lim N η N n=0 π n (x k ) with respect to the · R -norm. This shows that η(x k ) ∈ M (v) 0 and hence x ∈ M (v) 0 as the · R -limit of the η(x k ). 
The * -automorphism on M is simply the extension of C e → C e + D eĉ (g). Remark 4.7. Because of (4), the embeddingĉ : (Gr 0 P, T r 0 ) ֒→ (M 0 , ϕ) is not trace-preserving. However, restricting to either Gr + 0 P and Gr − 0 P and normalizingĉ by 1 |V±| does yield a trace-preserving embedding. Similarly, 1 |V±|ĉ • η is a trace-preserving embedding of (Gr 0 P, T r (v) 0 ) ֒→ (M (v) 0,± , ϕ). Since it is clear that Theorem 4.6 also gives the equalities M 0,± = M (v) 0,± , we observe that 1 |V±|ĉ and 1 |V±|ĉ • η are distinct embeddings of Gr ± 0 P into B(F ) which generate the same von Neumann algebra. Remark 4.8. Since the proof Theorem 4.6 relied only on operator norm convergence, the result also holds when the von Neumann algebras are replaced with the C * -algebras. 4.3. Tower of non-commutative probability spaces. In this section we recall the embeddings of Gr k P Γ into B(F ) considered in [1], and show that perturbing these embeddings by the transport element g still yields the same von Neumann algebra. For k ≥ 1 consider the mapĉ k : Gr k P Γ → B(F ) defined bŷ c k (uf • k · · · f • 1 e 1 · · · e k ) =l(e 1 ) · · ·l(e k )ĉ(u)l(f k ) * · · ·l(f 1 ) * , where e 1 , . . . , e k , f 1 , . . . , f k ∈ E and uf • k · · · f • 1 e 1 · · · e k ∈ L. We letĉ 0 =ĉ. The reason for the apparent rotation of the edges in the definition ofĉ k is that when we represent x ∈ Gr k P as the diagram x we want to send the strings on the left to operators of the form l(e), the strings on the right to operators of the form l(e • ) * , and the strings on top to operators of the formĉ(e). Because l(f ) * l(e) = δ f =e e 2 Ω,ĉ k is a * -homomorphism from Gr k P Γ with multiplication ∧ k to M k ⊂ B(F ) where M k = span{l(e 1 ) · · ·l(e k )ĉ(u)l(f k ) * · · ·l(f 1 ) * : e 1 · · · e k uf • k · · · f • 1 ∈ L}. 
Also considered in [1] was the trace ϕ k : M k → C defined by ϕ k (·) = δ −k f1,...,f k ∈E µ(s(f 1 )) µ(t(f k )) 1 2 f 1 ⊗ · · · ⊗ f k , · f 1 ⊗ · · · ⊗ f k F , which satisfies ϕ k (ĉ(x)) = v∈V [T r k (x)](v) for x ∈ Gr k P, and the embeddings i . For each k ≥ 0, let M k = W * (ĉ k (Gr k P)) ⊂ M k and M k,± = W * (ĉ k (Gr ± k P)). In [1], the embedding c : C e ∈ E → B(F A ) rather thanĉ was used to define these von Neumann algebras on the GNS space corresponding to the weight φ from Section 2.3. However, since ϕ •ĉ = φ • c these are isomorphic to the M k defined here. Consequently, Theorem 8 of [1] implies that the standard invariant of the subfactors i k−1 k (M k−1,+ ) ⊂ M k,+ is isomorphic to the subfactor planar algebra P. Let v ∈ (Gr 0 P) (R ′ +1,σ) c.s. be sufficiently close to v 0 so that the transport element g from v 0 to v exists. We then define η k on Gr k P by η k (x) = v 0 + g · · · v 0 + g x x ∈ Gr k P Proof. Let g ∈ (Gr 0 P) (R,σ) c.s. be the transport element from v 0 to v. Then the embeddings are simply {ĉ k • η k } k≥0 and the equality of the generated von Neumann algebras follows from Theorem 4.9. The isomorphism L k ∼ = M k,+ follows from the fact that both representations π k andĉ k • η k are trace-preserving. Lemma 3. 2 . 2Given a linear functional ψ : M → C, suppose the element f ∈ Gr 0 [[P Γ ]] associated to ψ as above belongs to the subspace Gr 0 [[P]]. intersection P (R) ∩ M ϕ with the centralizer of M : the elements fixed under the modular automorphism group. = P (R,σ) ∩ M ϕ and further denote by P (R,σ) c.s. 3. 3 . 3The Schwinger-Dyson planar tangle. Let ψ : M → C be a state on the free Araki-Woods factor M and let V ∈ P (R,σ) c.s. , R ≥ 4δ 1 2 . Then ψ is said to satisfy the Schwinger-Dyson equation with potential V if ψ(DV #Q) = ψ ⊗ ψ op ⊗ Tr(J σ Q) Q ∈ P N . Lemma 3. 4 . 4Let ψ be the free Gibbs state with potential V . Assume that V =ĉ(v) for some v ∈ (Gr 0 P)(R,σ)c.s. 
, and that the element f ∈ Gr 0 [[P Γ ]] associated to ψ by the duality in (11) satisfies f ∈ Gr 0 [[P]]. Then the following equivalence of planar tangles holds: Definition 3. 5 . 5For v ∈ (Gr 0 P)(R,σ)c.s. , we say f ∈ Gr 0 [[P]] satisfies the Schwinger-Dyson planar tangle with potential v if (15) holds for all x ∈ Gr 0 P. σ(e • )ee • ∈ Gr 0 P Γ . Since ϕ satisfies with Schwinger-Dyson equation with potential V 0 and T L ∞ is the element associated to it by the duality in (11), we know T L ∞ satisfies the Schwinger-Dyson planar tangle with potential v 0 by the previous lemma.However, this is true by visual inspection within the context of the planar algebra: note thatv 0 = + Hence the Schwinger-Dyson planar tangle holds simply by following the leftmost string attached to x through the diagrams in T L ∞ . In Section 4.1, we construct elements T L (v) ∞ ∈ Gr 0 [[P] 1≤j l ,i l ≤n j l =i l σ(e)δ e l,j l =e σ(f )δ e l,i l =f • and T L ∞ are all self-adjoint), then T r(v) 0 = T r 0 • η.The above observations and Lemma 3.4 immediately imply the following proposition. Proposition 4. 3 . 3There exists ǫ > 0 such that when v ∈ (Gr 0 P) Remark 4 . 4 . 44The Schwinger-Dyson planar tangle on Gr 0 P was solved in Proposition 2 of[2] for potentials of the form v 0 + k i=1 t i B i , B 1 , . . . , B k ∈ Gr 0 P with i |t i | small. While the above proposition allows us to take perturbations v of v 0 in (Gr 0 P Γ ) (R ′ +1,σ) c.s. Lemma 4 . 5 . 45Let R > 0. For H ∈ P (R) |E| , define L(H) := (J c C#H)#C = e∈E σ(e)H e C e • . Suppose h ∈ (Gr 0 P) (R) with zero P 0 component embeds into M aŝ c(h) = e∈E ue∈L σ h (ue)ĉ(u)C e , and define H ∈ P (R) |E| by ⊂ Moreover, there exists a * -automorphism of M which fixes M 0 and takes the free Gibbs state with potentialĉ(v 0 ) to the free Gibbs state with potentialĉ(v). M 0 was already demonstrated at the beginning of this section. 
Towards showing the reverse inclusion, fix x ∈ Gr 0 P and consider the following recursively defined sequence: h 0 = v 0 and (u 1 , u 2 ) = u 1 u 2 } k≥0 are the same as in the tower {M k } k≥0 ; that is, P is recovered as the standard invariant of the tower {MProof. Let {h n } n≥0 ⊂ Gr 0 P be a sequence converging to v 0 + g in with respect to the · R ′ ,σ -norm. Given x ∈ Gr k P, suppose it embeds as uu • 2 u1∈L σ x (uu • 2 u 1 )uu • 2 u 1 where u 1 , u 2 are paths of length k. If u = e 1 · · · e k we letl(u) =l(e 1 ) · · ·l(e k ) and u = e 1 · · · e k . Thenis the · R -norm limit (and hence operator norm limit) of [ĉ(u)](D cĉ (h n )). AlsoThe reverse inclusion follows from the same argument since we showed in the proof of Theorem 4.6 that c(u) is the · R -norm limit of elements of the formĉ • η(u ′ ).The final statements are immediate from the equalities established above, but we also note that they follow from the fact that I k−1 k intertwines η k and η k−1 for each k. Remark 4.10. As with Theorem 4.6, Theorem 4.9 also holds when the von Neumann algebras are replaced with the corresponding C * -algebras.One should think of the embeddingsĉ k • η k , k ≥ 0 as small perturbations of the embeddingsĉ k of Gr k P. Thus, Theorems 4.6 and 4.9 say that when the perturbation is small enough, the von Neumann algebra generated by the Gr k P are the same and we can recover the subfactor planar algebra P as the standard invariant of the subfactors i k−1Recall that we can extend this to a series of traces τ k : Gr + k P → C, k ≥ 0, via (1). Let (H k , π k , ξ k ) be the GNS representation of (Gr + k P, ∧ k ) with respect to τ k , and let L k = π k (Gr k P + ) ′′ ⊂ B(H k ). The inclusion tangles I k−1. Thus when the L k are factors, one can consider the standard invariant associated to these inclusions. 
The following corollary shows that if f satisfies the Schwinger-Dyson planar tangle with a potential v close enough to v_0, then L_k ≅ M_{k,+} for each k ≥ 0 and hence the standard invariant for {L_k ⊂ L_{k+1}}_{k≥0} is simply P.

Corollary. If ‖v − v_0‖_{R,σ} < ǫ, then there exist trace-preserving embeddings (Gr_k P, τ_k) ֒→ (M_k, ϕ_k) for each k, and the von Neumann algebra generated by Gr_k P under this embedding is M_k. Moreover, L_k ≅ M_{k,+} for each k ≥ 0.

References

[1] A. Guionnet, V. F. R. Jones, D. Shlyakhtenko; Random matrices, free probability, planar algebras and subfactors, Quanta of Maths, Clay Math. Proc., vol. 11, Amer. Math. Soc., Providence, RI, 2010, pp. 201-239.
[2] A. Guionnet, V. F. R. Jones, D. Shlyakhtenko, P. Zinn-Justin; Loop models, random matrices and planar algebras, Comm. Math. Phys. 316 (2012), 45-97.
[3] A. Guionnet, D. Shlyakhtenko; Free monotone transport, Invent. Math., to appear; arXiv:1204.2182 (2012).
[4] M. Hartglass, D. Penneys; C*-algebras from planar algebras I: canonical C*-algebras associated to a planar algebra, arXiv:1401.2485 (2014).
[5] V. F. R. Jones; Index for subfactors, Invent. Math. 72 (1983), no. 1, 1-25.
[6] V. F. R. Jones; Planar algebras, 1999.
[7] V. F. R. Jones; The planar algebra of a bipartite graph, Knots in Hellas '98 (Delphi), Ser. Knots Everything, vol. 24, World Sci. Publ., River Edge, NJ, 2000, pp. 94-117.
[8] B. Nelson; Free monotone transport without a trace, Comm. Math. Phys., to appear; arXiv:1311.1196 (2013).
[9] S. Popa; An axiomatization of the lattice of higher relative commutants of a subfactor, Invent. Math. 120 (1995), no. 3, 427-445.
[10] D. Shlyakhtenko; Free quasi-free states, Pacific J. Math. 177 (1997), 329-368.
[11] D. Shlyakhtenko; Free probability, planar algebras, subfactors and random matrices, Proceedings of the International Congress of Mathematicians, Volume III, Hindustan Book Agency, New Delhi, 2010, pp. 1603-1623.

UCLA Mathematics Department
E-mail address: [email protected]
Title: Consistent description of fluctuations requires negative temperatures

Authors: Luca Cerino, Andrea Puglisi, Angelo Vulpiani
Affiliation: Dipartimento di Fisica, Università La Sapienza and CNR-ISC, p.le A. Moro 2, 00185 Rome, Italy

Abstract: We review two definitions of temperature in statistical mechanics, T_B and T_G, corresponding to two possible definitions of entropy, S_B and S_G, known as surface and volume entropy respectively. We restrict our attention to a class of systems with bounded energy and such that the second derivative of S_B with respect to energy is always negative: the second request is quite natural and holds in systems of obvious relevance, i.e. with a number N of degrees of freedom sufficiently large (examples are shown where N ∼ 100 is sufficient) and without long-range interactions. We first discuss the basic role of T_B, even when negative, as the parameter describing fluctuations of observables in a sub-system. Then, we focus on how T_B can be measured dynamically, i.e. averaging over a single long experimental trajectory. On the contrary, the same approach cannot be used in a generic system for T_G, since the equipartition theorem may be spoiled by boundary effects due to the limited energy. These general results are substantiated by the numerical study of a Hamiltonian model of interacting rotators with bounded kinetic energy. The numerical results confirm that the kind of configurational order realized in the regions at small S_B, or equivalently at small |T_B|, depends on the sign of T_B.

DOI: 10.1088/1742-5468/2015/12/p12002
arXiv: 1509.07369
PDF: https://arxiv.org/pdf/1509.07369v1.pdf
Consistent description of fluctuations requires negative temperatures

24 Sep 2015

Luca Cerino, Andrea Puglisi, Angelo Vulpiani
Dipartimento di Fisica, Università La Sapienza and CNR-ISC, p.le A. Moro 2, 00185 Rome, Italy

I.
INTRODUCTION

Two different definitions of temperature in equilibrium statistical mechanics have recently been the subject of an intense debate [1-10], after the publication of experimental measurements of a negative absolute temperature [11,12]. In [11] the authors demonstrated the possibility of preparing a state in which the observed distribution of the modified kinetic energy per atom appears to be inverted, i.e. with the largest population in the high-energy states, yielding a de facto negative absolute temperature. The possibility of a negative absolute temperature has been well known since the theoretical work by Onsager on the statistical hydrodynamics of point vortices [13] and the experimental and theoretical results on nuclear spin systems by Pound, Ramsey and Purcell (see [14-16] for a review and discussion). In those investigations, it was clear that an inverse temperature parameter β ranging over the full infinite real line (−∞, ∞) did not lead to any inconsistency or paradox. Ramsey in 1956 already realised that "the Carathéodory form of the second law is unaltered." [14] A negative absolute temperature appears whenever the microcanonical entropy is non-monotonic in the energy, a condition which can be realized when the total energy has a global maximum, which may happen when the phase space is bounded. There are also cases where the phase space is bounded but the energy diverges: again this may lead to a non-monotonic entropy; an important example is given by point vortices [13,17-21]. It is crucial to highlight that the lack of monotonicity (of entropy vs. energy) is realised if one adopts the simplest definition of microcanonical entropy, which is related to the logarithm of the number of states with a given energy. Since such a definition appears in the so-called "tombstone formula" written on Boltzmann's grave, "S = k log W", it is often referred to as Boltzmann's definition of entropy.
Even if not historically precise [8], we adopt the same convention (but setting k = 1) and call the "Boltzmann entropy" of a system with Hamiltonian H(Q, P) (where Q and P are vectors in R^{dN}, d being the dimension of the system)

S_B(E, N) = log ω(E),   (1)

where ω(E) is the density of states, i.e.

ω(E) = ∫ δ(H − E) d^{dN}Q d^{dN}P = ∂Σ(E)/∂E,   (2)

and Σ(E) is the total "number" of states with energy less than or equal to E, that is

Σ(E) = ∫_{H<E} d^{dN}Q d^{dN}P.   (3)

In definition (1) we have ignored an additive constant, which is not relevant in our discussion. In [8] it is stated that the validity of the second principle of thermodynamics depends on the value of this arbitrary constant. Nonetheless, such an arbitrariness and the consequent paradox can be removed if all the quantities (energies, positions, momenta, times, etc.) are made dimensionless. Propagating the denomination, it is customary to define the "Boltzmann temperature" through

β_B = 1/T_B = ∂S_B(E, N)/∂E.   (4)

Some authors [1,8] have argued that a different definition of microcanonical entropy, proposed by Gibbs, has to be used in statistical mechanics, in order to be consistent with a series of "thermodynamic" requirements and to avoid unpleasant paradoxes. The Gibbs entropy, which is always monotonically increasing, reads

S_G(E, N) = log Σ(E),   (5)

and leads to the Gibbs temperature definition, which is always positive:

β_G = 1/T_G = ∂S_G(E, N)/∂E ≥ 0.   (6)

Let us note that, since T_B is defined directly on the surface of interest (i.e. that at constant energy E), from the point of view of the ergodic approach its use appears rather natural. The Gibbs temperature, on the other hand, enters through an ensemble average in the equipartition formula of textbooks [22]:

⟨x_i ∂H/∂x_j⟩ = δ_{ij} T_G,   (7)

where x_i is any of the components of the vector (Q, P) and the average is taken in the microcanonical ensemble. In Section III, we will discuss the limits of application of formula (7) when the energy is bounded.
We also mention that T_G appears in the theory of Helmholtz monocycles (which had an important role in the development of Boltzmann's ideas on the ergodic theory) for one-dimensional systems [23,24]. In spite of the fact that, in our opinion, the basic features of the different definitions of temperature do not present particular technical or conceptual subtleties, there is a certain confusion in the literature; therefore a general discussion of the topic can be useful. In this paper we present a line of reasoning in which the Boltzmann temperature T_B (positive or negative) is the (unique) proper parameter relevant for the statistical properties of the energy fluctuations, as well as for determining the flux of energy between two systems at different temperatures; in addition, it is measurable, without the appearance of any evident inconsistency. Let us remark that the systems discussed in [8], from which the authors try to show that only T_G is the "good" temperature, are small (N = O(1)) and/or have long-range interactions. In Section II, after presenting the class of physically relevant systems which are the subject of our study, we describe how the Boltzmann temperature T_B naturally describes fluctuations of observables in subsystems, in analogy with the derivation of the canonical ensemble from the microcanonical one. In Section III we discuss dynamical ("ergodic") measurements, which can reproduce T_B but are in general unsuited to measure T_G: in particular we show a possible failure of the equipartition theorem. In Section IV we report a series of numerical results for a model of interacting rotators with bounded kinetic energy, discussing the many practical uses of the Boltzmann temperature. Summary and conclusions are drawn in Section V, together with a critique of some of the arguments used in [8] to rule out the thermodynamic meaning of T_B.

II.
THE RELEVANCE OF THE BOLTZMANN TEMPERATURE

In this section we show, following the standard approach that can be found even in some textbooks, the unavoidable role of T_B in many problems of statistical mechanics.

A. Systems of physical relevance

In the rest of the paper we consider systems made of a finite but large number N ≫ 1 of particles with local interactions, i.e. we exclude long-range potentials and mean-field models. It should be understood that long-range interactions certainly widen the phenomenology of statistical mechanics and may lead to complicated functional dependences for S_B(E, N), e.g. with several maxima or minima, even for large N. Nevertheless, they are not necessary for the discussion of negative temperature and, most importantly, they represent quite a peculiar case where even thermodynamics is not obvious: for instance, it is not evident that the typical Gedankenexperiment of putting in contact two (previously isolated) systems can be realized, as the isolation condition is prevented by the long-range interaction. We also assume that S_B(E, N) is always convex, i.e. d²S_B(E, N)/dE² ≤ 0. This is certainly true in the limit of vanishing interaction and in short-range-interacting systems for large N, since S_B is strictly related to the large deviation function associated to the density of states [35]. Let us stress that these large values of N are not necessarily "thermodynamic" (N → ∞): for instance, in Sec. IV we will exhibit a system that possesses all the required features already at N = 100. In general such a value of N will depend on the specific system, corresponding to situations in which some common approximations (e.g. the Laplace approximation for exponential integrals) can be safely applied. In Sec. II C we discuss in some detail the origin of the convexity of S_B(E, N). It is easy to understand that this assumption implies the validity of the second principle of thermodynamics, as discussed in the next subsection.

B.
Second law and energy flux between two systems in contact

Let us consider a system A of N_A particles described by the variables {Q_A, P_A} and Hamiltonian H_A(Q_A, P_A), a system B of N_B particles described by the variables {Q_B, P_B} and Hamiltonian H_B(Q_B, P_B), and a small coupling between the two, so that the global Hamiltonian is

H = H_A(Q_A, P_A) + H_B(Q_B, P_B) + H_I(Q_A, Q_B).   (8)

If the two Hamiltonians have the same functional dependence on the canonical variables (i.e. they correspond to systems with the same microscopic dynamics, with possibly different sizes N_A and N_B), for large N we can introduce the (Boltzmann) entropy per particle

S_B(E, N) = N S(e),  e = E/N,   (9)

with S(e) a convex function, identical for systems A and B. Let us now suppose that systems A and B have, respectively, energy E_A = N_A e_A and E_B = N_B e_B, and the corresponding inverse Boltzmann temperatures β_B^{(A)} and β_B^{(B)}. When the two systems are put in contact, a new system is realized with N = N_A + N_B particles. Let us call a = N_A/N the fraction of particles from the system A. We have that the final energy is E_f = E_A + E_B = N e_f, where e_f = a e_A + (1 − a) e_B, and the final entropy is

S_B(E_f, N) = N S(e_f) ≥ N_A S(e_A) + N_B S(e_B) = N [a S(e_A) + (1 − a) S(e_B)].   (10)

The previous inequality follows from the convexity assumption for S(e), which implies

S(a e_A + (1 − a) e_B) ≥ a S(e_A) + (1 − a) S(e_B).   (11)

The final inverse temperature β_B^{(f)} is intermediate between β_B^{(A)} and β_B^{(B)}; e.g. if e_B > e_A (that is, β_B^{(A)} > β_B^{(B)}) then

β_B^{(B)} < β_B^{(f)} < β_B^{(A)}.   (12)

The energy flux obviously goes from smaller β_B (hotter) to larger β_B (colder). The consequence of convexity is that β_B(E) is always decreasing, and a negative value does not lead to any ambiguity. Confusion may arise from the fact that T_B < 0 is, for the purpose of establishing the energy flux, hotter than T_B > 0. However, if β_B is used, the confusion is totally removed [14].
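As an illustration (our own sketch, not from the paper), take the two-level toy system, whose entropy per particle in the large-N limit is S(e) = −e ln e − (1−e) ln(1−e) for e ∈ (0, 1), so that β_B(e) = S′(e) = ln[(1−e)/e] is negative above half filling. Putting two such systems in contact, one can check the convexity inequality (11) and the ordering (12), even when one of the two starts at negative T_B:

```python
from math import log

def S(e):
    """Entropy per particle of independent two-level units (Stirling limit of log C(N, eN)/N)."""
    return -e*log(e) - (1 - e)*log(1 - e)

def beta(e):
    """β_B(e) = S'(e) = ln((1-e)/e); negative for e > 1/2."""
    return log((1 - e)/e)

a = 0.3                      # fraction of particles belonging to system A
eA, eB = 0.2, 0.9            # energies per particle; eB = 0.9 means β_B^(B) < 0
ef = a*eA + (1 - a)*eB       # final energy per particle after contact

# Eq. (11): convexity of S(e) means the total entropy cannot decrease
assert S(ef) >= a*S(eA) + (1 - a)*S(eB)

# Eq. (12): the final β is intermediate; energy flowed from the
# negative-T_B system (hotter) to the positive-T_B one (colder)
assert beta(eB) < beta(ef) < beta(eA)
print(beta(eA), beta(ef), beta(eB))
```

Note that the ordering in (12) holds with no sign ambiguity only because it is expressed in terms of β_B, not T_B.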
We also briefly discuss a particularly interesting case with different Hamiltonians. Suppose that for the system A negative temperatures can be present, whereas system B has only positive temperatures; it is quite easy to see that the coupling of the system A at negative temperature with the system B at positive temperature always produces a system with final positive temperature. Indeed, at the initial time the total entropy is

S_I = S_A(E_A) + S_B(E_B),   (13)

while, after the coupling, it will be

S_F = S_A(E′_A) + S_B(E′_B),   (14)

where E′_A + E′_B = E_A + E_B and, within our assumptions, E′_A is determined by the equilibrium condition [22] that S_F takes the maximum possible value, i.e.

β_A = ∂S_A(E′_A)/∂E′_A = β_B = ∂S_B(E′_B)/∂E′_B.   (15)

Since β_B is positive for every value of E′_B, the final common temperature must also be positive. The above conclusion can also be found, without a detailed reasoning, in some textbooks [25,26].

C. Subsystems

Let us consider a vector X in R^{2dN₁} (with N₁ < N), that is, a subsystem of the full phase space (Q, P), and let us indicate with X̃ in R^{2d(N−N₁)} the remaining variables. We have

H = H₁(X) + H₂(X̃) + H_I(X, X̃)   (16)

with an obvious meaning of symbols. Let us consider the case N ≫ 1 and N₁ ≪ N. In the microcanonical ensemble with energy E, the probability density function (pdf) of the full phase space (Q, P) is

P(Q, P) = δ(H(Q, P) − E)/ω(E, N).   (17)

The pdf of X can be obtained from the latter by integrating over X̃.
If the Hamiltonian H_I(X, X̃) is negligible (a consequence of our assumption of non-long-range interactions), then we have

P(X) ≃ ω(E − H₁(X), N − N₁)/ω(E, N).   (18)

It is now possible to exploit the definition of S_B and get

ω(E, N) = e^{S_B(E,N)},   (19)

ω(E − H₁(X), N − N₁) = e^{S_B(E−H₁(X), N−N₁)} ∝ e^{S_B(E, N−N₁) − β_B(E) H₁(X)},   (20)

which, together with (18), leads to

P(X) ∝ e^{−β_B H₁(X)}.   (21)

When H₁ is bounded (as in our assumptions), the previous simple derivation can be done irrespective of the sign of β_B. It is immediately clear from the above argument that T_B is the temperature ruling the statistics of fluctuations of physical observables in a subsystem. For instance, the pdf of the subsystem (i.e. the canonical ensemble) energy E₁ reads

P(E₁, N₁) ∝ ω(E₁, N₁) e^{−β_B E₁} ∝ e^{[S_B(E₁,N₁) − β_B E₁]}.   (22)

Of course the above result holds in the (important) case where the two subsystems are weakly interacting and H₁ ≪ E. Therefore, for e₁ = E₁/N₁, one has

P(e₁, N₁) ∝ e^{N₁[S(e₁) − β_B e₁]},   (23)

which is a large deviation law whose Cramér function C(e₁) is C(e₁) = β_B e₁ − S(e₁) + const. From general arguments of probability theory, we know that, if a large deviation principle holds, d²C(e₁)/de₁² ≥ 0, so d²S(e₁)/de₁² ≤ 0. The validity of the large deviation principle can be easily shown for non-interacting systems. For weakly interacting systems it is quite common and reasonable, and can be stated under rigorous hypotheses [27,28].

D. The generalised Maxwell-Boltzmann distribution

The extreme case of the above considerations is when N₁ = 1, that is to say, the fluctuations of a single degree of freedom (e.g. a momentum component of a single particle) are observed. This becomes interesting when the Hamiltonian has the form

H = Σ_{n=1}^{N} g(p_n) + Σ_{n,k} V(q_n, q_k),   (24)

where the variables {p_n} are bounded and the same happens for the function g(p).
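The counting behind Eqs. (18)-(21) can be checked exactly in the two-level toy system (again our own sketch, not from the paper): enumerating the states of the remaining N−1 units gives the marginal of a single unit, and its population ratio reproduces the Boltzmann factor e^{−β_B}, with β_B obtained as a discrete derivative of S_B = log ω, including in the negative-temperature region.

```python
from math import comb, log, exp

N, E = 200, 140          # above mid-band, so β_B < 0 here
def omega(n, e):
    """ω(E, N) for N two-level units with E excited: a binomial coefficient."""
    return comb(n, e)

# exact marginal of one unit, Eq. (18) with H1 = energy (0 or 1) of that unit
p_up = omega(N - 1, E - 1) / omega(N, E)
p_down = omega(N - 1, E) / omega(N, E)

# β_B from a centered discrete derivative of S_B(E) = log ω(E)
beta_B = 0.5*(log(omega(N, E + 1)) - log(omega(N, E - 1)))

# Eq. (21): population ratio ≃ Boltzmann factor; with β_B < 0 the
# high-energy state is the *more* populated one
print(p_up / p_down, exp(-beta_B), beta_B)
```

The two printed ratios agree up to O(1/N) corrections, and the inverted population (p_up > p_down) is precisely the signature of a negative β_B.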
Repeating the arguments in the previous subsection, one may compute the probability density for the distribution of a single momentum p, obtaining P (p) ≃ ω(E − g(p), N − 1) ω(E, N ) ∝ e −βB g(p) ,(25) which, again, is valid for both positive and negative β B . We mention that in the experiment in [11], the above recipe has been applied to measure both positive and negative system's temperatures. From Eqs. (22) and (25) the true deep meaning of the (Boltzmann) temperature is quite transparent: it is a quantity which rules the pdf of energy of a subsystem (or the momentum of a single particle). Let us note that since T B is associated to the large microcanonical system (in physical terms the reservoir) it is a non-fluctuating quantity [29] also for each sub-system and, in general, for non-isolated systems. In the conclusions, we discuss again such an aspect which is not always fully understood, see e.g. Ref. [8] E. Temperature and order In usual statistical mechanics, low temperatures -or, better, high values of inverse temperature -are usually associated to the possibility of some kind of order, the most noticeable example given by phase transitions. Intuitively, one would expect such a situation whenever ω(E) is relatively small, which usually corresponds to regions where |β B | is large irrespective of the temperature's sign. A famous example where such an order at negative (small) temperatures was observed is that of pointlike vortices discussed by Onsager in [13]. 
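The generalised Maxwell-Boltzmann distribution of Eq. (25) is easy to explore numerically. With g(p) = 1 - cos p on the bounded interval [-pi, pi), the weight e^{-beta_B g(p)} is normalizable for either sign of beta_B; a negative beta_B simply piles the probability near the top of the single-particle energy band. A minimal sketch (the values beta = +/-2 are arbitrary):

```python
import math

def mean_g(beta, n=20000):
    """Average of g(p) = 1 - cos(p) under P(p) ~ exp(-beta * g(p)), p in [-pi, pi)."""
    num = den = 0.0
    for i in range(n):                      # midpoint rule on a uniform grid
        p = -math.pi + (i + 0.5) * 2.0 * math.pi / n
        g = 1.0 - math.cos(p)
        w = math.exp(-beta * g)
        num += g * w
        den += w
    return num / den

print(mean_g(2.0), mean_g(-2.0))
```

For beta > 0 the mean "kinetic energy" sits below the band centre g = 1, for beta < 0 above it; the exact symmetry ⟨g⟩_{-beta} = 2 - ⟨g⟩_{beta} follows from the substitution p -> p + pi.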
The system, obtained as a particular limit from two-dimensional Euler equations, describes N points of vorticities {Γ 1 , ..., Γ N } in a two-dimensional domain Ω: the equation of motions of the coordinates (x n , y n ) of the n-th point vortex are shown to be (see for instance [30]) Γ i dx i dt = ∂H ∂y i , Γ i dy i dt = − ∂H ∂x i(26) with Hamiltonian H = i =j Γ i Γ j G(r i,j )(27) where G(r) is the Green function of the Laplacian in Ω: in the infinite plane one has G(r) = −1/4π ln r where r i,j = (x i − x j ) 2 + (y i − y j ) 2 . The canonical variables in this case are q i = |Γ i |x i , p i = |Γ i | sign(Γ i ) y i(28) Onsager showed that if the domain of Ω is bounded, then negative T B are achieved at large values of the energy. At large energies a particular spatial order appears too: clusters of vortices with the same sign of the vorticity are the structures most easily found. It is interesting to notice that T B < 0 (and the corresponding clusterization) is not a peculiarity of the divergence of G(r) in r = 0, nor of the long range nature of the interaction: indeed, it can be obtained with any arbitrary G(r) having a maximum (even finite) in r = 0, and vanishing at large r, provided that the domain is bounded. The presence of spatial order at high values of energy, in the form of discrete breathers, has been observed also in the discrete non-linear Schrödinger equation and analogous systems [10,31]. In Section IV we introduce a different, in a way simpler, model which still exhibits spatial order at small negative temperatures. III. HOW TO MEASURE TB AND TG The definitions of β B and β G given in Eqs. (4) and (6) are based on the functional dependence of the phase space occupations ω(E) and Σ(E) upon the energy. 
In a real or numerical experiment it may be cumbersome or even impossible to make use of those definitions to measure the two temperatures: for instance, an empirical estimate of ω(E) (and therefore of Σ(E)) will always be limited by the available statistics (number of independent measurements of E) and therefore cannot provide a clear answer, for both β B and β G , in the interesting regimes where ω(E) ∼ 0. On the other hand it has been shown [32] that β B can be obtained as a microcanonical average of a certain observable. The recipe is the following β B =< R(X) > , R(X) = ∇ · ∇H |∇H| 2(29) where ∇ stands for the vector of derivative operators along the degrees of freedom in the full phase space X ≡ (Q, P). From (29) one has, assuming the ergodicity, that β B can be computed with a molecular dynamics simulation, and, at least in principle, by a long-time series from an experiment. It is interesting to notice that such a kind of recipe does not exist for S B (E, N ) or S G (E, N ) [32]. It is clear that, in view of the considerations in Sections II C and II D, one may always measure fluctuations of appropriate observables, such as subsystem's energy or single particle momentum, to get an estimate of T B . Coming to β G , a way, even discussed in textbooks and considered sometimes rather important [8], to approach the problem of its measurement is via the equipartition theorem, which states x i ∂H ∂x j = δ ij T G .(30) However the usual derivation of Eq. (30) implies the possibility to neglect boundary terms in an integration by parts. Such a possibility is challenged in the class of systems with bounded energy and phase space that we are considering. 
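Rugh's recipe, Eq. (29), can be tried on a case where the answer is known in closed form. For the ideal-gas Hamiltonian H = sum_i p_i^2/2 (an assumption made here only to have an exact benchmark, not the system studied below), omega(E) is proportional to E^{N/2-1}, hence beta_B = (N/2 - 1)/E, and a finite-difference evaluation of R(X) on the energy shell reproduces it:

```python
import math
import random

random.seed(0)
N, E = 50, 10.0

def grad_H(p):
    # for H = sum_i p_i^2 / 2 the gradient is just p itself
    return p

def R(p, h=1e-5):
    """Rugh's observable: divergence of grad H / |grad H|^2 (central differences)."""
    tot = 0.0
    for i in range(len(p)):
        for sign in (1.0, -1.0):
            q = list(p)
            q[i] += sign * h
            g = grad_H(q)
            tot += sign * (g[i] / sum(x * x for x in g)) / (2.0 * h)
    return tot

# microcanonical average over points drawn uniformly on the shell |p|^2 = 2E
samples = []
for _ in range(20):
    v = [random.gauss(0.0, 1.0) for _ in range(N)]
    scale = math.sqrt(2.0 * E) / math.sqrt(sum(x * x for x in v))
    samples.append([x * scale for x in v])

beta_est = sum(R(p) for p in samples) / len(samples)
beta_exact = (N / 2 - 1) / E
print(beta_est, beta_exact)
```

In a molecular dynamics setting the average over shell samples would be replaced by a time average along a trajectory, assuming ergodicity as in the text.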
In particular it is easy to show that (30) does not hold under the simultaneous realization of the following conditions: • bounded space of the canonical variables; • bounded derivatives of the Hamiltonian ∂H ∂xj ; • bounded energy from above and below: E m ≤ E ≤ E M ; • vanishing density of states at the boundaries, i.e. ω(E M ) = 0. Given such conditions, one has that, on one side, T G (E) = Σ(E) ω(E)(31) diverges when E → E M . On the other side, x i ∂H ∂xj is limited, resulting in a contradiction. A failure or the equipartition formula Eq. (30) is also possible in systems where there are no negative temperatures, i.e. T G ≃ T B > 0 for all E. Consider, for instance, the following Hamiltonian H = N n=1 p 2 n 2 + ǫ N n=1 (1 − cos(φ n − φ n−1 ))(32) where φ n ∈ [−π, π). For large E, i.e. E ≫ ǫN , the contribution to Σ(E) of the variables {φ n } does not depend too much on the value of E, so that Σ ǫ (E) ≃ Σ 0 (E) ∝ E N/2 ,(33) and T G ≃ 2E/N and, for large N , T B = T G + O(1/N ). On the other hand it is easy to see that φ n ∂H ∂φ n ≤ 2πǫ ,(34) and, therefore, the equipartition formula φ n ∂H ∂φn = T G does not hold for large value of E and N . 2N (1 + ǫ)). The parameters of the system are: N = 100 and ǫ = 0.5. IV. NUMERICAL RESULTS FOR A SYSTEM WITH NEGATIVE TEMPERATURE In this Section we present a detailed study of a system composed of N "rotators" with canonical variables φ 1 , ..., φ N , p 1 ...p N with all φ i and p i defined in [−π, π), and with Hamiltonian H(φ 1 , . . . , φ N , p 1 , . . . , p N ) = N n=1 [1 − cos(p n )] + ǫ N n=1 [1 − cos(φ n − φ n−1 )].(35) Choosing, as boundary condition, φ 0 = 0 guarantees that the only conserved quantity by the dynamics is the total energy E. The equations of motion for the rotators can be readily obtained applying Hamilton's equations to Eq. (35):φ n = sin(p n ), p n = −ǫ (sin(φ n − φ n−1 ) + sin(φ n − φ n+1 )) . 
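Since the Hamiltonian of Eq. (35) is bounded, the density of states omega(E) can be probed by uniform sampling of the phase space. The sketch below does this for the decoupled case epsilon = 0 with a small chain (N = 10 and the bin count are assumed values chosen for speed): the histogram of E peaks near E_M/2 and falls off toward both E = 0 and E = E_M, so beta_B is negative in the upper half of the spectrum:

```python
import math
import random

random.seed(1)
N, M, bins = 10, 200000, 40
E_M = 2 * N                      # maximum energy of sum_n (1 - cos p_n)

counts = [0] * bins              # histogram of the energy over [0, E_M]
for _ in range(M):
    E = sum(1.0 - math.cos(random.uniform(-math.pi, math.pi)) for _ in range(N))
    counts[min(int(E / E_M * bins), bins - 1)] += 1

peak = max(range(bins), key=lambda b: counts[b])
print(peak, counts[peak], counts[5], counts[35])
```

The reconstructed omega(E) is maximal around E close to E_M/2 and tiny near the edges of the spectrum, which is exactly the regime where, as noted below, uniform sampling becomes unreliable and dynamical measurements are needed.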
It is immediate to verify that the energy has a maximum value E_M = 2N(1 + ǫ), which is realised when p_n = π and φ_n − φ_{n−1} = π for every n. When ǫ = 0 it is immediate to see that the Hamiltonian in Eq. (35) implies negative Boltzmann temperatures. Indeed, at small energy one has 1 − cos(p_n) ≃ p_n²/2, so that Σ(E) ≃ C_N E^{N/2}, ω(E) ≃ (N/2) C_N E^{N/2−1} (37) with C_N = (2π)^N π^{N/2}/Γ(N/2+1). Close to E_M = 2N one has 1 − cos(p_n) ≃ (π − p_n)²/2; therefore, when E approaches E_M, Σ(E) = Σ(E_M) − (2π)^N ∫_{E<H<E_M} ∏_{n=1}^{N} dp_n ≃ Σ(E_M) − (2π)^N ∫_{Σ_n (π−p_n)²/2 < (E_M−E)} ∏_{n=1}^{N} dp_n = Σ(E_M) − C_N (E_M − E)^{N/2} (38) and therefore ω(E) ≃ (N/2) C_N (E_M − E)^{N/2−1}. (39) In conclusion, we have that ω(E) = 0 if E = 0 or E = E_M, which implies a maximum in between and a region (at high energies) with negative β_B. The previous scenario is expected to hold also in the presence of a small interaction among the rotators and can be numerically confirmed with a sampling of the phase space (see Fig. 1): random configurations of the system are extracted with a uniform distribution over the phase space, and ω(E) is reconstructed by counting the number of configurations lying in a small interval of width δE around the energy E. It is rather evident from Fig. 1 that the density of states ω(E) has a maximum in Ẽ ≈ E_M/2; it is an increasing function for E < Ẽ, whence T_B > 0; it decreases for E > Ẽ, whence T_B < 0. Unfortunately, such a sampling is reliable only in a narrow region around Ẽ: indeed, there are very few configurations with energies much larger or smaller than Ẽ and, therefore, there is an extremely small probability to extract such configurations with this procedure. For this reason, we have performed dynamical measures through numerical simulations of the motion of the system: the integration of Eqs. (36) is done with the usual Verlet scheme with a time step ∆t = 10⁻³.
A.
Measure of TB Measurements of the Boltzmann temperature are done with the two methods discussed in the previous Sections. In particular, by computing the following average (over a single trajectory of the system) ρ(p) = lim τ →∞ 1 N τ τ 0 dt N i=1 δ (p i (t) − p) ,(40) for different values of p, and assuming that the system is ergodic, we recover the single-particle-momentum probability density function P (p), Eq. 1 T R (t) = 1 t t 0 dt ′ R(X(t ′ )),(41) for E = E + and E = E − . These two quantities converge, for large t, to an asymptotic value representing an estimate of the inverse Boltzmann temperature β B of the system. This value, as expected, is positive for E = E + and negative for E = E − : moreover, the values are in very good agreement with the slopes of the single particle distribution function, as shown by the dashed and solid lines in Fig. 2. B. Equivalence of ensembles and the equipartition formula Let us briefly discuss the problem of the equivalence of ensembles. In the usual treatment of textbooks one starts from Eq. (23): assuming that S(e) is convex and performing a steepest descent analysis, for large N , one obtains the canonical functions from the (Boltzmann) microcanonical ones, e.g.: T B (e)S(e) = e − f (T B (e)),(42) where f (T ) is the free energy per particle in the canonical ensemble. In addition the energy fluctuations are negligible. In such a derivation, the relevant point is only the convexity of S(e) and nothing about its first derivative is asked. Therefore, the equivalence of ensembles naturally holds under our hypothesis even for negative T B . Since T B and T G can be different even for large N , as in our model defined with Eq. (35), it is evident that T G is not relevant for the ensemble equivalence. A common way [8] to measure the Gibbs temperature is by means of the equipartition formula, Eq. (30): for the Hamiltonian in Eq. (35) one should get p k sin p k E = T G (E),(43) for every 1 ≤ k ≤ N . 
In the present subsection, we use the notation E to denote the average in the microcanonical ensemble, in order to distinguish it from a canonical average β which is useful to get some analytic expressions and where Z(β) is the partition funcion and β the (external) inverse temperature, that can be either positive or negative: if such a distribution is derived from a larger isolated system, as already discussed in Section II C, the temperature in the canonical ensemble is precisely the Boltzmann temperature of the whole system. A simple explicit expression (see details of analogous calculations in Ref. [33]) can be derived for the mean energy U (β) = H β = N 1 + ǫ − I 1 (β) I 0 (β) − ǫI 1 (βǫ) I 0 (βǫ) ,(45) where I 0 (x) and I 1 (x) are, respectively, the zeroth and the first modified Bessel function of the first kind. Analogously, one can get an analytic formula for the equipartition function p sin(p) β = 1 β − e −β βI 0 (β) .(46) Let us remark that Eqs. (45) and (46) hold for both positive and negative β. In Fig. 3 we report the plot of the parametric curve (U (β), p sin(p) β ) obtained by varying β both in the positive and in the negative region of the real axis. This curve is then compared with measures of p sin(p) E computed from molecular dynamics simulations in the microcanonical ensemble at different values of the energy E (Fig. 3). Such a comparison clearly shows that the results obtained in the two different ensembles are identical, a transparent evidence that the equivalence of ensemble already exists for this system quite far from the thermodynamic limit (N = 100). Fig. 3 also shows that the equipartition formula cannot be used to measure the Gibbs temperature: indeed, as already pointed out in Section III, the equipartition theorem can fail if the density of states ω(E) vanishes. This is the case of our system (Fig. 
1), where T G = Σ(E)/ω(E) should diverge for E → 2N (1 + ǫ): on the other hand the results obtained in the canonical and in the microcanonical ensemble clearly indicate that p sin(p) E → 0 as E → 2N (1 + ǫ). in E = E M , i.e. that there is a small number of microscopic configurations corresponding to large values of E. In particular, the maximum of the energy E M = 2N (1 + ǫ) is attained by the unique microscopic state where, for every n, p n = π and φ n − φ n−1 = π; that is, where all the rotators are fixed (φ = sin π = 0) and the distance among two consecutive rotators is ∆φ = π. As a consequence, since φ 0 = 0, all the particles with even index (n = 0, 2, 4 . . .) must be at φ = 0 and the others (n = 1, 3, . . .) in φ = π. At smaller values of E < ∼ E M , see Fig. 4 B, such considerations can be extended, yielding a very similar situation: even and odd rotators must be close, respectively, to φ = 0 or φ = π. Let us note that an ordered phase exists whenever, at a given energy E, the number of corresponding configurations is small, i.e. when ω(E) vanishes: for instance, the clustering can also be observed at small energies, when the rotators accumulate around φ = 0, in order to minimize the interaction energy, see Fig. 4 B. The sign of the Boltzmann temperature plays a crucial role in this context, defining the features of the coherent phase. Indeed, in analogy with the single-particle-momentum distribution, it is easy to show that ρ(φ i − φ i−1 ) ∝ exp −β B 1 − cos(φ i − φ i−1 ) .(47) When E → E M or E → 0, the inverse temperature β B diverges and, depending on the sign of β B , the distribution Eq. (47) peaks around φ i − φ i−1 = 0 or φ i − φ i−1 = π, see Fig. 4 A. Let us stress that not every state with negative temperature is spatially ordered: the necessary condition is a small corresponding phase space volume, which implies a very high energy or, equivalently, a very small negative temperature. The same argument applies to small positive temperatures. 
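The closed forms in Eqs. (45)-(46) are simple to evaluate: I_0 has a rapidly convergent power series, and Eq. (46) can be checked against a direct quadrature of the single-rotator canonical average, for a positive and a negative beta alike. A sketch (the series implementation of I_0 and the values beta = +/-1.5 are ours):

```python
import math

def I0(x, terms=60):
    # modified Bessel function of the first kind, order 0, by its power series
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (x / 2.0) ** 2 / ((k + 1) ** 2)
    return s

def psinp_formula(beta):
    # Eq. (46): <p sin p>_beta = 1/beta - e^{-beta} / (beta I0(beta))
    return 1.0 / beta - math.exp(-beta) / (beta * I0(beta))

def psinp_quadrature(beta, n=20000):
    # direct canonical average with weight exp(-beta (1 - cos p)), p in [-pi, pi)
    num = den = 0.0
    for i in range(n):
        p = -math.pi + (i + 0.5) * 2.0 * math.pi / n
        w = math.exp(-beta * (1.0 - math.cos(p)))
        num += p * math.sin(p) * w
        den += w
    return num / den

for beta in (1.5, -1.5):
    print(beta, psinp_formula(beta), psinp_quadrature(beta))
```

The two evaluations agree for both signs of beta, confirming that Eqs. (45)-(46) hold on the whole real beta axis, as stated in the text.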
Of course, if negative temperatures appear, they signal a reduction of phase space with increasing energy, and therefore announce a more ordered structure at higher energy.

V. CONCLUSIONS

In this paper we have given a series of arguments to support the thesis of the Boltzmann temperature T_B as a useful parameter to describe the statistical features of a system with many particles and short-range interactions, even when it takes negative values. Let us draw our conclusions with a series of remarks on the role of the negative temperature and some comments on recent papers. We have shown that the temperature T_B is the proper quantity which describes the distribution of the energy fluctuations in the canonical ensemble. It also enters in an immediate generalization of the Maxwell-Boltzmann distribution to the case of a "kinetic energy" which is not a quadratic function of momentum. For a particular model we have also demonstrated that at small |T_B| (for both positive and negative values) some kind of spatial order induced by interactions appears, whose qualitative traits depend upon the temperature's sign. If the microcanonical entropy S(e) is a convex function, independently of the sign of T_B, there is no ambiguity in determining the flux of energy: it always goes from the hotter system, i.e. with smaller β_B, to the colder one (with larger β_B). It should be recalled that the convexity of S(e) can be violated only for very small systems or systems with long-range interactions, both being well-known cases that can violate thermodynamic requirements. From a physical point of view it is possible to obtain the canonical ensemble from the microcanonical one only for large systems with short-range interactions. In such a class of systems, if N ≫ 1, S(e) is convex and it is easy to obtain the equivalence of the ensembles.
Such a property is a fundamental requirement to obtain equilibrium thermodynamics, where there is no difference between thermostatted and isolated macroscopic systems. It is worth emphasizing that the equivalence of the ensembles only holds if one adopts the Boltzmann definition of entropy: for this reason, in systems exhibiting negative temperatures, where S B and S G are no longer equivalent in the large N limit, thermodynamic can be recovered for N → ∞ only through the Boltzmann formalism. In systems with few components and/or with long range interactions, one can still define a canonical ensemble at a formal level (i.e. assume that the phase space distribution is ∝ e −βH ), and then wonder about the equivalence of the ensembles. However such a formal mathematical approach, in our opinion, has no physical meaning. Since in presence of long range interactions (or equivalently a system with N = O(1)) it is not possible to make a clear distinction between the system and the reservoir, it is not possible to construct systems following a canonical distribution. For the same reason the question of the flux of energy among two systems appears to be meaningless in those cases. Following Rugh [32], T B can be computed via a molecular dynamics simulation, and (at least in principle) from the data of an experiment. The microcanonical formula (30), which, in most cases, allows for a practical definition of T G , can fail in systems with negative T B , therefore, as far as we know, at variance with T B , there is not a general method to compute T G in an experiment. We underline that the counterexamples used in [8] to support the claimed inconsistency of the use of T B are based on systems with very few degrees of freedom and non convex S(e). Let us note that the system in eq. (71) of [8] is nothing but the system considered in our Section IV, Eq. (35), with N = 1 and ǫ = 0: the claimed strange behavior of T B is present only if N = O(1). 
On the contrary for N ≫ 1 as a consequence of the convexity of S(e) one has a quite natural scenario, as discussed above. In a similar way we have shown that the consistency of T G with the microcanonical formula fails for large N . In the microcanonical ensemble the temperature T B is a function of the total energy E. In the canonical ensemble the temperature T B is a mere property of the reservoir and does not depend on the microscopic configuration of the system. In [8], see Sect. 3.D, the wrong concept of temperature (in non-isolated (sub)-systems) depending upon the energy of the microscopic configuration, see their Eq. (31), is used to claim the inconsistency of T B . Such confusion seems to be persistent, see [29] for a discussion of the topic of the (non existing) fluctuations of temperature. In conclusion our analysis, that applies to a large class of systems with many degrees of freedom and short-ranged interactions, shows that the Boltzmann temperature has the following properties: i) it is the proper quantity ruling the fluctuations of energy of a sub-system; ii) it can be measured by means of time-averages of a suitable observable; iii) it rules the direction of the fluxes of energies between two coupled systems at different initial temperatures. About the Gibbs temperature, we can mention that: i) the Gibbs entropy is an adiabatic invariant (although a mathematically rigorous proof exists only for one-dimensional systems); ii) the microcanonical formula for equipartition in general is not valid therefore -at variance with T B -a simple way to measure T G is not available. We note that the differences between T B and T G can survive for large N , even when the ensembles are equivalent in the thermodynamic limit. FIG. 1 : 1Phase space sampling: we report the reconstruction of the density of states ω(E) and its integral Σ(E) = E 0 dE ′ ω(E ′ ). The two functions are normalized with Σ(EM = (21). The result of such a measure is reported inFig. 
2: for two different values of energy E+ < Ẽ and E− > Ẽ, the measured ρ(p) is plotted as a function of the "kinetic energy" of the individual rotator g(p) = 1 − cos(p). The presence of a negative temperature at E = E− can be readily identified by means of the considerations in Section II D. Indeed, on the one hand, the exponential behaviour of ρ(p) guarantees that the approximation used to obtain Eq. (25) is already valid (for every value of g(p)) at N = 100. On the other hand, the clear positive slope of the function at E = E− is a direct consequence of the fact that T_B(E−) < 0: the opposite situation is encountered at E = E+, where the decreasing behavior of ρ(p) indicates a temperature T_B(E+) > 0. These conclusions can also be drawn by measuring the time average of the function R(X), Eq. (29): in the inset of Fig. 2 we report the temperature obtained with the cumulated average of R(X) up to time t, namely

FIG. 2: Measure of the Boltzmann temperature in the rotators chain for N = 100 and ǫ = 0.5. Probability distribution function of the momentum of the rotators as a function of their "kinetic energy" g(p) = 1 − cos(p) at energy E = E− = 170 (blue squares) and E = E+ = 130. The slopes of the two black straight lines are 1/T_R^∞(E), where T_R^∞(E) is the asymptotic value of the corresponding curve in the inset. Inset: the T_R obtained from the cumulated average of the observable R(X(t)) over a trajectory up to time t at E = 170 (blue line) and E = 130 (red line).

better investigate the validity of Eq. (43). The canonical probability density reads ρ(φ_1, . . . , φ_N, p_1, . . . , p_N) = (1/Z(β)) e^{−βH(φ_1,...,φ_N, p_1,...,p_N)}, (44)

C. Spatial coherence

In analogy with systems of point vortices discussed in Sec. II E, the rotators model in Eq. (35) possesses a spatially ordered phase at large values of E: this can be easily understood by noting that the density of states ω(E)

FIG. 3: Black line: ⟨p sin(p)⟩_β vs U(β) in the canonical ensemble (Eqs.
(45) and (46)) as parametric functions of β ∈ (−∞, ∞). Red squares: time averages of the equipartition function in molecular dynamics simulations at fixed energy E (microcanonical ensemble). The values for the parameters of the model are N = 100 and ǫ = 0.5. FIG. 4 : 4A: Probability distribution function of angular distance between two consecutive rotators at high energy E = 298.96. B: Probability distribution function of rotators' positions φ in the high energy case E = 298.96 (blue squares) and in the low energy case E = 6.79 (red triangles). The two maxima of the high energy distribution correspond to the clusters around φ = 0 and φ = π discussed in the text. The other parameters are N = 100 and ǫ = 0.5. AcknowledgmentsThe authors acknowledge P. Buonsante, M. Cencini, M. Falcioni, U. Marini Bettolo Marconi and G.-L. Oppo for the many discussions and for reading the manuscript. We owe M. Cencini and M. Falcioni, who also contributed at a first stage of this work, special thanks. . J Dunkel, S Hilbert, Nat. Phys. 1067J. Dunkel and S. Hilbert, Nat. Phys. 10, 67 (2013). . J M G Vilar, J M Rubi, J. Chem. Phys. 1401J. M. G. Vilar and J. M. Rubi, J. Chem. Phys. 140, 1 (2014). . U Schneider, S Mandt, A Rapp, S Braun, H Weimer, I Bloch, A Rosch, arXiv:1407.4127U. Schneider, S. Mandt, A. Rapp, S. Braun, H. Weimer, I. Bloch, and A. Rosch, arXiv:1407.4127 (2014). . J Dunkel, S Hilbert, arXiv:1410.4619v1J. Dunkel and S. Hilbert, arXiv:1410.4619v1 (2014). . J.-S Wang, R H Swendsen, arXiv:1410.4619v1J.-S. Wang and R. H. Swendsen, arXiv:1410.4619v1 (2014). . D Frenkel, P B Warren, Am. J. Phys. 83163D. Frenkel and P. B. Warren, Am. J. Phys. 83, 163 (2015). . J Dunkel, S Hilbert, arXiv:1403.4299J. Dunkel and S. Hilbert, arXiv:1403.4299 (2014). . S Hilbert, P Hänggi, J Dunkel, Phys. Rev. E. 9062116S. Hilbert, P. Hänggi, and J. Dunkel, Phys. Rev. E 90, 062116 (2014). . M Campisi, Physical Review E. 9152147M. Campisi, Physical Review E 91, 052147 (2015). . 
P Buonsante, R Franzosi, A Smerzi, arXiv:1506.01933P. Buonsante, R. Franzosi, and A. Smerzi, arXiv:1506.01933 (2014). . S Braun, J P Ronzheimer, M Schreiber, S S Hodgman, T Rom, I Bloch, U Schneider, Science. 33952S. Braun, J. P. Ronzheimer, M. Schreiber, S. S. Hodgman, T. Rom, I. Bloch, and U. Schneider, Science 339, 52 (2013). . L D Carr, Science. 33942L. D. Carr, Science 339, 42 (2013). . L Onsager, Nuovo Cimento Suppl. VI Ser. IX. 279L. Onsager, Nuovo Cimento Suppl. VI Ser. IX, 279 (1949). . N F Ramsey, Phys. Rev. 10320N. F. Ramsey, Phys. Rev. 103, 20 (1956). . P T Landsberg, J. Phys. A: Math. Gen. 101773P. T. Landsberg, J. Phys. A: Math. Gen. 10, 1773 (1977). P T Landsberg, Thermodynamics and Statistical Mechanics. DoverP. T. Landsberg, Thermodynamics and Statistical Mechanics (Dover, 2014). . V Berdichevsky, I Kunin, F Hussain, Phys. Rev. A. 432050V. Berdichevsky, I. Kunin, and F. Hussain, Phys. Rev. A 43, 2050 (1991). . D Montgomery, Phys. Rev. A. 448437D. Montgomery, Phys. Rev. A 44, 8437 (1991). . V Berdichevsky, I Kunin, F Hussain, Phys. Rev. E. 472968V. Berdichevsky, I. Kunin, and F. Hussain, Phys. Rev. E 47, 2968 (1993). . K , L J Campbell, Phys. Rev. E. 472966K. O'Neil and L. J. Campbell, Phys. Rev. E 47, 2966 (1993). . V Berdichevsky, Phys. Rev. E. 514432V. Berdichevsky, Phys. Rev. E 51, 4432 (1995). K Huang, Statistical Mechanics. John Wiley & SonsK. Huang, Statistical Mechanics (John Wiley & Sons, 1988). . H Helmholtz, J. Reine Angew. Math. 1884111H. von Helmholtz, J. Reine Angew. Math. 1884, 111 (1884). . M Campisi, D H Kobe, American Journal of Physics. 78608M. Campisi and D. H. Kobe, American Journal of Physics 78, 608 (2009). R Kubo, M Toda, N Saito, Statistical Physics I: Equilibrium Statistical Mechanics. SpringerR. Kubo, M. Toda, and N. Saito, Statistical Physics I: Equilibrium Statistical Mechanics (Springer, 1992). H B Callen, Thermodynamics and an Introduction to Thermostatistics. John Wiley & SonsH. B. 
Callen, Thermodynamics and an Introduction to Thermostatistics (John Wiley & Sons, 2006). . H Touchette, Phys. Rep. 4781H. Touchette, Phys. Rep. 478, 1 (2009). A Vulpiani, F Cecconi, M Cencini, Large Deviations in Physics. A. Puglisi, and D. VergniSpringerA. Vulpiani, F. Cecconi, M. Cencini, A. Puglisi, and D. Vergni, eds., Large Deviations in Physics (Springer, 2014). . M Falcioni, D Villamaina, A Vulpiani, A Puglisi, A Sarracino, American Journal of Physics. 79777M. Falcioni, D. Villamaina, A. Vulpiani, A. Puglisi, and A. Sarracino, American Journal of Physics 79, 777 (2011). Chaos: from simple models to complex systems. M Cencini, F Cecconi, A Vulpiani, World Scientific PublishingM. Cencini, F. Cecconi, and A. Vulpiani, Chaos: from simple models to complex systems (World Scientific Publishing, 2010). . S Iubini, R Franzosi, R Livi, G.-L Oppo, A Politi, New Journal of Physics. 1523032S. Iubini, R. Franzosi, R. Livi, G.-L. Oppo, and A. Politi, New Journal of Physics 15, 023032 (2013). . H H Rugh, Phys. Rev. Lett. 78772H. H. Rugh, Phys. Rev. Lett. 78, 772 (1997). . R Livi, M Pettini, S Ruffo, A Vulpiani, Journal of Statistical Physics. 48539R. Livi, M. Pettini, S. Ruffo, and A. Vulpiani, Journal of Statistical Physics 48, 539 (1987). Statistical Mechanics. An Advanced Course with Problems and Solutions. R Kubo, ElsevierR. Kubo, Statistical Mechanics. An Advanced Course with Problems and Solutions. (Elsevier, 1965). On the other hand if ψ has a maximum at E * then Σ(E, N ) is roughly constant for E > E * . In summary, for "normal" systems the temperatures must coincide, while with our assumption, one can have different temperatures in the region E > E * . Note also that normal systems also satisfy our assumption. E , N ) ∼ E Nφ, Since Σ(E, N ) = E ω(E ′ )dE ′ , a simple steepest descend computation shows that, if dψ(E ′ /N )/dE ′ > 0 for E ′ < E, then ψ(E/N ) = φ(E/N ): this is equivalent to say that TB = TG in the thermodynamic limit (i.e. 
up to O(1/N)) whenever T_B > 0 (see Fig. 1 for an example). On the other hand, if ψ has a maximum at E*, then Σ(E, N) is roughly constant for E > E*. In summary, for "normal" systems the temperatures must coincide, while with our assumption one can have different temperatures in the region E > E*. Note also that normal systems also satisfy our assumption, while the opposite is not true. Moreover, even if not all the systems satisfying our assumption could be named "normal", all of them satisfy the equivalence of ensembles (as discussed below). It is interesting to notice that Kubo in [34] uses the adjective "normal" for systems satisfying Σ(E, N) ∼ e^{Nφ(E/N)+o(N)}. It is easy to verify that for such systems one has β_G = β_B + O(1/N). However our assumption is different: we ask that, in the large N limit, ω(E, N) ∼ e^{Nψ(E/N)+o(N)}. Since Σ(E, N) = ∫^E ω(E′) dE′, a simple steepest-descent computation shows that, if dψ(E′/N)/dE′ > 0 for E′ < E, then ψ(E/N) = φ(E/N): this is equivalent to saying that T_B = T_G in the thermodynamic limit (i.e. up to O(1/N)) whenever T_B > 0.
[]
[ "Noise-like Pulses from an All-Normal-Dispersion Fiber Laser with Weakened Spectrum Filtering", "Noise-like Pulses from an All-Normal-Dispersion Fiber Laser with Weakened Spectrum Filtering" ]
[ "Zhicheng Zhang ", "Sha Wang ", "Jun Wang " ]
[]
[]
Noise-like pulses (NLP) are extremely sought after in many fields. Here, we experimentally and numerically investigated the generation of noise-like pulses in an all-normal-dispersion fiber laser with weak spectrum filtering. With the insertion of the grating as a tunable spectrum filter, the laser operates at a stable dissipative soliton state with a 3.84 ps duration. Replacing the grating with a mirror, NLPs with double-scale intensity autocorrelation trace is ultimately attained. Numerical simulations are performed in detail and demonstrated that with the absence of a spectrum filter, the stable state cannot be established but form the random pulse cluster. The random pulse cluster achieves dynamic stability with suitable feedback, and the NLP is ultimately generated. The NLP here is directly evolved by the initial noise, and no other states occur during its evolution. These explorations could deepen the understanding of NLP and enrich the complex dynamics of the ANDi ultrafast fiber laser.Index Terms-ultrafast fiber laser, all-normal dispersion, noiselike pulses, spectral filtering.
null
[ "https://arxiv.org/pdf/2205.01393v1.pdf" ]
248,506,088
2205.01393
5dda020b17ea65f5dcf6d92b880fa7ca870f7b39
Noise-like Pulses from an All-Normal-Dispersion Fiber Laser with Weakened Spectrum Filtering Zhicheng Zhang Sha Wang Jun Wang Noise-like Pulses from an All-Normal-Dispersion Fiber Laser with Weakened Spectrum Filtering 1 > REPLACE THIS LINE WITH YOUR MANUSCRIPT ID NUMBER (DOUBLE-CLICK HERE TO EDIT) < Noise-like pulses (NLP) are extremely sought after in many fields. Here, we experimentally and numerically investigated the generation of noise-like pulses in an all-normal-dispersion fiber laser with weak spectrum filtering. With the insertion of the grating as a tunable spectrum filter, the laser operates at a stable dissipative soliton state with a 3.84 ps duration. Replacing the grating with a mirror, NLPs with double-scale intensity autocorrelation trace is ultimately attained. Numerical simulations are performed in detail and demonstrated that with the absence of a spectrum filter, the stable state cannot be established but form the random pulse cluster. The random pulse cluster achieves dynamic stability with suitable feedback, and the NLP is ultimately generated. The NLP here is directly evolved by the initial noise, and no other states occur during its evolution. These explorations could deepen the understanding of NLP and enrich the complex dynamics of the ANDi ultrafast fiber laser.Index Terms-ultrafast fiber laser, all-normal dispersion, noiselike pulses, spectral filtering. I. INTRODUCTION ltrafast fiber lasers are extremely sought in many fields. Various pulse states can be formed in normal or anomalous dispersion, such as conventional soliton [1], stretched pulse [2], self-similar soliton [3], dissipative soliton (DS) [4], dissipative soliton resonance [5], noise-like pulse (NLP) [6], etc. Among these, NLPs have gained recent research interests due to their low temporal and spatial coherence [7][8][9]. 
NLPs exhibit a fine inner structure of many narrow sub-pulses (a few hundred femtoseconds in width) with randomly varying intensity and duration, and NLPs do not broaden significantly on propagation through dispersive fiber [10]. They are widely used in optical coherence tomography and supercontinuum generation, where low coherence, a broad spectrum, and high peak power are required [11,12]. Besides, NLPs contain very complex fine structures and dynamics, providing a benchmark for studying extreme events such as optical rogue waves [13]. Reports reveal that NLPs form when laser parameters deviate from ordinary operating conditions [14]-[18]. In 1997, Horowitz et al. first demonstrated NLPs in an Er-doped mode-locked fiber laser and proposed that they are caused by a polarization-dependent delay effect introduced by fiber birefringence [19], while Smirnov et al. showed that NLPs can also occur in weakly birefringent fiber lasers [20]. On the other hand, Tang et al. concluded that NLP generation is caused by the combination of soliton collapse and positive cavity feedback in dispersion-managed fiber lasers [15]. Aguergaray et al. proposed that NLPs are caused by the Raman-driven destabilization of mode-locked long-cavity fiber lasers [21]. Recently, the all-normal-dispersion (ANDi) fiber laser has played a dominant role in achieving high-power, high-energy ultrafast pulses [4,22-25]. In this context, NLPs generated in ANDi fiber lasers have also been widely explored [17,18,20,26-29]. In Ref. [20], Smirnov et al. observed NLPs in an ANDi laser and pointed out that small intensity fluctuations introduced by the polarization controllers can result in NLPs. In Ref. [14], Li et al. proposed that NLPs are attributed to the amplitude modulation introduced by negative feedback and the peak-power-limiting effect of the mode-locker.
One can find that the NLPs reported in ANDi lasers usually result from a DS pulse as the pump power is increased and/or the polarization is adjusted, and most of these studies focus on how the NLP is transformed from coherent stable pulses. A natural question is: can an NLP evolve directly from the initial noise in ANDi fiber lasers? Moreover, in addition to the saturable absorber, the spectrum filter is a crucial component for modulating the pulse in the ANDi laser [25], and its impact on NLP generation still needs further exploration. In this work, the spectrum filter is explored to attain NLPs in the ANDi fiber laser. We experimentally demonstrate that the transition from DS to NLP can be achieved by weakening the filtering effect. With a grating inserted as a spectral filter, the laser operates in a stable DS state with a 3.84 ps duration. Replacing the grating with a mirror, an NLP with a double-scale intensity autocorrelation trace is attained. To offer a detailed description of the pulse dynamics, we build a theoretical model based on pulse tracing, and the simulation results emphasize that the NLP evolves directly from the initial noise in the ANDi fiber laser under weak spectrum filtering. In this case, a stable coherent state cannot be established; instead, a random pulse cluster forms, which achieves dynamic stability under suitable feedback, i.e., the NLP is generated. The experimental setup and results are given in Section II. Section III presents the simulation model and results. Finally, conclusions are drawn in Section IV.

II. EXPERIMENT SETUP AND RESULTS

Experiment setup
The schematic of the all-normal-dispersion fiber oscillator is shown in Figure 1. A 30 cm highly Yb3+-doped fiber (YDF) (Liekki Yb1200-4/125, 1200 dB/m @ 976 nm) with a calculated GVD of 26.2 ps²/km is used as the gain medium. The fiber represented in black is about 4 m of HI1060 fiber with GVD ~24.7 ps²/km, i.e., a single-mode fiber (SMF) at 1 µm.
The net cavity dispersion is around 0.11 ps². A 976 nm laser diode (LD) with a maximum pump power of 350 mW is coupled into the cavity through a wavelength-division multiplexer (WDM). The fiber collimators (C1 and C2) transmit the signal to the spatial path. The quarter-wave plates (QWP1 and QWP2), half-wave plate (HWP1), and polarization splitting prism (PBS1) serve as a nonlinear-polarization-rotation (NPR) device to realize mode-locking. Meanwhile, PBS1 is also employed as the output coupler. The Faraday rotator (FR), HWP2, and longitudinally placed PBS2 act as an isolator by controlling the polarization; this part protects the pump.

The formation of dissipative soliton
Firstly, we insert a 300 lines/mm grating to form a single-pass Gaussian filter [30]. Stable coherent pulses can be attained by adjusting the mode-locker, and the output performance is measured at a pump power of 350 mW. An ultrafast photodetector (Thorlabs, model FDS025) with a rise time of 47 ps and a Rohde & Schwarz oscilloscope with a bandwidth of 2 GHz are used to characterize the pulse train. The output pulse trains observed in time windows of 6 ms and 200 ns are shown in Figs. 2(a) and 2(b), respectively. The pulse interval is around 23.5 ns, and the corresponding repetition rate is about 42.5 MHz, in line with the value calculated from the cavity length. Furthermore, an autocorrelator (Femtochrome, model FR-103WS) with a time resolution of 1 fs is used to analyze the pulse. As seen in Figure 2(c), the duration is about 3.84 ps after Gaussian fitting. Figure 2(d) shows the output power versus the pump. The laser starts continuous-wave (CW) operation when the pump is about 60 mW. Self-started mode-locking is obtained as the pump exceeds 180 mW. The output power increases almost linearly with the pump; the maximum output is ~92.8 mW, and the envelope efficiency is around 32%.
It can further be calculated that the maximum single-pulse energy is ~2.2 nJ and the peak power is about 573 W. A spectrum analyzer (Yokogawa AQ6370C) is employed to characterize the spectrum. As presented in the inset of Figure 2(d), the spectrum has steep edges, the central wavelength is around 1030 nm, and the 3-dB width is ~11.3 nm. Given these typical features, the generated pulse can be identified as a dissipative soliton [25]. To characterize the pulse stability, the radio-frequency (RF) spectrum is measured with an RF analyzer (Keysight, model N9000B). As shown in Figures 2(e) and (f), the fundamental repetition rate is about 42.5 MHz, and the signal-to-noise ratio is higher than 80 dB. Combining the above parameters, one can conclude that the mode-locking is very stable; during the experiment, stable mode-locking was maintained for several hours.

The formation of noise-like pulse
To investigate the impact of the spectrum filter, we replace the grating with a plane mirror and fix the other parameters. In this case, only NLPs can be attained, whether by ergodically adjusting the waveplates or by controlling the pump power. The measured pulse trains are shown in Figures 3(a) and (b). The pulses have a large random amplitude fluctuation, and the pulse interval is around 23.3 ns, corresponding to a repetition rate of about 43 MHz. A noise-like pulse is composed of many random sub-pulses, and an autocorrelation trace with a spike is its typical feature (Figure 3(c)) [19]. The measured width of the pedestal is about 40.2 ps. As presented in Figure 3(d), self-started noise-like-pulse mode-locking is obtained as the pump exceeds 280 mW, and the maximum output is about 108.2 mW. The calculated envelope efficiency and single-pulse energy are around 37% and 2.5 nJ. The spectrum has an irregular shape; the central wavelength is around 1030 nm and the 3-dB width is about 5.4 nm.
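The quoted energy and peak-power figures follow from simple arithmetic on the measured quantities. A quick check (Python; using the plain E/τ peak-power estimate, without a Gaussian deconvolution factor, is our assumption about how the figure was obtained):

```python
# Back-of-the-envelope check of the reported dissipative-soliton output.
# Inputs are the measured values quoted in the text.
f_rep = 1 / 23.5e-9            # 23.5 ns pulse interval -> repetition rate, Hz
P_avg = 92.8e-3                # maximum average output power, W
tau = 3.84e-12                 # Gaussian-fitted pulse duration, s

E_pulse = P_avg / f_rep        # single-pulse energy, J
P_peak = E_pulse / tau         # simple E/tau peak-power estimate, W

print(f"f_rep = {f_rep/1e6:.1f} MHz, E = {E_pulse*1e9:.2f} nJ, P_peak = {P_peak:.0f} W")
```

This reproduces the quoted ~42.5 MHz, ~2.2 nJ, and ~570 W (the paper's 573 W follows from the rounded 2.2 nJ value).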
We also measured the RF spectrum, as shown in Figures 3(e) and (f). The fundamental repetition rate is about 43 MHz, and the signal-to-noise ratio is about 60 dB. Different from the dissipative-soliton mode-locked state, the RF spectrum shows a small modulation and a sidelobe. From the above pulse dynamics, one can see that the mode-locking is not stable in the usual sense, but it is in line with the characteristics of an NLP.

III. THEORETICAL MODEL AND SIMULATION

Theory model
To explore the pulse dynamics, we build a theoretical model based on pulse tracing. The simulated cavity is consistent with the experiment and mainly consists of 0.3 m of Yb3+-doped fiber, 4 m of SMF, a Gaussian bandpass filter, an NPR mode-locker, and a 35% output coupler. The pulse evolution in the fiber is described by the generalized nonlinear Schrödinger equation [31]:

∂u/∂z = −(iβ₂/2) ∂²u/∂t² + (g/2) u + iγ|u|²u,   (1)

where u is the slowly varying envelope, and t and z are the retarded time and the propagation distance; γ is the nonlinear coefficient, set to 4.7 W⁻¹km⁻¹ for the SMF and 5 W⁻¹km⁻¹ for the YDF; β₂ is the second-order dispersion, set to 26.2 ps²/km for the YDF and 24.7 ps²/km for the SMF. Higher-order dispersion and nonlinear effects are ignored. The frequency-dependent gain is obtained by multiplying the saturated gain g by a gain-spectrum profile g(ω) of Lorentzian shape, centered at λ₀ = 1030 nm with a gain bandwidth of 50 nm. Considering gain saturation,

g = g₀ / (1 + ∫|u|² dt / E_sat),   (2)

where g₀ is the small-signal gain at the central wavelength, set to 10 m⁻¹ for the YDF and 0 m⁻¹ for the SMF, and E_sat is the gain saturation energy, which is related to the pump power [32]; we set E_sat = 9.5 nJ. The NPR mode-locker exhibits a sinusoidal transmissivity versus instantaneous power, which can be written as [14,17,33]

T = cos²θ cos²φ + sin²θ sin²φ + (1/2) sin(2θ) sin(2φ) cos(ΔΦ_l + ΔΦ_nl),   (3)

where θ and φ represent the azimuth angles of the polarizer and the analyzer with respect to the fast axis of the fiber, and ΔΦ_l = ΔΦ₀ + 2πL(1 − δλ/λ₀)/L_b and ΔΦ_nl = γPL cos(2θ)/3 denote the linear and nonlinear phase delays, respectively.
Here L and P are the total cavity length and the instantaneous pulse power, and ΔΦ₀, L_b, and δλ are the initial phase delay, the birefringence beat length, and the wavelength detuning, respectively. The parameters are set as θ = 0.28π, φ = 0.36π, ΔΦ₀ = 0.25π, δλ/λ₀ = 0, and L/L_b = 1. The NPR mode-locker can operate in the positive- or negative-feedback regime. The critical feedback power (CFP) is the power at which the feedback changes from positive to negative [17]. The simulated NPR transmission curve calculated from Eq. (3) is shown in Figure 4(a) (blue line). Considering that the transmittance T is also related to the saturable absorption of the NPR, the CFP from positive to negative feedback occurs at the maximum of the product of the instantaneous power P and the corresponding transmittance T [34], as shown by the red line in Figure 4(a). That is, when the power is lower than the CFP, a higher peak power experiences a larger NPR transmittance, i.e., positive feedback. A bandwidth-tunable (∆λ) first-order Gaussian spectral filter is further adopted [35]. Three kinds of spectral filtering exist in this fiber oscillator: the Gaussian filter introduced by the grating, the gain filter introduced by the Yb³⁺ gain medium, and the sinusoidal filter introduced by the NPR [16]. Since the nonlinear coefficient is wavelength dependent, the NPR acts both as a mode-locker and as a spectral filter that shapes the pulse evolution. The nonlinear coefficient is calculated from γ = 2πn₂/(λA_eff), where n₂ and A_eff are the nonlinear-index coefficient and the effective mode-field area of the fiber, set to n₂ = 2.35 × 10⁻²² m²/W, A_eff = 3.02 × 10⁻¹¹ m², and P = 1550 W. Figure 4(b) shows the three calculated spectral-filtering curves; the 8 nm Gaussian filter plays the leading role. The total intracavity loss is set to around 10%. The simulation starts from arbitrary Gaussian-windowed white noise after the filter.
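The roundtrip model described above can be sketched numerically. Below is a minimal sketch of a few roundtrips (Python/NumPy); the split-step integrator structure follows Eqs. (1)-(2), but the toy NPR transfer, the ~10 nm filter width, the number of roundtrips, and the seed amplitude are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Minimal roundtrip sketch: dispersion and saturated gain applied in the
# frequency domain, Kerr nonlinearity in time, then output coupling, a toy
# NPR transfer, and a Gaussian spectral filter.
N, Twin = 10000, 100e-12                 # sampling points, time window (s)
dt = Twin / N                            # 10 fs resolution
w = 2 * np.pi * np.fft.fftfreq(N, dt)    # angular frequency grid

def fiber(u, L, beta2, gamma, g0, Esat, steps=50):
    """Split-step propagation through one fiber segment (Eqs. (1)-(2))."""
    dz = L / steps
    for _ in range(steps):
        g = g0 / (1 + np.sum(np.abs(u)**2) * dt / Esat)    # Eq. (2)
        u = np.fft.ifft(np.fft.fft(u) * np.exp((1j * beta2 / 2 * w**2 + g / 2) * dz))
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)     # Kerr term of Eq. (1)
    return u

rng = np.random.default_rng(1)
u = 1e-3 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # white-noise seed
for _ in range(5):                                         # a few roundtrips
    u = fiber(u, 0.3, 26.2e-27, 5e-3, 10.0, 9.5e-9)        # YDF (ps²/km -> s²/m)
    u = fiber(u, 4.0, 24.7e-27, 4.7e-3, 0.0, 9.5e-9)       # SMF, no gain
    u *= np.sqrt(0.65)                                     # 35 % output coupling
    u *= np.sqrt(0.5 + 0.4 * np.cos(5e-3 * np.abs(u)**2))  # toy NPR transfer
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-(w / (2 * np.pi * 3e12))**2))  # ~10 nm filter

E = np.sum(np.abs(u)**2) * dt
print(f"intracavity energy after 5 roundtrips: {E:.3e} J")
```

In a full simulation this loop would be iterated for several tens to hundreds of roundtrips until the field either stabilizes (DS) or settles into the dynamically stable pulse cluster discussed below.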
We employ the split-step Fourier method to solve this model [36]; the time window and number of sampling points are set to 100 ps and 10,000, giving a time resolution of 10 fs. Usually, several tens of roundtrips are required for the solution to stabilize. To investigate the influence of spectral filtering on the pulse dynamics, we increase the filter bandwidth ∆λ. Figure 5 shows the pulse and spectrum before the NPR mode-locker. When ∆λ = 5 nm, a stable pulse is observed, with a peak power, duration, and energy of 0.99 kW, 5.54 ps, and 5.06 nJ. The self-phase-modulation (SPM) effect gives the spectrum sharper edges and a large bandwidth of ~25.7 nm, and the "cat's ears" spectrum confirms the formation of a dissipative soliton (DS) [25]. As ∆λ is increased from 5 nm to 15 nm, the output characteristics remain those of a dissipative soliton, while the peak power, duration, and energy increase to 1.283 kW, 13.63 ps, and 17.4 nJ. Further weakening the spectral filtering (∆λ > 15.5 nm), irregular pulse profiles are observed: the pulse exhibits a fine inner structure of many narrow sub-pulses with randomly varying intensity and duration. We further simulate the autocorrelation trace to analyze the pulse state, as shown in Figure 6 (∆λ = 30 nm). The intensity autocorrelation trace is two-scaled, consisting of a 29.3 ps pedestal and a 180 fs spike. The widths of the pedestal and the spike indicate the width of the wave packet and the average width of the sub-pulses inside it, respectively. The ratio of the spike to the pedestal is related to the sub-pulse density: a larger ratio usually means a smaller sub-pulse density [26,37]. In the simulation, we find that this ratio is positively correlated with the filter bandwidth. The results indicate that weakening the spectral filtering can realize the transition from the DS to the NLP state.
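The double-scale autocorrelation diagnostic can be reproduced on a synthetic noise-like field (Python/NumPy; the envelope width, sub-pulse correlation time, and all other numbers here are illustrative, not the simulated laser output):

```python
import numpy as np

# Intensity autocorrelation of a synthetic noise-like wave packet:
# random ~200 fs sub-pulses under a ~10 ps envelope give the typical
# narrow coherence spike riding on a broad pedestal.
rng = np.random.default_rng(0)
N = 10000
t = np.linspace(-50e-12, 50e-12, N)            # 100 ps window, ~10 fs step
envelope = np.exp(-t**2 / (2 * (10e-12)**2))   # wave-packet envelope

# band-limit complex white noise to a ~200 fs correlation time
noise = rng.normal(size=N) + 1j * rng.normal(size=N)
kernel = np.exp(-t**2 / (2 * (0.2e-12)**2))
field = envelope * np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel)))

I = np.abs(field)**2
ac = np.real(np.fft.ifft(np.abs(np.fft.fft(I))**2))   # Wiener-Khinchin theorem
ac = np.fft.fftshift(ac) / ac.max()

spike = ac[N // 2]            # zero-delay coherence spike
pedestal = ac[N // 2 + 500]   # ~5 ps off zero delay: pedestal level
print(f"spike/pedestal ratio ≈ {spike / pedestal:.2f}")
```

For a fully developed noise-like field the zero-delay spike is roughly twice the nearby pedestal level, which is why the spike-to-pedestal ratio tracks the sub-pulse density as described above.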
Simulation results and analysis
To explore the formation mechanism of the NLP, we further simulate its pulse and spectrum evolution versus roundtrips, as presented in Figures 7(a) and (b) (see Supporting Documents). One can observe that the noise-like pulse here evolves directly from the initial noise signal, with no transition states. These pulse dynamics differ markedly from previous explorations in ANDi lasers, in which NLPs originate from strong spectral filtering or from mode-locker-induced amplitude modulation of coherent stable pulses [14,26]. In those cases, the NLP is the usual result initiated from a DS pulse, and a stable transition state can be observed before NLP formation. The NLP evolution dynamics here are more consistent with the report on dispersion-managed oscillators [15]. In the ANDi cavity, the normal-dispersion fiber introduces a heavy up-chirp. In the absence of a suitable spectral filter, the strong nonlinearity leads to an explosive growth of new spectral components; a stable state cannot be attained and a random sub-pulse cluster forms instead. Figure 8 shows the peak power and energy versus roundtrips. Before the 69th cycle, since the peak power is lower than the CFP, sub-pulses with higher peaks experience a higher transmission and are strengthened (see Supporting Documents). The peak power increases dramatically owing to the positive feedback. In addition, the wave packet narrows because of the attenuation of weak sub-pulses. At the 70th cycle, the peak power exceeds the CFP; negative feedback sets in and stabilizes the peak power. The random pulse cluster achieves dynamic stability, and the NLP is ultimately generated. Random energy fluctuations can be observed in Figure 8(b), consistent with the experimental results. Combining the above analysis, strong nonlinearity and cavity feedback play a key role in the formation of NLPs in ANDi fiber lasers under weak spectral filtering.
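The feedback-sign argument used above (the CFP sits at the maximum of P·T) can be illustrated directly from the sinusoidal NPR transfer of Eq. (3). In the sketch below (Python/NumPy), θ, φ, and the linear phase bias follow the values quoted in the model section, while the value of γL and the cos(2θ)/3 form of the nonlinear phase are standard NPR assumptions, not verbatim from the paper:

```python
import numpy as np

# NPR transfer T(P) and the critical feedback power (CFP), defined as the
# power maximizing P*T: below it, higher peaks see higher transmission
# (positive feedback); above it, the transfer clamps the peak power.
theta, phi = 0.28 * np.pi, 0.36 * np.pi   # polarizer/analyzer azimuths
dphi0 = 0.25 * np.pi                      # linear phase bias
gammaL = 0.02                             # gamma*L in 1/W (illustrative)

P = np.linspace(0.0, 2000.0, 20001)       # instantaneous power, W
dphi_nl = gammaL * P * np.cos(2 * theta) / 3
T = (np.cos(theta)**2 * np.cos(phi)**2
     + np.sin(theta)**2 * np.sin(phi)**2
     + 0.5 * np.sin(2 * theta) * np.sin(2 * phi) * np.cos(dphi0 + dphi_nl))

cfp = P[np.argmax(P * T)]                 # feedback flips sign at max(P*T)
print(f"CFP ≈ {cfp:.0f} W")
```

With these illustrative parameters the product P·T has a single interior maximum, reproducing the peak-power clamping mechanism described above.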
IV. CONCLUSION
In conclusion, we have experimentally and numerically investigated the generation of noise-like pulses in an all-normal-dispersion fiber laser. The results emphasize that weakening the spectral filtering can realize the transition from the DS to the NLP state. The NLP evolution dynamics are simulated in detail. Combined with previous reports, we conclude that NLP formation in ANDi fiber lasers differs greatly for different spectral-filtering strengths. For strong spectral filtering, the amplitude modulation induced by reverse saturable absorption and the peak-power-limiting effect of the NPR play a key role, and the NLP is the usual result initiated from another stable pulse [14]. For weak spectral filtering, strong nonlinearity and cavity feedback are the main drivers; a stable pulse cannot form, and the NLP evolves directly from the initial noise. Convincingly, our work could help colleagues further understand the NLP dynamics in dissipative fiber-laser systems and assist their construction.

Figure 1. Schematic of the all-normal-dispersion fiber oscillator.

Figure 2. Measured DS parameters: (a) oscilloscope trace over 6 ms; (b) oscilloscope trace over 200 ns; (c) autocorrelation trace and its Gaussian fitting (red); (d) output power versus pump power (inset: measured mode-locked spectrum); (e) RF spectrum within 0-500 MHz; (f) RF spectrum around the fundamental repetition rate.

Figure 3. Measured NLP parameters: (a) oscilloscope trace over 6 ms; (b) oscilloscope trace over 200 ns; (c) autocorrelation trace and its Gaussian fitting (red); (d) output power versus pump power (inset: measured mode-locked spectrum); (e) RF spectrum within 0-500 MHz; (f) RF spectrum around the fundamental repetition rate.
Figure 4. (a) NPR transmission (blue line) and the product of instantaneous power and transmission (red line); (b) calculated spectral-filtering curves.

Figure 5. (a) Pulse envelope as ∆λ is varied; (b) spectrum envelope as ∆λ is varied.

Figure 6. Simulated autocorrelation trace for ∆λ = 30 nm.

Figure 7. (a) and (b) Simulated pulse and spectrum evolution for ∆λ = 30 nm.

Figure 8. (a) Peak power versus roundtrips; (b) pulse energy versus roundtrips.

Sha Wang is with the College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China (e-mail: [email protected]). Jun Wang is with the College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China, and with Suzhou Everbright Photonics Co., Ltd., Suzhou 215000 (e-mail: [email protected]).

ACKNOWLEDGMENT
The authors would like to acknowledge Long Li and Xinxin Sun for their valuable help in paper writing.

REFERENCES
[1] V. J. Matsas, T. P. Newson, D. J. Richardson, and D. N. Payne, "Self-starting passively mode-locked fibre ring soliton laser exploiting nonlinear polarisation rotation," Electron. Lett., vol. 28, no. 15, pp. 1391-1393, 1992.
[2] K. Tamura, E. P. Ippen, H. A. Haus, and L. E. Nelson, "77-fs pulse generation from a stretched-pulse mode-locked all-fiber ring laser," Opt. Lett., vol. 18, no. 13, pp. 1080-1082, 1993.
[3] J. R. Buckley, W. G. Clark, F. W. Wise, and F. Ö. Ilday, "Self-similar evolution of parabolic pulses in a laser," Phys. Rev. Lett., vol. 92, no. 21, p. 213902, 2004.
[4] A. Chong, J. Buckley, W. Renninger, and F. Wise, "All-normal-dispersion femtosecond fiber laser," Opt. Express, vol. 14, no. 21, pp. 10095-10100, 2006.
[5] W. Chang, A. Ankiewicz, J. Soto-Crespo, and N. Akhmediev, "Dissipative soliton resonances," Phys. Rev. A, vol. 78, 2008.
[6] L. M. Zhao, D. Y. Tang, J. Wu, X. Q. Fu, and S. C. Wen, "Noise-like pulse in a gain-guided soliton fiber laser," Opt. Express, vol. 15, no. 5, pp. 2145-2150, 2007.
[7] K. Zhao et al., "High-energy dissipative soliton resonance and rectangular noise-like pulse in a figure-9 Tm fiber laser," Appl. Phys. Express, vol. 12, no. 1, p. 012002, 2018.
[8] C. Xu et al., "Generation of noise-like pulses with a 920 fs pedestal in a nonlinear Yb-doped fiber amplifier," Opt. Express, vol. 27, no. 2, pp. 1208-1216, 2019.
[9] E. Bravo-Huerta et al., "Single and dual-wavelength noise-like pulses with different shapes in a double-clad Er/Yb fiber laser," Opt. Express, vol. 27, no. 9, pp. 12349-12359, 2019.
[10] C. Pan, A. Zaytsev, Y. You, and C. Lin, "Fiber-laser-generated noise-like pulses and their applications," in Fiber Laser, pp. 211-243, 2016.
[11] A. Komarov, K. Komarov, D. Meshcheriakov, A. Dmitriev, and L. Zhao, "Noise-like pulses with an extremely broadband spectrum in passively mode-locked fiber lasers," J. Opt. Soc. Am. B, vol. 38, no. 3, pp. 961-967, 2021.
[12] S. Keren and M. Horowitz, "Interrogation of fiber gratings by use of low-coherence spectral interferometry of noise-like pulses," Opt. Lett., vol. 26, no. 6, pp. 328-330, 2001.
[13] W. P., H. D., Z. K., J. L., X. X., and Y. C., "Dissipative rogue waves among noise-like pulses in a Tm fiber laser mode locked by a monolayer MoS2 saturable absorber," IEEE J. Sel. Top. Quantum Electron., vol. 24, no. 3, pp. 1-7, 2018.
[14] X. Li, S. Zhang, M. Han, and J. Liu, "Fine-structure oscillations of noise-like pulses induced by amplitude modulation of nonlinear polarization rotation," Opt. Lett., vol. 42, no. 20, pp. 4203-4206, 2017.
[15] D. Y. Tang, L. M. Zhao, and B. Zhao, "Soliton collapse and bunched noise-like pulse generation in a passively mode-locked fiber ring laser," Opt. Express, vol. 13, no. 7, pp. 2289-2294, 2005.
[16] C. Xi, Q. Huang, Z. Huang, et al., "Multi-shuttle behavior between dissipative solitons and noise-like pulses in an all-fiber laser," J. Lightwave Technol., vol. 38, no. 8, pp. 2471-2476, 2020.
[17] Y. Jeong, L. Vazquez-Zuniga, S. Lee, and Y. Kwon, "On the formation of noise-like pulses in fiber ring cavity configurations," Opt. Fiber Technol., vol. 20, 2014.
[18] J. Lin, C. Chen, C. Chan, W. Chang, and Y. Chen, "Investigation of noise-like pulses from a net normal Yb-doped fiber laser based on a nonlinear polarization rotation mechanism," Opt. Lett., vol. 41, no. 22, pp. 5310-5313, 2016.
[19] M. Horowitz, Y. Barad, and Y. Silberberg, "Noise-like pulses with a broadband spectrum generated from an erbium-doped fiber laser," Opt. Lett., vol. 22, pp. 799-801, 1997.
[20] S. Smirnov, S. Kobtsev, S. Kukarin, and A. Ivanenko, "Three key regimes of single pulse generation per round trip of all-normal-dispersion fiber lasers mode-locked with nonlinear polarization rotation," Opt. Express, vol. 20, no. 24, pp. 27447-27453, 2012.
[21] C. Aguergaray, A. Runge, M. Erkintalo, and N. G. R. Broderick, "Raman-driven destabilization of mode-locked long cavity fiber lasers: fundamental limitations to energy scalability," Opt. Lett., vol. 38, no. 15, pp. 2644-2646, 2013.
[22] X. Li, Y. Wang, W. Zhao, et al., "All-fiber dissipative solitons evolution in a compact passively Yb-doped mode-locked fiber laser," J. Lightwave Technol., vol. 30, no. 15, pp. 2502-2507, 2012.
[23] L. D., T. D., Z. L., and S. D., "Mechanism of dissipative-soliton-resonance generation in passively mode-locked all-normal-dispersion fiber lasers," J. Lightwave Technol., vol. 33, no. 18, pp. 3781-3787, 2015.
[24] L. Wang, X. Liu, Y. Gong, D. Mao, and L. Duan, "Observations of four types of pulses in a fiber laser with large net-normal dispersion," Opt. Express, vol. 19, no. 8, pp. 7616-7624, 2011.
[25] P. Grelu and N. Akhmediev, "Dissipative solitons for mode-locked lasers," Nat. Photonics, vol. 6, no. 2, pp. 84-92, 2012.
[26] R. Xu et al., "Impact of spectral filtering on pulse breaking-up and noise-like pulse generation in all-normal dispersion fiber lasers," Opt. Express, vol. 28, no. 15, pp. 21348-21358, 2020.
[27] Z. Cheng, H. Li, and P. Wang, "Simulation of generation of dissipative soliton, dissipative soliton resonance and noise-like pulse in Yb-doped mode-locked fiber lasers," Opt. Express, vol. 23, no. 5, pp. 5972-5981, 2015.
[28] Z. Lv et al., "Nonlinear multimodal interference for ytterbium-doped all-fiber mode-locking noise-like pulse generation," Appl. Phys. Express, vol. 12, no. 2, p. 022004, 2019.
[29] T. Chen et al., "Evolution of noise-like pulses in mode-locked fiber laser based on straight graded-index multimode fiber structure," Opt. Laser Technol., vol. 143, p. 107347, 2021.
[30] W. H. Renninger, "Pulse shaping mechanisms for high performance mode-locked fiber lasers," Ph.D. dissertation, Cornell University, 2012.
[31] G. Agrawal, Nonlinear Fiber Optics, 2001.
[32] B. C., M. P., C. J., and K. M., "Analytical model for rare-earth-doped fiber amplifiers and lasers," IEEE J. Quantum Electron., vol. 30, no. 8, pp. 1817-1830, 1994.
[33] W. S. Man, H. Y. Tam, M. S. Demokan, P. K. A. Wai, and D. Y. Tang, "Mechanism of intrinsic wavelength tuning and sideband asymmetry in a passively mode-locked soliton fiber ring laser," J. Opt. Soc. Am. B, vol. 17, no. 1, pp. 28-33, 2000.
[34] D. Y. Tang, L. Zhao, B. Zhao, and A. Q. Liu, "Mechanism of multisoliton formation and soliton energy quantization in passively mode-locked fiber lasers," Phys. Rev. A, vol. 72, no. 4, p. 043816, 2005.
[35] Z. Zhang, B. Wang, Y. Xiao, S. Wang, and J. Wang, "Impact of reverse saturable absorption on pulse dynamics in the ultrafast fiber laser," Opt. Commun., vol. 508, p. 127739, 2022.
[36] J. A. C. Weideman and B. M. Herbst, "Split-step methods for the solution of the nonlinear Schrödinger equation," SIAM J. Numer. Anal., vol. 23, no. 3, pp. 485-507, 1986.
[37] J. Xinxin, L. Lei, L. Jiaolin, G. Yanqi, Z. Qian, and Z. Luming, "Numerical study on autocorrelation of noise-like pulse in fiber lasers," Laser & Optoelectronics Progress, vol. 52, p. 121902, 2015.
[ "Orbital order of spinless fermions near an optical Feshbach resonance", "Orbital order of spinless fermions near an optical Feshbach resonance" ]
[ "Philipp Hauke \nICFO -Institut de Ciències Fotòniques\nParc Mediterrani de la Tecnologia\n08860CastelldefelsSpain\n\nKavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA\n", "Erhai Zhao \nKavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA\n\nDepartment of Physics and Astronomy\nGeorge Mason University\n22030FairfaxVA\n", "Krittika Goyal \nCenter for Quantum Information and Control (CQuIC)\nDepartment of Physics and Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNM\n", "Ivan H Deutsch \nCenter for Quantum Information and Control (CQuIC)\nDepartment of Physics and Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNM\n", "W Vincent Liu \nKavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA\n\nDepartment of Physics and Astronomy\nUniversity of Pittsburgh\n15260PittsburghPA\n", "Maciej Lewenstein \nICFO -Institut de Ciències Fotòniques\nParc Mediterrani de la Tecnologia\n08860CastelldefelsSpain\n\nKavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA\n\nICREA -Institució Catalana de Recerca i Estudis Avançats\nLluis Companys 23E-08010BarcelonaSpain\n" ]
[ "ICFO -Institut de Ciències Fotòniques\nParc Mediterrani de la Tecnologia\n08860CastelldefelsSpain", "Kavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA", "Kavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA", "Department of Physics and Astronomy\nGeorge Mason University\n22030FairfaxVA", "Center for Quantum Information and Control (CQuIC)\nDepartment of Physics and Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNM", "Center for Quantum Information and Control (CQuIC)\nDepartment of Physics and Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNM", "Kavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA", "Department of Physics and Astronomy\nUniversity of Pittsburgh\n15260PittsburghPA", "ICFO -Institut de Ciències Fotòniques\nParc Mediterrani de la Tecnologia\n08860CastelldefelsSpain", "Kavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCA", "ICREA -Institució Catalana de Recerca i Estudis Avançats\nLluis Companys 23E-08010BarcelonaSpain" ]
[]
We study the quantum phases of a three-color Hubbard model that arises in the dynamics of the p-band orbitals of spinless fermions in an optical lattice. Strong, color-dependent interactions are induced by an optical Feshbach resonance. Starting from the microscopic scattering properties of ultracold atoms, we derive the orbital exchange constants at 1/3 filling on the cubic optical lattice. Using this, we compute the phase diagram in a Gutzwiller ansatz. We find novel phases with 'axial orbital order' in which pz and px + ipy (or px − ipy) orbitals alternate.
10.1103/physreva.84.051603
[ "https://arxiv.org/pdf/1103.5964v2.pdf" ]
118,841,915
1103.5964
fe0c9bbf9a05c554d84fc6a001bfc9b2515c55e3
(Dated: January 18, 2013)
Orbital physics of electrons plays an important role in strongly-correlated solid-state systems, e.g., transition-metal oxides (see, e.g., [1,2] and references therein). In particular, intriguing quantum phases emerge due to the coupling of the orbital degree of freedom to the charge, spin, or lattice degrees of freedom [3,4]. Such coupling, while leading to interesting effects, also complicates the theoretical treatment. It is, therefore, desirable to study simpler systems with the orbital degree of freedom decoupled from all others. Ultracold atoms in higher bands of optical lattices provide an ideal tool to study orbital dynamics in a well-controlled environment, including orbital-only models of single-species (spinless) fermions. Several groups have now achieved loading and manipulating ultracold atoms in higher (such as p-) bands of optical lattices [5][6][7][8][9]. Techniques such as lattice ramping or radio-frequency pulses have been used to transfer atoms from the s- to higher bands, where they can stay in a metastable state for a sufficiently long time. For spinless fermionic atoms, the p-band can also be simply populated by first completely filling the s-band, requiring larger particle numbers, but less experimental control. To avoid undesired collisions between ground and excited-band atoms, the s-band atoms may be removed afterwards using laser pulses [10].
The interaction between fermionic atoms is usually weak at low temperatures because the Pauli exclusion principle only allows scattering in high partial-wave channels (p, f, etc.). One way to increase the p-wave elastic scattering cross section is to employ a Feshbach resonance (FR) [11]. Typically, this is done by coupling channels in the electronic ground state through magnetic fields. For the case of p-waves, however, this method usually leads to significant atom losses through three-body inelastic collisions because the scattering state is well localized by the angular momentum barrier, and has good Franck-Condon overlap with more deeply bound molecules [12]. To circumvent this problem, recently Ref. [13] considered enhanced p-wave interactions via an optical FR (OFR) between a scattering state and an electronically excited "purely-long-range" molecule. Such molecules have inner turning points at very large distances (e.g., > 50 a_0 in 171Yb), well beyond the chemical binding region, and thus three-body recombination should be highly suppressed. This approach not only allows one to study strongly-correlated phases, but also provides for a high degree of control. In particular, the interaction strength among different p-orbitals can be tuned differently. Motivated by these developments, we investigate in this article the phase diagram of spinless fermions on a cubic lattice near an OFR described by the following Hubbard-like model:

H = − Σ_{i;µ,ν} t_{µ,ν} (c†_{µ,i} c_{µ,i+e_ν} + h.c.) + Σ_i [ V_1 n_{x,i} n_{y,i} + V_2 (n_{x,i} n_{z,i} + n_{y,i} n_{z,i}) + (i V_3 c†_{x,i} c_{y,i} n_{z,i} + h.c.) ].   (1)

The operator c_{µ,i} destroys a fermion in the orbital p_µ at site i, and n_{µ,i} is the corresponding number operator. The lattice spacing is set to 1, e_ν is the unit vector in direction ν, and µ, ν = x, y, z.

† K. G. previously published under the name K. Kanjilal.
* Electronic address: [email protected]
The nearest-neighbor hopping amplitude t_{µ,ν} describes hopping of fermions in orbital p_µ along the direction e_ν. Due to the anisotropy of the p-orbital Wannier wave functions, it is direction and orbital dependent [14][15][16], t_{µ,ν} = t_∥ δ_{µ,ν} + t_⊥ (1 − δ_{µ,ν}). The interactions V_{1,2,3} are induced by an OFR laser [13] which couples the electronic ground state of the atom to an excited state. The interaction can be expressed in terms of the (p-wave) pseudo-potential V_p^m for two particles with mass M and relative angular momentum m, whose strength can be tuned by the detuning and the intensity of the OFR laser. Expanding field operators in the Wannier basis, ψ(r) = Σ_{i,µ} w_µ(r − i) c_{µ,i}, the interaction term ∫d³r_1 d³r_2 ψ†(r_1) ψ†(r_2) V_p^m(r_1 − r_2) ψ(r_1) ψ(r_2) leads to the on-site, inter-orbital interaction H_int = Σ_i V_{µ,ν,µ′,ν′} c†_{µ′,i} c†_{ν′,i} c_{µ,i} c_{ν,i}, where repeated indices are summed over. (We neglect all off-site interactions.) The matrix element V_{µ,ν,µ′,ν′} = Σ_m ∫d³r_1 d³r_2 w*_{µ′}(r_1 − i) w*_{ν′}(r_2 − i) V_p^m(r_1 − r_2) w_µ(r_1 − i) w_ν(r_2 − i) can now be computed by separating the relative and center-of-mass coordinates. For deep lattices, the p-orbital Wannier functions are well approximated by the first excited states of harmonic oscillators (with the oscillator length ζ controlled by the lattice depth). The only non-zero interaction terms are the ones given in Eq. (1), with V_1 = (1/4)(U_1 + U_{−1}), V_2 = (1/8)(U_1 + U_{−1} + 2U_0), and V_3 = (1/8)(U_{−1} − U_1). Here, U_m = 3√2 R/(√π ζ⁵ M) defines the interaction strength in the scattering channel with angular momentum m = 1, 0, −1. A Zeeman splitting, which may be introduced by a magnetic field, leads to different detunings of the OFR laser for the three scattering channels. This makes the scattering length a_p^m dependent on m, and consequently the U_m's can be different in magnitude and even in sign. Thus, the relative strengths and signs of V_{1,2,3} can be varied by changing the strength of the Zeeman splitting together with the detuning of the OFR laser. By contrast, in a standard magnetic FR, U_{−1} = U_{+1}.
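As a small numerical illustration (ours, not from the paper), the relations between the channel couplings U_m and the interaction strengths V_{1,2,3} quoted above can be coded directly; the function name and the sample values are our own:

```python
# Sketch (ours): the interaction strengths V1, V2, V3 of Hamiltonian (1)
# in terms of the channel couplings U_m, m = +1, 0, -1, as quoted above.
# Units are arbitrary; the sample values are illustrative only.

def interaction_strengths(u_plus, u_zero, u_minus):
    """Return (V1, V2, V3) given U_{+1}, U_0, U_{-1}."""
    v1 = (u_plus + u_minus) / 4.0
    v2 = (u_plus + u_minus + 2.0 * u_zero) / 8.0
    v3 = (u_minus - u_plus) / 8.0
    return v1, v2, v3

# Magnetic Feshbach resonance: U_{-1} = U_{+1}, so V3 vanishes.
print(interaction_strengths(1.0, 1.0, 1.0))
# Zeeman-split optical resonance: U_{-1} != U_{+1} gives a non-zero V3.
print(interaction_strengths(0.5, 1.0, 1.5))
```

The second call makes the point of the paragraph concrete: only a splitting between the m = ±1 channels generates the orbital-changing coupling V_3.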
In our case, breaking the symmetry between U_{−1} and U_{+1} leads to the orbital-changing term V_3. Physically, it allows (p_x or p_y) particles to move on the two-dimensional plane, instead of along a chain only. Since it explicitly breaks time-reversal symmetry (TRS), we can expect it to lead to novel phases reflecting that intriguing property. Hamiltonian (1) generalizes the models of Refs. [17][18][19][20][21][22]. For V_1 = V_2 and V_3 = 0, it reduces to the SU(3) Hubbard model. One can visualize p-band fermions as particles carrying a color index representing the p_x, p_y, and p_z orbital states. Then, Hamiltonian (1) describes a three-color fermion model with color-dependent interaction, a novel color-changing term V_3, and spatially anisotropic and color-dependent tunneling. We will show below that this model has a rich phase diagram with novel phases. Here, we focus on the strong-coupling limit for p-band filling 1/3, and determine the orbital order using a Gutzwiller mean-field ansatz. In the strong-coupling limit,

t ≪ V_1,  t ≪ V_2 − V_3,  and  t ≪ V_2 + V_3,   (2)

double occupancy of the same site is suppressed. At 1/3 filling of the p-band, there is on average one p-band particle per site, and density fluctuations are frozen. Virtual hopping induces exchange interactions between nearest-neighbor orbitals (see Fig. 1). The situation bears some resemblance to the emergence of magnetic models, such as the Heisenberg model, in the strong-coupling limit of the Hubbard model. The difference here is that three orbital (instead of two spin) states are involved. Since |t_⊥| ≪ |t_∥|, perpendicular tunneling t_⊥ can safely be neglected [19], and, for brevity, we write t = t_∥.
Treating the tunneling t in (1) as a perturbation and following standard second-order perturbation theory, we obtain the effective Hamiltonian for 1/3 filling (the virtual hopping processes that generate it are sketched in Fig. 1):

H_eff = − Σ_i [ Σ_{µ=x,y,z} Σ_{δ=±e_µ} J_µ n_{µ,i} (1 − n_{µ,i+δ}) + Σ_{µ=x,y} Σ_{δ=±e_µ} (J_2 − J_1) n_{µ,i} n_{z,i+δ} − Σ_{δ=±e_z} J_3 (i c†_{x,i} c_{y,i} n_{z,i+δ} + h.c.) ],   (3)

where we have used the constraint n_{x,i} + n_{y,i} + n_{z,i} = 1, and defined J_1 ≡ t²/V_1, J_2 ≡ t² V_2/(V_2² − V_3²), J_3 ≡ t² V_3/(V_2² − V_3²), and J_x = J_y = J_1, J_z = J_2. For V_3 = 0, V_1 = V_2, Eq. (3) reduces to Σ J_µ n_{µ,i} n_{µ,i+δ}, a hallmark of the quantum 3-state Potts-like model [23]. To see which orbital order is favored, we first discuss the simple case of J_3 = 0. The first term of Eq. (3) always favors configurations where the orbitals at neighboring sites differ. (A) For J_1 > max(J_2, 0), both the first and second terms favor an alternating pattern between p_x- and p_y-particles in the xy-plane. (B) For J_2 > max(J_1, 0), the favored configuration is an alternating pattern between p_z and not-p_z. (C) For (the unstable case) J_1, J_2 < 0, the best configuration is a homogeneously filled lattice. Certain aspects of Hamiltonian (3) become clearer when we rewrite it in terms of the generators of the SU(3) group. In terms of the Gell-Mann matrices λ^(i) and the so-called F-spin operators Y = (1/√3) c†_µ λ^(8)_{µ,ν} c_ν and T^(α) = (1/2) c†_µ λ^(α)_{µ,ν} c_ν (α = 1, 2, 3), H_eff becomes

H_eff = (4/3) Σ_i [ (J_2 − J_1) Y_i − J_3 T^(2)_i ] + 2 Σ_i { Σ_{δ=e_x,e_y} [ J_1 T^(3)_i T^(3)_{i+δ} + ((2J_2 − J_1)/4) Y_i Y_{i+δ} + (J_2/2) T^(3)_i Y_{i+δ} + (J_2/2) Y_i T^(3)_{i+δ} ] + J_2 Y_i Y_{i+e_z} + J_3 T^(2)_i Y_{i+e_z} + J_3 Y_i T^(2)_{i+e_z} },   (4)

where we neglected constant terms. In the basis (p_x, p_y, p_z), Y and T^(3) are diagonal, which means that terms such as Y_i Y_j, Y_i T^(3)_j, or T^(3)_i T^(3)_j are Ising-like.
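The single-site operator algebra can be verified numerically. The following NumPy sketch is ours; it assumes the standard Gell-Mann matrix conventions in the (p_x, p_y, p_z) basis:

```python
# Sketch (ours): checking the F-spin algebra with the standard Gell-Mann
# matrices lambda^(2), lambda^(3), lambda^(8) in the (px, py, pz) basis.

import numpy as np

lam2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
lam3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
lam8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3.0)

Y = lam8 / np.sqrt(3.0)   # diag(1/3, 1/3, -2/3)
T2 = lam2 / 2.0
T3 = lam3 / 2.0

def comm(a, b):
    return a @ b - b @ a

# Y and T3 are diagonal (hence Ising-like), Y commutes with both T2 and T3,
# while T2 and T3 do not commute with each other.
print(np.real(np.diag(Y)))
print(np.allclose(comm(T3, Y), 0), np.allclose(comm(T2, Y), 0),
      np.allclose(comm(T2, T3), 0))
```

In particular, the eigenvalues of Y come out as 1/3 (twice) and −2/3, matching the assignment of |p_x⟩, |p_y⟩ and |p_z⟩ used in the sector analysis.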
The orbital-changing term V_3 leads to T^(2) = (1/2i)(T^(+) − T^(−)), where T^(±) are ladder operators of the T-spin. T^(3) and T^(2) do not commute, but both commute with Y. This means that one can replace Y by its eigenvalues −2/3 (for |p_z⟩) and 1/3 (for |p_x⟩ and |p_y⟩), which gives some insight into the physics of Hamiltonian (4). Assuming that the ground state is bipartite with respect to the eigenvalue of Y [24], there are three different cases: (A) at all sites the eigenvalue of Y is 1/3, (B) the eigenvalues −2/3 and 1/3 alternate, and (C) all sites have eigenvalue −2/3. In the last case, there is one |p_z⟩-particle per site, whence there is no virtual tunneling, and the Hamiltonian vanishes. In the sectors A and B, it reads (neglecting constant terms)

H_eff^(A) = (J_1/2) Σ_i Σ_{δ=e_x,e_y} σ^(3)_i σ^(3)_{i+δ};   (5a)
H_eff^(B) = −2 J_3 Σ_{i∈Ω} σ^(2)_i.   (5b)

Here, σ denotes the usual Pauli matrices, which act on the subspace spanned by |p_x⟩ and |p_y⟩. Sector A is reduced to the Ising model on decoupled xy-planes, which favors an antiferromagnetic ground state. This is just the model found in the 2D case treated in [19,20]. In sector B, Ω denotes the partition where Y has eigenvalue 1/3. On these sites, J_3 acts as a magnetic field in the y-direction, lifting the degeneracy between |p_x⟩ and |p_y⟩ and leading to the ground state (|p_x⟩ ± i|p_y⟩)/√2 (for J_3 ≷ 0). Having obtained a qualitative picture of the expected phases, we now analyze the phase diagram of Hamiltonian (4) quantitatively. To this end, we assume that correlations between sites are small so that the ground state can be approximated by a product over sites. To find the ground state of Hamiltonian (4), we employ the Gutzwiller variational wave function |Ψ⟩ = Π_i (cos θ_i |p_x⟩_i + sin θ_i cos φ_i |p_y⟩_i + sin θ_i sin φ_i |p_z⟩_i), which is a product over sites i, and minimize the energy of a cube with side length L (up to L = 8) under periodic boundary conditions.
Note, however, that close to phase transitions, where fluctuations become important, such a mean-field ansatz is not valid. The energy per site for even L is smaller than for odd L, showing that the ground-state periodicity is indeed 2 [25]. In agreement with the qualitative picture above, we find three classes of ground states with different orbital order (summarized in Fig. 2): (A) For J_1 > J_2 + |J_3|/2 and J_1 > 0, we find an 'antiferromagnetic phase' similar to, e.g., the 2D model of Ref. [19]: in each xy-plane, sites with p_x- and p_y-orbitals alternate (similar to the antiferromagnetic Néel state). Since p_x- and p_y-particles do not tunnel in the z-direction, the xy-planes are decoupled, and within our approximation (e.g., neglecting t_⊥), there is no long-range order in the z-direction. It is possible, however, that long-range order among the planes develops at low temperature for finite t_⊥. (B) For J_1 < J_2 + |J_3|/2 and J_2 > −|J_3|/2, the ground state shows axial orbital order. The state is bipartite with |p_z⟩ on one sublattice and (|p_x⟩ ± i|p_y⟩)/√2 (for J_3 ≷ 0, respectively) on the other sublattice (right-hand panel of Fig. 2). The degeneracy between |p_x⟩ and |p_y⟩ is lifted by a finite J_3. Since the state (|p_x⟩ ± i|p_y⟩)/√2 has finite angular momentum, this novel phase breaks TRS [26]. (C) For J_1 < 0 and J_2 < −|J_3|/2, Pauli exclusion prohibits all tunneling t (by filling αβ-planes (αβ = xy, xz, yz) uniformly with p_α or p_β). This state is unstable, however, because it cannot fulfill the strong-coupling requirements (2). Interestingly, phases A and C preserve TRS, although V_3 in Hamiltonian (1) breaks it explicitly. Experimentally, the different phases can, e.g., be distinguished by measuring the density distribution after a time of flight t_tof.
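The case distinctions (A), (B±), (C) above can be collected into a small classifier. The sketch below is ours: names and sample points are illustrative, and points exactly on phase boundaries are left unresolved.

```python
# Sketch (ours): the exchange constants of Eq. (3) and the mean-field
# phase boundaries (A), (B+/-), (C) quoted above.

def exchange_constants(t, v1, v2, v3):
    """J1 = t^2/V1, J2 = t^2 V2/(V2^2 - V3^2), J3 = t^2 V3/(V2^2 - V3^2)."""
    j1 = t**2 / v1
    j2 = t**2 * v2 / (v2**2 - v3**2)
    j3 = t**2 * v3 / (v2**2 - v3**2)
    return j1, j2, j3

def phase(j1, j2, j3):
    if j1 > j2 + abs(j3) / 2 and j1 > 0:
        return "A (antiferro-orbital)"
    if j1 < j2 + abs(j3) / 2 and j2 > -abs(j3) / 2:
        if j3 == 0:
            return "B (J3 = 0, px/py degenerate)"
        return "B+ (axial, px + i py)" if j3 > 0 else "B- (axial, px - i py)"
    if j1 < 0 and j2 < -abs(j3) / 2:
        return "C (tunneling frozen)"
    return "boundary / unresolved"

print(phase(1.0, 0.2, 0.1))    # deep in phase A
print(phase(0.1, 1.0, 0.5))    # axial orbital order, B+
print(phase(-0.5, -1.0, 0.1))  # unstable phase C
```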
This relates to the in-trap momentum distribution via ⟨n(r)⟩_{t_tof} = [M/(ħ t_tof)]³ Σ_{µ,ν} w̃*_µ(k) w̃_ν(k) ⟨c†_µ(k̃) c_ν(k̃)⟩, with w̃_µ(k) the Fourier transform of the Wannier orbital w_µ(r), c_µ(k) = Σ_i e^{ik·i} c_{µ,i}/L^{3/2}, and k = M r/(ħ t_tof). Here k̃ is k modulo reciprocal lattice vectors. Features in the density distribution appear because of its non-trivial p-orbital Wannier envelope. This allows one to distinguish phases A and B by their column density (i.e., the density integrated along one spatial direction), see Fig. 3. In phase B, the sites occupied by (|p_x⟩ ± i|p_y⟩)/√2 give a similar doughnut structure, but the hole at k_x = k_y = 0 is filled by the other half of the sites with p_z-particles. Similarly, viewing along the x-direction reveals the existence of p_z-particles in phase B, contrary to phase A (upper row). Observation of these novel phases requires that we simultaneously achieve strong interactions, V ≫ t, and low temperatures, k_B T ≲ t²/V, for the characteristic tunneling rate t and interaction energy V. At experimentally feasible temperatures, this requires a significant enhancement of the real part of the p-wave scattering volume via the OFR. In practice, however, this is limited by spontaneous emission, which broadens the resonance and also leads to recoil heating. For the example considered in [13] based on the ^1S_0 → ^3P_1 intercombination line in 171Yb, the atomic linewidth is ≈ 180 kHz, which limits the useful OFR p-wave enhancement. Other species such as 87Sr, where the same transition has a linewidth of ≈ 7.5 kHz, should result in a substantial OFR with a reasonable linewidth. Experimental studies of OFRs in related isotopes are currently underway [27]. In summary, we investigated the orbital order of spinless fermions in the p-band of a cubic lattice with interaction controlled by an OFR. The system can be realized with current technology. The model Hamiltonian can be expressed elegantly by Gell-Mann matrices.
We analyzed the orbital order in the strong-coupling limit at p-band filling 1/3 using a Gutzwiller-type ansatz. Besides a phase where all tunneling is blocked and an antiferro-orbital phase where p_x- and p_y-orbitals alternate, we found a novel phase with axial orbital order which not only breaks translational symmetry but also has macroscopic orbital angular momentum. We expect our results to stimulate future work on this subject. For example, it is interesting to investigate how quantum fluctuations affect the phase diagram: they might distort it [22] or even lead to disordered 'orbital liquid' states. Fluctuations are also expected to lift the degeneracy between p_x- and p_y-orbitals at J_3 = 0, and possibly lead to spontaneous TRS breaking. Moreover, phase B± may have interesting topological properties. For example, at an interface of two domains with p_x + ip_y and p_x − ip_y order, chiral zero-mode fermions may arise. Finally, other lattices and the limit of small interactions, where related models show non-trivial color-superfluidity [17,18,21], are also interesting.

PACS numbers: 03.75.Ss, 05.30.Fk, 67.85.-d, 71.10.Fd

Figure 1: Sketch of the virtual hopping processes leading to the effective Hamiltonian (3). Neglecting t_⊥, these (plus the ones obtained by interchanging x and y) are the only ones. Note in particular the orbital-changing process J_3. Gray ovals denote sites, the blue t tunneling processes, and the green fractions denote interactions. Orbitals p_µ are abbreviated as µ.

Figure 2: Left: The phase diagram of H_eff [Eq. (3)] at 1/3 filling shows four phases: (A) antiferro-orbital order (empty region), (B+) axial orbital order (red region, J_3 > 0) and similarly (B−) (orange region, J_3 < 0), and finally (C) with tunneling completely frozen (blue region).
The gray wedge indicates the region satisfying the strong-coupling conditions (2), 0 ≤ J_{1,2} ≪ 1, J_3 ≪ J_2. Right: sketch of phase B+, in which |p_z⟩ and |p_x⟩ + i|p_y⟩ orbitals alternate, and phase A. Phase B− can be visualized from phase B+ by replacing |p_x⟩ + i|p_y⟩ with |p_x⟩ − i|p_y⟩.

Figure 3: Predicted time-of-flight (TOF) density distributions, allowing one to distinguish phases A and B in experiment. Lower (upper) row: ⟨n(r)⟩_{t_tof} integrated along z (x), in arbitrary scale. For example, when viewed along the z-direction, phase A displays a doughnut form (lower left panel) because of an incoherent addition of p_x- and p_y-Wannier envelopes.
For an empty s-band, energy conservation can suppress undesired collisions p + p → s + d when using anharmonic on-site potentials created by optical superlattices. . C A Regal, C Ticknor, J L Bohn, D S Jin, Phys. Rev. Lett. 9053201C. A. Regal, C. Ticknor, J. L. Bohn, and D. S. Jin, Phys. Rev. Lett. 90, 053201 (2003). . C Chin, R Grimm, P Julienne, E Tiesinga, Rev. Mod. Phys. 821225C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. 82, 1225 (2010). . K Goyal, I Reichenbach, I Deutsch, Phys. Rev. A. 8262704K. Goyal, I. Reichenbach, and I. Deutsch, Phys. Rev. A 82, 062704 (2010). . A Isacsson, S M Girvin, Phys. Rev. A. 7253604A. Isacsson and S. M. Girvin, Phys. Rev. A 72, 053604 (2005). . A B Kuklov, Phys. Rev. Lett. 97110405A. B. Kuklov, Phys. Rev. Lett. 97, 110405 (2006). . W V Liu, C Wu, Phys. Rev. A. 7413607W. V. Liu and C. Wu, Phys. Rev. A 74, 013607 (2006). . A Rapp, G Zaránd, C Honerkamp, W Hofstetter, Phys. Rev. Lett. 98160405A. Rapp, G. Zaránd, C. Honerkamp, and W. Hofstetter, Phys. Rev. Lett. 98, 160405 (2007). . A Rapp, W Hofstetter, G Zaránd, Phys. Rev. B. 77144520A. Rapp, W. Hofstetter, and G. Zaránd, Phys. Rev. B 77, 144520 (2008). . E Zhao, W V Liu, Phys. Rev. Lett. 100160403E. Zhao, and W. V. Liu, Phys. Rev. Lett. 100, 160403 (2008). . C Wu, Phys. Rev. Lett. 100200406C. Wu, Phys. Rev. Lett. 100, 200406 (2008). . S Miyatake, K Inaba, S Suga, Physica C. 470916S. Miyatake, K. Inaba, and S. Suga, Physica C 470, S916 (2009). . T A Tóth, A M Läuchli, F Mila, K Penc, Phys. Rev. Lett. 105265301T. A. Tóth, A. M. Läuchli, F. Mila, and K. Penc, Phys. Rev. Lett. 105, 265301 (2010). Orbital order in a simpler model without OFR, and its relation to the Potts model were discussed in the unpub. arXiv:0801.0888v1C. WuOrbital order in a simpler model without OFR, and its relation to the Potts model were discussed in the unpub- lished work of arXiv:0801.0888v1 by C. Wu. 
[24] In principle, more complex (i.e., non-bipartite) partitions are possible, but the numerical mean-field analysis (see below) shows that these are the only relevant ones.
[25] Further, we checked that for even L the occurring phases do not depend on L.
[26] The derivation of (3) neglected repulsive off-site interactions n_{µ,i} n_{µ,i+δ}, which for p-band fermions might be important compared to the exchange couplings J. However, the alternating occupations in the (physically relevant) phases A and B also minimize possible energy contributions from off-site repulsions. Hence, neglecting these is justified a posteriori.
[27] S. Blatt et al., Phys. Rev. Lett. 107, 073202 (2011).
Slide Reduction, Revisited - Filling the Gaps in SVP Approximation

Divesh Aggarwal, Jianwei Li, Phong Q. Nguyen, Noah Stephens-Davidowitz
We show how to generalize Gama and Nguyen's slide reduction algorithm [STOC '08] for solving the approximate Shortest Vector Problem over lattices (SVP). As a result, we show the fastest provably correct algorithm for δ-approximate SVP for all approximation factors n 1/2+ε ≤ δ ≤ n O(1) . This is the range of approximation factors most relevant for cryptography.
10.1007/978-3-030-56880-1_10
[ "https://arxiv.org/pdf/1908.03724v1.pdf" ]
199,543,384
1908.03724
e445d364a235eaf6414a9bb94d63da5e66ad96a5
Introduction

A lattice L ⊂ R^m is the set of integer linear combinations L := L(B) = {z_1 b_1 + · · · + z_n b_n : z_i ∈ Z} of linearly independent basis vectors B = (b_1, . . . , b_n) ∈ R^{m×n}. We call n the rank of the lattice. The Shortest Vector Problem (SVP) is the computational search problem in which the input is (a basis for) a lattice L ⊆ Z^m, and the goal is to output a non-zero lattice vector y ∈ L with minimal length, ‖y‖ = λ_1(L) := min_{x ∈ L, x ≠ 0} ‖x‖. For δ ≥ 1, the δ-approximate variant of SVP (δ-SVP) is the relaxation of this problem in which any non-zero lattice vector y ∈ L with ‖y‖ ≤ δ · λ_1(L) is a valid solution. A closely related problem is δ-Hermite SVP (δ-HSVP, sometimes also called Minkowski SVP), which asks us to find a non-zero lattice vector y ∈ L with ‖y‖ ≤ δ · vol(L)^{1/n}, where vol(L) := det(B^T B)^{1/2} is the covolume of the lattice. Hermite's constant γ_n is (the square of) the minimal possible approximation factor that can be achieved in the worst case, i.e., γ_n := sup λ_1(L)²/vol(L)^{2/n}, where the supremum is over lattices L ⊂ R^n with full rank n. Hermite's constant is only known exactly for 1 ≤ n ≤ 8 and n = 24, but it is known to be asymptotically linear in n, i.e., γ_n = Θ(n). HSVP and Hermite's constant play a large role in algorithms for δ-SVP.
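The definitions above can be illustrated on a toy example: for a rank-2 integer lattice, λ_1(L) can be found by brute-force enumeration, and Hermite's inequality λ_1(L)² ≤ γ_2 · vol(L)^{2/2} can be checked using the known value γ_2 = 2/√3. The basis and the enumeration radius below are our own choices.

```python
# Toy illustration (ours): brute-force lambda_1(L) for a rank-2 integer
# lattice, and a check of Hermite's inequality with gamma_2 = 2/sqrt(3).
# The basis and enumeration radius are ad hoc.

import itertools
import math

def shortest_nonzero(b1, b2, radius=20):
    """Enumerate z1*b1 + z2*b2 for |z1|, |z2| <= radius; return min length."""
    best = float("inf")
    for z1, z2 in itertools.product(range(-radius, radius + 1), repeat=2):
        if (z1, z2) == (0, 0):
            continue
        x = z1 * b1[0] + z2 * b2[0]
        y = z1 * b1[1] + z2 * b2[1]
        best = min(best, math.hypot(x, y))
    return best

b1, b2 = (3, 1), (1, 4)
lam1 = shortest_nonzero(b1, b2)
vol = abs(b1[0] * b2[1] - b1[1] * b2[0])   # covolume = |det B|
gamma2 = 2.0 / math.sqrt(3.0)
print(lam1**2, vol, lam1**2 <= gamma2 * vol)
```

Here λ_1(L)² = 10 (the vector b_1 itself), comfortably below γ_2 · vol(L) ≈ 12.7. Note that enumeration within a fixed box is only a toy: a provably correct SVP solver needs a principled enumeration bound.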
Starting with the celebrated work of Lenstra, Lenstra, and Lovász in 1982 [LLL82], algorithms for solving δ-(H)SVP for a wide range of parameters δ have found innumerable applications, including factoring polynomials over the rationals [LLL82], integer programming [Len83, Kan83, DPV11], cryptanalysis [Sha84, Odl90, JS98, NS01], etc. More recently, many cryptographic primitives have been constructed whose security is based on the (worst-case) hardness of δ-SVP or closely related lattice problems [Ajt96, Reg09, GPV08, Pei09, Pei16]. Such lattice-based cryptographic constructions are likely to be used on massive scales (e.g., as part of the TLS protocol) in the not-too-distant future [NIS18], and in practice, the security of these constructions depends on the fastest algorithms for δ-(H)SVP, typically for δ = poly(n).

Work on δ-(H)SVP has followed two distinct tracks. There has been a long line of work showing progressively faster algorithms for exact SVP (i.e., δ = 1) [Kan83, AKS01, NV08, PS09, MV13]. However, even the fastest such algorithm (with proven correctness) runs in time 2^{n+o(n)} [ADRS15, AS18]. So, these algorithms are only useful for rather small n. This paper is part of a separate line of work on basis reduction algorithms [LLL82, Sch87, SE94, GHKN06, GN08, HPS11, MW16]. (See [NV10] and [MW16] for a much more complete list of works on basis reduction.) At a high level, these are reductions from δ-(H)SVP on lattices with rank n to exact SVP on lattices with rank k ≤ n. More specifically, these algorithms divide a basis B into projected blocks B_[i,i+k−1] with block size k, where B_[i,j] = (π_i(b_i), π_i(b_{i+1}), . . . , π_i(b_j)) and π_i is the orthogonal projection onto the subspace orthogonal to b_1, . . . , b_{i−1}. Basis reduction algorithms use their SVP oracle to find short vectors in these (low-rank) blocks and incorporate these short vectors into the lattice basis B. By doing this repeatedly (at most poly(n, log ‖B‖) times) with a cleverly chosen sequence of blocks, such algorithms progressively improve the "quality" of the basis B until b_1 is a solution to δ-(H)SVP for some δ ≥ 1. The goal, of course, is to take the block size k to be small enough that we can actually run an exact algorithm on lattices with rank k in reasonable time while still achieving a relatively good approximation factor δ.

For HSVP, the DBKZ algorithm due to Micciancio and Walter yields the best proven approximation factor for all ranks n and block sizes k [MW16]. Specifically, it achieves an approximation factor of

δ_MW,H := γ_k^{(n−1)/(2(k−1))}.   (1)

(Recall that γ_k = Θ(k) is Hermite's constant. Here and throughout the introduction, we have left out low-order factors that can be made arbitrarily close to one.) Using a result due to Lovász [Lov86], this can be converted into an algorithm for δ²_MW,H-SVP. However, the slide reduction algorithm of Gama and Nguyen [GN08] achieves a better approximation factor for SVP. It yields
By doing this repeatedly (at most poly(n, log B ) times) with a cleverly chosen sequence of blocks, such algorithms progressively improve the "quality" of the basis B until b 1 is a solution to δ-(H)SVP for some δ ≥ 1. The goal, of course, is to take the block size k to be small enough that we can actually run an exact algorithm on lattices with rank k in reasonable time while still achieving a relatively good approximation factor δ. For HSVP, the DBKZ algorithm due to Micciancio and Walter yields the best proven approximation factor for all ranks n and block sizes k [MW16]. Specifically, it achieves an approximation factor of δ MW,H := γ n−1 2(k−1) k . (1) (Recall that γ k = Θ(k) is Hermite's constant. Here and throughout the introduction, we have left out low-order factors that can be made arbitrarily close to one.) Using a result due to Lovász [Lov86], this can be converted into an algorithm for δ 2 MW,H -SVP. However, the slide reduction algorithm of Gama and Nguyen [GN08] achieves a better approximation factor for SVP. It yields δ GN,H := γ n k −1 2(k−1) k δ GN,S := γ n k −k k−1 k ,(2) for HSVP and SVP respectively, where we write n k := k · n/k for n rounded up to the nearest multiple of k. (We have included the result for HSVP in Eq. (2) for completeness, though it is clearly no better than Eq. (1).) The discontinuous approximation factor in Eq. (2) is the result of an unfortunate limitation of slide reduction: it only works when the block size k divides the rank n. If n is not divisible by k, then we must artificially pad our basis so that it has rank n k , which results in the rather odd expressions in Eq. (2). Of course, for n k, this rounding has little effect on the approximation factor. But, for cryptographic applications, we are interested in small polynomial approximation factors δ ≈ n c for relatively small constants c, i.e., in the case when k = Θ (n). 
For such values of k and n, this rounding operation can cost us a constant factor in the exponent of the approximation factor, essentially changing n^c to n^{⌈c⌉}. Such constants in the exponent have a large effect on the security of lattice-based cryptography (see Footnote 1).

1.1 Our results

Our first main contribution is a generalization of Gama and Nguyen's slide reduction [GN08] without the limitation that the rank n must be a multiple of the block size k. Indeed, we achieve exactly the approximation factor shown in Eq. (2) without any rounding, as we show below. As a very small additional contribution, we allow for the possibility that the underlying SVP algorithm for lattices with rank k only solves δ-approximate SVP for some δ > 1. This technique was already known to folklore and used in practice, and the proof requires no new ideas. Nevertheless, we believe that this work is the first to formally show that a δ-SVP algorithm suffices and to compute the exact dependence on δ. (This minor change proves quite useful when we instantiate our δ-SVP subroutine with the 2^{0.802k}-time δ-SVP algorithm for some large constant δ ≫ 1 due to Liu, Wang, Xu, and Zheng [LWXZ11, WLW15]. See Table 1 and Figure 1.)

Theorem 1.1 (Informal, slide reduction for n ≥ 2k). For any approximation factor δ ≥ 1 and block size k := k(n) ≥ 2, there is an efficient reduction from δ_H-HSVP and δ_S-SVP on lattices with rank n ≥ 2k to δ-SVP on lattices with rank k, where

    δ_H := (δ²γ_k)^{(n−1)/(2(k−1))}        δ_S := δ(δ²γ_k)^{(n−k)/(k−1)} .

Notice in particular that this matches Eq. (2) in the case when δ = 1 and k divides n. (This is not surprising, since our algorithm is essentially identical to the original algorithm from [GN08] in this case.) Theorem 1.1 also matches the approximation factor for HSVP achieved by [MW16], as shown in Eq. (1), so that the best (proven) approximation factor for both problems is now achieved by a single algorithm. However, Theorem 1.1 only applies for n ≥ 2k.
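To get a feel for the numbers in Theorem 1.1, the sketch below evaluates δ_H = (δ²γ_k)^{(n−1)/(2(k−1))} and δ_S = δ(δ²γ_k)^{(n−k)/(k−1)}. Since Hermite's constant γ_k is not known exactly for most k, we use the crude linear bound γ_k ≤ k as a stand-in; the helper name is ours and is not from the paper:

```python
import math

def theorem_1_1_factors(n, k, delta=1.0, gamma=None):
    """Approximation factors from Theorem 1.1 (valid for n >= 2k).
    gamma stands in for Hermite's constant gamma_k; by default we use the
    crude proxy gamma_k <= k (the true value is gamma_k = Theta(k))."""
    assert n >= 2 * k and k >= 2
    if gamma is None:
        gamma = float(k)
    base = delta * delta * gamma          # the quantity delta^2 * gamma_k
    delta_H = base ** ((n - 1) / (2 * (k - 1)))
    delta_S = delta * base ** ((n - k) / (k - 1))
    return delta_H, delta_S

# e.g. rank n = 150 with an exact SVP oracle (delta = 1) in blocks of size k = 50
dH, dS = theorem_1_1_factors(150, 50)
```

For n > 2k − 1 the SVP factor δ_S is always the larger of the two, and both grow as n grows with k fixed, matching the discussion above.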
Our second main contribution is an algorithm that works for k ≤ n ≤ 2k. To our knowledge, this is the first algorithm that provably achieves sublinear approximation factors for SVP and is asymptotically faster than, say, the fastest algorithm for O(1)-SVP. (We overcame a small barrier here. See the discussion in Section 3.)

Theorem 1.2 (Informal, slide reduction for n ≤ 2k). For any approximation factor δ ≥ 1 and block size k ∈ [n/2, n], there is an efficient reduction from δ_S-SVP on lattices with rank n to δ-SVP on lattices with rank k, where

    δ_S := δ²√γ_k (δ²γ_q)^{((q+1)/(q−1)) · (n−k)/(2k)} ≈ δ(δ²γ_k)^{n/(2k)} ,

and q := n − k ≤ k.

Together, these algorithms yield the asymptotically fastest proven running times for δ-SVP for all approximation factors n^{1/2+ε} ≤ δ ≤ n^{O(1)}, with a particularly large improvement when δ = n^c for 1/2 < c < 1 or for any c slightly smaller than an integer. Table 1 and Figure 1 summarize the current state of the art. (In Table 1, we write [*] for this work; the "folklore" column represents a result that was likely known to many experts in the field but apparently never published.)

1.2 Our techniques

We first briefly recall some of the details of Gama and Nguyen's slide reduction. Slide reduction divides the basis B = (b_1, . . . , b_n) ∈ R^{m×n} evenly into disjoint "primal blocks" B_[ik+1,(i+1)k] of length k. (Notice that this already requires n to be divisible by k.) It also defines certain "dual blocks" B_[ik+2,(i+1)k+1], which are the primal blocks shifted one to the right. The algorithm then tries to simultaneously satisfy certain primal and dual conditions on these blocks. Namely, it tries to SVP-reduce each primal block, i.e., it tries to make the first vector in the block, b*_{ik+1}, a shortest vector in L(B_[ik+1,(i+1)k]), where b*_j := π_j(b_j). Simultaneously, it tries to dual-SVP-reduce (DSVP-reduce) the dual blocks. (See Section 2.3 for the definition of DSVP reduction.)
We call a basis that satisfies all of these conditions simultaneously slide-reduced. An SVP oracle for lattices with rank k is sufficient to enforce all primal conditions or all dual conditions separately. (E.g., we can enforce the primal conditions by simply finding a shortest non-zero vector in each primal block and including this vector in an updated basis for the block.) Furthermore, if all primal and dual conditions hold simultaneously, then ∥b_1∥ ≤ δ_{GN,S} λ_1(L) with δ_{GN,S} as in Eq. (2), so that b_1 yields a solution to δ_{GN,S}-SVP. This follows from repeated application of a "gluing" lemma on such bases, which shows how to "glue together" two reduced blocks to obtain a larger reduced block. (See Lemma 2.2.) Finally, Gama and Nguyen showed that, if we alternate between SVP-reducing the primal blocks and DSVP-reducing the dual blocks, then the basis will converge quite rapidly to a slide-reduced basis (up to some small slack) [GN08]. Combining all of these facts together yields the main result in [GN08]. (See Section 4.)

Figure 2: Slide reduction of an upper-triangular matrix for n = pk + q ≥ 2k (left) and n = k + q ≤ 2k (right), with the blocks marked as HSVP-, DHSVP-, SVP-, or DSVP-reduced. (The original notion of slide reduction in [GN08] used only SVP-reduced and DSVP-reduced blocks of fixed size k.)

The case n > 2k. We wish to extend slide reduction to the case when n = pk + q for 1 ≤ q < k. So, intuitively, we have to decide what to do with "the extra q vectors in the basis." We start by observing that the analysis of slide reduction (and, in particular, this "gluing" property) does not quite require the first block B_[1,k] to be SVP-reduced. Instead, it essentially only requires it to be "HSVP-reduced." I.e., we do not really need ∥b_1∥ ≤ δ_S λ_1(L(B_[1,k])); we basically only need ∥b_1∥ ≤ δ_H vol(B_[1,k])^{1/k}.
Something similar holds for the first dual block, so that at least for the first block and the corresponding dual block, we basically only need an HSVP oracle. 2 This suggests that we might want to simply add the extra q vectors to the first block. I.e., we can take one "big block" B [1,k+q] of length k + q, and p − 1 "regular" blocks B [ik+q+1,(i+1)k+q] of length k. The regular blocks satisfy the same conditions as in [GN08]-they are SVP-reduced and the corresponding dual blocks are DSVP-reduced. For the big first block, we replace SVP reduction by an appropriate notion of HSVP reduction. Similarly, we replace DSVP reduction of the (big) first dual block by the appropriate dual notion of HSVP reduction. To get the best results, we instantiate our HSVP oracle with the algorithm from [MW16]. Since we only need an oracle for HSVP, we are able to take advantage of the very impressive approximation factor achieved by [MW16] for this problem (i.e., Eq. (1)). In fact, the approximation factor achieved by [MW16] is exactly what we need to apply our gluing lemma. (This is not a coincidence, as we explain in Section 4.) The result is Theorem 1.1. The case n < 2k. For n = k + q < 2k, the above idea cannot work. In particular, a "big block" of size k + q in this case would be our entire basis! So, instead of working with one big block and some "regular blocks" of size k, we work with a "small block" of size q and one regular block of size k. We then simply perform slide reduction with (primal) blocks B [1,q] and B [q+1,n] = B [n−k+1,n] . If we were to stop here, we would achieve an approximation factor of roughly γ q , which for q = Θ(k) is essentially the same as the approximation factor of roughly γ k that we get when the rank is 2k. I.e., we would essentially "pay for two blocks of length k," even though one block has size q < k. However, we notice that a slide-reduced basis guarantees more than just a short first vector. 
It also promises a very strong bound on vol(B_[1,q]). In particular, since q < k and since we have access to an oracle for lattices with rank k, it is natural to try to extend this small block B_[1,q] with low volume to a larger block B_[1,k] of length k that still has low volume. Indeed, we can use our SVP oracle to guarantee that B_[q+1,k] consists of relatively short vectors, so that vol(B_[q+1,k]) is relatively small as well. (Formally, we SVP-reduce B_[i,n] for i ∈ [q + 1, k]. Again, we are ignoring a certain degenerate case, as in Footnote 2.) This allows us to upper bound vol(B_[1,k]) = vol(B_[1,q]) · vol(B_[q+1,k]), which implies that λ_1(L(B_[1,k])) is relatively short. We can therefore find a short vector by making an additional SVP oracle call on L(B_[1,k]). (Micciancio and Walter used a similar idea in [MW16].)

1.3 Open questions and directions for future work

Table 1 suggests an obvious open question: can we find a non-trivial basis reduction algorithm that provably solves δ-SVP for δ ≤ O(√n)? More formally, can we reduce O(√n)-SVP on lattices with rank n to exact SVP on lattices with rank k = cn for some constant c < 1? Our current proof techniques seem to run into a fundamental barrier here, in that they seem more-or-less incapable of achieving δ ≪ √γ_k. This setting is interesting in practice, as many record lattice computations use block reduction with k ≥ n/2 as a subroutine, such as [CN12]. (One can provably achieve approximation factors δ ≪ √γ_k when k = (1 − o(1))n with a bit of work (see Footnote 3), but it is not clear if these extreme parameters are useful.)

Next, we recall that this work shows how to exploit the existing very impressive algorithms for HSVP (in particular, DBKZ [MW16]) to obtain better algorithms for SVP. This suggests two closely related questions for future work: (1) can we find better algorithms for HSVP (e.g., for δ-HSVP with δ ≈ √γ_n, i.e., "near-exact" HSVP); and (2) where else can we profitably replace SVP oracles with HSVP oracles? Indeed, most of our analysis (and the analysis of other basis reduction algorithms) treats the δ-SVP oracle as a δ√γ_k-HSVP oracle. We identified one way to exploit this to actually get a faster algorithm, but perhaps more can be done here, particularly if we find faster algorithms for HSVP. We also leave it to future work to implement our algorithms and to study how they perform in practice.
Indeed, Micciancio and Walter showed that (a slightly optimized version of) slide reduction is competitive with even the best heuristic algorithms in practice, in terms of both the running time and the approximation factor [MW16]. Since our algorithms are generalizations of slide reduction, one might guess that they also perform well in practice. We leave it to others to confirm or refute this guess.

Finally, we note that we present two distinct (though similar) algorithms: one for lattices with rank n ≤ 2k and one for lattices with rank n ≥ 2k. It is natural to ask whether there is a single algorithm that works in both regimes. Perhaps work on this question could even lead to better approximation factors.

2 Preliminaries

We denote column vectors x ∈ R^m by bold lower-case letters. Matrices B ∈ R^{m×n} are denoted by bold upper-case letters, and we often think of a matrix as a list of column vectors, B = (b_1, . . . , b_n). For a matrix B = (b_1, . . . , b_n) with n linearly independent columns, we write L(B) := {z_1 b_1 + · · · + z_n b_n : z_i ∈ Z} for the lattice generated by B, and ∥B∥ = max{∥b_1∥, . . . , ∥b_n∥} for the maximum norm of a column. We often implicitly assume that m ≥ n and that a basis matrix B ∈ R^{m×n} has rank n (i.e., that the columns of B are linearly independent). We use the notation log := log_2 to mean the logarithm with base two.

2.1 Lattices

For any lattice L, its dual lattice is

    L^× = {w ∈ span(L) : ⟨w, y⟩ ∈ Z for all y ∈ L} .

If B ∈ R^{m×n} is a basis of L, then L^× has basis B^× := B(B^T B)^{−1}, called the dual basis of B. The reversed dual basis B^{−s} of B is simply B^× with its columns in reversed order [GHN06].

2.2 Gram-Schmidt orthogonalization

For a basis B = (b_1, . . . , b_n) ∈ R^{m×n}, we associate a sequence of projections π_i := π_{{b_1,...,b_{i−1}}^⊥}. Here, π_{W^⊥} means the orthogonal projection onto the subspace W^⊥ orthogonal to W.
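The dual basis and reversed dual basis defined above are easy to compute explicitly. The sketch below (NumPy, with helper names of our own choosing) computes B^× = B(B^T B)^{−1} and checks the defining property that dual vectors have integer inner products with lattice vectors:

```python
import numpy as np

def dual_basis(B):
    """B^x = B (B^T B)^{-1}, a basis of the dual lattice L(B)^x."""
    return B @ np.linalg.inv(B.T @ B)

def reversed_dual_basis(B):
    """B^{-s}: the dual basis with its columns in reversed order."""
    return dual_basis(B)[:, ::-1]

B = np.array([[2.0, 1.0], [0.0, 3.0]])
D = dual_basis(B)
# <d_i, b_j> is an integer for all i, j (in fact D^T B is the identity),
# so every dual vector has integer inner product with every lattice vector
G = D.T @ B
assert np.allclose(G, np.round(G))
```

Since D^T B = (B^T B)^{−1} B^T B = I, the pairing of the i-th dual basis vector with b_j is exactly the Kronecker delta, and integrality on all of L(B) follows by linearity.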
As in [GN08], B_[i,j] denotes the projected block (π_i(b_i), π_i(b_{i+1}), . . . , π_i(b_j)). We also associate to B its Gram-Schmidt orthogonalization (GSO) B* := (b*_1, . . . , b*_n), where

    b*_i := π_i(b_i) = b_i − Σ_{j<i} μ_{i,j} b*_j ,    and    μ_{i,j} = ⟨b_i, b*_j⟩/∥b*_j∥² .

We say that B is size-reduced if |μ_{i,j}| ≤ 1/2 for all j < i; then ∥B∥ ≤ √n ∥B*∥. Transforming a basis into this form without modifying L(B) or B* is called size reduction, and this can be done easily and efficiently.

2.3 Lattice basis reduction

LLL reduction. Let B = (b_1, . . . , b_n) be a size-reduced basis. For ε ∈ [0, 1], we say that B is ε-LLL-reduced [LLL82] if every rank-two projected block B_[i,i+1] satisfies Lovász's condition: ∥b*_{i−1}∥² ≤ (1 + ε)∥μ_{i,i−1} b*_{i−1} + b*_i∥² for 1 < i ≤ n. For ε ≥ 1/poly(n), one can efficiently compute an ε-LLL-reduced basis for a given lattice.

SVP reduction and its extensions. Let B = (b_1, . . . , b_n) be a basis of a lattice L, and let δ ≥ 1 be an approximation factor. We say that B is δ-SVP-reduced if ∥b_1∥ ≤ δ · λ_1(L). Similarly, we say that B is δ-HSVP-reduced if ∥b_1∥ ≤ δ · vol(L)^{1/n}. B is δ-DSVP-reduced [GN08] (where D stands for dual) if the reversed dual basis B^{−s} is δ-SVP-reduced and B is (1/3)-LLL-reduced. Similarly, we say that B is δ-DHSVP-reduced if B^{−s} is δ-HSVP-reduced. The existence of such δ-DSVP-reduced bases is guaranteed by a classical property of LLL, namely that ∥b*_n∥ never decreases during the LLL-reduction process [LLL82].

We can efficiently compute a δ-(D)SVP-reduced basis for a given rank-n lattice L ⊆ Z^m with access to an oracle for δ-SVP on lattices with rank at most n. Furthermore, given a basis B = (b_1, . . . , b_n) ∈ Z^{m×n} of L and an index i ∈ [1, n − k + 1], we can use a δ-SVP oracle for lattices with rank at most k to efficiently compute a size-reduced basis C, described below, in which the block C_[i,i+k−1] is δ-SVP-reduced or δ-DSVP-reduced. With size reduction, we can iteratively perform poly(n, log ∥B∥) many such operations efficiently.
In particular, doing so will not increase ∥B*∥ by more than a factor of 2^{poly(n, log ∥B∥)}, and therefore the same is true of ∥B∥. That is, all intermediate entries and the total cost during execution (excluding oracle queries) remain polynomially bounded in the initial input size; see, e.g., [GN08, LN14] for the details. Therefore, to bound the running time of basis reduction, it suffices to bound the number of calls to these block-reduction subprocedures.

The basis C above has the form C = (b_1, . . . , b_{i−1}, c_i, . . . , c_{i+k−1}, b_{i+k}, . . . , b_n), and it satisfies the following:

• If C_[i,i+k−1] is δ-SVP-reduced, the procedures in [GN08, MW16] equipped with a δ-SVP oracle ensure that ∥C*∥ ≤ ∥B*∥.

• If C_[i,i+k−1] is δ-DSVP-reduced, the inherent LLL reduction implies ∥C*∥ ≤ 2^k ∥B*∥. Indeed, the GSO of C_[i,i+k−1] satisfies ∥(C_[i,i+k−1])*∥ ≤ 2^{k/2} λ_k(L(C_[i,i+k−1])) (by [LLL82, p. 518, Line 27]), and λ_k(L(C_[i,i+k−1])) ≤ √k ∥B*∥, where λ_k(·) denotes the k-th minimum.

Twin reduction and gluing. We define the following notion, which was implicit in [GN08] and will arise repeatedly in our proofs. B = (b_1, . . . , b_{d+1}) is δ-twin-reduced if B_[1,d] is δ-HSVP-reduced and B_[2,d+1] is δ-DHSVP-reduced. The usefulness of twin reduction is illustrated by the following fact, which is the key idea behind Gama and Nguyen's slide reduction (and is remarkably simple in hindsight).

Fact 2.1. If B := (b_1, . . . , b_{d+1}) ∈ R^{m×(d+1)} is δ-twin-reduced, then

    ∥b_1∥ ≤ δ^{2d/(d−1)} ∥b*_{d+1}∥ .    (3)

Furthermore,

    δ^{−d/(d−1)} ∥b_1∥ ≤ vol(B)^{1/(d+1)} ≤ δ^{d/(d−1)} ∥b*_{d+1}∥ .    (4)

Proof. By definition, we have ∥b_1∥^d ≤ δ^d vol(B_[1,d]), which is equivalent to ∥b_1∥^{d−1} ≤ δ^d vol(B_[2,d]). Similarly, vol(B_[2,d]) ≤ δ^d ∥b*_{d+1}∥^{d−1}. Combining these two inequalities yields Eq. (3). Finally, we have ∥b_1∥^d ∥b*_{d+1}∥ ≤ δ^d vol(B). Applying Eq. (3) implies the first inequality in Eq. (4), and similar analysis yields the second inequality.

The following gluing lemma, which is more-or-less implicit in prior work, shows conditions on the blocks B_[1,d] and B_[d+1,n] that are sufficient to imply (H)SVP reduction of the full basis B. Notice in particular that the decay of the Gram-Schmidt vectors guaranteed by Eq. (3) is what is needed for Item 2 of the lemma below, when η = δ^{1/(d−1)}. And, with this same choice of η, the HSVP reduction requirement on B_[1,d] in Fact 2.1 is the same as the one in Item 2 of Lemma 2.2.

Lemma 2.2 (The gluing lemma). Let B := (b_1, . . .
, b_n) ∈ R^{m×n}, α, β, η ≥ 1, and 1 ≤ d ≤ n.

1. If B_[d+1,n] is β-SVP-reduced, ∥b_1∥ ≤ α ∥b*_{d+1}∥, and λ_1(L(B)) < λ_1(L(B_[1,d])), then B is αβ-SVP-reduced.

2. If B_[1,d] is η^{d−1}-HSVP-reduced, B_[d+1,n] is η^{n−d−1}-HSVP-reduced, and ∥b_1∥ ≤ η^{2d} ∥b*_{d+1}∥, then B is η^{n−1}-HSVP-reduced.

Proof. For Item 1, since λ_1(L(B)) < λ_1(L(B_[1,d])), there exists a shortest non-zero vector u ∈ L(B) with ∥u∥ = λ_1(L(B)) and π_{d+1}(u) ≠ 0. Since B_[d+1,n] is β-SVP-reduced, it follows that ∥b*_{d+1}∥/β ≤ ∥π_{d+1}(u)∥ ≤ ∥u∥ = λ_1(L(B)). Finally, we have ∥b_1∥ ≤ α ∥b*_{d+1}∥ ≤ αβ λ_1(L), as needed.

Turning to Item 2, we note that the HSVP conditions imply that ∥b_1∥^d ≤ η^{d(d−1)} vol(B_[1,d]) and ∥b*_{d+1}∥^{n−d} ≤ η^{(n−d)(n−d−1)} vol(B_[d+1,n]). Using the bound on ∥b_1∥ relative to ∥b*_{d+1}∥, we have

    ∥b_1∥^n ≤ η^{2d(n−d)} ∥b_1∥^d · ∥b*_{d+1}∥^{n−d} ≤ η^{2(n−d)d + d(d−1) + (n−d)(n−d−1)} vol(B) = η^{n(n−1)} vol(B) ,

as needed.

2.4 The Micciancio-Walter DBKZ algorithm

We recall Micciancio and Walter's elegant DBKZ algorithm [MW16], as we will need it later. Formally, we slightly generalize DBKZ by allowing for the use of a δ-SVP oracle. We provide only a high-level sketch of the proof of correctness, as the full proof is the same as the proof in [MW16], with Hermite's constant γ_k replaced by δ²γ_k.

Theorem 2.3. For integers n > k ≥ 2, an approximation factor 1 ≤ δ ≤ 2^k, an input basis B_0 ∈ Z^{m×n} for a lattice L ⊆ Z^m, and N := ⌈(2n²/(k−1)²) · log(n log(5∥B_0∥)/ε)⌉ for some ε ∈ [2^{−poly(n)}, 1], Algorithm 1 outputs a basis B of L in polynomial time (excluding oracle queries) such that

    ∥b_1∥ ≤ (1 + ε) · (δ²γ_k)^{(n−1)/(2(k−1))} vol(L)^{1/n}

by making N · (2n − 2k + 1) + 1 calls to the δ-SVP oracle for lattices with rank k.

Algorithm 1 The Micciancio-Walter DBKZ algorithm [MW16, Algorithm 1]
Input: A block size k ≥ 2, number of tours N, a basis B = (b_1, . . . , b_n) ∈ Z^{m×n}, and access to a δ-SVP oracle for lattices with rank k.
Output: A new basis of L(B).
1: for ℓ = 1 to N do
2:   for i = 1 to n − k do
3:     δ-SVP-reduce B_[i,i+k−1].
4:   end for
5:   for j = n − k + 1 down to 1 do
6:     δ-DSVP-reduce B_[j,j+k−1].
7:   end for
8: end for
9: δ-SVP-reduce B_[1,k].
10: return B.

Proof sketch.
We briefly sketch a proof of the theorem, but we outsource the most technical step to a claim from [MW16], which was originally proven in [Neu17]. Let B^{(ℓ)} be the basis immediately after the ℓ-th tour, and let

    x_i^{(ℓ)} := log vol(B^{(ℓ)}_[1,k+i−1]) − ((k+i−1)/n) · log vol(L)    for i = 1, . . . , n − k .

Let

    y_i := ((n − k − i + 1)(k + i − 1)/(k − 1)) · log(δ√γ_k)    for i = 1, . . . , n − k .

By [MW16, Claim 3] (originally proven in [Neu17]), we have

    max_{1≤i≤n−k} |x_i^{(ℓ)}/y_i − 1| ≤ (1 − ξ) max_{1≤i≤n−k} |x_i^{(ℓ−1)}/y_i − 1| ,

where ξ := 1/(1 + n²/(4k(k−1))) ≥ 4(k−1)²/(5n²). Furthermore, notice that

    max_{1≤i≤n−k} |x_i^{(0)}/y_i − 1| ≤ k(n−k) log(5∥B^{(0)}∥)/y_1 .

It follows that

    (x_1^{(N)} − y_1)/y_1 ≤ (1 − ξ)^N max_{1≤i≤n−k} |x_i^{(0)}/y_i − 1| ≤ e^{−4(k−1)²N/(5n²)} · k(n−k) log(5∥B^{(0)}∥)/y_1 ≤ k log(1 + ε)/y_1 .

In other words,

    vol(B^{(N)}_[1,k]) ≤ (1 + ε)^k · (δ²γ_k)^{(n−k)k/(2(k−1))} vol(L)^{k/n} .

Notice that the first vector b_1 of the output basis is a δ-approximate shortest vector in L(B^{(N)}_[1,k]). Therefore,

    ∥b_1∥ ≤ δ√γ_k · vol(B^{(N)}_[1,k])^{1/k} ≤ (1 + ε)(δ²γ_k)^{(n−1)/(2(k−1))} vol(L)^{1/n} ,

as needed.

3 Slide reduction for n ≤ 2k

In this section, we consider a generalization of Gama and Nguyen's slide reduction that applies to the case when k < n ≤ 2k [GN08]. Our definition in this case is not particularly novel or surprising, as it is essentially identical to Gama and Nguyen's except that our blocks are not the same size (see Footnote 4). What is surprising about this definition is that it allows us to achieve sublinear approximation factors for SVP when the rank is n = k + q for q = Θ(k). Before this work, it seemed that approximation factors less than roughly γ_q ≈ n could not be achieved using the techniques of slide reduction (or, for that matter, any other known techniques with formal proofs). Indeed, our slide-reduced basis only achieves ∥b_1∥ ≲ γ_q λ_1(L), which is the approximation factor resulting from the gluing lemma, Lemma 2.2. (This inequality is tight.) We overcome this barrier by using our additional constraints on the primal together with some additional properties of slide-reduced bases (namely, Eq.
(4)) to bound λ_1(L(B_[1,k])). Perhaps surprisingly, the resulting bound is much better than the bound on ∥b_1∥, which allows us to find a much shorter vector with an additional oracle call.

Definition 3.1 (Slide reduction). Let n = k + q, where 1 ≤ q ≤ k are integers. A basis B of a lattice with rank n is (δ, k)-slide-reduced (with block size k ≥ 2 and approximation factor δ ≥ 1) if it is size-reduced and satisfies the following set of conditions.

1. Primal conditions: The blocks B_[1,q] and B_[i,n] for i ∈ [q + 1, max{k, q + 1}] are δ-SVP-reduced.
2. Dual condition: The block B_[2,q+1] is δ-DSVP-reduced.

A reader familiar with the slide reduction algorithm from [GN08] will not be surprised to learn that such a basis can be found (up to some small slack) using polynomially many calls to a δ-SVP oracle on lattices with rank at most k. Before presenting and analyzing the algorithm, we show that such a slide-reduced basis is in fact useful for approximating SVP with sublinear factors. (We note in passing that a slight modification of the proof of Theorem 3.2 yields a better result when q = o(k). This does not seem very useful on its own, though, since when q = o(k), the running times of our best SVP algorithms are essentially the same for rank k and rank k + q.)

Theorem 3.2. Let L be a lattice with rank n = k + q, where 2 ≤ q ≤ k are integers. For any δ ≥ 1, if a basis B of L is (δ, k)-slide-reduced, then

    λ_1(L(B_[1,k])) ≤ δ√γ_k (δ²γ_q)^{((q+1)/(q−1)) · (n−k)/(2k)} λ_1(L) .

Proof. Let B = (b_1, . . . , b_n). We distinguish two cases. First, suppose that there exists an index i ∈ [q + 1, max{k, q + 1}] such that ∥b*_i∥ > δλ_1(L). Let v be a shortest non-zero vector of L. We claim that π_i(v) = 0, i.e., that v ∈ L(B_[1,i−1]). If this is not the case, then, since B_[i,n] is δ-SVP-reduced, we have that ∥b*_i∥/δ ≤ ∥π_i(v)∥ ≤ ∥v∥ = λ_1(L), which is a contradiction.
Thus, we see that v ∈ L(B_[1,i−1]) ⊆ L(B_[1,k]), and hence λ_1(L(B_[1,k])) = λ_1(L) (which is much stronger than what we need).

Now, suppose that ∥b*_i∥ ≤ δλ_1(L) for all indices i ∈ [q + 1, max{k, q + 1}]. By definition, the primal and dual conditions imply that B_[1,q+1] is δ√γ_q-twin-reduced. Therefore, by Eq. (4) of Fact 2.1, we have

    vol(B_[1,k]) = vol(B_[1,q]) · Π_{i=q+1}^{k} ∥b*_i∥ ≤ (δ√γ_q)^{q(q+1)/(q−1)} ∥b*_{q+1}∥^q · Π_{i=q+1}^{k} ∥b*_i∥ ≤ (δ²γ_q)^{((q+1)/(q−1)) · (n−k)/2} (δλ_1(L))^k ,

where we have used the assumption that ∥b*_i∥ ≤ δλ_1(L) for all indices i ∈ [q + 1, max{k, q + 1}] (and, by convention, we take the product to equal one in the special case when q = k). By the definition of Hermite's constant, this implies that

    λ_1(L(B_[1,k])) ≤ √γ_k vol(B_[1,k])^{1/k} ≤ δ√γ_k (δ²γ_q)^{((q+1)/(q−1)) · (n−k)/(2k)} λ_1(L) ,

as needed.

3.1 The slide reduction algorithm for n ≤ 2k

We now present our slight generalization of Gama and Nguyen's slide reduction algorithm that works for all k + 2 ≤ n ≤ 2k.

Algorithm 2 The slide reduction algorithm for n ≤ 2k (adapted from [GN08, Algorithm 1])
Input: Block size k, slack ε > 0, approximation factor δ ≥ 1, a basis B = (b_1, . . . , b_n) ∈ Z^{m×n} of a lattice L with rank n = k + q where 2 ≤ q ≤ k, and access to a δ-SVP oracle for lattices with rank at most k.
Output: A ((1 + ε)δ, k)-slide-reduced basis of L.
1: while vol(B_[1,q])² is modified by the loop do
2:   δ-SVP-reduce B_[1,q].
3:   for i = q + 1 to max{k, q + 1} do
4:     δ-SVP-reduce B_[i,n].
5:   end for
6:   Find a new basis C := (b_1, c_2, . . . , c_{q+1}, b_{q+2}, . . . , b_n) of L by δ-DSVP-reducing B_[2,q+1].
7:   if (1 + ε)∥b*_{q+1}∥ < ∥c*_{q+1}∥ then
8:     B ← C.
9:   end if
10: end while
11: return B.

Our proof that Algorithm 2 runs in polynomial time (excluding oracle calls) is essentially identical to the proof in [GN08].

Theorem 3.3. For ε ≥ 1/poly(n), Algorithm 2 runs in polynomial time (excluding oracle calls), makes polynomially many calls to its δ-SVP oracle, and outputs a ((1 + ε)δ, k)-slide-reduced basis of the input lattice L.

Proof. First, notice that if Algorithm 2 terminates, then its output must be ((1 + ε)δ, k)-slide-reduced. So, we only need to argue that the algorithm runs in polynomial time (excluding oracle calls).
Let B_0 ∈ Z^{m×n} be the input basis, and let B ∈ Z^{m×n} denote the current basis during the execution of the algorithm. As is common in the analysis of basis reduction algorithms [LLL82, GN08, LN14], we consider an integral potential of the form P(B) := vol(B_[1,q])² ∈ Z^+. The initial potential satisfies log P(B_0) ≤ 2q · log ∥B_0∥, and every operation in Algorithm 2 either preserves or significantly decreases P(B). More precisely, if the δ-DSVP-reduction step (i.e., Step 8) occurs, then the potential P(B) decreases by a multiplicative factor of at least (1 + ε)². No other step changes L(B_[1,q]) or P(B). Therefore, Algorithm 2 updates L(B_[1,q]) at most log P(B_0)/(2 log(1 + ε)) times, and hence it makes at most qk log ∥B_0∥/log(1 + ε) calls to the δ-SVP oracle. From the complexity statement in Section 2.3, it follows that Algorithm 2 runs efficiently (excluding the running time of oracle calls).

Corollary 3.4. For any constant c ∈ (1/2, 1] and δ := δ(n) ≥ 1, there is an efficient reduction from O(δ^{2c+1} n^c)-SVP on lattices with rank n to δ-SVP on lattices with rank k := ⌈n/(2c)⌉.

Proof. On input (a basis for) an integer lattice L ⊆ Z^m with rank n, the reduction first calls Algorithm 2 to compute a ((1 + ε)δ, k)-slide-reduced basis B of L with, say, ε = 1/n. The reduction then uses its δ-SVP oracle once more on B_[1,k] and returns the resulting non-zero short lattice vector. It is immediate from Theorem 3.3 that this reduction is efficient, and by Theorem 3.2, the output vector is a δ'-approximate shortest vector, where

    δ' = δ²√γ_k ((1 + ε)²δ²γ_q)^{((q+1)/(q−1)) · (n−k)/(2k)} ≤ O(δ^{2c+1} n^c) ,

as needed.

4 Slide reduction for n ≥ 2k

We now introduce a generalized version of slide reduction for lattices with any rank n ≥ 2k. As we explained in Section 1.2, at a high level, our generalization of the definition from [GN08] is the same as the original, except that (1) our first block B_[1,k+q] is bigger than the others (out of necessity, since we can no longer divide our basis evenly into disjoint blocks of size k); and (2) we only η-HSVP-reduce the first block (since we cannot afford to δ-SVP-reduce a block with size larger than k). Thus, our notion of slide reduction can be restated as "the first block and the first dual block are η-(D)HSVP-reduced and the rest of the basis B_[k+q+1,n] is slide-reduced in the sense of [GN08]" (see Footnote 5). However, the specific value of η that we choose in our definition below might look unnatural at first. We first present the definition and then explain where η comes from.

Definition 4.1 (Slide reduction).
Let n, k, p, q be integers such that n = pk + q with p, k ≥ 2 and 0 ≤ q ≤ k − 1, and let δ ≥ 1. A basis B ∈ R^{m×n} is (δ, k)-slide-reduced if it is size-reduced and satisfies the following three sets of conditions.

1. Mordell conditions: The block B_[1,k+q] is η-HSVP-reduced and the block B_[2,k+q+1] is η-DHSVP-reduced, for η := (δ²γ_k)^{(k+q−1)/(2(k−1))}.
2. Primal conditions: For all i ∈ [1, p − 1], the block B_[ik+q+1,(i+1)k+q] is δ-SVP-reduced.
3. Dual conditions: For all i ∈ [1, p − 2], the block B_[ik+q+2,(i+1)k+q+1] is δ-DSVP-reduced. (When p = 2, there are simply no dual conditions.)

There are two ways to explain our specific choice of η. Most simply, notice that the output of the DBKZ algorithm (due to [MW16] and presented in Section 2.4) is η-HSVP-reduced when the input basis has rank k + q (up to some small slack ε). In other words, one reason that we choose this value of η is because we actually can η-HSVP-reduce a block of size k + q efficiently with access to a δ-SVP oracle for lattices with rank k. If we could do better, then we would in fact obtain a better algorithm, but we do not know how. Second, this value of η is natural in this context because it is the choice that "makes the final approximation factor for HSVP match the approximation factor for the first block." I.e., the theorem below shows that when we plug in this value of η, a slide-reduced basis of rank n is (δ²γ_k)^{(n−1)/(2(k−1))}-HSVP-reduced, which nicely matches the approximation factor of η = (δ²γ_k)^{(k+q−1)/(2(k−1))}-HSVP that we need for the first block (whose rank is k + q). At a technical level, this is captured by Fact 2.1 and Lemma 2.2. Of course, the fact that these two arguments suggest the same value of η is not a coincidence. Both arguments are essentially disguised proofs of Mordell's inequality, which says that γ_n ≤ γ_k^{(n−1)/(k−1)} for 2 ≤ k ≤ n. E.g., with δ = 1 the primal Mordell condition says that b_1 yields a witness to Mordell's inequality for B_[1,k+q].

Theorem 4.2. For any δ ≥ 1, k ≥ 2, and n ≥ 2k, if B = (b_1, . . . , b_n) ∈ R^{m×n} is a (δ, k)-slide-reduced basis of a lattice L, then

    ∥b_1∥ ≤ (δ²γ_k)^{(n−1)/(2(k−1))} vol(L)^{1/n} .

Furthermore, if λ_1(L(B_[1,k+q])) > λ_1(L), then

    ∥b_1∥ ≤ δ(δ²γ_k)^{(n−k)/(k−1)} λ_1(L) ,

where 0 ≤ q ≤ k − 1 is such that n = pk + q.

Proof. Let d := k + q. Theorem A.1 of Appendix A shows that B_[d+1,n] is both (δ²γ_k)^{(n−d−1)/(2(k−1))}-HSVP-reduced and δ(δ²γ_k)^{(n−d−k)/(k−1)}-SVP-reduced. (We relegate this theorem and its proof to the appendix because it is essentially just a restatement of [GN08, Theorem 1], since B_[d+1,n] is effectively just a slide-reduced basis in the original sense of [GN08].) Furthermore, B_[1,d+1] is (δ²γ_k)^{(d−1)/(2(k−1))}-twin-reduced, so both bounds follow by combining Fact 2.1 with Lemma 2.2.

4.1 The slide reduction algorithm for n ≥ 2k

We now present our slight generalization of Gama and Nguyen's slide reduction algorithm that works for all n ≥ 2k.
Our proof that the algorithm runs in polynomial time (excluding oracle calls) is essentially identical to the proof in [GN08].

Algorithm 3 The slide reduction algorithm for n ≥ 2k
Input: Block size k ≥ 2, slack ε > 0, approximation factor δ ≥ 1, a basis B = (b_1, . . . , b_n) ∈ Z^{m×n} of a lattice L of rank n = pk + q ≥ 2k for 0 ≤ q ≤ k − 1, and access to a δ-SVP oracle for lattices with rank k.
Output: A ((1 + ε)δ, k)-slide-reduced basis of L(B).
1: while vol(B_[1,ik+q])² is modified by the loop for some i ∈ [1, p − 1] do
2:   (1 + ε)η-HSVP-reduce B_[1,k+q] using Alg. 1, for η := (δ²γ_k)^{(k+q−1)/(2(k−1))}.
3:   for i = 1 to p − 1 do
4:     δ-SVP-reduce B_[ik+q+1,(i+1)k+q].
5:   end for
6:   Find a new basis C := (b_1, c_2, . . . , c_{k+q+1}, b_{k+q+2}, . . . , b_n) of L by (1 + ε)^{1/2}η-DHSVP-reducing B_[2,k+q+1] using Alg. 1.
7:   if (1 + ε)^{1/2}∥b*_{k+q+1}∥ < ∥c*_{k+q+1}∥ then B ← C.
8:   end if
9:   for i = 1 to p − 2 do
10:    Find a new basis C := (b_1, . . . , b_{ik+q+1}, c_{ik+q+2}, . . . , c_{(i+1)k+q+1}, b_{(i+1)k+q+2}, . . . , b_n) of L by δ-DSVP-reducing B_[ik+q+2,(i+1)k+q+1].
11:    if (1 + ε)∥b*_{(i+1)k+q+1}∥ < ∥c*_{(i+1)k+q+1}∥ then
12:      B ← C.
13:    end if
14:  end for
15: end while
16: return B.

Theorem 4.3. For ε ∈ [1/poly(n), 1], Algorithm 3 runs in polynomial time (excluding oracle calls), makes polynomially many calls to its δ-SVP oracle, and outputs a ((1 + ε)δ, k)-slide-reduced basis of the input lattice L.

Proof. First, notice that if Algorithm 3 terminates, then its output is ((1 + ε)δ, k)-slide-reduced. So, we only need to argue that the algorithm runs in polynomial time (excluding oracle calls). Let B_0 ∈ Z^{m×n} be the input basis, and let B ∈ Z^{m×n} denote the current basis during the execution of Algorithm 3. As is common in the analysis of basis reduction algorithms [LLL82, GN08, LN14], we consider an integral potential of the form

    P(B) := Π_{i=1}^{p−1} vol(B_[1,ik+q])² ∈ Z^+ .

The initial potential satisfies log P(B_0) ≤ 2n² · log ∥B_0∥, and every operation in Algorithm 3 either preserves or significantly decreases P(B). In particular, the potential is unaffected by the primal steps (i.e., Steps 2 and 4), which leave vol(B_[1,ik+q]) unchanged for all i. The dual steps (i.e., Steps 7 and 12) either leave vol(B_[1,ik+q]) unchanged for all i or decrease P(B) by a multiplicative factor of at least (1 + ε).

Figure 1: Running time T as a function of the approximation factor δ for δ-SVP. The y-axis is log_2(T)/n, and the x-axis is log_n δ.
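The exponent bookkeeping behind Item 2 of the gluing lemma (Lemma 2.2) comes down to the polynomial identity 2d(n−d) + d(d−1) + (n−d)(n−d−1) = n(n−1), which is what turns the two block-wise HSVP conditions into η^{n−1}-HSVP reduction of the full basis. A short brute-force check (helper name ours) confirms it:

```python
def gluing_exponent_sum(n, d):
    """The three exponent contributions from the proof of Item 2 of the
    gluing lemma: the b_1-vs-b*_{d+1} bound, the first block, and the
    second block."""
    return 2 * d * (n - d) + d * (d - 1) + (n - d) * (n - d - 1)

# the contributions always sum to n(n-1), for every split point d
assert all(
    gluing_exponent_sum(n, d) == n * (n - 1)
    for n in range(2, 60)
    for d in range(1, n)
)
```

Expanding by hand gives the same cancellation: 2dn − 2d² + d² − d + (n−d)² − (n−d) = n² − n.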
Table 1: Algorithms for solving SVP. We write [A]+[B] to denote the algorithm that uses basis reduction from [A] with the exact/near-exact SVP algorithm from [B].
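The Mordell conditions of Definition 4.1 take their name from Mordell's inequality γ_n ≤ γ_k^{(n−1)/(k−1)} for 2 ≤ k ≤ n. On the small ranks where Hermite's constant is known exactly (the standard values γ_k^k for k ≤ 8 below), the inequality can be verified directly; the helper names are ours:

```python
from fractions import Fraction

# gamma_k^k for the ranks where Hermite's constant is known exactly
GAMMA_POW_K = {1: Fraction(1), 2: Fraction(4, 3), 3: Fraction(2), 4: Fraction(4),
               5: Fraction(8), 6: Fraction(64, 3), 7: Fraction(64), 8: Fraction(256)}

def gamma(k):
    """Hermite's constant gamma_k for 1 <= k <= 8."""
    return float(GAMMA_POW_K[k]) ** (1.0 / k)

# Mordell's inequality: gamma_n <= gamma_k^{(n-1)/(k-1)} for 2 <= k <= n
for n in range(2, 9):
    for k in range(2, n + 1):
        assert gamma(n) <= gamma(k) ** ((n - 1) / (k - 1)) + 1e-12
```

Note that equality holds, e.g., for (n, k) = (4, 3) and (8, 7), so the inequality is tight in small ranks.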
This degenerate case is therefore easily handled separately (but it does in fact need to be handled separately). For example, it is immediate from the proof of Theorem 3.2 that the (very simple) notion of a slide-reduced basis for n ≤ 2k in Definition 3.1 is already enough to obtain δ ≈ γ n−k ≈ n − k. So, for n k + √ k, this already achieves δ √ n. With a bit more work, one can show that an extra oracle call like the one used in Corollary 3.4 can yield a still better approximation factor in this rather extreme setting of k = (1 − o(1))n. The only difference, apart from the approximation factor δ, is that we use SVP reduction instead of HKZ reduction for the primal. It is clear from the proof in[GN08] that only SVP reduction is required, as was observed in[MW16]. We do require that additional blocks B[i,n] for q + 1 ≤ i ≤ k are SVP-reduced, which is quite similar to simply HKZ-reducing B[q+1,n] , but this requirement plays a distinct role in our analysis, as we discuss below. Apart from the approximation factor δ, there is one minor difference between our primal conditions and those of[GN08]. We only require the primal blocks to be SVP-reduced, while [GN08] required them to be HKZ-reduced, which is a stronger condition. It is clear from the proof in[GN08] that only SVP reduction is required, as was observed in[MW16]. A Properties of Gama and Nguyen's slide reductionIn the theorem below, B [d+1,n] is essentially just a slide-reduced basis in the sense of[GN08]. So, the following is more-or-less just a restatement of [GN08, Theorem 1]., which implies (7) by induction.We prove (8) and (9) by induction over p. If p = 1, then both inequalities hold as B [d+1,n] is δ-SVP reduced by the definition of slide reduction. Now, assume that Eqs. (8) and (9) hold for p − 1 ≥ 1. Then B satisfies the requirements of the theorem with d := d + k and p := p − 1. 
Therefore, by the induction hypothesis, we haveSince B [d+1,d+k] is δ √ γ k -HSVP reduced, we may apply Lemma 2.2, which proves (8) for B[d+1,n].as needed. If not, then λ 1 (L(B [d+1,n] )) = λ 1 (L(B [d+1,d+k+1] )), and b 1 ≤ δλ 1 (L(B [d+1,n] )) because B [d+1,d+k+1] is δ-SVP reduced. In all cases, we proved (9). Algorithm 2 updates vol(B [1,ik+q] ) for some i at most log P (B 0 )/ log(1 + ε) times. Therefore, Therefore, Algorithm 2 updates vol(B [1,ik+q] ) for some i at most log P (B 0 )/ log(1 + ε) times. Hence, it makes at most 4pn 2 log B 0 / log(1 + ε) calls to the SVP oracle in the SVP and DSVP reduction steps (i.e., Steps 4 and 12), and similarly at most 4n 2 log B 0 / log(1 + ε) calls to Algorithm 1. From the complexity. statement in Section 2.3, it follows that Algorithm 2 runs efficiently (excluding the running time of oracle calls), as neededHence, it makes at most 4pn 2 log B 0 / log(1 + ε) calls to the SVP oracle in the SVP and DSVP reduction steps (i.e., Steps 4 and 12), and similarly at most 4n 2 log B 0 / log(1 + ε) calls to Algo- rithm 1. From the complexity statement in Section 2.3, it follows that Algorithm 2 runs efficiently (excluding the running time of oracle calls), as needed. For any constant c ≥ 1 and δ := δ(n) ≥ 1, there is an efficient reduction from. SVP on lattices with rank n to δ-SVP on lattices with rank k := n. Corollary 4.4.Corollary 4.4. For any constant c ≥ 1 and δ := δ(n) ≥ 1, there is an efficient reduction from O(δ 2c+1 n c )-SVP on lattices with rank n to δ-SVP on lattices with rank k := n/(c + 1) . of L with, say, ε = 1/n. Then, the reduction uses the procedure from Corollary 3.4 on the lattice L(B [1,2k] ) with c = 1 (i.e., slide reduction on a lattice with rank 2k), to find a vector v ∈ L(B [1,2k] ) with 0 < v ≤ O(δ 3 n)λ 1 (L(B [1,2k] )). Finally, the reduction outputs the shorter of the two vectors b 1 and v. 
It is immediate from Corollary 3.4 and Theorem 4.3 that this reduction is efficient. To prove correctness, we consider two cases. First, suppose that λ_1(L(B_{[1,k+q]})) = λ_1(L). Then ‖v‖ ≤ O(δ³n)·λ_1(L(B_{[1,2k]})) ≤ O(δ^{2c+1}n^c)·λ_1(L), so that the algorithm will output an O(δ^{2c+1}n^c)-approximate shortest vector. On the other hand, if λ_1(L(B_{[1,k+q]})) > λ_1(L), then by Theorem 4.2, ‖b_1‖ itself is within the claimed O(δ^{2c+1}n^c) factor of λ_1(L), so the reduction is correct in both cases.

References

[ADRS15] Divesh Aggarwal, Daniel Dadush, Oded Regev, and Noah Stephens-Davidowitz. Solving the Shortest Vector Problem in 2^n time via discrete Gaussian sampling. In STOC, 2015. http://arxiv.org/abs/1412.7994
[Ajt96] Miklós Ajtai. Generating hard instances of lattice problems. In STOC, 1996.
[AKS01] Miklós Ajtai, Ravi Kumar, and D. Sivakumar. A sieve algorithm for the Shortest Lattice Vector Problem. In STOC, 2001.
[APS15] Martin R. Albrecht, Rachel Player, and Sam Scott. On the concrete hardness of Learning with Errors. J. Mathematical Cryptology, 9(3), 2015. http://eprint.iacr.org/2015/046
[AS18] Divesh Aggarwal and Noah Stephens-Davidowitz. Just take the average! An embarrassingly simple 2^n-time algorithm for SVP (and CVP). In SOSA, 2018. http://arxiv.org/abs/1709.01535
[CN12] Yuanmi Chen and Phong Q. Nguyen. Faster algorithms for approximate common divisors: Breaking fully-homomorphic-encryption challenges over the integers. In EUROCRYPT, 2012.
[DPV11] Daniel Dadush, Chris Peikert, and Santosh Vempala. Enumerative lattice algorithms in any norm via M-ellipsoid coverings. In FOCS, 2011.
[GHKN06] Nicolas Gama, Nick Howgrave-Graham, Henrik Koy, and Phong Q. Nguyen. Rankin's constant and blockwise lattice reduction. In CRYPTO, 2006.
[GHN06] Nicolas Gama, Nick Howgrave-Graham, and Phong Q. Nguyen. Symplectic lattice reduction and NTRU. In EUROCRYPT, 2006.
[GN08] Nicolas Gama and Phong Q. Nguyen. Finding short lattice vectors within Mordell's inequality. In STOC, 2008.
[GPV08] Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In STOC, 2008. https://eprint.iacr.org/2007/432
[HPS11] Guillaume Hanrot, Xavier Pujol, and Damien Stehlé. Analyzing blockwise lattice algorithms using dynamical systems. In CRYPTO, 2011.
[JS98] Antoine Joux and Jacques Stern. Lattice reduction: A toolbox for the cryptanalyst. J. Cryptology, 11(3), 1998.
[Kan83] Ravi Kannan. Improved algorithms for integer programming and related lattice problems. In STOC, 1983.
[Len83] Hendrik W. Lenstra, Jr. Integer programming with a fixed number of variables. Mathematics of Operations Research, 8(4), 1983.
[LLL82] Arjen K. Lenstra, Hendrik W. Lenstra, Jr., and László Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261(4), 1982.
[LN14] Jianwei Li and Phong Q. Nguyen. Approximating the densest sublattice from Rankin's inequality. LMS J. of Computation and Mathematics, 17(A), 2014.
[Lov86] László Lovász. An algorithmic theory of numbers, graphs and convexity. Society for Industrial and Applied Mathematics, 1986.
[LWXZ11] Mingjie Liu, Xiaoyun Wang, Guangwu Xu, and Xuexin Zheng. Shortest lattice vectors in the presence of gaps. http://eprint.iacr.org/2011/139, 2011.
[MV13] Daniele Micciancio and Panagiotis Voulgaris. A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. SIAM J. on Computing, 42(3), 2013.
[MW16] Daniele Micciancio and Michael Walter. Practical, predictable lattice basis reduction. In EUROCRYPT, 2016. http://eprint.iacr.org/2015/1123
[Neu17] Arnold Neumaier. Bounding basis reduction properties. Designs, Codes and Cryptography, 84(1), 2017.
[NIST18] NIST Computer Security Division. Post-quantum cryptography. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography, 2018.
[NS01] Phong Q. Nguyen and Jacques Stern. The two faces of lattices in cryptology. In CaLC, 2001.
[NV08] Phong Q. Nguyen and Thomas Vidick. Sieve algorithms for the Shortest Vector Problem are practical. J. Mathematical Cryptology, 2(2), 2008.
[NV10] Phong Q. Nguyen and Brigitte Vallée, editors. The LLL algorithm: Survey and applications. Springer-Verlag, 2010.
[Odl90] Andrew M. Odlyzko. The rise and fall of knapsack cryptosystems. Cryptology and Computational Number Theory, 42, 1990.
[Pei09] Chris Peikert. Public-key cryptosystems from the worst-case Shortest Vector Problem. In STOC, 2009.
[Pei16] Chris Peikert. A decade of lattice cryptography. Foundations and Trends in Theoretical Computer Science, 10(4), 2016.
[PS09] Xavier Pujol and Damien Stehlé. Solving the Shortest Lattice Vector Problem in time 2^{2.465n}. http://eprint.iacr.org/2009/605, 2009.
[Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. J. ACM, 56(6), 2009.
[Sch87] Claus-Peter Schnorr. A hierarchy of polynomial time lattice basis reduction algorithms. Theoretical Computer Science, 53(2-3), 1987.
[SE94] Claus-Peter Schnorr and M. Euchner. Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Mathematical Programming, 66, 1994.
[Sha84] Adi Shamir. A polynomial-time algorithm for breaking the basic Merkle-Hellman cryptosystem. IEEE Trans. Inform. Theory, 30(5), 1984.
[WLW15] Wei Wei, Mingjie Liu, and Xiaoyun Wang. Finding shortest lattice vectors in the presence of gaps. In CT-RSA, 2015.
A brief introduction to bulk viscosity of fluids

Bhanuday Sharma and Rakesh Kumar
Department of Aerospace Engineering, Indian Institute of Technology Kanpur, India

March 16, 2023 (arXiv:2303.08400)

Keywords: bulk viscosity, volume viscosity, Stokes' hypothesis

Abstract. Fluid flows are typically studied by solving the Navier-Stokes equation. One of the fundamental assumptions of this equation is Stokes' hypothesis, which takes the bulk viscosity, µ_b, to be identically zero. Stokes' hypothesis is a reasonable approximation for commonly observed fluid flows, so the Navier-Stokes equation gives satisfactory results in those situations. However, there are circumstances in which this hypothesis does not hold, and the classical Navier-Stokes equation becomes inapt; these include the absorption of sound waves, hypersonic flows, turbulent flows, and the flow of Martian air. Reliable analytical and computational studies of such flows require that bulk viscosity be accounted for in the governing equations. In this article, we provide a brief review of the subject of bulk viscosity. We start with a brief background of the topic, then discuss the underlying microscopic mechanisms that give rise to bulk viscosity effects, followed by a review of the methods available in the literature for estimating this parameter. Finally, we review studies that analyze the effects of bulk viscosity in various fluid flows.

1 Introduction

Ever since Sir George Gabriel Stokes (1819-1903) proposed the complete set of equations for the dynamics of viscous fluids in 1845 [1], bulk viscosity (sometimes also referred to as volume or dilatational viscosity) has remained one of the controversial subjects of fluid dynamics [2].
The recent surge of interest in Mars missions has made the study of bulk viscosity more relevant, as the Martian atmosphere consists of approximately 96% carbon dioxide, a gas with a reported bulk-to-shear viscosity ratio of ≈ 2000 [3,4]. Accounting for bulk viscosity has also enabled more accurate modeling of several fluid-mechanical phenomena [5-17]. On the other hand, in contrast to shear viscosity, which is a very well studied transport property, bulk viscosity is still not completely explored. There are considerable ambiguities and uncertainties about the nature, effects, and applicability of the concept of bulk viscosity. Even for the most common fluids, existing experimental values of bulk viscosity are spread over a broad range, and widely accepted values are still not available [18].

In this article, we provide a brief review of the subject of bulk viscosity. We start with a brief background of the subject. We then discuss the underlying microscopic mechanisms that give rise to bulk viscosity effects, followed by a review of the methods available in the literature for estimating this parameter. Finally, we review studies that analyze the effects of bulk viscosity in various fluid flows.

The stress-strain-rate relationship (i.e., the constitutive relation) for a Newtonian fluid is given as follows:

    \sigma_{ik} = -P_{thermo}\,\delta_{ik} + \mu\left(\frac{\partial u_i}{\partial x_k} + \frac{\partial u_k}{\partial x_i}\right) + \lambda\,\frac{\partial u_j}{\partial x_j}\,\delta_{ik}    (1)

where σ_{ik} is Cauchy's stress tensor, δ_{ik} is the Kronecker delta, u_i is the velocity of the fluid, x_i is the spatial coordinate, and the scalar quantity P_thermo is the thermodynamic (hydrostatic) pressure. The above relation contains two independent coefficients: the coefficient of shear viscosity, µ, sometimes also termed the first coefficient of viscosity, and the coefficient of longitudinal viscosity, λ, also referred to as the second coefficient of viscosity.
The above relation, Eq. (1), can be rearranged as follows by separating the isotropic and deviatoric parts of the strain-rate tensor:

    \sigma_{ik} = -P_{thermo}\,\delta_{ik} + \mu\left(\frac{\partial u_i}{\partial x_k} + \frac{\partial u_k}{\partial x_i} - \frac{2}{3}\,\frac{\partial u_j}{\partial x_j}\,\delta_{ik}\right) + \mu_b\,\frac{\partial u_j}{\partial x_j}\,\delta_{ik}    (2)

where µ_b = (2/3)µ + λ. The coefficient µ_b is known as the coefficient of bulk viscosity; it represents the irreversible resistance, over and above the reversible resistance caused by the isentropic bulk modulus, to a change of volume [19]. Its values are expressed in the same units as shear viscosity, i.e., Pa s or poise. The second law of thermodynamics constrains both the shear viscosity and the bulk viscosity to have only non-negative values [20]. The values of bulk viscosity for common dilute gases at 300 K are listed in Table 1 [21]. It should be noted that the bulk viscosity of monatomic gases in the dilute-gas limit is zero.

The mechanical pressure, P_mech, is defined as the negative average of the diagonal terms of the stress tensor:

    P_{mech} = -\frac{1}{3}(\sigma_{11} + \sigma_{22} + \sigma_{33}) = P_{thermo} - \mu_b\,\nabla\cdot\mathbf{u}    (3)

Table 1: Ratio of bulk viscosity to shear viscosity for common gases at 300 K [21]

    Gas               µ_b/µ
    iso-Butane        2.00
    Carbon dioxide    3828

Historically, Stokes [1] assumed the bulk viscosity, µ_b, to be identically zero for all fluids. This implies that the mechanical pressure is always equal to the thermodynamic pressure, irrespective of the process the system is undergoing, i.e., that the viscous forces do not depend on the rate of expansion or compression at all. This assumption is known as Stokes' hypothesis, and it later became customary to use it in fluid mechanics. However, Stokes [1] himself did not take the hypothesis to be always true: he mentioned that if, in commonly encountered flows, analyses with and without bulk viscosity produce the same results, it would be because ∇·u is small rather than because µ_b is zero. Indeed, there are certain instances where bulk viscosity effects are not negligible.
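Eqs. (2) and (3) can be verified numerically: the deviatoric shear term in Eq. (2) is traceless, so the mechanical pressure defined by the trace of the stress tensor differs from the thermodynamic pressure exactly by µ_b ∇·u. A minimal sketch (Python/NumPy; the coefficient values and velocity gradient are illustrative placeholders, not property data):

```python
import numpy as np

def newtonian_stress(grad_u, p_thermo, mu, mu_b):
    """Cauchy stress of Eq. (2): isotropic pressure, traceless shear part,
    and the bulk-viscosity contribution."""
    strain_rate = 0.5 * (grad_u + grad_u.T)               # symmetric strain-rate tensor
    div_u = np.trace(grad_u)                              # dilatation rate, div(u)
    eye = np.eye(3)
    shear = 2.0 * mu * (strain_rate - div_u / 3.0 * eye)  # deviatoric part, traceless
    return -p_thermo * eye + shear + mu_b * div_u * eye

# Illustrative inputs (not measured property data)
rng = np.random.default_rng(0)
grad_u = rng.normal(size=(3, 3))          # arbitrary velocity-gradient tensor du_i/dx_k
p_thermo, mu, mu_b = 101325.0, 1.8e-5, 1.3e-5

sigma = newtonian_stress(grad_u, p_thermo, mu, mu_b)
p_mech = -np.trace(sigma) / 3.0           # Eq. (3), first equality

# Eq. (3): P_mech = P_thermo - mu_b * div(u), since the shear part is traceless
assert np.isclose(p_mech, p_thermo - mu_b * np.trace(grad_u))
```

Setting mu_b = 0 recovers Stokes' hypothesis: the assertion then reads P_mech = P_thermo for every velocity field.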
Section 4 briefly discusses these scenarios, along with some of the past works that model them while considering a finite bulk viscosity.

2 Microscopic picture of the mechanisms generating normal stresses in an expanding/contracting dilute gas

To understand the microscopic origin of the normal stress, let us rewrite Eq. (2) for the normal stress acting in the x-direction on the yz-plane of a fluid element:

    \sigma_{11} = -P_{thermo} + 2\mu\left(\frac{\partial u_1}{\partial x_1} - \frac{1}{3}\,\nabla\cdot\mathbf{u}\right) + \mu_b\,\nabla\cdot\mathbf{u}    (4)

The second and third terms in the above equation are the contributions of shear and bulk viscosity to the normal stress, respectively. The mechanisms responsible for these stresses are discussed below.

2.1 Shear viscosity

In dilute gases, the mechanism generating normal stress due to shear viscosity is analogous to the mechanism that produces shear stress, i.e., the transport of momentum between adjacent layers of fluid by the thermal diffusion of molecules. Consider a flow field dilating in only one direction, with three layers of fluid having velocities as shown in Fig. 1: the left, middle, and right layers have bulk velocities equal to u, u + du, and u + 2du, respectively. When a molecule from the leftmost layer jumps into the middle layer due to its random thermal motion, it decreases the average momentum of the middle layer; as a result, the middle layer is pulled towards the left. Similarly, if a molecule from the rightmost layer jumps into the middle layer, it increases the middle layer's average momentum, producing a pull towards the right. The combined effect of these two pulls is a normal stress on the fluid element of the middle layer along the direction of the velocity gradient. Therefore, in contrast to what the name 'shear viscosity' may suggest, shear viscosity produces not only shear stresses but also normal stresses.
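The decomposition in Eq. (4) can be probed with two simple velocity fields. In an isotropic expansion u = (ax, ay, az), we have ∂u_1/∂x_1 = ∇·u/3, so the shear contribution vanishes and only bulk viscosity resists the dilatation; a uniaxial dilatation u = (ax, 0, 0) activates both terms. A short check (Python; the coefficients are illustrative placeholders):

```python
def sigma_11(du1_dx1, div_u, p_thermo, mu, mu_b):
    """Normal stress in the x-direction, Eq. (4)."""
    return -p_thermo + 2.0 * mu * (du1_dx1 - div_u / 3.0) + mu_b * div_u

mu, mu_b, p = 1.8e-5, 1.3e-5, 0.0   # illustrative coefficients; zero baseline pressure
a = 100.0                            # dilatation rate, 1/s

# Isotropic expansion u = (a x, a y, a z): du1/dx1 = a, div(u) = 3a
iso = sigma_11(a, 3.0 * a, p, mu, mu_b)
assert abs(iso - mu_b * 3.0 * a) < 1e-12     # the shear term drops out entirely

# Uniaxial dilatation u = (a x, 0, 0): du1/dx1 = a, div(u) = a
uni = sigma_11(a, a, p, mu, mu_b)
assert abs(uni - (2.0 * mu * (2.0 * a / 3.0) + mu_b * a)) < 1e-12  # both terms contribute
```

With p_thermo = 0 the two asserts isolate the purely viscous normal stresses of Eq. (4).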
2.2 Bulk viscosity

The pressure in any fluid is the sum of two components:

    P = P_{kinetic} + P_{virial}    (5)

The first contribution, P_kinetic, represents the force caused by the bombardment of molecules on a unit area. The second contribution, P_virial, is caused by intermolecular forces and depends on the strength of the intermolecular interaction. Note that P can refer to either the mechanical or the thermodynamic pressure, depending on the reference. The mechanical pressure, P_mech, is the instantaneous pressure of the fluid in all situations, i.e., both at equilibrium and at non-equilibrium. The hydrostatic (thermodynamic) pressure, P_thermo, is the mechanical pressure of the system under hydrostatic, i.e., equilibrium, conditions, when the fluid is at rest. Therefore, for a system in a state of non-equilibrium, the term 'thermodynamic pressure' loses its meaning [20]; for such a system, it can still be defined as the mechanical pressure the system would attain if brought to an equilibrium state adiabatically [42].

The origin of bulk viscosity in fluids lies in the fact that the pressure at non-equilibrium (the mechanical pressure) is not the same as the pressure at equilibrium (the thermodynamic pressure). Two mechanisms are primarily responsible for this deviation: (a) the finite rate of relaxation of the internal degrees of freedom towards equilibrium with the random translational energy, and (b) the finite rate of structural rearrangement of molecules after a change in the thermodynamic state of the fluid. The bulk viscosity due to the first mechanism is called the 'apparent bulk viscosity', while that due to the second mechanism is called the 'intrinsic bulk viscosity' [43]. The total bulk viscosity of a fluid is the sum of these two contributions.
2.2.1 Apparent bulk viscosity

The apparent bulk viscosity appears when P_kinetic at non-equilibrium is not the same as P_kinetic at equilibrium, owing to the presence of internal degrees of freedom (viz., rotational and vibrational). Herzfeld and Rice [44] first suggested that the microscopic cause of bulk viscosity is the finite rate of exchange of energy between the translational mode and the internal degrees of freedom. The mechanism can be explained by considering a simple example in which a polyatomic dilute gas, say nitrogen, expands adiabatically in a piston-cylinder arrangement, as shown in Fig. 2. Since the gas is dilute, the virial pressure is neglected. Let us assume that initially the piston is at rest and the gas is in equilibrium at a temperature of 300 K.

Figure 2: Expansion of gas in a piston-cylinder arrangement

With the assumption of vibrational modes inactive at the prevailing temperature conditions, the gas has three translational and two rotational degrees of freedom. Since the gas is in equilibrium, all five degrees of freedom possess an equal amount of energy, by the equipartition law of energy. Now, when the gas expands, it does work against the piston and loses energy. The energy the gas loses comes directly from its translational mode, whereas the energy associated with the rotational mode remains momentarily unaffected. This causes an imbalance in the equipartition of energy between the translational and rotational degrees of freedom. At this stage, the system is in a state of non-equilibrium: its instantaneous translational kinetic energy is less than it would be if the system were brought back to equilibrium adiabatically, and its rotational kinetic energy is correspondingly greater. The system tends to restore a state in which the whole kinetic energy is equally distributed among all degrees of freedom; in this attempt, it transfers some of its kinetic energy from the internal modes to the translational modes by means of intermolecular collisions among the gas molecules.

The mechanical pressure, P_mech, at any point in the fluid is the negative average of the normal stresses acting at that point. For a dilute gas with negligible virial pressure, it represents the actual force caused by the bombardment of molecules on a unit area and therefore depends only on the random translational kinetic energy of the gas molecules. (This is also why P-V work comes at the cost of translational kinetic energy.) The kinetic theory of gases relates the mechanical pressure to the translational kinetic energy, E_trans, by the relation

    P_{mech} = \frac{2 E_{trans}}{3V}    (6)

where V is the volume of the system. Moreover, by the equipartition law of energy, the translational kinetic energy of the system at equilibrium equals 3/f of the total energy, E_total, where f is the total number of degrees of freedom of the gas. Thus, the thermodynamic pressure is given by

    P_{thermo} = \frac{2}{3V}\,(E_{trans}\ \text{at equilibrium}) = \frac{2}{3V}\cdot\frac{3}{f}\,E_{total}    (7)

    P_{thermo} = \frac{2 E_{total}}{f V}    (8)

From the above discussion it can be deduced that, for as long as the translational kinetic energy is below its equilibrium value, the mechanical pressure is also below the mechanical pressure at equilibrium (i.e., the thermodynamic pressure). In the case of compression, the situation is reversed: the energy supplied to the gas goes first into the translational mode, so the instantaneous translational kinetic energy exceeds its equilibrium value, and the mechanical pressure is higher than the thermodynamic pressure.

2.2.2 Intrinsic bulk viscosity

This mechanism was first proposed by Hall [45] in 1948. Consider the compression of a dense fluid such as liquid water. During compression, two different processes take place simultaneously.
The first is that the molecules are brought uniformly closer together (much like zooming in or out of a computer graphic). This can be called molecular compression, and it is an almost instantaneous process. However, even if the molecular arrangement before compression was a stable one, i.e., one with minimum intermolecular potential energy, molecular compression alone does not guarantee that the resulting arrangement will also be stable. The reason lies in the nature of the intermolecular interactions, e.g., the Lennard-Jones potential or hydrogen bonds. Therefore, a second process also happens simultaneously, in which the molecules are rearranged, or repacked more closely, to achieve a more stable configuration. Hall identified this process as configurational or structural compression. It involves the breaking of intermolecular bonds (e.g., hydrogen bonds) [46] or flow past the energy barriers that stabilize the equilibrium configuration. This is a finite-rate process; it is therefore relaxational in nature and is a source of non-equilibrium.

This mechanism of bulk viscosity is present in all fluids, including monatomic gases. Hence, monatomic gases at atmospheric conditions have a small (O(10^{-10}) Pa s) [49] but non-zero bulk viscosity. It should also be noted that under hypothetical dilute-gas conditions, the bulk viscosity of monatomic gases is considered to be exactly zero. Monatomic gases in their dilute-gas limit do not possess any appreciable intermolecular potential energy and therefore have negligible intrinsic bulk viscosity. Moreover, being monatomic, they do not possess any internal degrees of freedom, so the mechanical pressure at any instant is always equal to the thermodynamic pressure. Hence, we can expect them to exhibit negligible bulk viscosity. Both theory and experiments [47,48] confirm the same.
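The pressure lag behind the apparent bulk viscosity described above can be quantified with the kinetic-theory relations P_mech = 2E_trans/(3V) and P_thermo = 2E_total/(fV) of Eqs. (6) and (8). The sketch below (Python) assumes an illustrative diatomic gas (f = 5) whose expansion work dw is drawn instantaneously from the translational mode alone, before collisional re-equilibration; all numbers are placeholders.

```python
def pressures_after_sudden_expansion(e_total0, v, dw, f=5):
    """Mechanical vs. thermodynamic pressure just after a gas does work dw
    against a piston, before the internal modes re-equilibrate.
    f = 5: three translational + two rotational degrees of freedom."""
    e_total = e_total0 - dw                  # total energy after the work is done
    e_trans = (3.0 / f) * e_total0 - dw      # work is drawn from translation only
    p_mech = 2.0 * e_trans / (3.0 * v)       # Eq. (6)
    p_thermo = 2.0 * e_total / (f * v)       # Eq. (8): pressure after adiabatic re-equilibration
    return p_mech, p_thermo

# Illustrative numbers: 1 J of piston work from a gas holding 100 J in 1 m^3
p_mech, p_thermo = pressures_after_sudden_expansion(100.0, 1.0, 1.0)
assert p_mech < p_thermo                     # mechanical pressure lags during expansion

# With no work done, the two pressures coincide (equilibrium)
pm0, pt0 = pressures_after_sudden_expansion(100.0, 1.0, 0.0)
assert abs(pm0 - pt0) < 1e-12
```

A negative dw models a sudden compression, for which the inequality reverses, P_mech > P_thermo, as stated above.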
However, it is possible that factors other than rotation/vibration and potential energy, like electronic excitation or chemical reaction [20] can also cause nonequilibrium in dilute monatomic gases. For instance, Istomin et al. [50] has shown that the bulk viscosity is not zero in electronically excited monatomic gases at temperatures higher than 2000 K. Methods for determination of bulk viscosity Unlike shear viscosity, determination of bulk viscosity has always remained a challenging task. We present a brief summary of various approaches available for estimation of bulk viscosity, µ b , of non-relativistic classical fluids, and QGP and hadronic matter in Secs. 3.1 and 3.2, respectively, with a particular focus on the former. Determination of bulk viscosity of classical fluids A schematic overview of the methods available in the literature for the determination of bulk viscosity is shown in Fig. 3. A brief discussion on these methods is given below. Theoretical methods Theoretically, bulk viscosity of dilute gases can be related to the relaxation time of equilibration processes by Tisza's [3] formulation, given as follows: µ b = ρ eq a 2 (γ − 1) γ i c v,i c v τ i (9) where ρ eq is density of gas at equilibrium, a is speed of sound in absence of viscosity, γ is the ratio of specific heats at equilibrium, c v,i is heat capacity of i th internal mode, c v is total heat capacity of the gas, τ i is the relaxation time of that internal mode, and the summation is performed over all internal degrees of freedoms (i.e., rotational, vibrational). However, the applicability of this expression is limited to the low frequency regime where ωτ << 1; ω being the frequency of sound wave. [60]. Li et al. [61] related bulk viscosity to bulk modulus and relaxation time as follows: µ b = Kτ tot(10) where, K is bulk modulus of the fluid, defined as K = −V(∂P /∂V), and τ tot is total average relaxation time of internal energy in all excited modes. Kustova et al. 
[38] questioned the a priori splitting of bulk viscosity into rotational and vibrational components as done by Tisza [3] and Cramer [21]. They argued that this splitting leads to an overprediction of bulk viscosity at low temperatures, and that a bulk-to-shear viscosity ratio of ∼2000 for carbon dioxide at low temperatures is unjustified. They first derived an expression for bulk viscosity using the modified Chapman-Enskog method [62] under the assumption of local thermodynamic equilibrium:

µ_b = (R c_int / c_v²) p τ_int = (γ − 1)² (c_int / R) p τ_int (11)

where c_int and τ_int are the total heat capacity of the internal degrees of freedom and the corresponding relaxation time, respectively. Then, following the work of Mason and Monchick [63], they assumed that collisions with simultaneous exchange of rotational and vibrational energies are rare. By invoking this assumption, it can be shown that

c_int / τ_int = c_rot / τ_rot + c_vib / τ_vib (12)

By using Eqs. (11) and (12), the bulk viscosity can be expressed as

µ_b = p R (c_int / c_v)² (c_rot / τ_rot + c_vib / τ_vib)⁻¹ (13)

On the basis of the above formulation, Kustova et al. also deduced that the bulk-to-shear viscosity ratio for CO2 at 300-1000 K should be in the range 1 to 3.

Experimental methods

The experimental determination of bulk viscosity is not as straightforward as that of shear viscosity, and is usually based on indirect techniques, such as absorption and dispersion of sound waves, and Rayleigh-Brillouin scattering [19,64,65]. Absorption of a sound wave is the additional decrease in intensity with distance, over and above the geometric reduction caused by the inverse square law. It has been found that the experimentally observed absorption is much higher than predictions based on a theory that accounts only for classical absorption, i.e., absorption due to shear viscosity, thermal conductivity, and thermal radiation.
Since this excess absorption cannot be attributed to dissipation caused by the translational motion of molecules (i.e., shear viscosity, heat conduction), it is assumed that this excess absorption is because of bulk viscosity [66]. The absorption of sound is characterized by the absorption coefficient (α), which is related to the bulk viscosity, µ_b, as follows [47]:

α P_eq / ω² = (2π² / γa) [ (4/3) µ + ((γ − 1)² / γ)(M κ / R) + µ_b ] (14)

where P_eq is the equilibrium pressure, M is the molar mass, κ is the thermal conductivity, and R is the gas constant. However, the assumption that the excess absorption is due to bulk viscosity can only be examined when direct measurements of bulk viscosity from an independent method are made and the values are compared [4,66]. Furthermore, this approach of measuring bulk viscosity is susceptible to considerable errors, since it involves subtracting the classical absorption coefficient from the total absorption coefficient to get the absorption due to bulk viscosity. In the calculation of the classical absorption, the use of µ and κ taken from different sources also introduces error in the estimates of bulk viscosity made using this method [65]. Alternatively, bulk viscosity can also be measured by sound dispersion experiments. Dispersion of sound causes the speed of sound to be frequency dependent, and this dependence is given as follows [67]:

a² = a_0² + (a_∞² − a_0²) ω²τ² / (1 + ω²τ²) (15)

where a is the speed of sound at frequency ω, and a_0 and a_∞ are the speeds of sound for very low and very high frequencies, respectively. The obtained relaxation time can then be related to bulk viscosity. Pan et al. [68] suggested that the bulk viscosity of dilute gases can also be measured using Coherent Rayleigh-Brillouin Scattering (CRBS). In this technique, gas density perturbations are generated and measured using laser beams.
The experimentally observed scattering profile is then compared with that obtained from theoretical models to get the transport coefficients, including bulk viscosity. However, in contrast to acoustic experiments, which measure bulk viscosity at megahertz frequencies, these experiments measure bulk viscosity at gigahertz frequencies. For this reason, a significant difference between the values estimated from the two above-mentioned methods is usually observed [68-74]. Furthermore, Emanuel et al. [7] have deduced that for dense polyatomic gases, the density-based thickness of a shock wave consists of many thousands of mean free paths and varies linearly with the ratio µ_b/µ. Thus, ideally, the experimentally measured shock wave thickness could be used to calculate bulk viscosity; nevertheless, we could not find any experimental implementation of this technique in the literature.

Computational methods

The methods employed for the determination of transport properties using numerical simulations are broadly classified into two categories: non-equilibrium simulation based methods (e.g., non-equilibrium molecular dynamics (NEMD)) and equilibrium simulation based methods (e.g., equilibrium molecular dynamics (EMD)) (see Fig. 3). In the former approach, the non-equilibrium responsible for the desired transport property is produced at the microscopic level, and then the transport property is related to other variables as in physical experiments. The latter approach uses relations such as the Green-Kubo relations [75,76], the Einstein relations [77,78], or expressions derived from the Chapman-Enskog expansion [48, 79-82].

Equilibrium based methods

The Green-Kubo method uses the Green-Kubo relations for the calculation of transport properties [75,76].
For shear viscosity, µ, and bulk viscosity, µ_b, these relations are given as follows:

µ = (V / k_B T) ∫_0^∞ ⟨ P_ij(t) P_ij(0) ⟩ dt (16)

where k_B is the Boltzmann constant, T is the temperature, P_ij(t) denotes the instantaneous value of the ij-th off-diagonal element of the pressure tensor at time t, and the angle brackets indicate the ensemble average. Further, to reduce the statistical error in the calculation of µ, averaging is performed over the three different values obtained from the three different components of the pressure tensor, viz., P_ij, P_jk, and P_ki.

µ_b = (V / k_B T) ∫_0^∞ ⟨ δP(t) δP(0) ⟩ dt (17)

Here, P(t) is the instantaneous value of the average of the three diagonal terms of the pressure tensor at time t, i.e., P(t) = (1/3)[P_ii(t) + P_jj(t) + P_kk(t)]. The fluctuation δP(t) is the deviation of the mean pressure from the equilibrium pressure, i.e., δP(t) = P(t) − P_eq, where P_eq is the equilibrium pressure of the system, calculated as the time average of P(t) over a long time. The Green-Kubo method is a robust way to measure transport coefficients, but it sometimes suffers from several issues. For example, for the correct estimation of the viscosity coefficients, the auto-correlation function should decay to zero with time. In such a case, the integral of the auto-correlation function would reach a constant value. Nevertheless, this does not necessarily happen in practice: the auto-correlation function might show either long-time tails or fluctuations [83]. Further, viscosities should be estimated from the region of the graph of the integral (of the auto-correlation function) vs. time where it reaches a constant value. First, it is difficult to identify such a region; second, even if such a region can be identified, there will always be some arbitrariness in the value of the viscosity because of the ambiguity in determining the cut-off time [83]. The Einstein relations [84,85] also find their origin in linear response theory.
These relations relate the shear and bulk viscosity to the slope of generalized mean-squared displacement functions, as given below:

µ = (V / 2k_B T) lim_{t→∞} (d/dt) ⟨ { (m/V) Σ_{n=1}^N [ v_n,i(t) r_n,j(t) − v_n,i(0) r_n,j(0) ] }² ⟩ (18)

µ_b = (V / k_B T) lim_{t→∞} (d/dt) ⟨ { (m/3V) Σ_{n=1}^N [ v_n(t) · r_n(t) − v_n(0) · r_n(0) ] − P_eq t }² ⟩ (19)

Here, N is the total number of particles, m is the particle mass, r_n(t) and v_n(t) are the position and velocity vectors of the n-th particle at time t, and r_n,i(t) and v_n,j(t) are the Cartesian components of the position and velocity vectors in directions i and j. The Einstein relations are theoretically equivalent to the Green-Kubo relations. However, they cannot be directly implemented in molecular dynamics (MD) simulations with periodic boundaries [86]. The Einstein relations implicitly assume that the particles follow a continuous trajectory, whereas, in the case of periodic boundaries, particles regularly exit from one face and enter from the opposite face of the domain, so the trajectory becomes discontinuous. This problem is overcome by using modified Einstein relations in which the generalized displacement functions are replaced with the time integral of the pressure tensor components. The modified Einstein relations are given as follows:

µ = (V / 2k_B T) lim_{t→∞} (d/dt) ⟨ [ ∫_0^t P_ij(t′) dt′ ]² ⟩ (20)

µ_b = (V / 2k_B T) lim_{t→∞} (d/dt) ⟨ [ ∫_0^t δP(t′) dt′ ]² ⟩ (21)

The Chapman-Enskog theory based expressions for the transport coefficients of dilute polyatomic gases, derived by Taxman [81], require the calculation of collision integrals. These collision integrals are usually evaluated through the Monte Carlo quadrature method [87], in which a proper equilibrium distribution function is used to sample the pre-collision state of the molecules, and the post-collision states are then calculated by solving the molecular trajectory classically [80].
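As a concrete illustration of the Green-Kubo route (Eq. 17), the sketch below applies the estimator to a synthetic pressure-fluctuation trace rather than to real MD output: δP is generated as an Ornstein-Uhlenbeck process whose autocorrelation is SIG2·exp(−t/TAU), so the Green-Kubo integral is known to approach SIG2·TAU. The prefactor V/(k_B T), the time step, and all parameter values are assumptions made purely for illustration, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an MD pressure trace (assumed parameters):
# dP is a discrete Ornstein-Uhlenbeck process with <dP(t) dP(0)> = SIG2*exp(-t/TAU),
# so the Green-Kubo integral should converge to SIG2 * TAU.
DT, TAU, SIG2, N = 0.01, 0.5, 4.0, 200_000
theta = np.exp(-DT / TAU)
noise = rng.normal(0.0, np.sqrt(SIG2 * (1.0 - theta**2)), N)
dP = np.empty(N)
dP[0] = 0.0
for i in range(1, N):                      # discrete OU recursion
    dP[i] = theta * dP[i - 1] + noise[i]

def green_kubo_mu_b(dP, dt, V_over_kBT, t_cut):
    """mu_b = V/(kB*T) * integral_0^t_cut <dP(t) dP(0)> dt  (Eq. 17)."""
    n_cut = int(t_cut / dt)
    n = len(dP)
    # time-averaged autocorrelation at each lag k
    acf = np.array([np.mean(dP[: n - k] * dP[k:]) for k in range(n_cut)])
    integral = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))  # trapezoid rule
    return V_over_kBT * integral

mu_b = green_kubo_mu_b(dP, DT, V_over_kBT=1.0, t_cut=5 * TAU)
print(mu_b)  # fluctuates around the analytic limit SIG2 * TAU = 2.0
```

Replacing the synthetic δP with diagonal pressure fluctuations sampled from an actual EMD run, and choosing t_cut from the plateau of the running integral, turns the same estimator into the procedure described above, with the plateau-identification caveats noted in the text.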
The Chapman-Enskog method is computationally more efficient than the Green-Kubo method; however, the applicability of this method depends upon the availability of explicit expressions, whereas the Green-Kubo method, though less efficient, does not have any such constraint [80]. In another approach, transport coefficients can also be estimated by numerical simulation of the Rayleigh-Brillouin scattering experiment and its coherent version, coherent Rayleigh-Brillouin scattering. In this method, a gas is simulated at equilibrium, and density fluctuations are sampled as time-series data. The discrete power spectrum is then obtained using the square of the Fourier transform of this time series [79,88,89]. All three above-mentioned methods, i.e., Chapman-Enskog expressions, Green-Kubo relations, and simulation of Rayleigh-Brillouin scattering, can be employed in either the MD or the classical trajectory direct simulation Monte Carlo (CT-DSMC) simulation framework. More details on these three methods are available in Refs. [74,79,80,90].

Non-equilibrium based methods

Non-equilibrium based methods measure transport coefficients by directly measuring the gradient of the corresponding parameter. Such methods are well developed for shear viscosity, heat conductivity, and mass diffusivity. However, for bulk viscosity, the implementation of such a method has historically proven to be challenging. To the best of our knowledge, only one attempt has been made so far to use a non-equilibrium based method for the estimation of bulk viscosity. In this work, Hoover et al. [19] cyclically compressed and expanded the fluid in a molecular dynamics framework in the following manner to produce measurable effects:

L/L_0 = 1 + ξ sin(ωt) (22)

where L_0 is the mean length, L is the instantaneous length of the cubic simulation domain, ξ is the strain amplitude, and ω is the frequency describing the linearized strain rate.
ε̇ = ξω cos(ωt) (23)

As the linearized strain rate (ε̇) approaches zero, the authors expected the average pressure of the system to deviate from the equilibrium pressure by −3ξωµ_b cos(ωt). Also, if the deformation given by Eq. (22) takes place through the mechanism of external work, then the work lost to irreversible heating gives rise to an energy increase per cycle, ∆E_per cycle (the cycle time being 2π/ω), equal to

∆E_per cycle = (2π/ω) 9ξ²ω²µ_b V / 2 (24)

Based on the measurement of the average pressure and the rise in the energy of the system, the authors estimated the bulk viscosity of a soft-sphere fluid modeled with the potential φ(r) = (σ/r)¹². All of the methods discussed above have one or more limitations. The sound absorption and dispersion methods give frequency-dependent bulk viscosity values; however, since bulk viscosity is a transport coefficient, it should depend only on the state of the fluid, not on the process it is undergoing. The optical method, i.e., Rayleigh-Brillouin scattering, cannot account for vibrational effects, since the frequency used is in the GHz range and 1/frequency becomes smaller than the vibrational relaxation time. In the second vertical, i.e., numerical methods, the Green-Kubo method sometimes faces difficulties in convergence due to long-time tails. The Chapman-Enskog method is very robust and the fastest among all numerical methods; however, Chapman-Enskog relations are difficult to obtain except for simple cases like monatomic and diatomic gases. In the second sub-vertical, there is only one non-equilibrium method, i.e., cyclic compression and expansion by Hoover et al. [19], and only one application of it is available in the literature [91]. Since this method uses the energy absorbed over many compression and expansion cycles to estimate bulk viscosity, it may also give frequency-dependent bulk viscosity values. Moreover, this method does not give much insight into the physics of one single cycle. Therefore, to address these issues, Sharma et al.
[90] proposed a first-principles-based continuous compression/expansion method which can determine bulk viscosity directly from the difference between the mechanical (P_mech) and thermodynamic (P_thermo) pressure. Their method uses numerical measurement of P_mech and P_thermo in an NEMD simulation of an expanding fluid, and then relates the bulk viscosity to them by the relation µ_b = (P_thermo − P_mech)/(∇ · u), where ∇ · u is the controlled rate of expansion of the fluid per unit volume. The key success of this method was that, being inherently based on expansion/compression of the gas, it allowed them to investigate the effects of nonequilibrium on bulk viscosity, e.g., the variation of bulk viscosity with the magnitude (|∇ · u|) and direction of volumetric change (i.e., expansion vs. compression). Hence, this method enabled them to gain detailed insight into the associated flow physics.

Determination of bulk viscosity of QGP and hadronic matter

For the estimation of the transport coefficients of QGP and hadronic matter, two standard approaches are mainly used: the Boltzmann equation based relaxation time approximation (RTA) approach and the linear response theory based Kubo/Green-Kubo formulation. A brief review of these methods can be found in Refs. [41,51,52]. A vast amount of research has been done on this topic; here we summarize only some of the key contributions in this field. Gavin [53] used the well-known non-relativistic form of the Boltzmann equation to calculate the transport coefficients for both QGP and hadronic matter using the relaxation time approximation (RTA) method. Prakash et al. [54,55] studied the equilibration of hot hadronic matter in the framework of relativistic kinetic theory. They calculated transport coefficients considering only elastic collisions in the dilute gas limit using the extended Chapman-Enskog formalism. For a general review of relativistic kinetic theory, the reader may refer to the classical text by Groot et al. [56]. Chakraborty et al.
[57] extended the classical works of Prakash et al. [54,55] and Gavin [53], and presented a theoretical framework for the calculation of the shear and bulk viscosity of hot hadronic matter. Their work accounted for not only inelastic collisions but also the formation and decay of resonances, temperature-dependent mean fields, and temperature-dependent masses. Demir et al. [41,58,59] carried out Ultra-relativistic Quantum Molecular Dynamics (UrQMD) simulations of hadronic media and calculated bulk viscosity using both the Green-Kubo method and the relaxation time approximation.

Applications of bulk viscosity

To evaluate the validity of Stokes' hypothesis, let us consider a supersonic flow of air and analyze the terms of the equation P_mech = P_thermo − µ_b ∇ · u. For bulk viscosity effects to be significant, µ_b ∇ · u should be comparable to P_thermo. Assuming the pressure is ∼10⁵ Pa and µ_b is ∼10⁻⁵ Pa s, even if ∇ · u is as high as ≈10⁴ s⁻¹, the difference between the mechanical and thermodynamic pressure would be just 0.1 Pa. This difference, P_thermo − P_mech = 0.1 Pa, is six orders of magnitude smaller than P_thermo. Therefore, in most commonly encountered flow problems, it is safe to assume the bulk viscosity to be zero. However, bulk viscosity effects may become important when ∇ · u is very high (e.g., inside a shock wave); when the fluid is compressed and expanded in repeated cycles such that the cumulative effect of the small contributions from each cycle is no longer negligible (e.g., a sound wave) [22]; when the atmosphere consists mostly of gases, such as CO2, which exhibit a large bulk viscosity [5]; or when the results of interest might be affected by even small disturbances, as in the study of Rayleigh-Taylor instability [15]. In such cases, it becomes necessary to account for the bulk viscosity terms in the Navier-Stokes equation.
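The order-of-magnitude argument above is easy to reproduce. The sketch below uses the same assumed numbers as the text for ordinary supersonic air flow, plus a purely illustrative divergence value for the interior of a shock wave (the 10⁹ s⁻¹ figure is an assumption for illustration, not a number taken from the text):

```python
def pressure_difference(mu_b, div_u):
    """P_thermo - P_mech = mu_b * div(u), from the relation in the text."""
    return mu_b * div_u

P_thermo = 1e5                         # Pa, ambient pressure (value from the text)

# Ordinary supersonic flow: mu_b ~ 1e-5 Pa s, div(u) ~ 1e4 1/s
dP_air = pressure_difference(mu_b=1e-5, div_u=1e4)
print(dP_air, dP_air / P_thermo)       # ~0.1 Pa, i.e. ~1e-6 of P_thermo

# Inside a shock wave the divergence can be many orders of magnitude larger
# (1e9 1/s is an assumed illustrative value), and the correction is no
# longer negligible:
dP_shock = pressure_difference(mu_b=1e-5, div_u=1e9)
print(dP_shock / P_thermo)             # ~0.1, i.e. ~10% of P_thermo
```

The contrast between the two cases is the whole point of the hedged criterion in the text: the bulk-viscous correction scales linearly with ∇ · u, so it only matters where the dilatation rate is extreme or where small contributions accumulate over many cycles.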
Several researchers have investigated the effects of incorporating bulk viscosity in analytical or CFD studies of various flow scenarios. Emanuel et al. [5,7,23,24] reviewed bulk viscosity and suggested that its effects should be accounted for in the study of high-speed entry into planetary atmospheres. They observed that the inclusion of bulk viscosity could significantly increase heat transfer in the hypersonic boundary layer [5]. Chikitkin et al. [14] studied the effects of bulk viscosity in flow past a spacecraft. They reported that the consideration of bulk viscosity improved the agreement of the velocity profile and shock wave thickness with experiments. Shevelev et al. [25] studied the effects of bulk viscosity on CO2 hypersonic flow around blunt bodies. The conclusions of their study were in line with those of Emanuel: they suggested that the incorporation of bulk viscosity may improve predictions of surface heat transfer and other flow properties in the shock layer. Elizarova et al. [10] and Claycomb et al. [26] carried out CFD simulations of a normal shock. They found that including bulk viscosity improved the agreement with experimental observations for shock wave thickness. A recent study by Kosuge and Aoki [27] on shock-wave structure for polyatomic gases also confirms the same. Bahmani et al. [13] studied the effects of a large bulk-to-shear viscosity ratio on shock/boundary-layer interaction. They found that a sufficiently high bulk-to-shear viscosity ratio can suppress shock-induced flow separation. Singh and Myong [16] studied the effects of bulk viscosity on shock-vortex interaction in monatomic and diatomic gases. They reported a substantially strengthened enstrophy evolution in the case of diatomic gas flow. Singh et al. [28] investigated the impact of bulk viscosity on the flow morphology of a shock-accelerated cylindrical light bubble in diatomic and polyatomic gases.
They found that diatomic and polyatomic gases have a significantly different flow morphology than monatomic gases: they produce larger rolled-up vortex chains, various inward jet formations, and large mixing zones with strong, large-scale expansion. Touber [29] studied the effects of bulk viscosity on the dissipation of energy in turbulent flows. He found that large bulk-to-shear viscosity ratios may enhance transfers to small-scale solenoidal kinetic energy and, therefore, faster dissipation rates. Riabov [30] questioned the ability of bulk viscosity to model spherically expanding nitrogen flows in the temperature range 10 to 1000 K by comparing results of the Navier-Stokes equations to those of relaxation equations. He reported that the bulk viscosity approach predicts much thinner spherical shock wave areas than those predicted by relaxation equations. Moreover, the distributions of rotational temperature along the radial direction predicted by the bulk viscosity approach neither had any physical meaning nor matched any known experimental data for expanding nitrogen flows. Fru et al. [31] performed a direct numerical simulation (DNS) study of high-turbulence combustion of premixed methane gas. They found that the incorporation of bulk viscosity does not impact flame structures in either the laminar or the turbulent flow regime. Later, the same group extended their study to other fuels, viz., hydrogen and synthetic gas. In this study [11], they found that though the flame structures of methane remained unchanged before and after the incorporation of bulk viscosity, those of hydrogen and syngas showed noticeable modifications. Sengupta et al. [15] studied the role of bulk viscosity in Rayleigh-Taylor instability. They found that the growth of the mixing layer depends upon the bulk viscosity. Pan et al. [32] have shown that bulk viscosity effects cannot be neglected for turbulent flows of fluids with a high bulk-to-shear viscosity ratio.
They found that bulk viscosity increases the decay rate of turbulent kinetic energy. Boukharfane et al. [33] studied the mechanism through which bulk viscosity affects turbulent flow. They found that the local and instantaneous structure of the mixing layer may vary significantly if bulk viscosity effects are taken into account. They also identified that the mean statistical quantities, e.g., the vorticity thickness growth rate, are not affected by bulk viscosity. On the basis of their study, they concluded that the results of refined large-eddy simulations (LES) might show a dependence on the presence/absence of bulk viscosity, but Reynolds-averaged Navier-Stokes (RANS) simulations might not, as they are based on statistical averages. Connor [34] studied the effects of bulk viscosity in compressible turbulent one-, two-, and three-dimensional Couette flows through DNS simulations. The objective of the study was to test whether invoking Stokes' hypothesis introduces significant errors in the analysis of the compressible flows found in solar thermal power plants and in carbon capture and storage (CCS) compressors. They found that most of the energy is contained in the solenoidal velocity for both CCS and concentrated solar power plants; therefore, assuming the bulk viscosity to be zero does not produce any significant errors, despite the compressors operating at supersonic conditions. However, bulk viscosity effects may become significant close to the thermodynamic critical point. Billet et al. [35] showed that the inclusion of bulk viscosity in CFD simulations of supersonic combustion modifies the vorticity of the flow. Lin et al. [17,36] have shown that acoustic wave attenuation in CFD simulations can be accounted for by incorporating bulk viscosity. Nazari [37] studied the influence of liquid bulk viscosity on the dynamics of a single cavitation bubble.
They reported that bulk viscosity significantly affects the collapse phase of the bubble at high ultrasonic amplitudes and high viscosities. High bulk viscosity values also altered the maximum pressure value inside the bubble. The classical Navier-Stokes equation loses its accuracy as the deviation from equilibrium increases, i.e., as the extent of nonequilibrium becomes high. This usually happens in the study of high-temperature gas dynamics and rarefied gas dynamics. There are primarily three models used in the continuum framework to model nonequilibrium, viz., the state-to-state, multi-temperature, and one-temperature models. The state-to-state model is computationally the most costly and can be used in systems that are far from equilibrium and have coupled fluid, thermal, and chemical kinetics. In comparison, the one-temperature model is computationally the simplest and is suitable for near-equilibrium systems. In the one-temperature model, both rotational and vibrational relaxations are accounted for through the bulk viscosity coefficient. In the state-to-state approach, on the other hand, the vibrational chemical kinetics is described by master equations for the populations of the vibrational states, and the fast rotational relaxation is accounted for through bulk viscosity parameters. Similarly, in the multi-temperature model, the vibrational kinetics is governed by relaxation equations for the various vibrational modes, and the rotational relaxation is modeled through bulk viscosity. For a more detailed discussion of these models, the reader is referred to Ref. [38]. In addition to these classical fluid-dynamics scenarios, bulk viscosity may also play an important role in several cosmological phenomena, e.g., the damping of vibrations created during the formation of a new neutron star, and the growth of the gravitational-wave instability in a rapidly rotating neutron star [39,40]. The origin of bulk viscosity in these circumstances is primarily the chemical nonequilibrium caused by nuclear reactions.
Bulk viscosity, along with other transport properties, is also of central importance to the space-time description of the heavy-ion collision experiments being conducted at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) and CERN's Large Hadron Collider (LHC). One of the primary objectives of these experiments was the formation and investigation of the Quark-Gluon Plasma (QGP), the state of the matter of the Early Universe (first 30 µs after the Big Bang) [41].

5 Concluding remarks and future scope

Although of great importance in several fluid mechanical phenomena, bulk viscosity is still one of the gray areas of fluid mechanics. Despite the fact that numerous theoretical, experimental, and computational studies have been carried out in the past, there is still a lack of well-established values for the bulk viscosity of common gases such as nitrogen, oxygen, and carbon dioxide. For most gas mixtures, data is not available at all, or, where available, whether experimental or numerical, is spread over a wide range. Therefore, studies that determine the bulk viscosity of common fluids are needed to achieve a common consensus on accepted values.

Disclaimer

The contents of this article are primarily based on the Introduction chapter of the thesis: Bhanuday Sharma. "On the Nature of Bulk Viscosity of Dilute Gases". PhD thesis. Indian Institute of Technology Kanpur, 2022.

Figure 1: Three layers of different velocity in a divergent flow field. Dots represent gas molecules and two vertical dashed lines are imaginary boundaries separating these three fluid layers.

Figure 3: A survey of available methods for estimation of bulk viscosity.

Appendix A: Selected references for further reading

The reader is redirected to the following resources for further understanding of the concepts and historic developments in the subject of bulk viscosity. Enumerated items are arranged in the suggested order of reading.

• For historic developments:
1. Leonard Hall. "The origin of ultrasonic absorption in water". In: Physical Review 73.7 (1948), p. 775
2. SM Karim and L Rosenhead. "The second coefficient of viscosity of liquids and gases". In: Reviews of Modern Physics 24.2 (1952), p. 108
3. L Rosenhead. "A discussion on the first and second viscosities of fluids". In: Proc. R. Soc. London 226 (1954), pp. 1-69
4. RO Davies. "Kinetic and thermodynamic aspects of the second coefficient of viscosity". In: Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 226.1164 (1954), pp. 24-34
5. SV Borovinskii and OV Mazurin. "On the concept of volume viscosity". In: Progress and Trends in Rheology II. Springer, 1988, pp. 79-81
6. C Truesdell. "The present status of the controversy regarding the bulk viscosity of fluids". In: Proc. R. Soc. Lond. A 226.1164 (1954), pp. 59-65
7. George Emanuel. "Bulk viscosity of a dilute polyatomic gas". In: Physics of Fluids A: Fluid Dynamics 2.12 (1990), pp. 2252-2254
8. Willard E Meador, Gilda A Miner, and Lawrence W Townsend. "Bulk viscosity as a relaxation parameter: fact or fiction?" In: Physics of Fluids 8.1 (1996), pp. 258-261
9. Hui Dong, Nan Su, and Qun Wang. "Bulk viscosity in nuclear and quark matter". In: Journal of Physics G: Nuclear and Particle Physics 34.8 (2007), S643

• For complete treatment of the equation of motion of viscous fluids without Stokes' hypothesis:
1. George Emanuel. Analytical fluid dynamics. CRC press

• For fundamentals of equilibrium statistical mechanics:
1. Biman Bagchi. Statistical Mechanics for Chemistry and Materials Science. CRC Press, 2018
2. Michael P Allen and Dominic J Tildesley. Computer simulation of liquids. Oxford university press, 2017

References

[1] Georges Gabriel Stokes. "On the theories of the internal friction of fluids in motion, and of the equilibrium and motion of elastic solids". In: Transactions of the Cambridge Philosophical Society 8 (1880), pp. 287-305. url: https://pages.mtu.edu/fmorriso/cm310/StokesLaw1845.pdf
[2] Mohamed Gad-el Hak. "Questions in fluid mechanics: Stokes' hypothesis for a Newtonian, isotropic fluid". In: Journal of Fluids Engineering 117.3 (1995), p. 5
[3] L Tisza. "Supersonic absorption and Stokes' viscosity relation". In: Physical Review 61.7-8 (1942), p. 531
[4] C Truesdell. "The present status of the controversy regarding the bulk viscosity of fluids". In: Proc. R. Soc. Lond. A 226.1164 (1954), pp. 59-65
[5] George Emanuel. "Effect of bulk viscosity on a hypersonic boundary layer". In: Physics of Fluids A: Fluid Dynamics 4.3 (1992), pp. 491-495
[6] H Gonzalez and G Emanuel. "Effect of bulk viscosity on Couette flow". In: Physics of Fluids A: Fluid Dynamics 5.5 (1993), pp. 1267-1268
[7] George Emanuel and Brian M Argrow. "Linear dependence of the bulk viscosity on shock wave thickness". In: Physics of Fluids 6.9 (1994), pp. 3203-3205
[8] Jean Chabi Orou and Joseph A Johnson III. "Second viscosity enhancement in turbulent nonequilibrium flow". In: Physics of Fluids 6.1 (1994), pp. 415-417
[9] Kun Xu and Eswar Josyula. "Continuum formulation for non-equilibrium shock structure calculation". In: Communications in Computational Physics 1.3 (2006), pp. 425-448
[10] Tatiana G Elizarova, Anton A Khokhlov, and Salvador Montero. "Numerical simulation of shock wave structure in nitrogen". In: Physics of Fluids 19.6 (2007), p. 068102
[11] Gordon Fru, Gábor Janiga, and Dominique Thévenin. "Impact of volume viscosity on the structure of turbulent premixed flames in the thin reaction zone regime". In: Flow, Turbulence and Combustion 88.4 (2012), pp. 451-478
[12] MS Cramer and F Bahmani. "Effect of large bulk viscosity on large-Reynolds-number flows". In: Journal of Fluid Mechanics 751 (2014), pp. 142-163
[13] F Bahmani and MS Cramer. "Suppression of shock-induced separation in fluids having large bulk viscosities". In: Journal of Fluid Mechanics 756 (2014)
[14] AV Chikitkin, BV Rogov, GA Tirsky, and SV Utyuzhnikov. "Effect of bulk viscosity in supersonic flow past spacecraft". In: Applied Numerical Mathematics 93 (2015), pp. 47-60
[15] Tapan K Sengupta, Aditi Sengupta, Nidhi Sharma, Soumyo Sengupta, Ashish Bhole, and KS Shruti. "Roles of bulk viscosity on Rayleigh-Taylor instability: Non-equilibrium thermodynamics due to spatio-temporal pressure fronts". In: Physics of Fluids 28.9 (2016), p. 094102
[16] S Singh and RS Myong. "A computational study of bulk viscosity effects on shock-vortex interaction using discontinuous Galerkin method". In: Journal of Computational Fluids Engineering 22.2 (2017), pp. 86-95
[17] Jeffrey Lin, Carlo Scalo, and Lambertus Hesselink. "Bulk viscosity model for near-equilibrium acoustic wave attenuation". In: arXiv preprint arXiv:1707.05876 (2017)
[18] Samuel J Marcy. "Evaluating the second coefficient of viscosity from sound dispersion or absorption data". In: AIAA Journal 28.1 (1990), pp. 171-173
[19] William G Hoover, Anthony JC Ladd, Richard B Hickman, and Brad Lee Holian. "Bulk viscosity via nonequilibrium and equilibrium molecular dynamics". In: Physical Review A 21.5 (1980), p. 1756
[20] Lev Davidovich Landau and Evgenii Mikhailovich Lifshitz. Course of Theoretical Physics - Fluid Mechanics. Oxford, England: Elsevier, 2013
[21] Mark S Cramer. "Numerical estimates for the bulk viscosity of ideal gases". In: Physics of Fluids 24.6 (2012), p. 066102
[22] Guido Buresti. "A note on Stokes' hypothesis". In: Acta Mechanica 226.10 (2015), pp. 3555-3559
[23] George Emanuel. "Bulk viscosity of a dilute polyatomic gas". In: Physics of Fluids A: Fluid Dynamics 2.12 (1990), pp. 2252-2254
[24] George Emanuel. "Bulk viscosity in the Navier-Stokes equations". In: International Journal of Engineering Science 36.11 (1998), pp. 1313-1323
[25] Yu D Shevelev, NG Syzranova, EA Nagnibeda, and EV Kustova. "Bulk-viscosity effect on CO2 hypersonic flow around blunt bodies". In: Doklady Physics. Vol. 60. 5. 2015, pp. 207-209
[26] Abram Claycomb and Robert Greendyke. "Extending CFD Modeling to the Transition Regime by Enhanced Thermophysical Modeling". In: 40th Thermophysics Conference. 2008, p. 3930
[27] "Shock-wave structure for a polyatomic gas with large bulk viscosity".
Shingo Kosuge, Kazuo Aoki, Physical Review Fluids. 323401Shingo Kosuge and Kazuo Aoki. "Shock-wave structure for a polyatomic gas with large bulk viscosity". In: Physical Review Fluids 3.2 (2018), p. 023401. Impact of bulk viscosity on flow morphology of shock-accelerated cylindrical light bubble in diatomic and polyatomic gases. Satyvir Singh, Marco Battiato, Myong, Physics of Fluids. 3366103Satyvir Singh, Marco Battiato, and RS Myong. "Impact of bulk viscosity on flow morphology of shock-accelerated cylindrical light bubble in diatomic and polyatomic gases". In: Physics of Fluids 33.6 (2021), p. 066103. Small-scale two-dimensional turbulence shaped by bulk viscosity. Emile Touber, Journal of Fluid Mechanics. 875Emile Touber. "Small-scale two-dimensional turbulence shaped by bulk viscosity". In: Journal of Fluid Mechanics 875 (2019), pp. 974-1003. Limitations of the bulk viscosity approach in modeling the expanding nitrogen flows. V Vladimir, Riabov, AIP Conference Proceedings. AIP Publishing LLC2132150003Vladimir V Riabov. "Limitations of the bulk viscosity approach in modeling the ex- panding nitrogen flows". In: AIP Conference Proceedings. Vol. 2132. 1. AIP Publishing LLC. 2019, p. 150003. Direct numerical simulations of the impact of high turbulence intensities and volume viscosity on premixed methane flames. Gordon Fru, Gábor Janiga, Dominique Thévenin, Journal of Combustion. Gordon Fru, Gábor Janiga, and Dominique Thévenin. "Direct numerical simulations of the impact of high turbulence intensities and volume viscosity on premixed methane flames". In: Journal of Combustion 2011 (2011). The role of bulk viscosity on the decay of compressible, homogeneous, isotropic turbulence. Shaowu Pan, Eric Johnsen, Journal of Fluid Mechanics. 833Shaowu Pan and Eric Johnsen. "The role of bulk viscosity on the decay of compress- ible, homogeneous, isotropic turbulence". In: Journal of Fluid Mechanics 833 (2017), pp. 717-744. 
On the role of bulk viscosity in compressible reactive shear layer developments. Radouan Boukharfane, Pedro José Martínez Ferrer, Arnaud Mura, Vincent Giovangigli, European Journal of Mechanics-B/Fluids. 77Radouan Boukharfane, Pedro José Martínez Ferrer, Arnaud Mura, and Vincent Gio- vangigli. "On the role of bulk viscosity in compressible reactive shear layer develop- ments". In: European Journal of Mechanics-B/Fluids 77 (2019), pp. 32-47. Bulk viscosity effects in compressible turbulent Couette flow. Teddy Szemberg, O&apos; Connor, Imperial College LondonPhD thesisTeddy Szemberg O'Connor. "Bulk viscosity effects in compressible turbulent Couette flow". PhD thesis. Imperial College London, 2018. url: http://hdl.handle.net/ 10044/1/62657. Impact of volume viscosity on a shockhydrogen-bubble interaction. G Billet, G De Giovangigli, Gassowski, Combustion Theory and Modelling. 12G Billet, V Giovangigli, and G De Gassowski. "Impact of volume viscosity on a shock- hydrogen-bubble interaction". In: Combustion Theory and Modelling 12.2 (2008), pp. 221-248. High-fidelity simulation of an ultrasonic standing-wave thermoacoustic engine with bulk viscosity effects. Jeffrey Lin, Carlo Scalo, Lambertus Hesselink, 55th AIAA Aerospace Sciences Meeting. 929Jeffrey Lin, Carlo Scalo, and Lambertus Hesselink. "High-fidelity simulation of an ultrasonic standing-wave thermoacoustic engine with bulk viscosity effects". In: 55th AIAA Aerospace Sciences Meeting. 2017, p. 0929. How important is the liquid bulk viscosity effect on the dynamics of a single cavitation bubble?. H Nazari-Mahroo, K Pasandideh, R Ha Navid, Sadighi-Bonabi, In: Ultrasonics Sonochemistry. 49H Nazari-Mahroo, K Pasandideh, HA Navid, and R Sadighi-Bonabi. "How important is the liquid bulk viscosity effect on the dynamics of a single cavitation bubble?" In: Ultrasonics Sonochemistry 49 (2018), pp. 47-52. Relaxation processes in carbon dioxide. E Kustova, A Mekhonoshina, Kosareva, Physics of Fluids. 
3146104E Kustova, M Mekhonoshina, and A Kosareva. "Relaxation processes in carbon diox- ide". In: Physics of Fluids 31.4 (2019), p. 046104. Bulk viscosity of hot neutron-star matter and the maximum rotation rates of neutron stars. F Raymond, Sawyer, Physical Review D. 393804Raymond F Sawyer. "Bulk viscosity of hot neutron-star matter and the maximum rotation rates of neutron stars". In: Physical Review D 39.12 (1989), p. 3804. Bulk viscosity in nuclear and quark matter. Hui Dong, Nan Su, Qun Wang, Journal of Physics G: Nuclear and Particle Physics. 34643Hui Dong, Nan Su, and Qun Wang. "Bulk viscosity in nuclear and quark matter". In: Journal of Physics G: Nuclear and Particle Physics 34.8 (2007), S643. Extraction of hot QCD matter transport coefficients utilizing microscopic transport theory. Nasser Soliman Demir, Duke UniversityPhD thesisNasser Soliman Demir. "Extraction of hot QCD matter transport coefficients utilizing microscopic transport theory". PhD thesis. Duke University, 2010. New formula for the bulk viscosity constructed from the interatomic potential and the pair distribution function. Hisashi Okumura, Fumiko Yonezawa, The Journal of chemical physics. 116Hisashi Okumura and Fumiko Yonezawa. "New formula for the bulk viscosity con- structed from the interatomic potential and the pair distribution function". In: The Journal of chemical physics 116.17 (2002), pp. 7400-7410. Intrinsic Bulk Viscosity in Monatomic and Diatomic Gases. Re Nettleton, Journal of Applied Physics. 29RE Nettleton. "Intrinsic Bulk Viscosity in Monatomic and Diatomic Gases". In: Jour- nal of Applied Physics 29.2 (1958), pp. 204-212. Dispersion and absorption of high frequency sound waves. Kf Herzfeld, Rice, Physical Review. 31691KF Herzfeld and FO Rice. "Dispersion and absorption of high frequency sound waves". In: Physical Review 31.4 (1928), p. 691. The origin of ultrasonic absorption in water. Leonard Hall, Physical Review. 73775Leonard Hall. 
"The origin of ultrasonic absorption in water". In: Physical Review 73.7 (1948), p. 775. Molecular origins of bulk viscosity in liquid water. Ahmad Yahya, Luoxi Tan, Stefania Perticaroli, Eugene Mamontov, Daniel Pajerowski, Joerg Neuefeind, Georg Ehlers, Jonathan D Nickels, Physical Chemistry Chemical Physics. 22Ahmad Yahya, Luoxi Tan, Stefania Perticaroli, Eugene Mamontov, Daniel Pajerowski, Joerg Neuefeind, Georg Ehlers, and Jonathan D Nickels. "Molecular origins of bulk viscosity in liquid water". In: Physical Chemistry Chemical Physics 22.17 (2020), pp. 9494-9502. Ultrasonic determination of the volume viscosity of N 2 , CO, CH 4 and CD 4 between 77 and 300 K. G J Prangsma, A H Alberga, Jjm Beenakker, Physica 64. 2G J Prangsma, A H Alberga, and JJM Beenakker. "Ultrasonic determination of the volume viscosity of N 2 , CO, CH 4 and CD 4 between 77 and 300 K". In: Physica 64.2 (1973), pp. 278-288. The mathematical theory of non-uniform gases: an account of the kinetic theory of viscosity, thermal conduction and diffusion in gases. Sydney Chapman, Thomas George Cowling, David Burnett, Cambridge university pressSydney Chapman, Thomas George Cowling, and David Burnett. The mathematical theory of non-uniform gases: an account of the kinetic theory of viscosity, thermal conduction and diffusion in gases. Cambridge university press, 1990. Bulk viscosity of dilute monatomic gases revisited. Bhanuday Sharma, Savitha Pareek, Rakesh Kumar, European Journal of Mechanics-B/Fluids. 98Bhanuday Sharma, Savitha Pareek, and Rakesh Kumar. "Bulk viscosity of dilute monatomic gases revisited". In: European Journal of Mechanics-B/Fluids 98 (2023), pp. 32-39. Transport coefficients and heat fluxes in non-equilibrium high-temperature flows with electronic excitation. E V Va Istomin, Kustova, Physics of Plasmas. 2422109VA Istomin and EV Kustova. "Transport coefficients and heat fluxes in non-equilibrium high-temperature flows with electronic excitation". 
In: Physics of Plasmas 24.2 (2017), p. 022109. Transport coefficients in quantum chromodynamics. Detlof Wilhelm Von Oertzen, University of Cape TownPhD thesisDetlof Wilhelm von Oertzen. "Transport coefficients in quantum chromodynamics". PhD thesis. University of Cape Town, 1990. A comparative study of Bulk Viscosity of strongly interacting systems. Kinkar Saha, Sabyasachi Ghosh, Sudipa Upadhaya, DAE Symp. Nucl. Phys. 62Kinkar Saha, Sabyasachi Ghosh, and Sudipa Upadhaya. "A comparative study of Bulk Viscosity of strongly interacting systems". In: DAE Symp. Nucl. Phys. Vol. 62. 2017, pp. 944-945. Transport coefficients in ultra-relativistic heavy-ion collisions. Sean Gavin, Nuclear Physics A. 4353-4Sean Gavin. "Transport coefficients in ultra-relativistic heavy-ion collisions". In: Nu- clear Physics A 435.3-4 (1985), pp. 826-843. Non-equilibrium properties of hadronic mixtures. Madappa Prakash, Manju Prakash, Raju Venugopalan, Gerd Welke, Physics Reports. 227Madappa Prakash, Manju Prakash, Raju Venugopalan, and Gerd Welke. "Non-equilibrium properties of hadronic mixtures". In: Physics Reports 227.6 (1993), pp. 321-366. How fast is equilibration in hot hadronic matter?. Madappa Prakash, Manju Prakash, Raju Venugopalan, Gerd M Welke, In: Physical review letters. 701228Madappa Prakash, Manju Prakash, Raju Venugopalan, and Gerd M Welke. "How fast is equilibration in hot hadronic matter?" In: Physical review letters 70.9 (1993), p. 1228. Sr De Groot, G Wa Van Leeuwen, Van Weert, Ch, Relativistic Kinetic Theory: Principles and Applications. New-York A OxfordNorth-Holland Publ. Company. AmsterdamÅSR De Groot, WA van Leeuwen, and G van Weert Ch. Relativistic Kinetic Theory: Principles and Applications. North-Holland Publ. Company. AmsterdamÅ New-York A Oxford, 1980. Quasiparticle theory of shear and bulk viscosities of hadronic matter. P Chakraborty, Joseph I Kapusta, Physical Review C. 8314906P Chakraborty and Joseph I Kapusta. 
"Quasiparticle theory of shear and bulk vis- cosities of hadronic matter". In: Physical Review C 83.1 (2011), p. 014906. Extracting hadronic viscosity from microscopic transport models. Nasser Demir, A Steffen, Bass, The European Physical Journal C. 62Nasser Demir and Steffen A Bass. "Extracting hadronic viscosity from microscopic transport models". In: The European Physical Journal C 62.1 (2009), pp. 63-68. Shear-viscosity to entropy-density ratio of a relativistic hadron gas. Nasser Demir, A Steffen, Bass, Physical review letters. 102172302Nasser Demir and Steffen A Bass. "Shear-viscosity to entropy-density ratio of a rela- tivistic hadron gas". In: Physical review letters 102.17 (2009), p. 172302. Bulk viscosity as a relaxation parameter: fact or fiction?. E Willard, Gilda A Meador, Lawrence W Miner, Townsend, In: Physics of fluids. 8Willard E Meador, Gilda A Miner, and Lawrence W Townsend. "Bulk viscosity as a relaxation parameter: fact or fiction?" In: Physics of fluids 8.1 (1996), pp. 258-261. Continuum perspective of bulk viscosity in compressible fluids. Xin-Dong Li, Zong-Min Hu, Zong-Lin Jiang, Journal of Fluid Mechanics. 812Xin-Dong Li, Zong-Min Hu, and Zong-Lin Jiang. "Continuum perspective of bulk viscosity in compressible fluids". In: Journal of Fluid Mechanics 812 (2017), pp. 966- 990. Non-equilibrium reacting gas flows: kinetic theory of transport and relaxation processes. Ekaterina Nagnibeda, Elena Kustova, Springer Science & Business MediaChennai, IndiaEkaterina Nagnibeda and Elena Kustova. Non-equilibrium reacting gas flows: kinetic theory of transport and relaxation processes. Chennai, India: Springer Science & Busi- ness Media, 2009. Heat conductivity of polyatomic and polar gases. A Eo, L Mason, Monchick, The Journal of Chemical Physics. 366Eo A Mason and L Monchick. "Heat conductivity of polyatomic and polar gases". In: The Journal of Chemical Physics 36.6 (1962), pp. 1622-1639. 
Theoretical evaluation of bulk viscosity: Expression for relaxation time. Ali Hossein Mohammad Zaheri, Sunita Srivastava, K Tankeshwar, Physical Review E. 7641204Ali Hossein Mohammad Zaheri, Sunita Srivastava, and K Tankeshwar. "Theoretical evaluation of bulk viscosity: Expression for relaxation time". In: Physical Review E 76.4 (2007), p. 041204. Bulk viscosity: past to present. E Rick, Brian M Graves, Argrow, Journal of Thermophysics and Heat Transfer. 13Rick E Graves and Brian M Argrow. "Bulk viscosity: past to present". In: Journal of Thermophysics and Heat Transfer 13.3 (1999), pp. 337-342. The second coefficient of viscosity of liquids and gases. S M Karim, Rosenhead, In: Reviews of Modern Physics. 24108SM Karim and L Rosenhead. "The second coefficient of viscosity of liquids and gases". In: Reviews of Modern Physics 24.2 (1952), p. 108. High-Temperature Ultrasonic Measurements of Rotational Relaxation in Hydrogen, Deuterium, Nitrogen, and Oxygen. G Thomas, Winter, L Garnett, Hill, The Journal of the Acoustical Society of America. 42Thomas G Winter and Garnett L Hill. "High-Temperature Ultrasonic Measurements of Rotational Relaxation in Hydrogen, Deuterium, Nitrogen, and Oxygen". In: The Journal of the Acoustical Society of America 42.4 (1967), pp. 848-858. Power spectrum of coherent Rayleigh-Brillouin scattering in carbon dioxide. Xingguo Pan, N Mikhail, Richard B Shneider, Miles, Physical Review A. 7145801Xingguo Pan, Mikhail N Shneider, and Richard B Miles. "Power spectrum of coherent Rayleigh-Brillouin scattering in carbon dioxide". In: Physical Review A 71.4 (2005), p. 045801. Coherent Rayleigh-Brillouin scattering in molecular gases. Xingguo Pan, N Mikhail, Richard B Shneider, Miles, Physical Review A. 69333814Xingguo Pan, Mikhail N Shneider, and Richard B Miles. "Coherent Rayleigh-Brillouin scattering in molecular gases". In: Physical Review A 69.3 (2004), p. 033814. 
Coherent and spontaneous Rayleigh-Brillouin scattering in atomic and molecular gases and gas mixtures. M O Vieitez, E J Van Duijn, W Ubachs, Witschas, A Meijer, N J S De Wijn, W Dam, Van De Water, Physical Review A. 8243836M O Vieitez, E J Van Duijn, W Ubachs, B Witschas, A Meijer, A S De Wijn, N J Dam, and W Van de Water. "Coherent and spontaneous Rayleigh-Brillouin scattering in atomic and molecular gases and gas mixtures". In: Physical Review A 82.4 (2010), p. 043836. Temperature-dependent bulk viscosity of nitrogen gas determined from spontaneous Rayleigh-Brillouin scattering. Ziyu Gu, Wim Ubachs, Optics letters. 38Ziyu Gu and Wim Ubachs. "Temperature-dependent bulk viscosity of nitrogen gas determined from spontaneous Rayleigh-Brillouin scattering". In: Optics letters 38.7 (2013), pp. 1110-1112. A systematic study of Rayleigh-Brillouin scattering in air, N 2 , and O 2 gases. Ziyu Gu, Wim Ubachs, The Journal of chemical physics. 141104320Ziyu Gu and Wim Ubachs. "A systematic study of Rayleigh-Brillouin scattering in air, N 2 , and O 2 gases". In: The Journal of chemical physics 141.10 (2014), p. 104320. Coherent Rayleigh-Brillouin scattering measurements of bulk viscosity of polar and nonpolar gases, and kinetic theory. As Meijer, De Wijn, Peters, W Nj Dam, Van De Water, The Journal of chemical physics. 133164315AS Meijer, AS de Wijn, MFE Peters, NJ Dam, and W van de Water. "Coherent Rayleigh-Brillouin scattering measurements of bulk viscosity of polar and nonpo- lar gases, and kinetic theory". In: The Journal of chemical physics 133.16 (2010), p. 164315. Bulk viscosity and compressibility measurement using acoustic spectroscopy. S Andrei, Philip J Dukhin, Goetz, The Journal of chemical physics. 130124519Andrei S Dukhin and Philip J Goetz. "Bulk viscosity and compressibility measure- ment using acoustic spectroscopy". In: The Journal of chemical physics 130.12 (2009), p. 124519. Markoff random processes and the statistical mechanics of timedependent phenomena. 
II. Irreversible processes in fluids. S Melville, Green, The Journal of Chemical Physics. 22Melville S Green. "Markoff random processes and the statistical mechanics of time- dependent phenomena. II. Irreversible processes in fluids". In: The Journal of Chem- ical Physics 22.3 (1954), pp. 398-413. Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems. Ryogo Kubo, Journal of the Physical Society of Japan. 12Ryogo Kubo. "Statistical-mechanical theory of irreversible processes. I. General the- ory and simple applications to magnetic and conduction problems". In: Journal of the Physical Society of Japan 12.6 (1957), pp. 570-586. Transport coefficients from dissipation in a canonical ensemble. Eugene Helfand, Physical Review. 1191Eugene Helfand. "Transport coefficients from dissipation in a canonical ensemble". In: Physical Review 119.1 (1960), p. 1. Transport and Helfand moments in the Lennard-Jones fluid. I. Shear viscosity. Sébastien Viscardy, James Servantie, Pierre Gaspard, The Journal of chemical physics. 126184512Sébastien Viscardy, James Servantie, and Pierre Gaspard. "Transport and Helfand moments in the Lennard-Jones fluid. I. Shear viscosity". In: The Journal of chemical physics 126.18 (2007), p. 184512. Direct simulation Monte Carlo simulation of thermal fluctuations in gases. Domenico Bruno, Physics of Fluids. 3147105Domenico Bruno. "Direct simulation Monte Carlo simulation of thermal fluctuations in gases". In: Physics of Fluids 31.4 (2019), p. 047105. Oxygen transport properties estimation by classical trajectory-direct simulation Monte Carlo. Domenico Bruno, Aldo Frezzotti, Gian Pietro Ghiroldi, Physics of Fluids. 2757101Domenico Bruno, Aldo Frezzotti, and Gian Pietro Ghiroldi. "Oxygen transport prop- erties estimation by classical trajectory-direct simulation Monte Carlo". In: Physics of Fluids 27.5 (2015), p. 057101. 
Classical theory of transport phenomena in dilute polyatomic gases. Norman Taxman, Physical Review. 1101235Norman Taxman. "Classical theory of transport phenomena in dilute polyatomic gases". In: Physical Review 110.6 (1958), p. 1235. Transport phenomena in polyatomic gases. - Cs Wang, G E Chang, Uhlenbeck, Research Rep. CM-681. University of Michigan EngineeringCS Wang-Chang and GE Uhlenbeck. "Transport phenomena in polyatomic gases". In: Research Rep. CM-681. University of Michigan Engineering (1951). Reliable viscosity calculation from equilibrium molecular dynamics simulations: a time decomposition method. Yong Zhang, Akihito Otani, Edward J Maginn, Journal of chemical theory and computation. 11Yong Zhang, Akihito Otani, and Edward J Maginn. "Reliable viscosity calculation from equilibrium molecular dynamics simulations: a time decomposition method". In: Journal of chemical theory and computation 11.8 (2015), pp. 3537-3546. . Karsten Meier, Arno Laesecke, Stephan Kabelac, Transport coefficients of the Lennard-Jones model fluid. I. Viscosity". In: The Journal of chemical physics. 121Karsten Meier, Arno Laesecke, and Stephan Kabelac. "Transport coefficients of the Lennard-Jones model fluid. I. Viscosity". In: The Journal of chemical physics 121.8 (2004), pp. 3671-3687. Karsten Meier, Arno Laesecke, Stephan Kabelac, Transport coefficients of the Lennard-Jones model fluid. III. Bulk viscosity. 12214513Karsten Meier, Arno Laesecke, and Stephan Kabelac. "Transport coefficients of the Lennard-Jones model fluid. III. Bulk viscosity". In: The Journal of chemical physics 122.1 (2005), p. 014513. Einstein-Kubo-Helfand and McQuarrie relations for transport coefficients. J Jerome, Erpenbeck, Physical Review E. 514296Jerome J Erpenbeck. "Einstein-Kubo-Helfand and McQuarrie relations for transport coefficients". In: Physical Review E 51.5 (1995), p. 4296. Calculation of the transport and relaxation properties of methane. I. 
Shear viscosity, viscomagnetic effects, and self-diffusion. Robert Hellmann, Eckard Bich, Eckhard Vogel, Alan S Dickinson, Velisa Vesovic, The Journal of chemical physics. 12964302Robert Hellmann, Eckard Bich, Eckhard Vogel, Alan S Dickinson, and Velisa Vesovic. "Calculation of the transport and relaxation properties of methane. I. Shear viscosity, viscomagnetic effects, and self-diffusion". In: The Journal of chemical physics 129.6 (2008), p. 064302. Particle simulation of complex flows in dilute systems. Florence Baras, A L Malek Mansour, Michel Garcia, Mareschal, Journal of Computational Physics. 119Florence Baras, M Malek Mansour, AL Garcia, and Michel Mareschal. "Particle sim- ulation of complex flows in dilute systems". In: Journal of Computational Physics 119.1 (1995), pp. 94-104. Experimental and numerical analysis of narrowband coherent Rayleigh-Brillouin scattering in atomic and molecular species. Barry M Cornella, F Sergey, Gimelshein, N Mikhail, Shneider, C Taylor, Andrew D Lilly, Ketsdever, Optics express. 20Barry M Cornella, Sergey F Gimelshein, Mikhail N Shneider, Taylor C Lilly, and Andrew D Ketsdever. "Experimental and numerical analysis of narrowband coherent Rayleigh-Brillouin scattering in atomic and molecular species". In: Optics express 20.12 (2012), pp. 12975-12986. Estimation of bulk viscosity of dilute gases using a nonequilibrium molecular dynamics approach. Bhanuday Sharma, Rakesh Kumar, https:/link.aps.org/doi/10.1103/PhysRevE.100.013309Phys. Rev. E. 10013309Bhanuday Sharma and Rakesh Kumar. "Estimation of bulk viscosity of dilute gases using a nonequilibrium molecular dynamics approach". In: Phys. Rev. E 100 (1 2019), p. 013309. doi: 10.1103/PhysRevE.100.013309. url: https://link.aps.org/ doi/10.1103/PhysRevE.100.013309. Bulk viscosity of model fluids. A comparison of equilibrium and nonequilibrium molecular dynamics results. Claus Hoheisel, The Journal of chemical physics. 86Claus Hoheisel. "Bulk viscosity of model fluids. 
A comparison of equilibrium and nonequilibrium molecular dynamics results". In: The Journal of chemical physics 86.4 (1987), pp. 2328-2333. On the Nature of Bulk Viscosity of Dilute Gases. Bhanuday Sharma, Indian Institute of Technology KanpurPhD thesisBhanuday Sharma. "On the Nature of Bulk Viscosity of Dilute Gases". PhD thesis. Indian Institute of Technology Kanpur, 2022. Absorption and dispersion of ultrasonic waves. F Karl, Theodore A Herzfeld, Litovitz, Academic Press7Karl F Herzfeld and Theodore A Litovitz. Absorption and dispersion of ultrasonic waves. Vol. 7. Academic Press, 2013. A discussion on the first and second viscosities of fluids. L Rosenhead, Proc. R. Soc. London. 226L Rosenhead. "A discussion on the first and second viscosities of fluids". In: Proc. R. Soc. London 226 (1954), pp. 1-69. Kinetic and thermodynamic aspects of the second coefficient of viscosity. Ro Davies, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences. 226RO Davies. "Kinetic and thermodynamic aspects of the second coefficient of viscos- ity". In: Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 226.1164 (1954), pp. 24-34. On the concept of volume viscosity. Sv Borovinskii, Ov Mazurin, Progress and Trends in Rheology II. SpringerSV Borovinskii and OV Mazurin. "On the concept of volume viscosity". In: Progress and Trends in Rheology II. Springer, 1988, pp. 79-81. Analytical fluid dynamics. George Emanuel, CRC pressGeorge Emanuel. Analytical fluid dynamics. CRC press, 2000. Statistical Mechanics for Chemistry and Materials Science. Biman Bagchi, CRC PressBiman Bagchi. Statistical Mechanics for Chemistry and Materials Science. CRC Press, 2018. Computer simulation of liquids. P Michael, Dominic J Allen, Tildesley, Oxford university pressMichael P Allen and Dominic J Tildesley. Computer simulation of liquids. Oxford university press, 2017. 
On the estimation of bulk viscosity of dilute nitrogen gas using equilibrium molecular dynamics approach. Bhanuday Sharma, Rakesh Kumar, Prateek Gupta, Savitha Pareek, Ashish Singh, Physics of Fluids. 3457104Bhanuday Sharma, Rakesh Kumar, Prateek Gupta, Savitha Pareek, and Ashish Singh. "On the estimation of bulk viscosity of dilute nitrogen gas using equilibrium molecular dynamics approach". In: Physics of Fluids 34.5 (2022), p. 057104. Bulk viscosity of dilute gases and their mixtures. Bhanuday Sharma, Rakesh Kumar, Savitha Pareek, 28In: Fluids 8.1 (2023Bhanuday Sharma, Rakesh Kumar, and Savitha Pareek. "Bulk viscosity of dilute gases and their mixtures". In: Fluids 8.1 (2023), p. 28.
[]
[ "Classifier and Exemplar Synthesis for Zero-Shot Learning" ]
[ "Soravit Changpinyo", "Wei-Lun Chao", "Boqing Gong", "Fei Sha" ]
[ "Google AI", "Cornell University", "Tencent AI Lab", "University of Southern California" ]
[]
Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes "classifiers" for the unseen classes. Then, we define an auxiliary task of synthesizing "exemplars" for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performance of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https://github.com/pujols/Zero-shot-learning-journal.
10.1007/s11263-019-01193-1
[ "https://arxiv.org/pdf/1812.06423v1.pdf" ]
56,256,307
1812.06423
d277a60f2904606193170131ffd0c1affc416c7b
Classifier and Exemplar Synthesis for Zero-Shot Learning Soravit Changpinyo Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Wei-Lun Chao Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Boqing Gong [email protected] Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California · Fei Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Sha Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Soravit Changpinyo Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Google Ai Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Wei-Lun Chao Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Boqing Gong Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Sha Fei [email protected] Department of Computer Science Tencent AI Lab Department of Computer Science Cornell University University of Southern California Classifier and Exemplar Synthesis for Zero-Shot Learning Received: date / Accepted: dateInternational Journal of Computer Vision manuscript No. (will be inserted by the editor)Zero-shot learning · Generalized zero-shot learning · Transfer learning · Object recognition · Semantic embeddings Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. 
Introduction

Visual recognition has made significant progress due to the widespread use of deep learning architectures [35,66,68,25] that are optimized on large-scale datasets of human-labeled images [62]. Despite the exciting advances, recognizing objects "in the wild" remains a daunting challenge. In particular, the amount of annotation effort is vital to deep learning architectures in order to discover and exploit powerful discriminating visual features.

There are many application scenarios, however, where collecting and labeling training instances can be laboriously difficult and costly. For example, when the objects of interest are rare (e.g., only about a hundred northern hairy-nosed wombats alive in the wild) or newly defined (e.g., images of futuristic products such as Tesla's Model Y), not only the number of labeled training images but also the statistical variation among them is limited. These restrictions prevent one from training robust systems for recognizing such objects.
More importantly, the number of such objects could be significantly greater than the number of common objects. In other words, the frequencies of observing objects follow a long-tailed distribution [63,89,70].

Zero-shot learning (ZSL) has since emerged as a promising paradigm to remedy the above difficulties. Unlike supervised learning, ZSL distinguishes between two types of classes: seen and unseen. Labeled examples are only available for the seen classes whereas no (labeled or unlabeled) examples are available for the unseen ones. The main goal of zero-shot learning is to construct classifiers for the unseen classes, extrapolating from what we learned from the seen ones. To this end, we need to address two key interwoven challenges [51]: (1) how to relate unseen classes to seen ones and (2) how to attain optimal discriminative performance on the unseen classes even though we do not have access to their representative labeled data?

The first challenge can be overcome by the introduction of a shared semantic space that embeds all categories. Given access to this semantic space, zero-shot learners can exploit the semantic relationship between seen and unseen classes to establish the visual relationship. Multiple types of semantic information have been exploited in the literature: visual attributes [16,37], word vector representations of class names [17,67,50], textual descriptions [15,39,56], hierarchical ontology of classes (such as WordNet [48]) [3,42,75], and human gazes [31].

The second challenge requires developing the appropriate objectives or algorithmic procedures for ZSL. Many ZSL methods take a two-stage approach: (i) predicting the embedding of a (visual) input in the semantic space; (ii) inferring the class labels by comparing the embedding to the unseen classes' semantic representations [16,37,51,67,83,27,50,42].
More recent ZSL methods take a unified approach by jointly learning the functions to predict the semantic embeddings as well as to measure similarity in the embedding space [2,3,17,61,85,86]. We refer the readers to Sect. 5 and recent survey articles by [78,76,20] for the descriptions and comparison of these representative methods.

In this paper, we propose two zero-shot learning frameworks, where the major common theme is to learn to "synthesize" representative parameters (a "summary") for the unseen classes. One natural choice of such parameters are "classifiers" that, as the name suggests, can be used to recognize object classes in a straightforward manner (in this work, classifiers are taken to be the normals of hyperplanes separating different classes, i.e., linear classifiers). Other choices of class summaries exist but additional steps may be needed to perform zero-shot recognition. We explore one such choice and define "visual exemplars" as (average) dimensionality-reduced visual features of different classes. We learn to predict these exemplars and then use them to perform zero-shot recognition in two different manners. Below, we describe concrete implementations of both frameworks.

In the first framework of Synthesized Classifiers (SynC; Fig. 1), we take ideas from manifold learning [26,6] and cast zero-shot learning as a graph alignment problem. On one end, we view the object classes in a semantic space as a weighted graph where the nodes correspond to object class names and the weights of the edges represent how they are related. Semantic representations can be used to infer those weights. On the other end, we view models or classifiers for recognizing images of those classes as if they live in a space of models. The parameters for each object model are nothing but coordinates in this model space whose geometric configuration also reflects the relatedness among objects.
To reduce the complexity of alignment, we introduce a set of phantom object classes, interpreted as bases (classifiers), from which a large number of classifiers for real classes can be synthesized. In particular, the model for any real class is a convex combination of the coordinates of those phantom classes. Given these components, we learn to synthesize the classifier weights (i.e., coordinates in the model space) for the unseen classes via convex combinations of adjustable and optimized phantom coordinates, with the goal of preserving their semantic graph structures.

In the other framework of EXEMplar synthesis (EXEM; Fig. 2), we first define visual exemplars as target summaries of object classes and then learn to predict them from semantic representations. We then propose two ways to make use of these predicted exemplars for zero-shot recognition. One way is to use the exemplars as improved semantic representations in a separate zero-shot learning algorithm. This is motivated by the evidence that existing semantic representations are barely informative about visual relatedness (cf. Sect. 2.2). Moreover, as the predicted visual exemplars live in the visual feature space, we also use them to construct nearest-neighbor style classifiers, where we treat each of them as a data instance.

Our empirical studies extensively test the effectiveness of different variants of our approaches on five benchmark datasets for conventional and four for generalized zero-shot learning. We find that SynC performs competitively against many strong baselines. Moreover, EXEM enhances not only the performance of SynC but also those of other ZSL approaches. In general, we find that EXEM, albeit simple, is overall the most effective ZSL approach and that both SynC and EXEM achieve the best results on the large-scale ImageNet benchmark. We complement our studies with analyses on the effect of different semantic representations and metrics. We obtain several interesting results.
One is from an empirical comparison between the metrics used in generalized zero-shot learning; we identify shortcomings of the widely-used uncalibrated harmonic mean and recommend that the calibrated harmonic mean or the Area under Seen-Unseen Accuracy curve (AUSUC) be used instead. Another interesting result is that we obtain higher-quality semantic representations and use them to establish the new state-of-the-art performance on the large-scale ImageNet benchmark. Finally, based on the idea in EXEM, we investigate how much the ImageNet performance can be improved by ideal semantic representations and see a large gap between those results and existing ones obtained by our algorithms.

This work unifies and extends our previously published conference papers [8,9]. Firstly, we unify our ZSL methods SynC and EXEM under the "synthesis" theme, providing more consistent terminology, notations, and figures as well as extending the discussion of related work. Secondly, we provide a more coherent experimental design and comprehensive, updated results. Our experiments have been extended extensively to include new results on an additional dataset (AwA2 [76]), stronger visual features (ResNet), better semantic representations (our improved word vectors on ImageNet and ideal semantic representations), new and more rigorous training/validation/test data splits recommended by [76], newly proposed metrics (per-class accuracy on ImageNet, AUSUC, uncalibrated and calibrated harmonic mean), additional variants of our methods, and additional baselines. We also provide a summarized comparison of ZSL methods (Sect. 4.1). For more details on which results are newly reported by this work, please refer to our tables ("reported by us"). Thirdly, we extend our results and analysis on generalized ZSL.
On multiple selected strong baselines, we provide empirical evidence of a shortcoming of the widely-used metric and propose its calibrated version that is built on top of calibrated stacking [10]. Finally, we further empirically demonstrate the importance of high-quality semantic representations for ZSL, and establish upper-bound performance on ImageNet in various scenarios of conventional ZSL.

The rest of the paper is organized as follows. We describe our classifier and exemplar synthesis frameworks in Sect. 2. We validate our approaches using the experimental setup in Sect. 3 and present our results in Sect. 4. We discuss related work in Sect. 5. Finally, we conclude in Sect. 6.

Approach

We describe our methods for addressing (conventional) zero-shot learning, where the task is to classify images from unseen classes into the label space of unseen classes. We first describe SynC, a manifold-learning-based method for synthesizing classifiers of unseen classes. We then describe EXEM, an approach that automatically improves semantic representations through visual exemplar synthesis. EXEM can generally be combined with any zero-shot learning algorithm, and can by itself operate as a zero-shot learning algorithm.

Notations: We denote by $\mathcal{D} = \{(\boldsymbol{x}_n \in \mathbb{R}^{D}, y_n)\}_{n=1}^{N}$ the training data with the labels coming from the label space of seen classes $\mathcal{S} = \{1, 2, \cdots, S\}$. Denote by $\mathcal{U} = \{S+1, \cdots, S+U\}$ the label space of unseen classes. Let $\mathcal{T} = \mathcal{S} \cup \mathcal{U}$. For each class $c \in \mathcal{T}$, we assume that we have access to its semantic representation $\boldsymbol{a}_c$.

Classifier Synthesis

We propose a zero-shot learning method of synthesized classifiers, called SynC. We focus on linear classifiers in the visual feature space $\mathbb{R}^{D}$ that assign a label $\hat{y}$ to a data point $\boldsymbol{x}$ by

$\hat{y} = \arg\max_{c} \; \boldsymbol{w}_c^{T} \boldsymbol{x}$,  (1)

where $\boldsymbol{w}_c \in \mathbb{R}^{D}$, although our approach can be readily extended to nonlinear settings by the kernel trick [64].
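Eq. (1) is a standard linear scoring rule. As a minimal illustrative sketch (the `predict` helper and toy values are ours, not from the paper), it amounts to a matrix-vector product followed by an argmax:

```python
import numpy as np

def predict(W, x):
    """Eq. (1): assign the label whose linear classifier scores highest.

    W : (C, D) array, one classifier weight vector w_c per row.
    x : (D,) visual feature vector.
    """
    scores = W @ x              # w_c^T x for every class c
    return int(np.argmax(scores))
```

For example, with `W = [[1, 0], [0, 1]]` and `x = [0.2, 0.9]`, the second classifier scores higher and `predict` returns class index 1.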
Main idea: manifold learning

The main idea behind our approach is to align the semantic space and the model space. The semantic space coordinates of objects are designated or derived based on external information (such as textual data) that does not directly examine visual appearances at the lowest level, while the model space concerns itself largely with recognizing low-level visual features. To align them, we view the coordinates in the model space as the projection of the vertices on the graph from the semantic space; there is a wealth of literature on manifold learning for computing (low-dimensional) Euclidean space embeddings from the weighted graph, for example, the well-known algorithm of Laplacian eigenmaps [6].

This idea is shown by the conceptual diagram in Fig. 1. Each class $c$ has a coordinate $\boldsymbol{a}_c$ and they live on a manifold in the semantic representation space. We use attributes in this text to illustrate the idea, but in the experiments we test our approach on multiple types of semantic representations. Additionally, we introduce a set of phantom classes associated with semantic representations $\boldsymbol{b}_r$, $r = 1, 2, \ldots, R$. We stress that they are phantom as they themselves do not correspond to any real objects; they are introduced to increase the modeling flexibility, as shown below.

Fig. 1: Illustration of SynC for zero-shot learning. Object classes live in two spaces. They are characterized in the semantic space with semantic representations ($\boldsymbol{a}$s) such as attributes or word vectors of their names. They are also represented as models for visual recognition ($\boldsymbol{w}$s) in the model space. In both spaces, those classes form weighted graphs. The main idea behind our approach is that these two spaces should be aligned. In particular, the coordinates in the model space should be the projection of the graph vertices from the semantic space to the model space, preserving class relatedness encoded in the graph. We introduce adaptable phantom classes ($\boldsymbol{b}$ and $\boldsymbol{v}$) to connect seen and unseen classes; classifiers for the phantom classes are bases for synthesizing classifiers for real classes. In particular, the synthesis takes the form of convex combination.

The real and phantom classes form a weighted bipartite graph, with the weights defined as

$s_{cr} = \dfrac{\exp\{-d(\boldsymbol{a}_c, \boldsymbol{b}_r)\}}{\sum_{r=1}^{R} \exp\{-d(\boldsymbol{a}_c, \boldsymbol{b}_r)\}}$  (2)

to relate a real class $c$ and a phantom class $r$, where

$d(\boldsymbol{a}_c, \boldsymbol{b}_r) = (\boldsymbol{a}_c - \boldsymbol{b}_r)^{T} \boldsymbol{\Sigma}^{-1} (\boldsymbol{a}_c - \boldsymbol{b}_r)$,  (3)

and $\boldsymbol{\Sigma}^{-1}$ is a parameter that can be learned from data, modeling the correlation among attributes. For simplicity, we set $\boldsymbol{\Sigma} = \sigma^2 \boldsymbol{I}$ and tune the scalar free hyper-parameter $\sigma$ by cross-validation (Appendix B). The specific form of defining the weights is motivated by several manifold learning methods such as SNE [26]. In particular, $s_{cr}$ can be interpreted as the conditional probability of observing class $r$ in the neighborhood of class $c$. However, other forms can be explored and are left for future work.

In the model space, each real class is associated with a classifier $\boldsymbol{w}_c$ and the phantom class $r$ is associated with a virtual classifier $\boldsymbol{v}_r$. We align the semantic and the model spaces by viewing $\boldsymbol{w}_c$ (or $\boldsymbol{v}_r$) as the embedding of the weighted graph. In particular, we appeal to the idea behind Laplacian eigenmaps [6], which seeks the embedding that maintains the graph structure as much as possible. Equivalently, the distortion error

$\left\| \boldsymbol{w}_c - \sum_{r=1}^{R} s_{cr} \boldsymbol{v}_r \right\|_2^2$  (4)

with respect to $\boldsymbol{w}_c$, $\boldsymbol{v}_r$ is minimized. This objective has an analytical solution

$\boldsymbol{w}_c = \sum_{r=1}^{R} s_{cr} \boldsymbol{v}_r, \quad \forall\, c \in \mathcal{T} = \{1, 2, \cdots, S+U\}$.  (5)

In other words, the solution gives rise to the idea of synthesizing classifiers from those virtual classifiers $\boldsymbol{v}_r$. For conceptual clarity, from now on we refer to $\boldsymbol{v}_r$ as base classifiers in a dictionary from which new classifiers can be synthesized. We identify several advantages.
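The synthesis machinery of Eqs. (2)-(5) reduces to a softmax over negative semantic distances followed by a convex combination. A minimal numpy sketch (the function names are ours, and we fix $\boldsymbol{\Sigma} = \sigma^2 \boldsymbol{I}$ as in the paper):

```python
import numpy as np

def synthesis_weights(A, B, sigma=1.0):
    """Eqs. (2)-(3): s_cr = softmax over r of -d(a_c, b_r), with Sigma = sigma^2 * I.

    A : (C, k) semantic representations a_c of real classes.
    B : (R, k) semantic representations b_r of phantom classes.
    Returns a (C, R) matrix whose rows sum to 1.
    """
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / sigma ** 2
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def synthesize_classifiers(S_w, V):
    """Eq. (5): w_c = sum_r s_cr v_r, a convex combination of base classifiers.

    S_w : (C, R) weights from synthesis_weights; V : (R, D) base classifiers.
    """
    return S_w @ V
```

Because each row of the weight matrix sums to one, every synthesized classifier lies in the convex hull of the base classifiers, which is exactly the geometric picture of Fig. 1.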
First, we could construct an infinite number of classifiers as long as we know how to compute $s_{cr}$. Second, by making $R \ll S$, the formulation can significantly reduce the learning cost as we only need to learn $R$ base classifiers.

Learning phantom classes

Learning base classifiers: We learn the base classifiers $\{\boldsymbol{v}_r\}_{r=1}^{R}$ from the training data (of the seen classes only). We experiment with two settings. To learn one-versus-other classifiers, we optimize

$\min_{\boldsymbol{v}_1, \cdots, \boldsymbol{v}_R} \; \sum_{c=1}^{S} \sum_{n=1}^{N} \ell(\boldsymbol{x}_n, \mathbb{I}_{y_n, c}; \boldsymbol{w}_c) + \frac{\lambda}{2} \sum_{c=1}^{S} \|\boldsymbol{w}_c\|_2^2$,  (6)

s.t. $\boldsymbol{w}_c = \sum_{r=1}^{R} s_{cr} \boldsymbol{v}_r, \; \forall\, c \in \{1, \cdots, S\}$,

where $\ell(\boldsymbol{x}, y; \boldsymbol{w}) = \max(0, 1 - y \boldsymbol{w}^{T} \boldsymbol{x})^2$ is the squared hinge loss. The indicator $\mathbb{I}_{y_n, c} \in \{-1, 1\}$ denotes whether or not $y_n = c$. Alternatively, we apply the Crammer-Singer multi-class SVM loss [12], given by

$\ell_{cs}(\boldsymbol{x}_n, y_n; \{\boldsymbol{w}_c\}_{c=1}^{S}) = \max\bigl(0, \; \max_{c \in \mathcal{S} - \{y_n\}} \Delta(c, y_n) + \boldsymbol{w}_c^{T} \boldsymbol{x}_n - \boldsymbol{w}_{y_n}^{T} \boldsymbol{x}_n\bigr)$.  (7)

We have the standard Crammer-Singer loss when the structured loss $\Delta(c, y_n) = 1$ for $c \neq y_n$, which, however, ignores the semantic relatedness between classes. We additionally use the $\ell_2$ distance for the structured loss $\Delta(c, y_n) = \|\boldsymbol{a}_c - \boldsymbol{a}_{y_n}\|_2$ to exploit the class relatedness in our experiments. These two learning settings have separate strengths and weaknesses in our empirical studies.

Learning semantic representations: The weighted graph in Eq. (2) is also parameterized by adaptable embeddings of the phantom classes $\boldsymbol{b}_r$. For this work, however, for simplicity, we assume that each of them is a sparse linear combination of the seen classes' attribute vectors:

$\boldsymbol{b}_r = \sum_{c=1}^{S} \beta_{rc} \boldsymbol{a}_c, \quad \forall\, r \in \{1, \cdots, R\}$.  (8)

Thus, to optimize those embeddings, we solve the following optimization problem

$\min_{\{\boldsymbol{v}_r\}_{r=1}^{R}, \{\beta_{rc}\}_{r,c=1}^{R,S}} \; \sum_{c=1}^{S} \sum_{n=1}^{N} \ell(\boldsymbol{x}_n, \mathbb{I}_{y_n, c}; \boldsymbol{w}_c) + \frac{\lambda}{2} \sum_{c=1}^{S} \|\boldsymbol{w}_c\|_2^2 + \eta \sum_{r,c=1}^{R,S} |\beta_{rc}| + \frac{\gamma}{2} \sum_{r=1}^{R} \bigl(\|\boldsymbol{b}_r\|_2^2 - h^2\bigr)^2$,  (9)

s.t. $\boldsymbol{w}_c = \sum_{r=1}^{R} s_{cr} \boldsymbol{v}_r, \; \forall\, c \in \{1, \cdots, S\}$,

where $h$ is a predefined scalar equal to the norm of real attribute vectors (i.e., 1 in our experiments since we perform $\ell_2$ normalization). Note that in addition to learning $\{\boldsymbol{v}_r\}_{r=1}^{R}$, we learn combination weights $\{\beta_{rc}\}_{r,c=1}^{R,S}$. Clearly, the constraint together with the third term in the objective encourages the sparse linear combination of the seen classes' attribute vectors. The last term in the objective demands that the norm of $\boldsymbol{b}_r$ is not too far from the norm of $\boldsymbol{a}_c$.

We perform alternating optimization for minimizing the objective function with respect to $\{\boldsymbol{v}_r\}_{r=1}^{R}$ and $\{\beta_{rc}\}_{r,c=1}^{R,S}$. While this process is nonconvex, there are useful heuristics to initialize the optimization routine. For example, if $R = S$, then the simplest setting is to let $\boldsymbol{b}_r = \boldsymbol{a}_r$ for $r = 1, \ldots, R$. If $R \leq S$, we can let them be (randomly) selected from the seen classes' attribute vectors $\{\boldsymbol{b}_1, \boldsymbol{b}_2, \cdots, \boldsymbol{b}_R\} \subseteq \{\boldsymbol{a}_1, \boldsymbol{a}_2, \cdots, \boldsymbol{a}_S\}$, or first perform clustering on $\{\boldsymbol{a}_1, \boldsymbol{a}_2, \cdots, \boldsymbol{a}_S\}$ and then let each $\boldsymbol{b}_r$ be a combination of the seen classes' attribute vectors in cluster $r$. If $R > S$, we could use a combination of the above two strategies. (In practice, we found these initializations to be highly effective; even keeping the initial $\boldsymbol{b}_r$ intact while only learning $\boldsymbol{v}_r$ for $r = 1, \ldots, R$ can already achieve comparable results. In most of our experiments, we thus only learn $\boldsymbol{v}_r$ for $r = 1, \ldots, R$.)

There are four hyper-parameters $\lambda$, $\sigma$, $\eta$, and $\gamma$ to be tuned. To reduce the search space during cross-validation, we first tune $\lambda$, $\sigma$ while fixing $\boldsymbol{b}_r$ for $r = 1, \ldots, R$ to the initial values as mentioned above. Then we fix $\lambda$ and $\sigma$ and tune $\eta$ and $\gamma$.
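The one-versus-other objective in Eq. (6) can be evaluated with a few lines of numpy. A sketch under our own naming (the helpers are illustrative, not the paper's code; `S_w` is the weight matrix from Eq. (2) and `V` the base classifiers, so the constraint $\boldsymbol{w}_c = \sum_r s_{cr} \boldsymbol{v}_r$ is enforced by construction):

```python
import numpy as np

def squared_hinge(x, y, w):
    """l(x, y; w) = max(0, 1 - y * w^T x)^2, with y the +/-1 indicator from Eq. (6)."""
    return max(0.0, 1.0 - y * float(w @ x)) ** 2

def one_vs_other_objective(X, labels, S_w, V, lam):
    """Eq. (6) objective evaluated at base classifiers V.

    X : (N, D) features; labels : (N,) class indices in {0, ..., S-1};
    S_w : (S, R) synthesis weights; V : (R, D) base classifiers; lam : regularizer.
    """
    W = S_w @ V                      # constraint: w_c = sum_r s_cr v_r
    total = 0.0
    for c in range(W.shape[0]):
        for n in range(X.shape[0]):
            ind = 1.0 if labels[n] == c else -1.0   # indicator I_{y_n, c}
            total += squared_hinge(X[n], ind, W[c])
    return total + lam / 2.0 * float((W ** 2).sum())
```

Only `V` (and possibly the phantom embeddings behind `S_w`) are free variables during optimization; the per-class classifiers `W` are always synthesized, never learned directly.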
Classification with synthesized classifiers

Given a data sample $\boldsymbol{x}$ from the $U$ unseen classes and their corresponding attribute vectors (or coordinates in other semantic spaces), we classify it in the label space $\mathcal{U}$ by

$\hat{y} = \arg\max_{c \in \mathcal{U}} \; \boldsymbol{w}_c^{T} \boldsymbol{x}$  (10)

with the classifiers being synthesized according to Eq. (5).

Exemplar Synthesis

The previous subsection describes SynC, an approach for synthesizing classifiers for the unseen classes in zero-shot learning. SynC preserves graph structures in the semantic representation space. This subsection describes another route for constructing representative parameters for the unseen classes. We define the visual exemplar of a class to be the target "cluster center" of that class, characterized by the average of visual feature vectors. We then learn to predict the object classes' visual exemplars.

One motivation for this is the evidence that class semantic representations are hard to get right. While they may capture high-level semantic relationships between classes, they are not well-informed about visual relationships. For example, visual attributes are human-understandable so they correspond well with our object class definition. However, they are not always discriminative [52,83], not necessarily machine detectable [14,27], often correlated among themselves ("brown" and "wooden") [28], and possibly not category-independent ("fluffy" animals and "fluffy" towels) [11]. Word vectors of class names have been shown to be inferior to attributes [3,8]. Derived from texts, they have little knowledge about or are barely aligned with visual information. Intuitively, this problem would weaken zero-shot learning methods that rely heavily on "semantic" relationships of classes (such as SynC). We therefore propose the method of predicting visual exemplars (EXEM) to transform the (original) semantic representations into semantic embeddings in another space to which visual information is injected.
More specifically, the main computation step of EXEM is reduced to learning (from the seen classes) a predictive function from semantic representations to their corresponding centers of visual feature vectors. This function is used to predict the locations of visual exemplars of the unseen classes. Once predicted, they can be effectively used in any zero-shot learning algorithm as improved semantic representations. For instance, we could use the predicted visual exemplars in SynC to alleviate its naive reliance on the object classes' semantic representations. As another example, as the predicted visual exemplars live in the visual feature space, we could use them to construct nearest-neighbor style classifiers, where we treat each of them as a data instance.

Fig. 2: Illustration of our method EXEM for improving semantic representations as well as for zero-shot learning. Given semantic information and visual features of the seen classes, we learn a kernel-based regressor $\psi(\cdot)$ such that the semantic representation $\boldsymbol{a}_c$ of class $c$ can predict well its visual exemplar (center) $\boldsymbol{z}_c$ that characterizes the clustering structure. The learned $\psi(\cdot)$ can be used to predict the visual feature vectors of unseen classes for nearest-neighbor (NN) classification, or to improve the semantic representations for existing ZSL approaches.

Fig. 2 illustrates the conceptual diagram of our approach. Our two-stage approach for zero-shot learning consists of learning a function to predict visual exemplars from semantic representations (Sect. 2.2.1) and then applying this function to perform zero-shot learning given novel semantic representations (Sect. 2.2.2).

Learning a function to predict visual exemplars from semantic representations

For each class $c$, we would like to find a transformation function $\psi(\cdot)$ such that $\psi(\boldsymbol{a}_c) \approx \boldsymbol{z}_c$, where $\boldsymbol{z}_c \in \mathbb{R}^{d}$ is the visual exemplar for the class.
In this paper, we create the visual exemplar of a class by averaging the PCA projections of data belonging to that class. That is, we consider $\boldsymbol{z}_c = \frac{1}{|I_c|} \sum_{n \in I_c} \boldsymbol{M} \boldsymbol{x}_n$, where $I_c = \{i : y_i = c\}$ and $\boldsymbol{M} \in \mathbb{R}^{d \times D}$ is the PCA projection matrix computed over training data of the seen classes. We note that $\boldsymbol{M}$ is fixed for all data points (i.e., not class-specific) and is used in Eq. (12).

Given training visual exemplars and semantic representations, we learn $d$ support vector regressors (SVR) with the RBF kernel; each of them predicts one dimension of visual exemplars from the corresponding semantic representations. Specifically, for each dimension, we use the $\nu$-SVR formulation [65]:

$\min_{\boldsymbol{q}, \boldsymbol{\xi}, \boldsymbol{\xi}', \epsilon} \; \frac{1}{2} \boldsymbol{q}^{T} \boldsymbol{q} + \lambda \Bigl( \nu \epsilon + \frac{1}{S} \sum_{c=1}^{S} (\xi_c + \xi'_c) \Bigr)$

s.t. $\boldsymbol{q}^{T} \theta_{rbf}(\boldsymbol{a}_c) - z_c \leq \epsilon + \xi_c$  (11)

$z_c - \boldsymbol{q}^{T} \theta_{rbf}(\boldsymbol{a}_c) \leq \epsilon + \xi'_c$

$\xi_c \geq 0, \; \xi'_c \geq 0$,

where $\theta_{rbf}$ is an implicit nonlinear mapping based on the RBF kernel. We have dropped the dimension subscript for aesthetic reasons, but readers are reminded that each regressor is trained independently with its own parameters. $\lambda$ and $\nu \in (0, 1]$ (along with hyper-parameters of the kernel) are the hyper-parameters to be tuned. The resulting $\psi(\cdot) = [\boldsymbol{q}_1^{T} \theta_{rbf}(\cdot), \cdots, \boldsymbol{q}_d^{T} \theta_{rbf}(\cdot)]^{T}$, where $\boldsymbol{q}_{d'}$ is from the $d'$-th regressor.

Note that the PCA step is introduced for both computational and statistical benefits. In addition to reducing dimensionality for faster computation, PCA decorrelates the dimensions of visual features such that we can predict these dimensions independently rather than jointly.

Zero-shot learning based on predicted visual exemplars

Now that we have learned the transformation function $\psi(\cdot)$, how do we use it to perform zero-shot classification? We first apply $\psi(\cdot)$ to all semantic representations $\boldsymbol{a}_u$ of the unseen classes. We then consider two main approaches that depend on how we interpret these predicted exemplars $\psi(\boldsymbol{a}_u)$.

Predicted exemplars as training data: An obvious approach is to use $\psi(\boldsymbol{a}_u)$ as data directly.
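The exemplar construction of Sect. 2.2.1 and the nearest-neighbor decision rule discussed next (Eq. (12)) can be sketched together in a few lines. This is our own toy sketch: in practice, `predicted_exemplars` would come from the trained $\nu$-SVR regressors $\psi(\cdot)$, which we do not reimplement here.

```python
import numpy as np

def class_exemplars(X, y, M):
    """z_c = average of the PCA-projected features of class c (Sect. 2.2.1).

    X : (N, D) features; y : (N,) labels; M : (d, D) PCA projection matrix.
    Returns (classes, Z) with Z of shape (num_classes, d).
    """
    Z = X @ M.T
    classes = np.unique(y)
    return classes, np.stack([Z[y == c].mean(axis=0) for c in classes])

def nn_zero_shot(x, M, predicted_exemplars, unseen_labels):
    """Eq. (12): label of the nearest predicted exemplar in PCA space (Euclidean dis_NN)."""
    z = M @ x
    dists = np.linalg.norm(predicted_exemplars - z, axis=1)
    return unseen_labels[int(np.argmin(dists))]
```

Note that the same projection matrix `M` is applied at both training time (to build the targets $\boldsymbol{z}_c$) and test time (to project a query before the nearest-neighbor search), matching the paper's remark that `M` is fixed and not class-specific.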
Since there is only one data point per class, a natural choice is to use a nearest neighbor classifier. The classifier then outputs the label of the closest exemplar for each novel data point $\boldsymbol{x}$ that we would like to classify:

$\hat{y} = \arg\min_{u} \; dis_{NN}(\boldsymbol{M} \boldsymbol{x}, \psi(\boldsymbol{a}_u))$,  (12)

where we adopt the Euclidean distance or the standardized Euclidean distance as $dis_{NN}$ in the experiments.

Predicted exemplars as improved semantic representations: The other approach is to use $\psi(\boldsymbol{a}_u)$ as the improved semantic representations ("improved" in the sense that they have knowledge about visual features) and plug them into any existing zero-shot learning framework. We provide two examples. In the method of convex combination of semantic embeddings (ConSE) [50], the original class semantic embeddings are replaced with the corresponding predicted exemplars, while the combining coefficients remain the same. In SynC, described in the previous section, the predicted exemplars are used to define the similarity values between the unseen classes and the bases, which in turn are used to compute the combination weights for constructing classifiers. In particular, their similarity measure is of the form in Eq. (2). In this case, we simply need to change such a similarity measure to

$s_{cr} = \dfrac{\exp\{-dis(\psi(\boldsymbol{a}_c), \psi(\boldsymbol{b}_r))\}}{\sum_{r=1}^{R} \exp\{-dis(\psi(\boldsymbol{a}_c), \psi(\boldsymbol{b}_r))\}}$.  (13)

In the experiments, we empirically show that existing semantic representations for ZSL are far from optimal. Our approach can thus be considered as a way to improve semantic representations for ZSL.

Experimental Setup

In this section, we describe the experimental setup and protocols for evaluating zero-shot learning methods, including details on datasets and their splits, semantic representations, visual features, and metrics. We make distinctions between different settings to ensure fair comparison.

Datasets and Splits

We use five benchmark datasets in our experiments. Table 1 summarizes their key characteristics and splits.
More details are provided below.

- The Animals with Attributes (AwA) dataset [38] consists of 30,475 images of 50 animal classes.
- The Animals with Attributes 2 (AwA2) dataset [76] consists of 37,322 images of 50 animal classes. This dataset has been recently introduced as a replacement for AwA, whose images may not be licensed for free use and redistribution.
- The CUB-200-2011 Birds (CUB) dataset [72] consists of 11,788 images of 200 fine-grained bird classes.
- The SUN Attribute (SUN) dataset [53] consists of 14,340 images of 717 scene categories (20 images from each category). The dataset is drawn from the SUN database [79].
- The ImageNet dataset [13] consists of two disjoint subsets. (i) The ILSVRC 2012 1K dataset [62] contains 1,281,167 training and 50,000 validation images from 1,000 categories and is treated as the seen-class data. (ii) Images of unseen classes come from the rest of the ImageNet Fall 2011 release dataset [13] that do not overlap with any of the 1,000 categories. We will call this release the ImageNet 2011 21K dataset (as in [17,50]). Overall, this dataset contains 14,197,122 images from 21,841 classes, and we conduct our experiment on 20,842 unseen classes.

For each dataset, we select popular class splits in existing literature and make distinctions between them. On CUB and SUN, Changpinyo et al. [8] randomly split each dataset into 4 and 10 disjoint subsets, respectively. In this case, we report the average score over those subsets; when computing a score on one subset, we use the rest as training classes. Moreover, we differentiate between standard and new splits. Test classes in standard splits (SS or SS0) may overlap with classes used to pre-train deep neural networks for feature extraction (cf. Sect. 5.2 in [76] for details), but almost all previous ZSL methods have adopted them for evaluation. On the other hand, new splits (NS), recently proposed by [76], avoid such problematic class overlapping.
We summarize different class splits in Table 1. (There is one class in the ILSVRC 2012 1K dataset that does not appear in the ImageNet 2011 21K dataset; thus, we have a total of 20,842 unseen classes to evaluate.) We use SS0 on CUB and SUN to denote splits proposed by [2] and [76], respectively. On ImageNet, only SS exists as we do not have the problem of unseen classes "leaking" during pre-training. The seen classes are selected from ImageNet ILSVRC 2012 1K [62] and are normally used for the pre-training of feature extractors.

Table 1: Key characteristics of datasets and their class splits. SS (SS0) indicates the standard splits adopted in almost all previous ZSL methods. NS means the new splits proposed by [76].

For the generalized zero-shot learning (GZSL) setting (cf. Sect. 3.4.2) on AwA, AwA2, CUB, and SUN, the test set must be the union of the seen classes' instances and the unseen classes' instances. The NS splits remain the same as before as they already reserve a portion of seen classes' instances for testing. For the SS or SS0 splits, we modify their original train and test sets following [10]; we train the models using 80% of the seen classes' instances and test on the remaining 20% (and the original unseen classes' instances).

Semantic Representations

In our main experiments, we focus on attributes as semantic representations on AwA, AwA2, CUB, and SUN, and word vectors as semantic representations on ImageNet. We use 85-, 312-, and 102-dimensional continuous-valued attributes for the classes in AwA (and AwA2), CUB, and SUN, respectively. For each class in SUN, we average attribute vectors over all images belonging to that class to obtain a class-level attribute vector. For ImageNet, we train a skip-gram model [46,47] on the Wikipedia dump corpus consisting of more than 3 billion words to extract a 500-dimensional word vector for each class. Following [17,8], we train the model for a single epoch.
We ignore classes without word vectors in the experiments, resulting in 20,345 (out of 20,842) unseen classes. Other details are in Appendix A. For both the continuous attribute vectors and the word vector embeddings of the class names, we normalize them to have unit ℓ2 norms unless stated otherwise. Additional experimental setup and results on the effect of semantic representations can be found in Sect. 4.5.1.

Visual Features

We employ the strongest and most popular deep visual features in the literature: GoogLeNet [68] and ResNet [25]. On all datasets but AwA2, GoogLeNet features are 1,024-dimensional activations of the pooling units of the Inception v1 pre-trained on the ILSVRC 2012 1K dataset (AwA, CUB, ImageNet) [62] or the Places database (SUN) [88,87], extracted using the Caffe package [29]. We perform pre-processing on CUB by cropping all images with the provided bounding boxes following [18] and on ImageNet by center-cropping all images (without data augmentation or other preprocessing). We obtained the ResNet features on all datasets from [78,76]. These features are 2,048-dimensional activations of the pooling units of the ResNet-101 pre-trained on the ILSVRC 2012 1K dataset [62]. Throughout the experiments, we denote GoogLeNet v1 features with G and ResNet features with R.

Evaluation Protocols

Denote by A O→Y the accuracy of classifying test data whose labels come from O into the label space Y. Note that the accuracy denotes the "per-class" multi-way classification accuracy (defined below).
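The unit ℓ2 normalization of class-level semantic vectors mentioned above can be sketched as follows. This is a minimal illustration, not the authors' code; the matrix `S` and the helper name are hypothetical.

```python
import numpy as np

def l2_normalize(S, eps=1e-12):
    """Row-wise unit l2 normalization of a class-by-dimension matrix S."""
    norms = np.linalg.norm(S, axis=1, keepdims=True)
    return S / np.maximum(norms, eps)

# Toy 3-class x 4-attribute matrix (stand-in for real attribute/word vectors).
S = np.array([[3.0, 4.0, 0.0, 0.0],
              [1.0, 1.0, 1.0, 1.0],
              [0.5, 0.0, 0.0, 0.0]])
S_hat = l2_normalize(S)  # each row now has unit Euclidean norm
```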
Conventional zero-shot learning

The performance of ZSL methods on infrequent unseen classes whose examples are scarce (i.e., the tail) may not be reflected if we use the per-sample multi-way classification accuracy (averaged over all test images):

$$A^{\mathrm{ps}}_{U \to U} = \frac{\sum_{c \in U} \#\ \text{correct predictions in } c}{\sum_{c \in U} \#\ \text{test images in } c}. \tag{14}$$

For this reason, as in most previous work, on all datasets (with some exceptions on ImageNet below), we use the per-class multi-way classification accuracy (averaged over all classes, and averaged over all test images in each class):

$$A_{U \to U} := A^{\mathrm{pc}}_{U \to U} = \frac{1}{|U|} \sum_{c \in U} \frac{\#\ \text{correct predictions in } c}{\#\ \text{test images in } c}. \tag{15}$$

Note that we use A U→U to denote A pc U→U in this paper. Evaluating zero-shot learning on the large-scale ImageNet allows for different scenarios from evaluating on the other four datasets. We consider multiple subsets of the test set of ImageNet based on different characteristics. Following the procedure in [17,50], we evaluate on the following subsets of increasing difficulty: 2-hop and 3-hop. These, respectively, correspond to 1,509 and 7,678 unseen classes that are within two and three tree hops of the 1K seen classes according to the ImageNet label hierarchy 6. Furthermore, following the procedure in [76], we evaluate on the 500, 1K, and 5K most populated and least populated unseen classes. Finally, we evaluate on All: all 20,345 unseen classes in the ImageNet 2011 21K dataset that are not in the ILSVRC 2012 1K dataset. Note that the numbers of unseen classes are slightly different from those used in [17,50] due to the missing semantic representations (i.e., word vectors) for certain class names. To aid comparison with previous work on AlexNet and GoogLeNet features [17,50,8,9], we also adopt two additional evaluation metrics: Flat hit@K (F@K) and Hierarchical precision@K (HP@K). F@K is defined as the percentage of test images for which the model returns the true label in its top K predictions.
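Eq. (14) and Eq. (15) can be contrasted with a small sketch (the function and variable names are our own, not from the paper). On unbalanced test sets the two metrics diverge, which is exactly why the per-class variant is preferred:

```python
import numpy as np

def per_sample_accuracy(y_true, y_pred):
    """Eq. (14): fraction of all test images predicted correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def per_class_accuracy(y_true, y_pred):
    """Eq. (15): accuracy computed within each class, then averaged over classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class))

# Unbalanced toy labels: class 0 has 4 test images, class 1 has only 1.
y_true = [0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0]
# per-sample = 4/5 = 0.8, but per-class = (1.0 + 0.0)/2 = 0.5:
# the rare class's failure is invisible to the per-sample metric.
```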
Note that F@1 is the per-sample multi-way classification accuracy, which we report in the main text. We refer the reader to Appendix C for the details on HP@K and the rest of the results.

Generalized zero-shot learning (GZSL)

In the generalized zero-shot learning (GZSL) setting, test data come from both seen and unseen classes. The label space is thus T = S ∪ U. This setting is of practical importance as real-world data should not be unrealistically assumed (as in conventional ZSL) to come from the unseen classes only. Since no labeled training data of the unseen classes are available during training, the bias of the classifiers toward the seen classes is difficult to avoid, making GZSL extremely challenging [10]. Following [10], we use the Area Under Seen-Unseen accuracy Curve (AUSUC) to evaluate ZSL methods in the GZSL setting. Below we describe briefly how to compute AUSUC, given a ZSL method. We assume that the ZSL method has a scoring function f_c for each class c ∈ T. The approach of calibrated stacking [10] adapts the ZSL method so that the prediction in the GZSL setting is

$$\hat{y} = \arg\max_{c \in T}\ f_c(x) - \gamma\, \mathbb{I}[c \in S], \tag{16}$$

where γ is the calibration factor. Adjusting γ can balance two conflicting forces: recognizing data from seen classes versus those from unseen ones. Recall that T = S ∪ U is the union of the seen set S and the unseen set U of classes, where S = {1, · · · , S} and U = {S + 1, · · · , S + U}. Varying γ, we can compute a series of classification accuracies (A U→T, A S→T). We then can create the Seen-Unseen accuracy Curve (SUC) with two ends for the extreme cases (γ → −∞ and γ → +∞). The Area Under SUC (AUSUC) summarizes this curve, similar to many curves whose axes represent conflicting goals, such as the Precision-Recall (PR) curve and the Receiver Operating Characteristic (ROC) curve.
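The calibrated stacking rule of Eq. (16) amounts to subtracting γ from every seen class's score before taking the argmax. A minimal sketch (array layout and names are assumptions, not the paper's code):

```python
import numpy as np

def calibrated_predict(scores, seen_mask, gamma):
    """Eq. (16): penalize seen-class scores by gamma, then argmax.

    scores:    (n_samples, n_classes) array of f_c(x)
    seen_mask: boolean (n_classes,) array, True for seen classes
    gamma:     calibration factor
    """
    adjusted = scores - gamma * seen_mask.astype(float)
    return np.argmax(adjusted, axis=1)

# Toy example: class 0 is seen, class 1 is unseen.
scores = np.array([[0.9, 0.8],
                   [0.2, 0.1]])
seen_mask = np.array([True, False])
# With gamma = 0 both samples go to seen class 0;
# with gamma = 0.15 the seen penalty flips both to unseen class 1.
```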
Recently, [76] alternatively proposed the harmonic mean of seen and unseen accuracies, defined as

$$H = \frac{2 \cdot A_{S \to T} \cdot A_{U \to T}}{A_{S \to T} + A_{U \to T}}. \tag{17}$$

While easier to implement and faster to compute than AUSUC, the harmonic mean may not be an accurate measure for the GZSL setting. It captures the performance of a zero-shot learning algorithm given a fixed degree of bias toward seen (or unseen) classes. This bias can vary across zero-shot learning algorithms and limit our ability to compare them fairly. We expand on this point through our experiments in Sect. 4.4.

Baselines

We consider 12 zero-shot learning baseline methods in [76] (cf. [34], GFZSL [71]). Additionally, we consider COSTA [44], HAT [4], BiDiLEL [73], and CCA [42] in some of our experiments. These baselines are diverse in their approaches to zero-shot learning. Note that DAP [38] and IAP [38] require binary semantic representations, and we follow the setup in [8] to obtain them. For further discussion of these methods, see Sect. 5 as well as [76,20].

Summary of Variants of Our Methods

We consider the following variants of SynC that differ in the type of loss used in the objective function (cf. Sect. 2.1.2).

- SynC o-vs-o: one-versus-other with the squared hinge loss.
- SynC cs: Crammer-Singer multi-class SVM loss [12] with ∆(c, y_n) = 1 if c ≠ y_n and 0 otherwise.
- SynC struct: Crammer-Singer multi-class SVM loss [12] with ∆(c, y_n) = ‖a_c − a_{y_n}‖².

Unless stated otherwise, we adopt the version of SynC that sets the number of base classifiers R to be the number of seen classes S, and sets b_r = a_c for r = c (i.e., without learning semantic representations).

We consider the following variants of EXEM. EXEM (ZSL method) regards the predicted exemplars as the improved semantic representations. On the other hand, EXEM (1NN) treats predicted exemplars as data prototypes. The standardized Euclidean distance in EXEM (1NNs) is introduced as a way to scale the variance of different dimensions of visual features.
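Eq. (17) is straightforward to compute; the sketch below also shows why it rewards balance between the two accuracies (function name is ours):

```python
def harmonic_mean(acc_seen, acc_unseen):
    """Eq. (17): harmonic mean H of A_{S->T} (seen) and A_{U->T} (unseen)."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2.0 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

# A method heavily biased toward seen classes scores poorly despite a high
# seen accuracy: harmonic_mean(0.9, 0.1) ~ 0.18, while harmonic_mean(0.5, 0.5) = 0.5.
```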
In other words, it helps reduce the effect of collapsing data that is caused by our usage of the average of each class' data as cluster centers.

Experimental Results

The outline of our experimental results in this section is as follows. We first provide a summary of our main results (Sect. 4.1, Table 2), followed by detailed results in various experimental scenarios. We provide detailed conventional ZSL results on 4 small datasets AwA, AwA2, CUB, SUN (Sect. 4.2, Table 3) and on the large-scale ImageNet (Sect. 4.2, Table 4). We separate GZSL results on small datasets (Sect. 4.4) into two parts: one using AUSUC (Table 5) and the other comparing multiple metrics (Table 6). The rest are additional results on ImageNet (Sect. 4.5), including an empirical comparison between semantic representations (Table 7), a comparison to recent state-of-the-art with per-sample accuracy (Table 8), and results with ideal semantic representations (Table 9). Results on ImageNet using an earlier experimental setup [17,50,8] can be found in Appendix C. Finally, further analyses on SynC and EXEM are in Appendix D and Appendix E, respectively.

Main Experimental Results

Table 2 summarizes our results on both conventional and generalized zero-shot learning using the ResNet features. On AwA, AwA2, CUB, SUN, we use visual attributes and the new splits (where the unseen/test classes do not overlap with those used for feature extraction; see Sect. 3.1). On ImageNet, we use word vectors of the class names and the standard split. We use per-class multi-way classification accuracy for the conventional zero-shot learning task and AUSUC for the generalized zero-shot learning task. On the word-vector-based ImageNet (All: 20,345 unseen classes), our SynC and EXEM outperform baselines by a significant margin.
To encapsulate the general performances of various ZSL methods on small visual-attribute-based datasets, we adopt the non-parametric Friedman test [23] as in [76]; we compute the mean rank of each method across small datasets and use it to order 23 methods in the conventional task and 16 methods in the generalized task. We find that on small datasets EXEM (1NN), EXEM (1NNs), and EXEM (SynC struct) are the top three methods in each setting. Additionally, SynC struct performs competitively with the rest of the baselines. Notable strong baselines are GFZSL, ALE, and SJE. The rankings in both settings demonstrate that a positive correlation appears to exist between the ZSL and GZSL performances, but this is not always the case. For example, ALE outperforms SynC struct on SUN in ZSL (58.1 vs. 55.9) but it underperforms in GZSL (0.193 vs. 0.241). The same is true in many other cases, such as DAP vs. IAP on all datasets and EXEM (SynC cs) vs. EXEM (ESZSL) on CUB and SUN. This observation stresses the importance of GZSL as an evaluation setting.

Conventional Zero-Shot Learning Results

In Table 3, we provide detailed results on small datasets (AwA, AwA2, CUB, and SUN), including other popular scenarios for zero-shot learning that were investigated by past work. In particular, we include results for other visual features and data splits. All zero-shot learning methods use visual attributes as semantic representations. Similar to before, we find that the variants of EXEM consistently outperform other ZSL approaches. Other observations are discussed below.

Variants of SynC: On AwA and AwA2, SynC struct outperforms SynC cs and SynC o-vs-o consistently, but it is inconclusive whether SynC cs or SynC o-vs-o is more effective.

Table 2 Main results: our results and the previously published ones on the conventional ZSL task based on per-class multi-way classification accuracy (in %) and on the generalized ZSL task based on AUSUC. The ResNet features are used with the new splits (NS) on small datasets and the standard split (SS) on ImageNet (All: 20,345 unseen classes). For each dataset, the best is in red; the second best in blue; the third best in green. We also summarize the results on small datasets by ordering zero-shot methods based on their mean ranks (in brackets) in ZSL (bottom left) and GZSL (bottom right) settings. Element (i, j) indicates the number of times method i ranks at j-th.

Variants of EXEM: We find that there is no clear winner between using predicted exemplars as improved semantic representations or as data prototypes. The former seems to perform better on datasets with fewer seen classes. Nonetheless, we note that using 1-nearest-neighbor classifiers clearly scales much better than using most zero-shot learning methods; EXEM (1NN) and EXEM (1NNs) are more efficient than EXEM (SynC), EXEM (ESZSL), and EXEM (ConSE) in training. Finally, while we expect that using the standardized Euclidean distance (EXEM (1NNs)) instead of the Euclidean distance (EXEM (1NN)) for nearest neighbor classifiers would help improve the accuracy, this is only the case on CUB (and on ImageNet, as we will show in Sect. 4.3). We hypothesize that accounting for the variance of visual exemplars' dimensions is important in fine-grained ZSL recognition.

EXEM (ZSL method) improves over ZSL method: Our approach of treating predicted visual exemplars as the improved semantic representations significantly outperforms taking semantic representations as given. EXEM (SynC), EXEM (ConSE), and EXEM (ESZSL) outperform their corresponding base ZSL methods by relatively 2.1-13.3%, 11.4-32.7%, and 1.6-12.0%, respectively. Thus, we conclude that the semantic representations (on the predicted exemplar space) are indeed improved by EXEM. We further qualitatively and quantitatively analyze the nature of the predicted exemplars in Appendix E.
Visual features and class splits: The choice of visual features and class splits affects performance greatly, suggesting that these choices should be made explicit or controlled in zero-shot learning studies. On the same standard split of AwA, we observe that the ResNet features are generally stronger than the GoogLeNet features, but not always (DAP [38] and IAP [38]), suggesting that further investigation on the algorithm-specific transferability of different types of features may be needed. As observed in [76], zero-shot learning on the new splits is a more difficult task because pre-trained visual features have not seen the test classes. However, the effect of class splits on the fine-grained benchmark CUB is not as apparent as in other datasets. This suggests that, when object classes are very different, class splits create very different zero-shot learning tasks, where some are much harder than others. Evaluating the "possibility" of transfer across these different tasks is important and likely can be more easily approached using coarse-grained benchmarks.

Large-Scale Conventional Zero-Shot Learning Results

In Table 4, we provide detailed results on the large-scale ImageNet, including scenarios for zero-shot learning that were investigated by [76] and [17,50]. In particular, we include results for other visual features and other test subsets of ImageNet. All zero-shot learning methods use word vectors of the class names as semantic representations. To aid comparison with previous work, we use per-class accuracy when evaluating on ResNet features (R) and per-sample accuracy when evaluating on GoogLeNet features (G). We compare the two types of accuracy in Appendix C and find that the per-sample accuracy is a more optimistic metric than the per-class accuracy, which is reasonable given that ImageNet's classes are highly unbalanced. Furthermore, in Sect. 4.5.1, we analyze the effect of different semantic representations and report the best published results on this dataset.

Table 4 Comparison between existing ZSL approaches in multi-way classification accuracy (in %) on ImageNet. Normal text denotes per-class accuracy and italicized text denotes per-sample accuracy (following previous work). All methods use word vectors of the class names as semantic representations. Each row corresponds to a ZSL method. Each column corresponds to a scenario with a particular combination of a selected test set and visual features. We use GoogLeNet features (G) and ResNet features (R). For each scenario, the best is in red and the second best in blue.

We compare and contrast these results with the ones on small datasets (Table 3). We observe that, while SynC does not clearly outperform other baselines on small datasets, it does so on ImageNet, in line with the observation in [76] (cf. Table 5 there, which only tested SynC o-vs-o). Any variant of EXEM further improves over SynC in all scenarios (in each column). As in small datasets, EXEM (ZSL method) improves over the base ZSL method.

Variants of SynC: SynC o-vs-o generally outperforms SynC struct in all scenarios but the settings "3-hop (G)" and "All." This is reasonable as the semantic distances needed in SynC struct may not be reliable, as they are based on word vectors. This hypothesis is supported by the fact that EXEM (SynC struct) manages to reduce the gap to or even outperform EXEM (SynC o-vs-o) after semantic representations have been improved by the task of predicting visual exemplars. SynC struct becomes more effective against SynC o-vs-o when the labeling space grows larger (i.e., when we move from 2-hop to 3-hop to All or when we move from 500 to 1K to 5K). In fact, it even achieves slightly better performance when we consider All classes (1.5 vs. 1.4 and 0.99 vs. 0.98). One hypothesis is that, when the labeling space is large, the ZSL task becomes so difficult that both methods become equally bad.
Another hypothesis is that the semantic distances in SynC struct only help when we consider a large number of classes.

Variants of EXEM: First, we find that, as on CUB, using the standardized Euclidean distance instead of the Euclidean distance for nearest neighbor classifiers helps improve the accuracy: EXEM (1NNs) outperforms EXEM (1NN) in all cases. This suggests that there is a certain effect of collapsing actual data during training. Second, EXEM (SynC struct) generally outperforms EXEM (SynC o-vs-o), except when classes are very frequent or very rare. Third, EXEM (1NNs) is better than EXEM (SynC o-vs-o) on rare classes, but worse on frequent classes. Finally, EXEM (1NNs) and EXEM (SynC struct) are in general the best approaches, but one does not clearly outperform the other.

Visual features: Comparing the columns "G" (GoogLeNet) of Table 4 against the row "wv-v1" of Table 12 in Appendix C (ResNet), we show that, when evaluated with the same "per-sample" metrics on the 2-hop, 3-hop, and All test subsets of ImageNet, ResNet features are clearly stronger (i.e., more transferable) than GoogLeNet features.

Generalized Zero-Shot Learning Results

Comparison among ZSL approaches

We now present our results on generalized zero-shot learning (GZSL). We focus on AwA, AwA2, CUB, and SUN because all of ImageNet's images from seen classes are used either for pre-training for feature extraction or for hyper-parameter tuning. As in the conventional zero-shot learning experiments, we include popular scenarios for ZSL that were investigated by past work, and all ZSL methods use visual attributes as semantic representations.

Table 5 Comparison between existing ZSL approaches on the task of GZSL based on the Area Under Seen-Unseen accuracy Curve (AUSUC) [10] on small datasets. Each row corresponds to a ZSL method. Each column corresponds to a scenario with a particular combination of dataset, its class split, and visual features. We use GoogLeNet features (G) and ResNet features (R).
Class splits include both standard (SS or SS0) and new (NS) splits. All approaches use calibrated stacking [10] to combine the scores for seen and unseen classes. For each scenario, the best is in red and the second best in blue.

Fig. 3 Seen-Unseen accuracy curves of ZSL methods on SUN. The area under each curve is also included in the legend. All approaches use calibrated stacking [10] to combine the scores for seen and unseen classes, leading to curves of (A U→T, A S→T). Dashed lines correspond to ZSL methods that involve EXEM. We use ResNet features on the new split in all cases. Best viewed in color.

We first present our main results on GZSL in Table 5. We use the Area Under Seen-Unseen accuracy curve with calibrated stacking (AUSUC) [10]. Calibrated stacking introduces a calibrating factor that adaptively changes how we combine the scores for seen and unseen classes. AUSUC is the final score that integrates over all possible values of this factor. Besides trends similar to those stated in Sect. 4.1 and Sect. 4.2, we notice that EXEM (SynC struct) performs particularly well in the GZSL setting. For instance, the relative performance of EXEM (1NN) against EXEM (SynC struct) drops when moving from conventional ZSL to generalized ZSL. To illustrate why one method may perform better or worse than another, we show the Seen-Unseen accuracy curves of ZSL methods on SUN in Fig. 3. We observe that a method might perform well on one axis but poorly on the other. For example, IAP outperforms DAP on A S→T but not on A U→T. As another example, ESZSL, ALE, and LatEm achieve similar A U→T to that of SynC struct but perform significantly worse on A S→T, resulting in a lower AUSUC. Our results hence emphasize once again the importance of the GZSL setting evaluation.

Comparison among evaluation metrics

We then focus on ResNet features and the new splits and present an empirical comparison between different metrics used for GZSL in Table 6.
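The AUSUC computation described above can be sketched as follows: sweep the calibration factor γ of Eq. (16), collect the resulting (A U→T, A S→T) points, and integrate the curve. For brevity this sketch uses per-sample accuracies (the paper uses per-class) and assumes a finite grid of γ values wide enough to cover both extremes; all names are ours.

```python
import numpy as np

def ausuc(scores, labels, seen_mask, gammas):
    """Sketch of AUSUC: sweep gamma, collect (A_{U->T}, A_{S->T}),
    and integrate the Seen-Unseen accuracy curve with the trapezoid rule."""
    seen_classes = np.where(seen_mask)[0]
    is_seen_sample = np.isin(labels, seen_classes)
    points = []
    for g in gammas:
        pred = np.argmax(scores - g * seen_mask.astype(float), axis=1)
        acc_s = np.mean(pred[is_seen_sample] == labels[is_seen_sample])
        acc_u = np.mean(pred[~is_seen_sample] == labels[~is_seen_sample])
        points.append((acc_u, acc_s))
    points.sort()  # order by A_{U->T} (the x-axis)
    xs, ys = zip(*points)
    area = 0.0
    for i in range(len(xs) - 1):
        area += 0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
    return float(area)
```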
First, we consider the harmonic mean of A U→T and A S→T, as in [76]. Second, we consider the "calibrated" harmonic mean; we propose to select the calibrating factor (cf. Eq. (16)) for each ZSL method using cross-validation, resulting in new values of A U→T and A S→T and hence a new value for the harmonic mean. Finally, we use the AUSUC with calibrated stacking [10] as in Table 5.

Fig. 4 The blue curve is the Seen-Unseen accuracy curve, where we plot A U→T (y-axis) vs. A S→T (x-axis). The red curve is the Harmonic mean-Unseen accuracy curve, where we plot the harmonic mean (y-axis) vs. A U→T (x-axis). The heart and square correspond to the harmonic mean without and with calibrated stacking, respectively. We see that calibrated stacking drastically improves the value of the harmonic mean and that the Harmonic mean-Unseen accuracy curve is biased toward A U→T (left-skewed). We use ResNet features on the new split. Best viewed in color.

Fig. 4 illustrates what happens after we apply calibrated stacking. We plot two curves based on results by SynC struct on SUN. One (blue) is the Seen-Unseen accuracy curve [10]. The other (red) is the harmonic mean vs. A U→T, which we will call the Harmonic mean-Unseen accuracy curve. In other words, as we vary the calibrating factor, the harmonic mean changes. The uncalibrated harmonic mean (heart) and the calibrated harmonic mean (square) reported in Table 5 are also shown. Clearly, we see a large improvement in the harmonic mean with calibration. Furthermore, we see that the harmonic mean curve is left-skewed; it goes up until A U→T nearly reaches 50%.

We have the following important observations and implications. First, we discuss critical issues with the uncalibrated harmonic mean metric. We observe that it is correlated with the standard metric used in zero-shot learning, A U→T, and that it can be made much higher after calibration.
ZSL methods that have a bias toward predicting a label from unseen classes can perform well under this metric, while in fact other methods may do just as well or better when this bias is calibrated. For instance, ConSE and SynC struct become much more competitive under the calibrated harmonic mean metric. Fig. 4 also evidently supports this observation. We therefore conclude that the uncalibrated harmonic mean may be a misleading metric in the GZSL evaluation. Second, we discuss the calibrated harmonic mean and AUSUC. We observe a certain degree of positive correlation between the two metrics, but exceptions exist. For example, on AwA, one might mistakenly conclude that EXEM (SynC o-vs-o) does not improve over SynC o-vs-o (i.e., that predicting exemplars does not help), while AUSUC says the opposite. We therefore advocate using both evaluation metrics in the GZSL evaluation.

Additional Results on ImageNet

Our approaches with other types of semantic representations

How much do semantic representations affect the performance of our zero-shot learning algorithms? We focus on the ImageNet dataset and investigate this question in more detail. In the main ImageNet experiments, we consider the word vectors derived from a skip-gram model [46]. In this section, we obtain higher-quality word vectors as well as consider another type of semantic representations. First, we train a skip-gram model in the same manner (the same corpus, the same vector dimension, etc.) as in Sect. 3.2, but we let it train for 10 epochs instead of the one epoch that was used in DeViSE [17]. We call the word vectors from one-epoch and 10-epoch training "word vectors version 1 (wv-v1)" and "word vectors version 2 (wv-v2)", respectively. Additionally, we derive 21,632-dimensional semantic vectors of the class names using multidimensional scaling (MDS) on the WordNet hierarchy [48], following [42]. We denote such semantic vectors "hie".
As before, we normalize each semantic representation to have a unit ℓ2 norm unless stated otherwise. We also consider the combination of either version of word vectors and the hierarchy embeddings. As both SynC and EXEM use the RBF kernel in computing the semantic relatedness among classes (cf. Eq. (2) and Eq. (11)), we perform a convex combination of the kernels from different semantic representations instead of directly concatenating them. The combination weight is a hyper-parameter to tune. In Table 7, we see how improved semantic representations lead to substantially improved ZSL performances. In particular, word vectors trained for a larger number of iterations can already improve the overall accuracy by an absolute 0.2-0.3%. The hierarchy embeddings improve the performances further by an absolute 0.1-0.2%. Finally, we see that these two types of semantic representations are complementary; the combination of either version of word vectors and the hierarchy embeddings improves over either the word vectors or the hierarchy embeddings alone. In the end, the best result we obtain is 2.18% by EXEM (SynC struct) with "wv-v2 + hie", achieving a 69% improvement over the word vectors "wv-v1".

Comparison to recently published results with per-sample accuracy

Recent studies, GCNZ [74] and ADGPM [30], obtained very strong ZSL results on ImageNet. Both methods apply graph convolutional networks [32] to predict recognition models given semantic representations, where their "graph" corresponds to the WordNet hierarchy [48]. They use ResNet-50 visual features and word vectors extracted using GloVe [54]. In Table 8, we report their best results as well as our results with the strongest visual features (ResNet-101) and semantic representations (the combination of "wv-v2" and "hie"). The comparison is subject to a slight variation due to differences in visual features (and whether we fine-tune them or not), semantic representations, and the number of unseen classes 7.
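The convex combination of RBF kernels from two semantic representations can be sketched as follows. This is an illustrative sketch, not the paper's code: the combination weight `beta` and the bandwidths are hypothetical hyper-parameters, and the RBF form here may differ in detail from Eq. (2)/(11).

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-np.maximum(sq, 0.0) / (2 * sigma**2))

def combined_kernel(wv_a, wv_b, hie_a, hie_b, beta, sigma_wv, sigma_hie):
    """Convex combination of kernels from word vectors and hierarchy embeddings,
    instead of concatenating the two representations directly."""
    K_wv = rbf_kernel(wv_a, wv_b, sigma_wv)
    K_hie = rbf_kernel(hie_a, hie_b, sigma_hie)
    return beta * K_wv + (1.0 - beta) * K_hie
```

Combining kernels rather than concatenated vectors keeps each representation on its own scale, with `beta` tuned as a hyper-parameter.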
Nevertheless, in contrast to the conclusion in [74,30] that these approaches outperform ours (i.e., EXEM (1NNs) with GoogLeNet and "wv-v1"), our results can be greatly improved by using stronger visual features and semantic representations, especially the latter. This suggests that the conclusion there may result from visual feature/semantic representation differences rather than the methodological superiority of their approaches.

How far are we from ideal performance?

It is clear that the success of zero-shot learning relies heavily on how accurately semantic information represents visual similarity among classes. We investigate this in more detail. We focus on the strongest semantic representations on ImageNet and ask what would happen in the ideal scenario where predicted visual exemplars of the unseen classes are very accurate. Concretely, for each class, ideal semantic representations can be obtained by averaging visual features of images belonging to that class [10]. For seen classes, we use all the data to compute ideal semantic representations and to train the ZSL models. For unseen classes, we randomly reserve 50% of the data along with their labels for computing ideal semantic representations. The remaining 50% will be used as test data. Our goal is not to outperform the accuracies obtained in Table 7 (the numbers are not comparable anyway due to the difference in data splitting). Rather, we aim to see how large the gap is between existing and ideal performances. In Table 9, we see that the relative gap to the ideal performance is larger as the test set is semantically further from seen classes (from 2-hop to 3-hop to All) and as the test classes become more rare. These observations suggest that developing improved semantic representations (e.g., with more visual information) is a promising future direction for zero-shot learning.

Table 7 Comparison between different semantic representations on ImageNet. We consider (i) "wv-v1": word vectors of the class names trained for one epoch used in Table 4, (ii) "wv-v2": word vectors of the class names trained for 10 epochs, (iii) "hie": the WordNet-hierarchy embeddings obtained using multidimensional scaling [42], (iv) "wv-v1 + hie": the combination of (i) and (iii), (v) "wv-v2 + hie": the combination of (ii) and (iii). We use "per-class" accuracy (in %) and ResNet features in all cases. For each scenario, the best is in red and the second best in blue.

Table 8 Comparison between our approaches using the strongest visual features (ResNet-101) and semantic representations (wv-v2 + hie) and the best published results on ImageNet. Following those results, we use "per-sample" accuracy (italicized text, in %). For each scenario, the best is in red and the second best in blue.

Table 9 Performance of zero-shot learning with ideal semantic representations on ImageNet. We compare "wv-v2 + hie": the combination of "wv-v2" and "hie" vs. "ideal": the average of visual features belonging to each class. We use "per-class" accuracy (in %) and ResNet features in all cases. For each scenario, the best is in red and the second best in blue. We note that the numbers with "wv-v2 + hie" are not exactly the same as in Table 7.

Related Work

Zero-Shot Learning Background

Morgado and Vasconcelos [49] distinguish "recognition using independent semantics (RIS)" and "recognition using semantic embeddings (RULE)." Wang and Chen [73] group ZSL algorithms into "direct mapping", "model parameter transfer", and "common space learning" approaches. Fu et al. [20] argue that solving zero-shot recognition involves "embedding models" and "recognition models in the embedding space," where the embedding models can be further categorized into semantic embedding, Bayesian models, embedding into common spaces, or deep embedding approaches. Xian et al.
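The "ideal" semantic representation construction described above (per-class averages of visual features, with a 50/50 split for unseen classes) can be sketched as follows. The arrays are toy stand-ins; all names are ours.

```python
import numpy as np

def class_mean_exemplars(features, labels):
    """'Ideal' semantic representation of a class: the mean of its visual features."""
    classes = np.unique(labels)
    return {int(c): features[labels == c].mean(axis=0) for c in classes}

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 4))   # 10 images, 4-d features (toy stand-ins)
labels = np.array([0] * 5 + [1] * 5)

# Seen classes: use all data for the exemplar.
protos = class_mean_exemplars(features, labels)

# Unseen class (here, class 1): reserve half of its images for computing the
# exemplar, and keep the other half as test data.
idx = np.where(labels == 1)[0]
rng.shuffle(idx)
half = len(idx) // 2
exemplar = features[idx[:half]].mean(axis=0)
test_idx = idx[half:]
```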
[76] categorize 13 ZSL methods into "learning linear compatibility", "learning nonlinear compatibility", "learning intermediate attribute classifiers," and "hybrid models." To facilitate the discussions in our work, we divide zero-shot learning algorithms into the following two themes: two-stage approaches and unified approaches. This criterion is most similar to the one used by Wang and Chen [73].

Two-stage approaches

The theme of two-stage approaches is to identify and learn an intermediate subtask that is then used to infer the final prediction. Two popular subtasks are predicting the embeddings of images in the semantic space, and generating instances of each class given its corresponding semantic representation. It is possible that the selected intermediate subtask is trained jointly with the zero-shot recognition in a unified manner (Sect. 5.1.2), but this is not fully investigated in the literature and in some cases may lead to other technical difficulties.

Learning to predict semantic embeddings: Given an image, one can project it to the semantic embedding space, and then infer its class label by comparing the predicted semantic embedding to those of unseen classes using a similarity measure. The projection mapping can be trained using standard classification or regression models on image-semantic embedding pairs from the seen classes. The semantic embedding space is usually chosen to be the one where the given semantic representations live. As for label inference, there are two popular approaches. One is based on probabilistic models of class labels based on semantic representations [37,38,50]. The other is based on nearest neighbor classifiers on the semantic space [16,51,67,80]. If we assume that semantic representations capture all the information one needs to predict the class labels (i.e., they are highly discriminative), then focusing on accurately predicting the semantic embeddings would solve zero-shot learning.
In practice, however, this paradigm suffers from the unreliability of semantic predictions. Several techniques have consequently been proposed to alleviate this problem. Jayaraman and Grauman [27] propose a random-forest-based approach that takes this unreliability into account. Al-Halah and Stiefelhagen [4] construct a hierarchy of concepts underlying the attributes to improve reliability. Gan et al. [22] transform visual features to reduce the mismatches between attributes in different categories, thus enhancing reliability.

Learning to generate instances of each class: Recent advances in conditional generative models (e.g., [57,81]) have led to interest in exploiting them for generating labeled data from corresponding semantic representations. Once those examples are generated, one can employ any supervised learning technique to learn classifiers [36,90,77,7]. Note that all these methods focus on directly generating features rather than image pixels.

Unified approaches

The other type of ZSL approach focuses on the task of zero-shot classification directly. There are two main sub-themes, where the difference lies in whether the emphasis is on learning a common space (or compatibility) or on learning model parameters; the distinction between the two is thin.

Common space or compatibility learning: This approach learns a common representation onto which visual features and semantic representations are projected, with the objective of maximizing the compatibility score of projected instances in this space. The difference among methods in this category lies in their choices of common spaces or compatibility functions. Linear or bilinear compatibility functions are extensively used [2,3,17,61,40]. Some propose to use canonical correlation analysis (CCA) [18,19,42]. Nonlinear methods are scarce but have also been explored, such as dictionary learning and sparse coding [33,86].
Model parameter learning: One can also build the classifiers of unseen classes by relating them to seen ones via similarities computed from semantic representations [60,59,15,44,24,39,82,85]. For example, Mensink et al. and Gan et al. [44,21] propose to construct classifiers of unseen objects by combining classifiers of seen objects, where the combining coefficients are determined based on semantic relatedness.

Putting Our Methods in Context

SynC, EXEM (1NN), and EXEM (1NNs) all fall into the model parameter learning approaches, but they differ in the details of how they construct classifiers/exemplars. EXEM (1NN) and EXEM (1NNs) can also be viewed as learning to generate one instance per class, without modeling variations explicitly. EXEM (ZSL method) falls into the approach of learning to predict semantic embeddings, but we show that the (projected) space of visual features is an extremely effective semantic embedding space. We provide detailed discussions of each of our methods with respect to their most relevant work below.

Detailed discussions of SynC: COSTA [44] combines pre-trained classifiers of seen classes to construct new classifiers. To estimate the semantic embedding of a test image, ConSE [50] uses the decision values of pre-trained classifiers of seen objects to compute a weighted average of the corresponding semantic representations. Neither of them has the notion of base classifiers as in SynC, which we introduce solely to construct classifiers. We thus expect using bases to be more effective in transferring knowledge between seen and unseen classes than overloading the pre-trained and fixed classifiers of the seen classes with dual duties. We note that ALE [2] and SJE [3] can be considered special cases of SynC. In ALE and SJE, each attribute corresponds to a base, and each "real" object classifier is represented as a linear combination of those bases, where the weights are the real objects' "descriptions" in the form of attributes.
This modeling choice is inflexible, as the number of bases is fundamentally constrained by the number of attributes. Moreover, the model is strictly a subset of SynC (footnote 8). SSE and JLSE [86,85] propose similar ideas of aligning the visual and semantic spaces but take different approaches from ours. Our convex combination of base classifiers for synthesizing real classifiers can also be motivated from multi-task learning with shared representations [5]. While labeled examples of each task are required in [5], our method has no access to data of the unseen classes.

Detailed discussions of EXEM: DeViSE [17] and ConSE [50] predict an image's semantic embedding from its visual features and compare it to unseen classes' semantic representations. We perform an "inverse prediction": given an unseen class's semantic representation, we predict the visual feature exemplar of that class. One appealing property of EXEM is its scalability: we learn and predict at the exemplar (class) level, so the runtime and memory footprint of our approach depend only on the number of seen classes rather than the number of training data points. This is much more efficient than other ZSL algorithms that learn at the level of each individual training instance [16,37,51,2,83,17,67,50,27,44,3,61,85,86,42]. Several methods propose to learn visual exemplars by preserving structures obtained in the semantic space, where "exemplars" is used loosely and does not necessarily mean class-specific feature averages. Examples of such methods include SynC, BiDiLEL [73], and UVDS [41]. EXEM, in contrast, predicts exemplars with a regressor, so they may or may not strictly follow the structure in the semantic space; they are thus more flexible and could even better reflect similarities between classes in the visual feature space. Similar in spirit to our work, Mensink et al. propose using nearest class mean classifiers for ZSL [45].
The Mahalanobis metric learning in that work could be thought of as learning a linear transformation of semantic representations (their "zero-shot prior" means, which live in the visual feature space). Our approach learns a highly non-linear transformation. Moreover, our EXEM (1NNs) (cf. Sect. 3) learns a (simpler, i.e., diagonal) metric over the learned exemplars. Finally, the main focus of [45] is on incremental, not zero-shot, learning settings (see also [58,55]). DEM [84] uses a deep feature space as the semantic embedding space for ZSL. Though similar to EXEM, this approach does not compute the average of visual features (exemplars) but trains neural networks to predict all the visual features from their semantic representations. Their model takes significantly longer to train than ours. There has been a recent surge of interest in applying deep learning models to generate images (see, e.g., [43,57,81]). Most of these methods are based on probabilistic models (in order to incorporate the statistics of natural images). Unlike them, EXEM deterministically predicts visual features. Note that generating features directly is likely easier and more effective than generating realistic images first and then extracting visual features. Recently, researchers have become interested in generating visual features of unseen classes using conditional generative models such as variants of generative adversarial networks (GANs) [77,90] and variational autoencoders (VAEs) [36] for ZSL.

Conclusion

SynC is a concrete realization of a novel idea that casts zero-shot learning as learning manifold embeddings from graphs composed of object classes. In this classifier synthesis framework, we show how to parameterize the graphs with the locations of the phantom classes, and how to derive recognition models as convex combinations of base classifiers.
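The convex-combination construction can be sketched as follows. The softmax weighting over semantic distances follows the spirit of the synthesis idea, but the scaling, the toy phantom representations, and the toy base classifiers are assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

def synthesize_classifier(a_c, phantom_reps, base_classifiers, sigma=1.0):
    """Build the classifier of class c as a convex combination of base
    classifiers, weighted by the semantic closeness of a_c to the
    phantom classes (softmax over negative scaled squared distances)."""
    d = np.sum((phantom_reps - a_c) ** 2, axis=1) / (2.0 * sigma ** 2)
    s = np.exp(-d)
    s /= s.sum()                  # convex combination weights s_cr
    return s @ base_classifiers   # w_c = sum_r s_cr * v_r

# Two toy phantom classes with 2-d semantic representations,
# each owning a 3-d base classifier.
phantoms = np.array([[0.0, 0.0],
                     [1.0, 1.0]])
bases = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
w = synthesize_classifier(np.array([0.0, 0.0]), phantoms, bases)
```

Since the query coincides with the first phantom class, the first base classifier dominates the combination.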
EXEM is a two-stage zero-shot learning method that incorporates the task of visual exemplar prediction to automatically denoise semantic representations. We show that this task can be done efficiently and effectively using kernelized support vector regression and PCA. We use EXEM to improve upon SynC and several other ZSL methods, as well as to construct nearest-neighbor-style classifiers. Our results demonstrate the effectiveness of our proposed approaches on both conventional and generalized zero-shot learning settings in diverse sets of scenarios. We also show that semantic representations significantly contribute to the performance of ZSL methods on the large-scale ImageNet benchmark: we derive better semantic representations, achieve the state-of-the-art performance, and see a large gap between the effectiveness of existing and "ideal" semantic representations. Our study also raises an important issue regarding the evaluation metrics in the generalized ZSL setting, suggesting that the AUSUC or the calibrated harmonic mean should be used instead of the uncalibrated harmonic mean. We believe that such insights will greatly benefit the community.

Appendix A Word Vectors on ImageNet

We use the word2vec package (footnote 9). We preprocess the input corpus with the word2phrase function so that we can directly obtain word vectors for both single-word and multiple-word terms, including those terms in the ImageNet synsets; each class of ImageNet is a synset: a set of synonymous terms, where each term is a word or a phrase. We impose no restriction on the vocabulary size. Following [17], we use a window size of 20, apply the hierarchical softmax for predicting adjacent terms, and train the model for a single epoch. As one class may correspond to multiple word vectors by the nature of synsets, we simply average them to form a single word vector for each class.
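The synset-averaging step amounts to a simple mean over the terms' word vectors. A minimal sketch with made-up 2-d vectors (real word2vec embeddings are several hundred dimensions):

```python
import numpy as np

def synset_vector(term_vectors):
    """Average the word vectors of a synset's terms into a single
    class-level vector, as done for multi-term ImageNet synsets."""
    return np.mean(np.stack(term_vectors), axis=0)

# Toy vectors for two synonymous terms of one class.
v = synset_vector([np.array([1.0, 0.0]),
                   np.array([0.0, 1.0])])
```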
Appendix B Hyper-parameter Tuning

B.1 For Conventional Zero-shot Learning

The standard approach to cross-validation (CV) in a classification task splits training data into several folds that share the same set of class labels. This strategy is less sensible in zero-shot learning, as it does not imitate what actually happens at the test stage. We thus adopt the strategy in [15,3,61,85]. In this scheme, we split training data into several folds such that the class labels of these folds are disjoint. We then hold out the data from one fold as pseudo-unseen classes, train our models on the remaining folds (which belong to the remaining classes), and tune hyper-parameters based on a certain performance metric on the held-out fold. For clarity, we denote the standard CV as sample-wise CV and the zero-shot CV scheme as class-wise CV. Fig. 5 illustrates the two scenarios. We use this strategy to tune hyper-parameters in both our approaches (SynC and EXEM) and the baselines.

In SynC, the main hyper-parameters are the regularization parameter λ in Eq. (6) and the scaling parameter σ in Eq. (3). When learning semantic representations (Eq. (9)), we also tune η and γ. To reduce the search space during CV, we first fix b_r = a_r for r = 1, ..., R and tune λ and σ. Then we fix λ and σ and tune η and γ. The metric is the classification accuracy.

In EXEM, we tune (a) the projected dimensionality d for PCA and (b) λ, ν, and the RBF-kernel bandwidth in SVR (footnote 10). Since EXEM is a two-stage approach, we consider the following two performance metrics. The first one minimizes the distance between the predicted exemplars and the ground truth (the average of the hold-out data of each class after the PCA projection) in R^d. We use the Euclidean distance in this case. We term this measure "CV-distance." This approach does not assume the downstream task at training time and aims to measure the quality of predicted exemplars by their faithfulness.
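Class-wise CV splits classes rather than samples into disjoint folds; holding out one fold then simulates unseen classes during tuning. A minimal sketch (the labels and fold count are illustrative):

```python
import random

def class_wise_folds(class_labels, num_folds, seed=0):
    """Split the set of CLASSES (not samples) into disjoint folds.
    Holding out one fold yields pseudo-unseen classes for tuning."""
    rng = random.Random(seed)
    labels = sorted(set(class_labels))
    rng.shuffle(labels)
    # Round-robin assignment keeps fold sizes balanced.
    return [labels[i::num_folds] for i in range(num_folds)]

folds = class_wise_folds(["cat", "dog", "fox", "owl"], num_folds=2)
```

Training data whose class falls in the held-out fold would then serve as the pseudo-unseen validation set.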
The other approach, "CV-accuracy," maximizes the per-class classification accuracy on the hold-out fold. This measure can easily be obtained for EXEM (1NN) and EXEM (1NNs), which use simple decision rules that have no further hyper-parameters to tune. Empirically, we found that CV-accuracy generally leads to slightly better performance. The results reported in the main text for these two approaches are thus based on this measure. On the other hand, EXEM (ZSL method) (where ZSL method = SynC, ConSE, ESZSL) requires further hyper-parameter tuning. For computational purposes, we use CV-distance (footnote 11) for tuning the hyper-parameters of the regressors, followed by hyper-parameter tuning for the ZSL methods using the predicted exemplars. As SynC and ConSE construct their classifiers based on the distance values between class semantic representations, we do not expect a significant performance drop in this case.

B.2 For Generalized Zero-shot Learning

To perform class-wise CV in the generalized zero-shot learning (GZSL) setting, we further separate each fold into two splits, with 80% and 20% of the data, respectively. We then hold out one fold, train models on the 80% splits of the remaining folds, and tune hyper-parameters based on a certain performance metric on (i) the 80% split of the hold-out fold and (ii) the 20% splits of the training (i.e., remaining) folds. In this way we can mimic the GZSL setting during hyper-parameter tuning. Specifically, for metrics with calibration (cf. Table 6), we first compute the AUSUC using (i) and (ii) to tune the hyper-parameters mentioned in Sect. B.1, and select the calibration factor γ that maximizes the harmonic mean. For the uncalibrated harmonic mean, we follow [76] and tune hyper-parameters in the same way as in the conventional ZSL setting.

Appendix C Experimental Results on ImageNet with Previous Experimental Setups

The first ZSL work on ImageNet and much of its follow-up consider only the 2-hop, 3-hop, and All test sets and other evaluation metrics.
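The calibration and the harmonic mean used in Sect. B.2 can be sketched as follows. The scores, seen-class mask, and γ values are illustrative; calibrated stacking itself (penalizing seen-class scores by γ before taking the argmax) is due to [10]:

```python
def harmonic_mean(acc_seen, acc_unseen):
    """Harmonic mean of seen- and unseen-class accuracies (GZSL)."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2.0 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

def calibrated_predict(scores, seen_mask, gamma):
    """Calibrated stacking: subtract gamma from seen-class scores,
    then predict the argmax over all (seen + unseen) classes."""
    adjusted = [s - gamma if seen else s
                for s, seen in zip(scores, seen_mask)]
    return max(range(len(scores)), key=lambda i: adjusted[i])

# With gamma = 0.2, the unseen class (index 1) overtakes the seen one.
pred_label = calibrated_predict([0.9, 0.8], [True, False], gamma=0.2)
```

Sweeping γ traces out the Seen-Unseen accuracy Curve whose area is the AUSUC.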
We include our results here in Tables 10, 11, and 12 to aid comparison with such work. As mentioned in Sect. 3.4.1, we also consider Flat hit@K (F@K) and Hierarchical precision@K (HP@K). F@K is defined as the percentage of test images for which the model returns the true label in its top K predictions. HP@K is defined as the percentage of overlap (i.e., precision) between the model's top K predictions and the ground-truth list. For each class, the ground-truth list of its K closest categories is generated based on the ImageNet hierarchy. Note that F@1 is the per-sample multi-way classification accuracy.

When computing Hierarchical precision@K (HP@K), we use the algorithm in the Appendix of [17] to compute the ground-truth list, a set of at least K classes that are considered to be correct. This set is called hCorrectSet, and it is computed for each K and class c. See Algorithm 1 for more details. The main idea is to expand the radius around the true class c until the set has at least K classes.

Algorithm 1 Algorithm for computing hCorrectSet for HP@K [17]
1: Input: K, class c, ImageNet hierarchy
2: hCorrectSet ← ∅
3: R ← 0
4: while NumberElements(hCorrectSet) < K do
5:   radiusSet ← all nodes in the hierarchy which are R hops from c
6:   validRadiusSet ← ValidLabelNodes(radiusSet)
7:   hCorrectSet ← hCorrectSet ∪ validRadiusSet
8:   R ← R + 1
9: end while
10: return hCorrectSet

Note that validRadiusSet depends on which classes are in the label space to be predicted (i.e., on whether we consider 2-hop, 3-hop, or All). We obtain the label sets for 2-hop and 3-hop from the authors of [17,50]. We implement Algorithm 1 to derive hCorrectSet ourselves.

Appendix D Analysis on SynC

In this section, we focus on SynC o-vs-o together with GoogLeNet features and the standard split (SS). We look at the effect of modifying the regularization term, learning base semantic representations, and varying the number of base classes and their correlations.
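Algorithm 1 from Appendix C translates directly into code. In the sketch below, the toy hierarchy (a `hops_from` callable and a valid-label set) is assumed for illustration, and we assume the hierarchy contains at least K valid labels so that the loop terminates:

```python
def h_correct_set(K, c, hops_from, valid_labels):
    """Expand the radius R around the true class c, collecting valid
    label nodes, until at least K classes are gathered (Algorithm 1).
    `hops_from(c, R)` must return the nodes exactly R hops from c."""
    correct, R = set(), 0
    while len(correct) < K:  # assumes >= K valid labels exist overall
        radius_set = hops_from(c, R)
        correct |= {n for n in radius_set if n in valid_labels}
        R += 1
    return correct

# Toy hierarchy: nodes grouped by hop distance from class "c".
hops = {0: {"c"}, 1: {"a", "b"}, 2: {"d"}}
result = h_correct_set(2, "c",
                       lambda c, R: hops.get(R, set()),
                       valid_labels={"c", "a", "d"})
```

Here "b" is excluded because it is not in the label space being predicted, mirroring the role of ValidLabelNodes.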
D.1 Different Forms of Regularization

In Eq. (6) and (9), ||w_c||_2^2 is the regularization term. Here we consider modifying that term to ||v_r||_2^2, regularizing the bases directly. Table 13 shows that ||v_r||_2^2 leads to better results. However, we find that learning with ||v_r||_2^2 converges much more slowly than with ||w_c||_2^2. Thus, we use ||w_c||_2^2 in our main experiments (though it puts our methods at a disadvantage).

D.2 Learning Phantom Classes' Semantic Representations

So far we have adopted the version of SynC that sets the number of base classifiers to be the number of seen classes S and sets b_r = a_c for r = c. Here we study whether we can optimally learn the semantic representations of the phantom classes that correspond to base classifiers. The results in Table 14 suggest that learning representations could have a positive effect.

Fig. 6 We vary the number of phantom classes R as a percentage of the number of seen classes S and investigate how much that will affect classification accuracy (the vertical axis corresponds to the ratio with respect to the accuracy when R = S).

D.3 How Many Base Classifiers Are Necessary?

In Fig. 6, we investigate how many base classifiers are needed; so far, we have set that number to be the number of seen classes out of convenience. The plot shows that, in fact, a smaller number (∼60%) is enough for our algorithm to reach the plateau of the performance curve. Moreover, increasing the number of base classifiers does not seem to have an overwhelming effect. Note that the semantic representations b_r of the phantom classes are set equal to a_r, ∀r ∈ {1, ..., R}, at 100% (i.e., R = S). For percentages smaller than 100%, we perform K-means and set b_r to be the cluster centroids after ℓ2 normalization (in this case, R = K). For percentages larger than 100%, we set the first S of the b_r to be a_r, and the remaining b_r to be random combinations of the a_r (also with ℓ2 normalization on b_r).
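The R < S construction above can be sketched with a plain Lloyd's-iteration K-means followed by ℓ2 normalization. This is a simplified stand-in for the actual implementation; the toy representations, the initialization scheme, and the fixed iteration count are assumptions:

```python
import numpy as np

def phantom_representations(seen_reps, K, iters=10, seed=0):
    """Pick K phantom semantic representations as L2-normalized K-means
    centroids of the seen classes' representations (the R < S case)."""
    rng = np.random.RandomState(seed)
    centers = seen_reps[rng.choice(len(seen_reps), K, replace=False)]
    for _ in range(iters):
        # Assign each seen-class representation to its nearest center.
        d2 = ((seen_reps[:, None] - centers[None]) ** 2).sum(-1)
        assign = np.argmin(d2, axis=1)
        for k in range(K):
            if np.any(assign == k):
                centers[k] = seen_reps[assign == k].mean(axis=0)
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)

# Two obvious clusters along the axes of a toy 2-d semantic space.
reps = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
b = phantom_representations(reps, K=2)
```

With these toy points, the normalized centroids recover the two axis directions regardless of the random initialization.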
We have shown that even by using fewer base (phantom) classifiers than the number of seen classes (e.g., around 60%), we get comparable or even better results, especially for CUB. We surmise that this is because CUB is a fine-grained recognition benchmark and has higher correlations among classes; we provide analysis in Fig. 7 to justify this. We train one-versus-other classifiers for each value of the regularization parameter on both AwA and CUB, and then perform PCA on the resulting classifier matrices. We then plot the required number (in percentage) of PCA components needed to capture 95% of the variance in the classifiers. Clearly, AwA requires more. This explains why we see the drop in accuracy for AwA but not CUB when using even fewer base classifiers. In particular, the low percentage for CUB in Fig. 7 implies that fewer base classifiers are possible. Given that CUB is a fine-grained recognition benchmark, this result is not surprising in retrospect, as the classes are highly correlated.

Table 10 Expanded results of Table 4. The metric is "per-sample" accuracy (italicized text) for F@K to aid comparison with previously published results. Comparison of ZSL methods on ImageNet using word vectors of the class names as semantic representations. For both types of metrics (in %), the higher the better. The best is in red. AlexNet is by [35]. The numbers of actual unseen classes are given in parentheses.

Table 11 Expanded results of the third section of Table 7. The metric is "per-sample" accuracy (italicized text) to aid comparison with previously published results. Comparison of ZSL methods using "hie", the WordNet-hierarchy embeddings by multidimensional scaling [42], as semantic representations. The higher, the better (in %). The best is in red.

Appendix E Analysis on EXEM

In this section, we provide more analysis on EXEM. We focus on GoogLeNet features and the standard split (SS). We provide both qualitative and quantitative measures of predicted exemplars.
We also investigate neural networks for exemplar prediction functions and the effect of PCA.

E.1 Quality of Predicted Exemplars

We first show that predicted visual exemplars better reflect visual similarities between classes than semantic representations do. Let D_au be the pairwise Euclidean distance matrix between unseen classes computed from semantic representations (i.e., U by U), D_ψ(au) the distance matrix computed from predicted exemplars, and D_vu the distance matrix computed from real exemplars (which we do not have access to). Table 15 shows that the Pearson correlation coefficient (footnote 12) between D_ψ(au) and D_vu is much higher than that between D_au and D_vu. Importantly, we improve this correlation without access to any data of the unseen classes.

Beyond the correlation used in Table 15, we provide further evidence that predicted exemplars better reflect visual similarities (as defined by real exemplars) than semantic representations do. Let %kNNoverlap(D) be the percentage of k-nearest neighbors (neighboring classes) under distances D that overlap with the k-nearest neighbors under real exemplar distances. In Table 16, we report %kNNoverlap(semantic representation distances) and %kNNoverlap(predicted exemplar distances).

Table 12 Expanded results of Table 7, with "per-sample" accuracy (italicized text) used to differentiate this accuracy from the "per-class" one. Except for the metric, the setting in Table 7 is still employed to compare different semantic representations on ImageNet. We consider (i) "wv-v1": word vectors of the class names trained for one epoch, used in Table 4; (ii) "wv-v2": word vectors of the class names trained for 10 epochs; (iii) "hie": the WordNet-hierarchy embeddings obtained using multidimensional scaling [42]; (iv) "wv-v1 + hie": the combination of (i) and (iii); (v) "wv-v2 + hie": the combination of (ii) and (iii). We use ResNet features in all cases. For each scenario, the best is in red and the second best in blue.
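The %kNNoverlap measure can be sketched directly from its definition; the toy distance matrices below are assumed, and we take each row's self-distance (the zero diagonal) to occupy rank 0 so it can be skipped:

```python
import numpy as np

def knn_overlap(D_a, D_b, k):
    """Average fraction of k-nearest neighbors (excluding self) on
    which two class-distance matrices agree, row by row."""
    overlaps = []
    for i in range(len(D_a)):
        nn_a = set(np.argsort(D_a[i])[1:k + 1])  # rank 0 is self
        nn_b = set(np.argsort(D_b[i])[1:k + 1])
        overlaps.append(len(nn_a & nn_b) / k)
    return float(np.mean(overlaps))

# Toy distances among three classes: D and a disagreeing alternative D2.
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
D2 = np.array([[0.0, 3.0, 1.0],
               [3.0, 0.0, 2.0],
               [1.0, 2.0, 0.0]])
```

Comparing D_au or D_ψ(au) against D_vu with this function yields the percentages reported in Table 16 (after multiplying by 100).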
We set k to be 40% of the number of unseen classes, but we note that the trends are consistent for different k's. Similar to the results in the main text, we observe clear improvement in all cases.

We then show some t-SNE [69] visualizations of the predicted visual exemplars of the unseen classes. Ideally, we would like them to be as close to their corresponding real images as possible. In Fig. 8, we demonstrate that this is indeed the case for many of the unseen classes. For those unseen classes (each denoted by a color), their real images (crosses) and our predicted visual exemplars (circles) are well aligned. The quality of the predicted exemplars (here based on the distance to the real images) depends on two main factors: the predictive capability of the semantic representations, and the number of semantic representation-visual exemplar pairs available for training, which in this case is equal to the number of seen classes S. On AwA, where we have only 40 training pairs, the predicted exemplars are surprisingly accurate, mostly either placed in their corresponding clusters or at least closer to their clusters than the predicted exemplars of the other unseen classes. Thus, we expect them to be useful for discriminating among the unseen classes. On ImageNet, the predicted exemplars are not as accurate as we would have hoped, but this is expected since the word vectors are purely learned from text. We also observe relatively well-separated clusters in the semantic embedding space (in our case, also the visual feature space, since we only apply PCA projections to the visual features), confirming our assumption about the existence of clustering structures. On CUB, we observe that these clusters are more mixed than on the other datasets. This is not surprising given that it is a fine-grained classification dataset of bird species.
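A simple quantitative counterpart to this visual quality check is the distance of each predicted exemplar to the mean of its class's real features; this is a sketch of one possible measure, and the toy features, labels, and predictions are made up:

```python
import numpy as np

def exemplar_error(predicted, features, labels):
    """Mean Euclidean distance between each class's predicted exemplar
    and the average of that class's real features (lower is better)."""
    errors = []
    for c, exemplar in predicted.items():
        class_mean = features[labels == c].mean(axis=0)
        errors.append(np.linalg.norm(exemplar - class_mean))
    return float(np.mean(errors))

features = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 0, 1])
predicted = {0: np.array([1.0, 0.0]),   # exactly the class-0 mean
             1: np.array([0.0, 1.0])}   # 3 units from the class-1 mean
err = exemplar_error(predicted, features, labels)
```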
E.2 Exemplar Prediction Function

We compare two approaches for predicting visual exemplars: kernel-based support vector regressors (SVR) and a 2-layer multi-layer perceptron (MLP) with ReLU nonlinearity. The MLP weights are ℓ2-regularized, and we cross-validate the regularization constant. Similar to [84], our multi-layer perceptron is of the form:

(1/S) Σ_{c=1}^{S} ||v_c − W_2 · ReLU(W_1 · a_c)||_2^2 + λ · R(W_1, W_2),    (18)

where R denotes the ℓ2 regularization, S is the number of seen classes, v_c is the visual exemplar of class c, a_c is the semantic representation of class c, and the weights W_1 and W_2 are the parameters to be optimized. Following [84], we randomly initialize the weights W_1 and W_2, and set the number of hidden units for AwA and CUB to 300 and 700, respectively. We use the Adam optimizer with a learning rate of 0.0001 and a minibatch size of S. We tune λ on the same splits of data as in the other experiments with class-wise CV (Sect. B). Our code is implemented in TensorFlow [1].

Table 17 shows that SVR performs more robustly than MLP. One explanation is that MLP is prone to overfitting due to the small training set size (the number of seen classes) as well as the model selection challenge imposed by ZSL scenarios. SVR also comes with other benefits: it is more efficient and less susceptible to initialization.

E.3 Effect of PCA

Table 18 investigates the effect of PCA. In general, EXEM (1NN) performs comparably with and without PCA. Moreover, we see that our approach is extremely robust, working reasonably over a wide range of (large enough) d on all datasets. Clearly, a smaller PCA dimension leads to faster computation due to fewer regressors to be trained.

Fig. 4 How calibrated stacking affects the harmonic mean of SynC struct on SUN.

Fig. 5 Data splitting for different cross-validation (CV) strategies: (a) the seen-unseen class splitting for (conventional) zero-shot learning, (b) the sample-wise CV, (c) the class-wise CV.

Fig. 7 Percentages of basis components required to capture 95% of variance in classifier matrices for AwA and CUB. The base classifiers are learned with SynC o-vs-o.

Fig. 8 t-SNE [69] visualization of randomly selected real images (crosses) and predicted visual exemplars (circles) for the unseen classes on (from left to right, then from top to bottom) AwA, CUB, SUN, and ImageNet. Different colors of symbols denote different unseen classes. Perfect predictions of visual features would result in well-aligned crosses and circles of the same color. Plots for CUB and SUN are based on their first splits of SS. Plots for ImageNet are based on randomly selected 48 unseen classes from 2-hop and word vectors as semantic representations. Best viewed in color.

We compare with existing ZSL approaches (cf. Table 3), including DAP [38], IAP [38], CMT [67], DeViSE [17], ConSE [50], ALE [2], SJE [3], LatEm [75], ESZSL [61], SSE [85], and SAE [34]. The results with learned representations are in Appendix D. Furthermore, we consider the following variants of EXEM (cf. Sect. 2.2.2):
- EXEM (ZSL method): A ZSL method with predicted exemplars as semantic representations, where ZSL method = ConSE [50], ESZSL [61], or the variants of SynC.
- EXEM (1NN): 1-nearest neighbor classifier with the Euclidean distance to the exemplars.
- EXEM (1NNs): 1-nearest neighbor classifier with the standardized Euclidean distance to the exemplars, where the standard deviation is obtained by averaging the intra-class standard deviations of all seen classes.

On CUB and SUN, SynC o-vs-o and SynC struct clearly outperform SynC cs.

ZSL (Per-class Accuracy A_U→U) | Generalized ZSL (AUSUC)
Approach | Reported by | AwA | AwA2 | CUB | SUN | ImageNet | Reported by | AwA | AwA2 | CUB | SUN
DAP [38] | [76] | 44.1 | 46.1 | 40.0 | 39.9 | - | us | 0.341 | 0.353 | 0.200 | 0.094
IAP [38] | [76] | 35.9 | 35.9 | 24.0 | 19.4 | - | us | 0.376 | 0.392 | 0.209 | 0.121
CMT [67] | [76] | 39.5 | 37.9 | 34.6 | 39.9 | 0.29 | - | - | - | - | -
DeViSE [17] | [76] | 54.2 | 59.7 | 52.0 | 56.5 | 0.49 | - | - | - | - | -
ConSE [50] | [76] | 45.6 | 44.5 | 34.3 | 38.8 | 0.95 | us | 0.350 | 0.344 | 0.214 | 0.170
ALE [2] | [76] | 59.9 | 62.5 | 54.9 | 58.1 | 0.50 | us | 0.504 | 0.538 | 0.338 | 0.193
SJE [3] | [76] | 65.6 | 61.9 | 53.9 | 53.7 | 0.52 | - | - | - | - | -
LatEm [75] | [76] | 55.1 | 55.8 | 49.3 | 55.3 | 0.50 | us | 0.506 | 0.514 | 0.276 | 0.171
SSE [85] | [76] | 60.1 | 61.0 | 43.9 | 51.5 | - | - | - | - | - | -
ESZSL [61] | [76] | 58.2 | 58.6 | 53.9 | 54.5 | 0.62 | us | 0.452 | 0.454 | 0.303 | 0.138
SAE [34] | [76] | 53.0 | 54.1 | 33.3 | 40.3 | 0.56 | - | - | - | - | -
GFZSL [71] | [76] | 68.3 | 63.8 | 49.3 | 60.6 | - | - | - | - | - | -
COSTA [44] | us | 49.0 | 53.2 | 44.6 | 43.0 | - | - | - | - | - | -
SynC o-vs-o | us | 57.0 | 52.6 | 54.6 | 55.7 | 0.98 | us | 0.454 | 0.438 | 0.353 | 0.220
SynC cs | us | 58.4 | 53.7 | 51.5 | 47.4 | - | us | 0.477 | 0.463 | 0.359 | 0.189
SynC struct | us | 60.4 | 59.7 | 53.4 | 55.9 | 0.99 | us | 0.505 | 0.504 | 0.337 | 0.241
EXEM (ConSE) | us | 57.6 | 57.9 | 44.5 | 51.5 | - | us | 0.439 | 0.425 | 0.266 | 0.189
EXEM (ESZSL) | us | 65.2 | 63.6 | 56.9 | 57.1 | - | us | 0.522 | 0.538 | 0.346 | 0.191
EXEM (SynC o-vs-o) | us | 60.0 | 56.1 | 56.9 | 57.4 | 1.25 | us | 0.481 | 0.474 | 0.361 | 0.221
EXEM (SynC cs) | us | 60.5 | 57.9 | 54.2 | 51.1 | - | us | 0.497 | 0.481 | 0.360 | 0.205
EXEM (SynC struct) | us | 65.5 | 64.8 | 60.5 | 60.1 | 1.29 | us | 0.533 | 0.552 | 0.397 | 0.251
EXEM (1NN) | us | 68.5 | 66.7 | 54.2 | 63.0 | 1.26 | us | 0.565 | 0.565 | 0.298 | 0.253
EXEM (1NNs) | us | 68.1 | 64.6 | 58.0 | 62.9 | 1.29 | us | 0.575 | 0.559 | 0.366 | 0.251
For each scenario, the best is in red and the second best in blue.Reported by AwA AwA2 CUB SUN Features G R G R R G R G R Approach/Splits - SS SS NS SS NS SS SS0 NS SS SS0 NS DAP [38] [8] [76] 60.5 57.1 44.1 58.7 46.1 39.1 37.5 40.0 44.5 38.9 39.9 IAP [38] [8] [76] 57.2 48.1 35.9 46.9 35.9 36.7 27.1 24.0 40.8 17.4 19.4 HAT [4] [4] - 74.9 - - - - - - - - - - CMT [67] - [76] - 58.9 39.5 66.3 37.9 - 37.3 34.6 - 41.9 39.9 DeViSE [17] - [76] - 72.9 54.2 68.6 59.7 - 53.2 52.0 - 57.5 56.5 ConSE [50] [8] [76] 63.3 63.6 45.6 67.9 44.5 36.2 36.7 34.3 51.9 44.2 38.8 ALE [2] us [76] 74.8 78.6 59.9 80.3 62.5 53.8 53.2 54.9 66.7 59.1 58.1 SJE [3] [8] [76] 66.3 76.7 65.6 69.5 61.9 46.5 55.3 53.9 56.1 57.1 53.7 LatEm [75] [9] [76] 72.1 74.8 55.1 68.7 55.8 48.0 49.4 49.3 64.5 56.9 55.3 SSE [85] - [76] - 68.8 60.1 67.5 61.0 - 43.7 43.9 - 54.5 51.5 ESZSL [61] us [76] 73.2 74.7 58.2 75.6 58.6 54.7 55.1 53.9 58.7 57.3 54.5 SAE [34] - [76] - 80.6 53.0 80.7 54.1 - 33.4 33.3 - 42.4 40.3 GFZSL [71] - [76] - 80.5 68.3 79.3 63.8 - 53.0 49.3 - 62.9 60.6 BiDiLEL [73] [73] - 72.4 - - - - - - - - - - COSTA [44] [8] us 61.8 70.1 49.0 63.0 53.2 40.8 42.1 44.6 47.9 46.7 43.0 SynC o-vs-o [8] us 69.7 75.2 57.0 71.0 52.6 53.4 53.5 54.6 62.8 59.4 55.7 SynC cs [8] us 72.1 77.9 58.4 66.7 53.7 51.6 49.6 51.5 53.3 54.7 47.4 SynC struct [8] us 72.9 78.4 60.4 75.4 59.7 54.5 53.5 53.4 62.7 59.1 55.9 EXEM (ConSE) [9] us 70.5 74.6 57.6 76.6 57.9 46.2 47.4 44.5 60.0 55.6 51.5 EXEM (ESZSL) us us 78.1 80.9 65.2 80.4 63.6 57.5 59.3 56.9 63.4 58.2 57.1 EXEM (SynC o-vs-o ) [9] us 73.8 77.7 60.0 77.1 56.1 56.2 58.3 56.9 66.5 60.9 57.4 EXEM (SynC cs ) [9] us 75.0 79.5 60.5 75.3 57.9 56.1 56.2 54.2 58.4 57.2 51.1 EXEM (SynC struct ) [9] us 77.2 82.4 65.5 80.2 64.8 59.8 60.1 60.5 66.1 62.2 60.1 EXEM (1NN) [9] us 76.2 80.9 68.5 78.1 66.7 56.3 57.1 54.2 69.6 64.2 63.0 EXEM (1NNs) [9] us 76.5 77.8 68.1 81.4 64.6 58.5 59.7 58.0 67.3 62.7 62.9 Table 6 6Comparison between different metrics on the task of GZSL on small 
datasets. all possible calibrating factors are integrated over. A U →T and A S→T used to compute H are also included. Calibrated stacking was introduced in[10]. We focus on ResNet features (R) and the new split (NS) in all cases. For each scenario, the best is in red and the second best in blue.We consider (i) "H w/o calibration": Table 13Comparison between regularization with wc and vc on SynC o-vs-o .Table 14Effect of learning semantic representations.Table 15We compute the Euclidean distance matrix between the unseen classes based on semantic representations (Da u ), predicted exemplars(D ψ(au)), and real exemplars (Dv u ). Our method leads to D ψ(au) that is better correlated with Dv u than Da u is. See text for more details.Semantic Hierarchy Most populated Least populated All types 2-hop 3-hop 500 1K 5K 500 1K 5K SynC o-vs-o 12.42 3.35 16.23 11.18 3.55 5.68 4.05 1.42 1.64 SynC struct 11.47 3.31 14.78 10.38 3.50 4.69 3.58 1.31 1.67 EXEM (SynC o-vs-o ) wv-v1 14.01 4.13 19.10 13.42 4.58 6.42 4.73 1.75 2.03 EXEM (SynC struct ) 14.26 4.18 18.82 13.31 4.64 6.17 4.85 1.74 2.06 EXEM (1NN) 13.78 4.17 18.06 12.74 4.51 6.54 5.04 1.68 2.15 EXEM (1NNs) 14.90 4.36 18.94 13.22 4.65 6.30 4.88 1.72 2.18 SynC o-vs-o 15.37 4.30 18.84 13.06 4.46 5.56 4.36 1.93 2.06 SynC struct 14.76 4.25 17.59 12.58 4.33 6.54 4.39 1.78 2.09 EXEM (SynC o-vs-o ) wv-v2 16.76 4.87 21.43 14.89 5.23 6.42 5.53 2.38 2.37 EXEM (SynC struct ) 17.05 4.94 21.27 14.86 5.35 6.42 4.79 2.31 2.39 EXEM (1NN) 16.52 4.94 20.63 14.45 5.21 7.28 5.34 2.15 2.49 EXEM (1NNs) 17.44 5.08 21.12 14.89 5.35 6.79 5.56 2.34 2.50 SynC o-vs-o 23.25 5.29 15.46 11.66 4.86 9.88 6.24 2.32 2.37 SynC struct 22.86 5.20 14.67 11.27 4.78 9.14 5.90 2.29 2.33 EXEM (SynC o-vs-o ) hie 23.78 5.40 16.96 12.82 5.20 10.62 6.98 2.74 2.42 EXEM (SynC struct ) 24.46 5.54 17.10 13.10 5.35 10.37 6.98 2.79 2.48 EXEM (1NN) 23.36 5.25 16.67 12.50 5.06 10.12 6.39 2.49 2.35 EXEM (1NNs) 24.17 5.41 16.91 12.69 5.25 9.51 6.64 2.69 2.42 SynC o-vs-o 25.34 6.21 
20.02 14.50 5.57 8.89 6.18 2.83 2.86 SynC struct wv-v1 25.69 6.14 18.18 13.51 5.37 9.38 6.43 2.70 2.77 EXEM (SynC o-vs-o ) + 25.39 6.32 23.16 17.01 6.43 11.85 7.85 3.29 2.97 EXEM (SynC struct ) hie 26.37 6.53 23.64 17.27 6.69 11.11 7.66 3.27 3.04 EXEM (1NN) 25.46 6.52 22.36 16.38 6.14 10.49 7.29 3.02 3.05 EXEM (1NNs) 27.44 6.94 23.55 17.13 6.60 10.74 7.82 3.36 3.24 SynC o-vs-o 25.70 6.28 20.69 15.07 5.55 11.36 6.98 2.92 2.84 SynC struct wv-v2 25.00 6.04 18.22 13.37 5.23 9.38 6.15 2.62 2.73 EXEM (SynC o-vs-o ) + 25.74 6.41 24.32 17.82 6.78 12.22 7.97 3.51 3.01 EXEM (SynC struct ) hie 26.65 6.70 24.50 18.02 6.97 12.96 7.88 3.42 3.12 EXEM (1NN) 26.42 6.92 23.82 17.39 6.59 13.21 7.35 3.24 3.26 EXEM (1NNs) 27.02 7.08 24.53 17.83 6.82 12.47 7.54 3.55 3.35 Datasets Visual features wc 2 2 vr 2 2 AwA GoogLeNet 69.7% 71.7% CUB GoogLeNet 53.4% 56.4% SUN GoogLeNet 62.8% 67.5% Datasets Visual features w/o learning w/ learning AwA GoogLeNet 69.7% 71.1% CUB GoogLeNet 53.4% 54.2% SUN GoogLeNet 62.8% 63.3% Dataset Correlation to Dv u name Semantic distances Predicted exemplar Da u distances D ψ(au) AwA 0.862 0.897 CUB 0.777 ± 0.021 0.904 ± 0.026 SUN 0.784 ± 0.022 0.893 ± 0.019 Table 16 16Overlap of k-nearest classes (in %) on AwA, CUB, SUN. We measure the overlap between those searched by real exemplars and those searched by semantic representations (i.e., attributes) or predicted exemplars. We set k to be 40 % of the number of unseen classes. See text for more details.Distances for kNN using AwA CUB SUN (k=4) (k=20) (k=29) Semantic representations 57.5 68.9 75.2 Predicted exemplars 67.5 80.0 82.1 Table 17 17Comparison between EXEM (1NN) with support vector regressors (SVR) and with 2-layer multi-layer perceptron (MLP) for predicting visual exemplars. Results on CUB are for the first split of SS. 
Each number for MLP is an average over 3 random initialization.Dataset Exemplar No PCA PCA PCA name predicted by d=1024 d=1024 d=500 AwA SVR 77.8 76.2 76.2 MLP 76.1±0.5 76.4±0.1 75.5±1.7 CUB SVR 57.1 59.4 59.4 MLP 53.8±0.3 54.2±0.3 53.8±0.5 Table 18 18Accuracy of EXEM (1NN) on AwA, CUB, and SUN when predicted exemplars are from original visual features (No PCA) and PCA-projected features (PCA with d = 1024, 500, 200, 100, 50, 10).Dataset No PCA PCA PCA PCA PCA PCA PCA name d=1024 d=1024 d=500 d=200 d=100 d=50 d=10 AwA 77.8 76.2 76.2 76.0 75.8 76.5 73.4 CUB 55.1 56.3 56.3 58.2 54.7 54.1 38.4 SUN 69.2 69.6 69.6 69.6 69.3 68.3 55.3 In the context of deep neural networks for classification, one can think of wc as the vector corresponding to class c in the last fully-connected layer and x as the input to that layer. http://dumps.wikimedia.org/enwiki/latest/ enwiki-latest-pages-articles.xml.bz2 on September 1, 2015 http://www.image-net.org/api/xml/structure released. xml [74,30] extracted word vectors of class names by averaging the vectors of words in the synset name, enabling all 20,842 unseen classes to have word vectors. The number of 2-hop, 3-hop, and All classes are thus 1,589, 7,860, and 20,842, respectively. For interested readers, if we set the number of attributes as the number of phantom classes (each br is the one-hot representation of an attribute), and use the Gaussian kernel with an isotropically diagonal covariance matrix in Eq. (3) with properly set bandwidths (either very small or very large) for each attribute, we will recover the formulation in[2,3] when the bandwidths tend to zero or infinity. https://code.google.com/p/word2vec/ For GoogLeNet features, we follow[9] to set λ = 1 and d = 500 for all experiments. For CV-distance, we set d = 500 for all experiments. This is because the smaller d is, the smaller the distance is. We treat rows of each distance matrix as data points and compute the Pearson correlation coefficients between matrices. 
Appendix A Details on How to Obtain Word Tensorflow: Large-scale machine learning on heterogeneous distributed systems. M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, G S Corrado, A Davis, J Dean, M Devin, S Ghemawat, I J Goodfellow, A Harp, G Irving, M Isard, Y Jia, R Józefowicz, L Kaiser, M Kudlur, J Levenberg, D Mané, R Monga, S Moore, D G Murray, C Olah, M Schuster, J Shlens, B Steiner, I Sutskever, K Talwar, P A Tucker, V Vanhoucke, V Vasudevan, F B Viégas, O Vinyals, P Warden, M Wattenberg, M Wicke, Y Yu, X Zheng, OSDI. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Is- ard, Y. Jia, R. Józefowicz, L. Kaiser, M. Kudlur, J. Leven- berg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. In OSDI, 2016. Label-embedding for attribute-based classification. Z Akata, F Perronnin, Z Harchaoui, C Schmid, CVPR. Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-embedding for attribute-based classification. In CVPR, 2013. Evaluation of output embeddings for fine-grained image classification. Z Akata, S Reed, D Walter, H Lee, B Schiele, CVPR. Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele. Eval- uation of output embeddings for fine-grained image classi- fication. In CVPR, 2015. How to transfer? zeroshot object recognition via hierarchical transfer of semantic attributes. Z Al-Halah, R Stiefelhagen, WACV. Z. Al-Halah and R. Stiefelhagen. How to transfer? zero- shot object recognition via hierarchical transfer of semantic attributes. In WACV, 2015. Convex multitask feature learning. A Argyriou, T Evgeniou, M Pontil, Machine Learning. 73A. Argyriou, T. Evgeniou, and M. Pontil. 
Convex multi- task feature learning. Machine Learning, 73:243-272, 2008. Laplacian eigenmaps for dimensionality reduction and data representation. M Belkin, P Niyogi, Neural computation. 156M. Belkin and P. Niyogi. Laplacian eigenmaps for dimen- sionality reduction and data representation. Neural com- putation, 15(6):1373-1396, 2003. Zero-shot classification by generating artificial visual features. M Bucher, S Herbin, F Jurie, In RFIAP. M. Bucher, S. Herbin, and F. Jurie. Zero-shot classification by generating artificial visual features. In RFIAP, 2018. Synthesized classifiers for zero-shot learning. S Changpinyo, W.-L Chao, B Gong, F Sha, CVPR. S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha. Synthe- sized classifiers for zero-shot learning. In CVPR, 2016. Predicting visual exemplars of unseen classes for zero-shot learning. S Changpinyo, W.-L Chao, F Sha, ICCV. S. Changpinyo, W.-L. Chao, and F. Sha. Predicting visual exemplars of unseen classes for zero-shot learning. In ICCV, 2017. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. W.-L Chao, S Changpinyo, B Gong, F Sha, ECCV. W.-L. Chao, S. Changpinyo, B. Gong, and F. Sha. An em- pirical study and analysis of generalized zero-shot learning for object recognition in the wild. In ECCV, 2016. Inferring analogous attributes. C.-Y Chen, K Grauman, CVPR. C.-Y. Chen and K. Grauman. Inferring analogous at- tributes. In CVPR, 2014. On the algorithmic implementation of multiclass kernel-based vector machines. K Crammer, Y Singer, JMLR. 2K. Crammer and Y. Singer. On the algorithmic implemen- tation of multiclass kernel-based vector machines. JMLR, 2:265-292, 2002. Imagenet: A large-scale hierarchical image database. J Deng, W Dong, R Socher, L.-J Li, K Li, L Fei-Fei, CVPR. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. Discovering localized attributes for fine-grained recognition. 
K Duan, D Parikh, D Crandall, K Grauman, CVPR. K. Duan, D. Parikh, D. Crandall, and K. Grauman. Dis- covering localized attributes for fine-grained recognition. In CVPR, 2012. Write a classifier: Zero-shot learning using purely textual descriptions. M Elhoseiny, B Saleh, A Elgammal, ICCV. M. Elhoseiny, B. Saleh, and A. Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In ICCV, 2013. Describing objects by their attributes. A Farhadi, I Endres, D Hoiem, D Forsyth, CVPR. A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describ- ing objects by their attributes. In CVPR, 2009. Devise: A deep visualsemantic embedding model. A Frome, G S Corrado, J Shlens, S Bengio, J Dean, M A Ranzato, T Mikolov, NIPS. A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov. Devise: A deep visual- semantic embedding model. In NIPS, 2013. Transductive multi-view embedding for zero-shot recognition and annotation. Y Fu, T M Hospedales, T Xiang, Z Fu, S Gong, ECCV. Y. Fu, T. M. Hospedales, T. Xiang, Z. Fu, and S. Gong. Transductive multi-view embedding for zero-shot recogni- tion and annotation. In ECCV, 2014. Transductive multi-view zero-shot learning. Y Fu, T M Hospedales, T Xiang, S Gong, TPAMIY. Fu, T. M. Hospedales, T. Xiang, and S. Gong. Trans- ductive multi-view zero-shot learning. TPAMI, 2015. Recent advances in zero-shot recognition: Toward dataefficient understanding of visual content. Y Fu, T Xiang, Y.-G Jiang, X Xue, L Sigal, S Gong, IEEE Signal Processing Magazine. 35Y. Fu, T. Xiang, Y.-G. Jiang, X. Xue, L. Sigal, and S. Gong. Recent advances in zero-shot recognition: Toward data- efficient understanding of visual content. IEEE Signal Pro- cessing Magazine, 35:112-125, 2018. Exploring semantic interclass relationships (sir) for zero-shot action recognition. C Gan, M Lin, Y Yang, Y Zhuang, A G Hauptmann, AAAI. C. Gan, M. Lin, Y. Yang, Y. Zhuang, and A. G. Haupt- mann. 
Exploring semantic interclass relationships (sir) for zero-shot action recognition. In AAAI, 2015. Learning attributes equals multi-source domain generalization. C Gan, T Yang, B Gong, CVPR. C. Gan, T. Yang, and B. Gong. Learning attributes equals multi-source domain generalization. In CVPR, 2016. An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. S Garcia, F Herrera, JMLR. 9S. Garcia and F. Herrera. An extension on "statistical com- parisons of classifiers over multiple data sets" for all pair- wise comparisons. JMLR, 9(Dec):2677-2694, 2008. Active transfer learning with zero-shot priors: Reusing past datasets for future tasks. E Gavves, T Mensink, T Tommasi, C G Snoek, T Tuytelaars, ICCV. E. Gavves, T. Mensink, T. Tommasi, C. G. Snoek, and T. Tuytelaars. Active transfer learning with zero-shot pri- ors: Reusing past datasets for future tasks. In ICCV, 2015. Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, CVPR. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. Stochastic neighbor embedding. G E Hinton, S T Roweis, NIPS. G. E. Hinton and S. T. Roweis. Stochastic neighbor em- bedding. In NIPS, 2002. Zero-shot recognition with unreliable attributes. D Jayaraman, K Grauman, NIPS. D. Jayaraman and K. Grauman. Zero-shot recognition with unreliable attributes. In NIPS, 2014. Decorrelating semantic visual attributes by resisting the urge to share. D Jayaraman, F Sha, K Grauman, CVPR. D. Jayaraman, F. Sha, and K. Grauman. Decorrelating semantic visual attributes by resisting the urge to share. In CVPR, 2014. Caffe: Convolutional architecture for fast feature embedding. Y Jia, E Shelhamer, J Donahue, S Karayev, J Long, R Girshick, S Guadarrama, T Darrell, ACM Multimedia. Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convo- lutional architecture for fast feature embedding. 
In ACM Multimedia, 2014. Rethinking knowledge graph propagation for zero-shot learning. M Kampffmeyer, Y Chen, X Liang, H Wang, Y Zhang, E P Xing, arXiv:1805.11724arXiv preprintM. Kampffmeyer, Y. Chen, X. Liang, H. Wang, Y. Zhang, and E. P. Xing. Rethinking knowledge graph propagation for zero-shot learning. arXiv preprint arXiv:1805.11724, 2018. Gaze embeddings for zero-shot image classification. N Karessli, Z Akata, A Bulling, B Schiele, CVPR. N. Karessli, Z. Akata, A. Bulling, and B. Schiele. Gaze embeddings for zero-shot image classification. In CVPR, 2017. Semi-supervised classification with graph convolutional networks. T N Kipf, M Welling, ICLR. T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017. Unsupervised domain adaptation for zero-shot learning. E Kodirov, T Xiang, Z Fu, S Gong, ICCV. E. Kodirov, T. Xiang, Z. Fu, and S. Gong. Unsupervised domain adaptation for zero-shot learning. In ICCV, 2015. Semantic autoencoder for zero-shot learning. E Kodirov, T Xiang, S Gong, CVPR. E. Kodirov, T. Xiang, and S. Gong. Semantic autoencoder for zero-shot learning. In CVPR, 2017. Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, NIPS. A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. Generalized zero-shot learning via synthesized examples. V Kumar Verma, G Arora, A Mishra, P Rai, CVPR. V. Kumar Verma, G. Arora, A. Mishra, and P. Rai. Gen- eralized zero-shot learning via synthesized examples. In CVPR, 2018. Learning to detect unseen object classes by between-class attribute transfer. C H Lampert, H Nickisch, S Harmeling, CVPR. C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009. Attributebased classification for zero-shot visual object categorization. C H Lampert, H Nickisch, S Harmeling, TPAMI36C. H. 
Lampert, H. Nickisch, and S. Harmeling. Attribute- based classification for zero-shot visual object categoriza- tion. TPAMI, 36(3):453-465, 2014. Predicting deep zero-shot convolutional neural networks using textual descriptions. J Ba, K Swersky, S Fidler, R Salakhutdinov, ICCV. J. Lei Ba, K. Swersky, S. Fidler, and R. Salakhutdinov. Pre- dicting deep zero-shot convolutional neural networks using textual descriptions. In ICCV, 2015. Semi-supervised zeroshot classification with label representation learning. X Li, Y Guo, D Schuurmans, ICCV. X. Li, Y. Guo, and D. Schuurmans. Semi-supervised zero- shot classification with label representation learning. In ICCV, 2015. From zero-shot learning to conventional supervised classification: Unseen visual data synthesis. Y Long, L Liu, L Shao, F Shen, G Ding, J Han, CVPR. Y. Long, L. Liu, L. Shao, F. Shen, G. Ding, and J. Han. From zero-shot learning to conventional supervised classi- fication: Unseen visual data synthesis. In CVPR, 2017. Unsupervised learning of neural network outputs. Y Lu, IJCAI. Y. Lu. Unsupervised learning of neural network outputs. In IJCAI, 2016. Generating images from captions with attention. E Mansimov, E Parisotto, J L Ba, R Salakhutdinov, ICLR. E. Mansimov, E. Parisotto, J. L. Ba, and R. Salakhutdinov. Generating images from captions with attention. In ICLR, 2016. COSTA: Cooccurrence statistics for zero-shot classification. T Mensink, E Gavves, C G Snoek, CVPR. T. Mensink, E. Gavves, and C. G. Snoek. COSTA: Co- occurrence statistics for zero-shot classification. In CVPR, 2014. Distance-based image classification: Generalizing to new classes at near-zero cost. T Mensink, J Verbeek, F Perronnin, G Csurka, TPAMI. 3511T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. TPAMI, 35(11):2624-2637, 2013. Efficient estimation of word representations in vector space. T Mikolov, K Chen, G S Corrado, J Dean, ICLR Workshops. T. 
Mikolov, K. Chen, G. S. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In ICLR Workshops, 2013. Distributed representations of words and phrases and their compositionality. T Mikolov, I Sutskever, K Chen, G S Corrado, J Dean, NIPS. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013. Wordnet: a lexical database for english. G A Miller, Communications of the ACM. 38G. A. Miller. Wordnet: a lexical database for english. Com- munications of the ACM, 38(11):39-41, 1995. Semantically consistent regularization for zero-shot recognition. P Morgado, N Vasconcelos, CVPR. P. Morgado and N. Vasconcelos. Semantically consistent regularization for zero-shot recognition. In CVPR, 2017. Zero-shot learning by convex combination of semantic embeddings. M Norouzi, T Mikolov, S Bengio, Y Singer, J Shlens, A Frome, G S Corrado, J Dean, ICLR Workshops. M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. In ICLR Workshops, 2014. Zero-shot learning with semantic output codes. M Palatucci, D Pomerleau, G E Hinton, T M Mitchell, NIPS. M. Palatucci, D. Pomerleau, G. E. Hinton, and T. M. Mitchell. Zero-shot learning with semantic output codes. In NIPS, 2009. Interactively building a discriminative vocabulary of nameable attributes. D Parikh, K Grauman, CVPR. D. Parikh and K. Grauman. Interactively building a dis- criminative vocabulary of nameable attributes. In CVPR, 2011. The SUN Attribute Database: Beyond categories for deeper scene understanding. G Patterson, C Xu, H Su, J Hays, IJCV. 1081-2G. Patterson, C. Xu, H. Su, and J. Hays. The SUN At- tribute Database: Beyond categories for deeper scene un- derstanding. IJCV, 108(1-2):59-81, 2014. Glove: Global vectors for word representation. J Pennington, R Socher, C Manning, EMNLP. J. Pennington, R. 
Socher, and C. Manning. Glove: Global vectors for word representation. In EMNLP, 2014. iCaRL: Incremental classifier and representation learning. S.-A Rebuffi, A Kolesnikov, G Sperl, C H Lampert, CVPR. S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In CVPR, 2017. Learning deep representations of fine-grained visual descriptions. S Reed, Z Akata, H Lee, B Schiele, CVPR. S. Reed, Z. Akata, H. Lee, and B. Schiele. Learning deep representations of fine-grained visual descriptions. In CVPR, 2016. Generative adversarial text to image synthesis. S Reed, Z Akata, X Yan, L Logeswaran, B Schiele, H Lee, ICML. S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016. Incremental learning of random forests for large-scale image classification. M Ristin, M Guillaumin, J Gall, L Van Gool, TPAMI. 383M. Ristin, M. Guillaumin, J. Gall, and L. Van Gool. In- cremental learning of random forests for large-scale image classification. TPAMI, 38(3):490-503, 2016. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. M Rohrbach, M Stark, B Schiele, CVPR. M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowl- edge transfer and zero-shot learning in a large-scale setting. In CVPR, 2011. . * Soravit Changpinyo, Soravit Changpinyo* et al. What helps where-and why? semantic relatedness for knowledge transfer. M Rohrbach, M Stark, G Szarvas, I Gurevych, B Schiele, CVPR. M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, and B. Schiele. What helps where-and why? semantic relat- edness for knowledge transfer. In CVPR, 2010. An embarrassingly simple approach to zero-shot learning. B Romera-Paredes, P H S Torr, ICML. B. Romera-Paredes and P. H. S. Torr. An embarrassingly simple approach to zero-shot learning. In ICML, 2015. . 
O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, A C Berg, L Fei-Fei, ImageNet large scale visual recognition challenge. IJCVO. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015. Learning to share visual appearance for multiclass object detection. R Salakhutdinov, A Torralba, J Tenenbaum, CVPR. R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learn- ing to share visual appearance for multiclass object detec- tion. In CVPR, 2011. Learning with kernels: support vector machines, regularization, optimization, and beyond. B Schölkopf, A J Smola, MIT pressB. Schölkopf and A. J. Smola. Learning with kernels: sup- port vector machines, regularization, optimization, and be- yond. MIT press, 2002. New support vector algorithms. B Schölkopf, A J Smola, R C Williamson, P L Bartlett, Neural computation. 125B. Schölkopf, A. J. Smola, R. C. Williamson, and P. L. Bartlett. New support vector algorithms. Neural computa- tion, 12(5):1207-1245, 2000. Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, ICLR. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. Zeroshot learning through cross-modal transfer. R Socher, M Ganjoo, C D Manning, A Y Ng, NIPS. R. Socher, M. Ganjoo, C. D. Manning, and A. Y. Ng. Zero- shot learning through cross-modal transfer. In NIPS, 2013. Going deeper with convolutions. C Szegedy, W Liu, Y Jia, P Sermanet, S Reed, D Anguelov, D Erhan, V Vanhoucke, A Rabinovich, CVPR. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. Visualizing data using t-sne. L Van Der Maaten, G Hinton, JMLR. 985L. Van der Maaten and G. Hinton. Visualizing data using t-sne. 
JMLR, 9(2579-2605):85, 2008. The devil is in the tails: Fine-grained classification in the wild. G Van Horn, P Perona, arXiv:1709.01450arXiv preprintG. Van Horn and P. Perona. The devil is in the tails: Fine-grained classification in the wild. arXiv preprint arXiv:1709.01450, 2017. A simple exponential family framework for zero-shot learning. V K Verma, P Rai, ECML/PKDD. V. K. Verma and P. Rai. A simple exponential family framework for zero-shot learning. In ECML/PKDD, 2017. C Wah, S Branson, P Welinder, P Perona, S Belongie, CNS-TR-2011-001The Caltech-UCSD Birds-200-2011 Dataset. California Institute of TechnologyTechnical ReportC. Wah, S. Branson, P. Welinder, P. Perona, and S. Be- longie. The Caltech-UCSD Birds-200-2011 Dataset. Tech- nical Report CNS-TR-2011-001, California Institute of Technology, 2011. Zero-shot visual recognition via bidirectional latent embedding. Q Wang, K Chen, IJCV. 124Q. Wang and K. Chen. Zero-shot visual recognition via bidirectional latent embedding. IJCV, 124:356-383, 2017. Zero-shot recognition via semantic embeddings and knowledge graphs. X Wang, Y Ye, A Gupta, CVPR. X. Wang, Y. Ye, and A. Gupta. Zero-shot recognition via semantic embeddings and knowledge graphs. In CVPR, 2018. Latent embeddings for zero-shot classification. Y Xian, Z Akata, G Sharma, Q Nguyen, M Hein, B Schiele, CVPR. Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele. Latent embeddings for zero-shot classification. In CVPR, 2016. Zeroshot learning -a comprehensive evaluation of the Good, the Bad and the Ugly. Y Xian, C H Lampert, B Schiele, Z Akata, TPAMIY. Xian, C. H. Lampert, B. Schiele, and Z. Akata. Zero- shot learning -a comprehensive evaluation of the Good, the Bad and the Ugly. TPAMI, 2018. Feature generating networks for zero-shot learning. Y Xian, T Lorenz, B Schiele, Z Akata, CVPR. Y. Xian, T. Lorenz, B. Schiele, and Z. Akata. Feature generating networks for zero-shot learning. In CVPR, 2018. 
Zero-shot learning -the Good, the Bad and the Ugly. Y Xian, B Schiele, Z Akata, CVPR. Y. Xian, B. Schiele, and Z. Akata. Zero-shot learning -the Good, the Bad and the Ugly. In CVPR, 2017. SUN Database: Large-scale scene recognition from abbey to zoo. J Xiao, J Hays, K Ehinger, A Oliva, A Torralba, CVPR. J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN Database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. Semantic embedding space for zero-shot action recognition. X Xu, T Hospedales, S Gong, ICIP. X. Xu, T. Hospedales, and S. Gong. Semantic embedding space for zero-shot action recognition. In ICIP, 2015. Attribute2Image: Conditional image generation from visual attributes. X Yan, J Yang, K Sohn, H Lee, ECCV. X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2Image: Conditional image generation from visual attributes. In ECCV, 2016. A unified perspective on multi-domain and multi-task learning. Y Yang, T M Hospedales, ICLR. Y. Yang and T. M. Hospedales. A unified perspective on multi-domain and multi-task learning. In ICLR, 2015. Designing category-level attributes for discriminative visual recognition. F X Yu, L Cao, R S Feris, J R Smith, S.-F Chang, CVPR. F. X. Yu, L. Cao, R. S. Feris, J. R. Smith, and S.-F. Chang. Designing category-level attributes for discriminative visual recognition. In CVPR, 2013. Learning a deep embedding model for zero-shot learning. L Zhang, T Xiang, S Gong, CVPR. L. Zhang, T. Xiang, and S. Gong. Learning a deep embed- ding model for zero-shot learning. In CVPR, 2017. Zero-shot learning via semantic similarity embedding. Z Zhang, V Saligrama, ICCV. Z. Zhang and V. Saligrama. Zero-shot learning via semantic similarity embedding. In ICCV, 2015. Zero-shot learning via joint latent similarity embedding. Z Zhang, V Saligrama, CVPR. Z. Zhang and V. Saligrama. Zero-shot learning via joint latent similarity embedding. In CVPR, 2016. Places: A 10 million image database for scene recognition. 
B Zhou, A Lapedriza, A Khosla, A Oliva, A Torralba, TPAMI. 40B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Tor- ralba. Places: A 10 million image database for scene recog- nition. TPAMI, 40:1452-1464, 2018. Learning deep features for scene recognition using places database. B Zhou, A Lapedriza, J Xiao, A Torralba, A Oliva, NIPS. B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014. Capturing long-tail distributions of object subcategories. X Zhu, D Anguelov, D Ramanan, CVPR. X. Zhu, D. Anguelov, and D. Ramanan. Capturing long-tail distributions of object subcategories. In CVPR, 2014. A generative adversarial approach for zero-shot learning from noisy texts. Y Zhu, M Elhoseiny, B Liu, X Peng, A Elgammal, CVPR. Y. Zhu, M. Elhoseiny, B. Liu, X. Peng, and A. Elgammal. A generative adversarial approach for zero-shot learning from noisy texts. In CVPR, 2018.
[]
[ "A determinant representation for generalized ballot and Fuss-Catalan numbers", "A determinant representation for generalized ballot and Fuss-Catalan numbers" ]
[ "James J Y Zhao [email protected] \nDongling School of Economics and Management\nCenter for Applied Mathematics\nUniversity of Science and Technology\n100083Beijing, BeijingP.R. China\n\nTianjin University\n300072TianjinP.R. China\n" ]
[ "Dongling School of Economics and Management\nCenter for Applied Mathematics\nUniversity of Science and Technology\n100083Beijing, BeijingP.R. China", "Tianjin University\n300072TianjinP.R. China" ]
[]
In this note we introduce a determinant and then give its evaluating formula. The determinant turns out to be a generalization of the well-known ballot and Fuss-Catalan numbers, which is believed to be new. The evaluating formula is proved by showing that the determinant coincides with the number of lattice paths with (1, 0), (0, 1)-steps in the plane that stay below a boundary line of rational slope.
null
[ "https://arxiv.org/pdf/1312.3164v1.pdf" ]
119,628,868
1312.3164
80051c970d8489fc7972c034e58b44bcc8e449c9
A determinant representation for generalized ballot and Fuss-Catalan numbers

James J. Y. Zhao ([email protected])
Dongling School of Economics and Management, University of Science and Technology Beijing, Beijing 100083, P.R. China; Center for Applied Mathematics, Tianjin University, Tianjin 300072, P.R. China

December 11, 2013. arXiv:1312.3164v1 [math.CO]

Mathematics Subject Classification: 05A15, 05A10, 15A15, 15B36. Keywords: determinant representation, lattice path, ballot number, Fuss-Catalan number.

Abstract. In this note we introduce a determinant and then give its evaluating formula. The determinant turns out to be a generalization of the well-known ballot and Fuss-Catalan numbers, which is believed to be new. The evaluating formula is proved by showing that the determinant coincides with the number of lattice paths with (1, 0)-, (0, 1)-steps in the plane that stay below a boundary line of rational slope.

1 Introduction

For any given integers m, n, u and k satisfying u ≥ 0, k ≥ 2, n ≥ 2, and m ≥ max{u + 1, (k − 1)(n − 1)}, let D(m, n, u, k) be the determinant

  D(m, n, u, k) := \det(t_{ij})_{i,j=1,2,\ldots,n-1},  where  t_{ij} = \binom{m - \max\{u,\, (k-1)j\}}{1 - i + j}.   (1.1)

As we shall soon see, the determinant D(m, n, u, k) is a generalization of the renowned ballot and Fuss-Catalan numbers by the following proposition.

Proposition 1.1. For any given integers m, n, u and k satisfying u ≥ 0, k ≥ 2, n ≥ 2 and m ≥ max{u + 1, (k − 1)(n − 1)}, we have

  D(m, n, u, k) = \sum_{i=0}^{\lfloor u/k \rfloor} (-1)^i \, \frac{m - (k-1)(n-1)}{m + n - 1 - ki} \binom{m + n - 1 - ki}{n - 1 - i} \binom{u - (k-1)i}{i},   (1.2)

where ⌊x⌋ is the largest integer not greater than x.
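Proposition 1.1 lends itself to a direct numerical check. The sketch below is our own illustration, not part of the paper (the helper names `binom`, `det_int`, `D_det`, and `D_formula` are assumptions): it evaluates the determinant (1.1) exactly with fraction-free Bareiss elimination over the integers and compares it with the alternating sum (1.2).

```python
from fractions import Fraction
from math import comb

def binom(a, b):
    # binomial coefficient with the convention that C(a, b) = 0 when b < 0 or b > a
    return comb(a, b) if 0 <= b <= a else 0

def det_int(rows):
    # exact determinant of an integer matrix via fraction-free Bareiss elimination
    M = [row[:] for row in rows]
    n, sign, prev = len(rows), 1, 1
    for t in range(n - 1):
        if M[t][t] == 0:  # find a nonzero pivot, or conclude the matrix is singular
            for r in range(t + 1, n):
                if M[r][t]:
                    M[t], M[r], sign = M[r], M[t], -sign
                    break
            else:
                return 0
        for i in range(t + 1, n):
            for j in range(t + 1, n):
                M[i][j] = (M[i][j] * M[t][t] - M[i][t] * M[t][j]) // prev
        prev = M[t][t]
    return sign * M[-1][-1]

def D_det(m, n, u, k):
    # the determinant (1.1); the matrix entry is t_ij = C(m - max(u, (k-1)j), 1-i+j)
    return det_int([[binom(m - max(u, (k - 1) * j), 1 - i + j)
                     for j in range(1, n)] for i in range(1, n)])

def D_formula(m, n, u, k):
    # the right-hand side of (1.2), summed exactly over the rationals
    s = sum(Fraction((-1) ** i * (m - (k - 1) * (n - 1)), m + n - 1 - k * i)
            * binom(m + n - 1 - k * i, n - 1 - i)
            * binom(u - (k - 1) * i, i)
            for i in range(u // k + 1))
    return int(s)  # the sum is always an integer in the stated range
```

For instance, `D_det(3, 3, 0, 2)` and `D_formula(3, 3, 0, 2)` both evaluate to 2, the second Catalan number.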
The formula (1.2), the main result of this paper, is obtained by showing that the determinant D(m, n, u, k) satisfies the same recurrence and initial conditions as |L(u + 1, 1; m, n; k)|, the cardinality of L(u + 1, 1; m, n; k), where L(u + 1, 1; m, n; k) is the set of lattice paths from (u + 1, 1) to (m, n) with (1, 0), (0, 1)-steps in the plane that stay below the line y = (x − 1)/(k − 1) + 1. It should be mentioned that m and n are the variables concerned in the recurrence and initial conditions, whereas u and k are just parameters.

When u = 0, by (1.2) the determinant D(m + 1, n + 1, 0, k + 1) = \frac{m + 1 - kn}{m + 1}\binom{m + n}{n} is just the solution of the generalized ballot problem, which had been studied by Barbier [3] (cf. [10]). Also, D(m + 1, n + 1, 0, 2) = \frac{m - n + 1}{m + 1}\binom{m + n}{n}, called the ballot number, is the solution of the original ballot problem first introduced by Bertrand [4]. Furthermore, by (1.2) there follows D((k − 1)n + 1, n + 1, 0, k) = \frac{1}{(k-1)n+1}\binom{kn}{n}, which is known as the Fuss-Catalan number [8] (cf. [2] or [11, Eq. (7.67)]). It also enumerates (k − 1)-Dyck paths and k-ary trees. Specifically, when k = 2 the Fuss-Catalan number reduces to D(n + 1, n + 1, 0, 2) = \frac{1}{n+1}\binom{2n}{n}, which is the well-known Catalan number [15], and when k = 3 it becomes D(2n + 1, n + 1, 0, 3) = \frac{1}{2n+1}\binom{3n}{n}, which counts 2-Dyck paths and ternary trees [1].

The ballot number, as well as the Fuss-Catalan number, generalizes the Catalan number. These fascinating numbers have many interesting interpretations in algebra, combinatorics, and probability. One of the most famous examples is that the classical Chung-Feller Theorem [6] offers a graceful perspective for counting the Catalan number. More combinatorial and algebraic interpretations of the Catalan number were given by Stanley [15,16]. Various applications of the Catalan number were collected by Koshy [12].
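The specializations above can be cross-checked against a direct path count. Below is our own sketch (the names `paths_below`, `fuss_catalan`, and `ballot` are ours): a simple dynamic program counts monotone lattice paths from (0, 0) to (m, n) all of whose points stay weakly below y = x/(k − 1), which is the family counted by D after the unit shift used in Section 2.

```python
from math import comb

def paths_below(m, n, k):
    # number of paths from (0, 0) to (m, n) with unit east/north steps whose
    # points (x, y) all satisfy (k-1)*y <= x, i.e. stay weakly below y = x/(k-1)
    f = [[0] * (n + 1) for _ in range(m + 1)]
    f[0][0] = 1
    for x in range(m + 1):
        for y in range(n + 1):
            if (x, y) == (0, 0) or (k - 1) * y > x:
                continue
            f[x][y] = (f[x - 1][y] if x > 0 else 0) + (f[x][y - 1] if y > 0 else 0)
    return f[m][n]

def fuss_catalan(k, n):
    # 1/((k-1)n + 1) * C(kn, n); for k = 2 this is the Catalan number
    return comb(k * n, n) // ((k - 1) * n + 1)

def ballot(m, n):
    # (m - n + 1)/(m + 1) * C(m + n, n), the classical ballot number (for n <= m)
    return (m - n + 1) * comb(m + n, n) // (m + 1)
```

The checks `paths_below((k - 1) * n, n, k) == fuss_catalan(k, n)` and `paths_below(m, n, 2) == ballot(m, n)` pass for small parameters, matching D((k − 1)n + 1, n + 1, 0, k) and D(m + 1, n + 1, 0, 2) above.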
For recent inspiring works involving generalizations of the Catalan number and the ballot number, with many references, see Krattenthaler [13], Sagan and Savage [14], Gorsky and Mazin [9], He [7], and Bousquet-Mélou, Chapuy, and Préville-Ratelle [5]. Although numerous generalized ballot and Catalan numbers have been given, the determinant representation (1.1) is, to the best of our knowledge, new. The parameters u and k indeed make sense when calculating a certain ruin probability in the framework of the classical compound binomial risk model, where the initial capital of an insurer is denoted by u and each claim amount is assumed to be k; this shall be discussed in a forthcoming paper [18]. In particular, when k = 2, it reduces to the classical gambler's ruin problem.

In this note, we first show a formula for |L(u + 1, 1; m, n; k)| in Section 2. In Section 3 we give the recurrences and initial conditions for |L(u + 1, 1; m, n; k)| and D(m, n, u, k). Then we complete the proof of Proposition 1.1 by showing that D(m, n, u, k) and |L(u + 1, 1; m, n; k)| coincide.

2 Enumeration formula for |L(u + 1, 1; m, n; k)|

In this section we shall prove the formula for |L(u + 1, 1; m, n; k)|.

Lemma 2.1. For any given integers m, n, u and k satisfying u ≥ 0, k ≥ 2, n ≥ 2, and m ≥ max{u + 1, (k − 1)(n − 1)}, we have

  |L(u + 1, 1; m, n; k)| = \sum_{i=0}^{\lfloor u/k \rfloor} (-1)^i \, \frac{m - (k-1)(n-1)}{m + n - 1 - ki} \binom{m + n - 1 - ki}{n - 1 - i} \binom{u - (k-1)i}{i}.   (2.1)

Proof. Given integers u ≥ 0, k ≥ 2, and n ≥ 2, without loss of generality define |L(u + 1, 1; (k − 1)(n − 1), n; k)| = 0, since the point ((k − 1)(n − 1), n) stays above the boundary line y = (x − 1)/(k − 1) + 1. Clearly (2.1) works when m = (k − 1)(n − 1). It remains to show that (2.1) holds when m ≥ max{u, (k − 1)(n − 1)} + 1. Here we need a known result on lattice path enumeration.
In [17, Theorem 2.1], the number of lattice paths from (a, b) to (m, n) that stay above the line y = kx was shown to be
$$\sum_{i=0}^{\lfloor (b-ka)/(k+1) \rfloor} (-1)^i \, \frac{n+1-km}{n+1-k(a+i)} \binom{m+n-(k+1)(a+i)}{m-a-i} \binom{b-k(a+i)}{i}, \quad (2.2)$$
where a, b, m, n and k are integers satisfying k ≥ 1, 0 ≤ a ≤ m, ka ≤ b ≤ n and n ≥ km; the proof is by mathematical induction based on a forward recursion. To obtain (2.1) from (2.2), first reflect each path counted by (2.2) along the diagonal y = x in the plane. The resulting path obviously belongs to the set of lattice paths from (b, a) to (n, m) that stay below the line y = x/k, which we denote by N_{1/k}(b, a; n, m). Make the substitutions a → b, b → a, m → n, and n → m in (2.2). Then we have
$$|N_{1/k}(a, b; m, n)| = \sum_{i=0}^{\lfloor (a-kb)/(k+1) \rfloor} (-1)^i \, \frac{m+1-kn}{m+1-k(b+i)} \binom{m+n-(k+1)(b+i)}{n-b-i} \binom{a-k(b+i)}{i}, \quad (2.3)$$
for integers a, b, m, n and k satisfying k ≥ 1, 0 ≤ b ≤ n, kb ≤ a ≤ m, n ≥ 1 and m ≥ kn. Next, for any lattice path ξ ∈ L(u + 1, 1; m, n; k), shift the x-axis upward by one unit and the y-axis rightward by one unit. The resulting path clearly belongs to the set N_{1/(k−1)}(u, 0; m − 1, n − 1). This map is obviously reversible and unique. Thus |L(u + 1, 1; m, n; k)| = |N_{1/(k−1)}(u, 0; m − 1, n − 1)|. After making the substitutions k → k − 1, a → u, b → 0, m → m − 1, and n → n − 1 in (2.3), we obtain (2.1) for m ≥ max{u, (k − 1)(n − 1)} + 1. This completes the proof.

3 Proof of Proposition 1.1

The aim of this section is to complete the proof of Proposition 1.1. This is achieved by showing that both D(m, n, u, k) and |L(u + 1, 1; m, n; k)| have the same recurrence and initial conditions. First we have the following result for |L(u + 1, 1; m, n; k)|.

Lemma 3.1.
Given integers u ≥ 0 and k ≥ 2, the numbers |L(u + 1, 1; m, n; k)| satisfy the recurrence
$$|L(u+1, 1; m, n; k)| = |L(u+1, 1; m-1, n; k)| + |L(u+1, 1; m, n-1; k)|,$$
for n ≥ 3 and m ≥ max{u + 2, (k − 1)(n − 1) + 1}, (3.1)
with the initial conditions
|L(u + 1, 1; m, 2; k)| = m − max{u, k − 1}, m ≥ max{u + 2, (k − 1)(n − 1) + 1}, (3.2)
|L(u + 1, 1; u + 1, n; k)| = 1, 2 ≤ n ≤ ⌊u/(k − 1)⌋ + 1, u ≥ k − 1, (3.3)
|L(u + 1, 1; (k − 1)(n − 1), n; k)| = 0, n ≥ max{2, ⌈u/(k − 1)⌉ + 1}, (3.4)
where ⌈x⌉ is the least integer not less than x.

Proof. The recurrence (3.1) follows directly from the definition of |L(u + 1, 1; m, n; k)|. Note that the initial conditions are distributed on the lines y = 2, x = u + 1 (if u ≥ k − 1) and y = x/(k − 1) + 1, corresponding to (3.2), (3.3), and (3.4), respectively, in the plane. When n = 2, if 0 ≤ u < k − 1, for each lattice path ξ ∈ L(u + 1, 1; m, 2; k) with m > u + 1, the first k − 1 − u steps must be horizontal because of the restriction that ξ must stay below the line y = (x − 1)/(k − 1) + 1; see for example the blue portion AB in Figure 3.1. So $|L(u+1, 1; m, 2; k)| = |L(k, 1; m, 2; k)| = \binom{m-k+2-1}{2-1} = m - (k-1)$. If u ≥ k − 1 ≥ 1, by the definition of L(u + 1, 1; m, n; k) it is easy to see that |L(u + 1, 1; m, 2; k)| = m − u. The initial conditions (3.3) and (3.4) are easy to obtain from the definition of L(u + 1, 1; m, n; k). The restrictions 2 ≤ n ≤ ⌊u/(k − 1)⌋ + 1 and u ≥ k − 1 in (3.3) are required to make sure that each lattice path stays below the boundary line y = (x − 1)/(k − 1) + 1. Note that when u < k − 1, the initial conditions consist of only (3.2) and (3.4); see for example Figure 3.1. Now we have shown that (3.1)-(3.4) hold. This completes the proof.

Figure 3.1. A lattice path ξ ∈ L(2, 1; 11, 5; 3), where 1 = u < k − 1 = 2. In this case the boundary line is y = x/2 + 1/2. The blue portion AB is the first k − 1 − u step, which has to be horizontal. The number above each point (m, n) is equal to |L(2, 1; m, n; 3)|.

Next we shall prove the recurrence and initial conditions for D(m, n, u, k).

Lemma 3.2. Given integers u ≥ 0 and k ≥ 2, the determinant D(m, n, u, k) satisfies the recurrence
D(m, n, u, k) = D(m − 1, n, u, k) + D(m, n − 1, u, k), n ≥ 3, m ≥ max{u + 2, (k − 1)(n − 1) + 1}, (3.5)
with the initial conditions
D(m, 2, u, k) = m − max{u, k − 1}, m ≥ max{u + 2, (k − 1)(n − 1) + 1}, (3.6)
D(u + 1, n, u, k) = 1, 2 ≤ n ≤ ⌊u/(k − 1)⌋ + 1, u ≥ k − 1, (3.7)
D((k − 1)(n − 1), n, u, k) = 0, n ≥ max{2, ⌈u/(k − 1)⌉ + 1}. (3.8)

Proof. To prove (3.5), we first need to perform an elementary row operation on the matrix (t_ij)_{i,j=1,2,...,n−1} defined in (1.1): replace the i-th row by adding the (i + 1)-th row to it, in increasing order of i. Let (t*_ij)_{i,j=1,2,...,n−1} denote the resulting matrix. Clearly D(m, n, u, k) = det(t*_ij)_{i,j=1,2,...,n−1}, where
$$t^*_{ij} = \begin{cases} \binom{m+1-\max\{u,\,(k-1)j\}}{1-i+j}, & i = 1, 2, \ldots, n-2,\ j = 1, 2, \ldots, n-1, \\[4pt] \binom{m-\max\{u,\,(k-1)j\}}{1-i+j}, & i = n-1,\ j = 1, 2, \ldots, n-1. \end{cases}$$
The first n − 2 rows of the matrix (t*_ij) are obtained easily by applying the well-known identity $\binom{n}{k} + \binom{n}{k-1} = \binom{n+1}{k}$ for integer k [11, Eq. (5.8)], and it is clear that both (t_ij) and (t*_ij) have the same bottom row, (0, . . . , 0, 1, m − max{u, (k − 1)(n − 1)}), whose zero entries follow from the convention $\binom{n}{k} = 0$ when n ≥ 0 and k < 0. Now let t*_j denote the j-th column of the matrix (t*_ij), and let T denote transpose. Note that t*_{n−1} = α + β, where α = (t*_{1,n−1}, . . . , t*_{n−2,n−1}, m + 1 − max{u, (k − 1)(n − 1)})^T and β = (0, . . . , 0, −1)^T with n − 2 zeros. Thus, by basic properties of determinants and the definition of D(m, n, u, k), we have
D(m, n, u, k) = det(t*_1, . . . , t*_{n−2}, α + β) = det(t*_1, . . . , t*_{n−2}, α) + det(t*_1, . . . , t*_{n−2}, β) = D(m + 1, n, u, k) − D(m + 1, n − 1, u, k), (3.9)
for n ≥ 3 and m ≥ max{u + 1, (k − 1)(n − 1)}. Letting m → m − 1 in (3.9) gives (3.5).

Next we prove the initial conditions for D(m, n, u, k). Given integers u ≥ 0 and k ≥ 2, by the definition of D(m, n, u, k) we have $D(m, 2, u, k) = \binom{m-\max\{u,\,k-1\}}{1} = m - \max\{u, k-1\}$ when m ≥ max{u + 2, (k − 1)(n − 1) + 1}, which leads to (3.6). When m = u + 1, 2 ≤ n ≤ ⌊u/(k − 1)⌋ + 1, and u ≥ k − 1, it follows that $t_{ij} = \binom{u+1-\max\{u,\,(k-1)j\}}{1-i+j} = \binom{1}{1-i+j}$, since u ≥ (k − 1)(n − 1) ≥ (k − 1)j for 1 ≤ j ≤ n − 1. Then we have
$$t_{ij} = \begin{cases} 0, & 1 \le i < j \le n-1, \\ 1, & i = j, \\ 1, & i = j+1, \\ 0, & i > j+1. \end{cases}$$
Clearly (t_ij) is a lower triangular matrix with 1's on the main diagonal when m = u + 1, 2 ≤ n ≤ ⌊u/(k − 1)⌋ + 1, and u ≥ k − 1, and therefore (3.7) follows. Note that if u < k − 1, by the restriction m ≥ max{u + 1, (k − 1)(n − 1)}, the variable m should start at (k − 1)(n − 1) for k, n ≥ 2. Finally, when u > 0 and n ≥ ⌈u/(k − 1)⌉ + 1, it is clear that (k − 1)(n − 1) ≥ u. Let us consider the matrix corresponding to D((k − 1)(n − 1), n, u, k). By the definition of D(m, n, u, k) the entry $t_{i,n-1} = \binom{(k-1)(n-1)-\max\{u,\,(k-1)(n-1)\}}{1-i+n-1} = \binom{0}{n-i} = 0$ for i = 1, 2, . . . , n − 1, which means that D((k − 1)(n − 1), n, u, k) = 0. When u = 0 and n ≥ 2, a similar argument leads to the same conclusion. Then we have (3.8) as desired. This completes the proof.

We are now in a position to complete the proof of Proposition 1.1.

Proof of Proposition 1.1. Let m, n, u and k be integers with u ≥ 0, k ≥ 2, n ≥ 2 and m ≥ max{u + 1, (k − 1)(n − 1)}. By Lemma 3.1 and Lemma 3.2, it is clear that D(m, n, u, k) satisfies the same recurrence and initial conditions as |L(u + 1, 1; m, n; k)|, so they agree; that is, D(m, n, u, k) = |L(u + 1, 1; m, n; k)|. Together with Lemma 2.1, (1.2) follows. This completes the proof.

Acknowledgments. I am very grateful to Arthur L.B.
Yang and Guo-Ce Xin for inspiring discussions and valuable comments on an earlier version of this paper.

References

[1] J.-C. Aval, Multivariate Fuss-Catalan numbers, Discrete Math., 308(20):4660-4669, 2008.
[2] R. Bacher and C. Krattenthaler, Chromatic statistics for triangulations and Fuß-Catalan complexes, Electron. J. Combin., 18(1), #P152, 2011.
[3] E. Barbier, Note: Calcul des probabilités. Généralisation du problème résolu par M. J. Bertrand, C. R. Acad. Sci. Paris, 105, p. 407, 1887.
[4] J. Bertrand, Solution d'un problème, Comptes Rendus de l'Académie des Sciences, Paris, 105, p. 369, 1887.
[5] M. Bousquet-Mélou, G. Chapuy, and L.-F. Préville-Ratelle, The representation of the symmetric group on m-Tamari intervals, Adv. Math., 247(10):309-342, 2013.
[6] K.L. Chung and W. Feller, On fluctuations in coin-tossing, Proc. Natl. Acad. Sci. U.S.A., 35(10):605-608, 1949.
[7] T.X. He, Parametric Catalan numbers and Catalan triangles, Linear Algebra Appl., 438(3):1467-1484, 2013.
[8] N. Fuss, Solutio quaestionis, quot modis polygonum n laterum in polygona m laterum, per diagonales resolvi quaeat, Nova acta academiae scientiarum Imperialis Petropolitanae, 9:243-251, 1791.
[9] E. Gorsky and M. Mazin, Compactified Jacobians and q,t-Catalan numbers, I, J. Combin. Theory Ser. A, 120(1):49-63, 2013.
[10] I.P. Goulden and L.G. Serrano, Maintaining the spirit of the reflection principle when the boundary has arbitrary integer slope, J. Combin. Theory Ser. A, 104(2):317-326, 2003.
[11] R.L. Graham, D.E. Knuth, and O. Patashnik, Concrete Mathematics: A Foundation for Computer Science, 2nd ed., Addison-Wesley, Reading, MA, 1994.
[12] T. Koshy, Catalan Numbers with Applications, Oxford University Press, New York, 2009.
[13] C. Krattenthaler, Determinants of (generalised) Catalan numbers, J. Statist. Plann. Inference, 140(8):2260-2270, 2010.
[14] B.E. Sagan and C.D. Savage, Mahonian pairs, J. Combin. Theory Ser. A, 119(3):526-545, 2012.
[15] R.P. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge University Press, Cambridge, UK, 1999.
[16] R.P. Stanley, Catalan addendum, version of 25 May 2013.
[17] J.J.Y. Zhao, Koroljuk's formula for counting lattice paths revisited, preprint, arXiv:1306.6015v1.
[18] J.J.Y. Zhao, A combinatorial approach for calculating certain ruin probabilities, preprint.
Bayesian optimization of massive material injection for disruption mitigation in tokamaks

I. Pusztai¹, I. Ekmark¹, H. Bergström¹,², P. Halldestam¹,², P. Jansson³, M. Hoppe⁴, O. Vallhagen¹ and T. Fülöp¹

¹ Department of Physics, Chalmers University of Technology, SE-41296 Göteborg, Sweden
² Max Planck Institute for Plasma Physics, 85748 Garching b. M., Germany
³ Department of Computer Science and Engineering, Chalmers University of Technology, SE-41296 Göteborg, Sweden
⁴ Swiss Plasma Center, Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland

Abstract. A Bayesian optimization framework is used to investigate scenarios for disruptions mitigated with combined deuterium and neon injection in ITER. The optimization cost function takes into account limits on the maximum runaway current, the transported fraction of the heat loss and the current quench time. The aim is to explore the dependence of the cost function on injected densities, and provide insights into the behaviour of the disruption dynamics for representative scenarios. The simulations are conducted using the numerical framework Dream (Disruption Runaway Electron Analysis Model). We show that irrespective of the quantities of the material deposition, multi-megaampere runaway currents will be produced in the deuterium-tritium phase of operations, even in the optimal scenarios. However, the severity of the outcome can be influenced by tailoring the radial profile of the injected material; in particular if the injected neon is deposited at the edge region it leads to a significant reduction of both the final runaway current and the transported heat losses. The Bayesian approach allows us to map the parameter space efficiently, with more accuracy in favorable parameter regions, thereby providing us information about the robustness of the optima.

† Email address for correspondence: [email protected]

doi: 10.1017/s0022377823000193 · arXiv:2302.01260
Under consideration for publication in J. Plasma Phys.

1 Introduction

Among the threats to reliable tokamak operation are off-normal events known as disruptions, which are induced by a sudden loss of plasma confinement (Boozer 2012). When this occurs, the ensuing heat and particle transport results in a rapid temperature drop, a thermal quench (TQ), that is accompanied by a decrease in the electrical conductivity of the plasma. The reduced conductivity leads to a decay in the plasma current, a current quench (CQ), which is counteracted by the induction of an electric field that may accelerate runaway electrons (REs) to relativistic energies (Breizman et al. 2019). The REs could potentially strike the wall and lead to subsurface melting of the wall components. The plasma current in future devices will be around an order of magnitude higher than in present experiments. Correspondingly, the magnetic energy in the plasma will increase (∼ 400 MJ in ITER versus ∼ 10 MJ in JET) (Hender et al. 2007), along with the kinetic energy, so the available energy that can be released in a disruption is significantly higher than in present devices. It is therefore essential to develop effective disruption mitigation systems. An effective disruption mitigation system in a tokamak should limit the exposure of the wall to localized heat losses and to the impact of high-current RE beams, and avoid excessive forces on the structure.
To avoid damage to the first wall of ITER, at least 90% of the thermal energy must be lost in the form of radiation. The RE current should be kept below 150 kA in order to avoid melting of plasma-facing components in the case of a localised loss (Lehnen & the ITER DMS task force 2021). The CQ time, i.e. the time it takes for the ohmic component of the current to decay, should be kept between 50 and 150 ms. Current quench times below 50 ms will lead to excessive forces due to eddy currents in the structures surrounding the plasma; on the other hand, CQ times above 150 ms are expected to lead to intolerably large halo currents in plasma-facing components. In ITER, the envisaged disruption mitigation system is based on massive material injection. The injected material can radiate away a large fraction of the thermal energy, and it can also inhibit RE generation by increasing the critical energy for electron runaway. Furthermore, it can be used to control the temperature during the CQ, which directly influences the CQ duration. However, the question of what mixture of material should be injected, and how it should be deposited, to accommodate all requirements on the disruption mitigation system simultaneously, if that is at all possible, is still open. In this paper, we describe a Bayesian optimization framework applied to simulations of ITER-like disruption scenarios mitigated with combined injection of deuterium and neon. The aim is to find the injected material quantities and deposition profiles for which the outcome of the disruption is tolerable with respect to the expected RE current, transported heat fraction and CQ time. Bayesian optimization has several attractive features: it does not rely on gradient information, it can handle non-deterministic (noisy) functions, and it is suitable for relatively high-dimensional optimization problems and computationally expensive function evaluations.
However, its main advantage for the current study is that it informs us about the properties of promising parameter regions, in particular the robustness of the optima to variations in the control parameters. The rest of the paper is structured as follows. The methods are explained in Sec. 2, detailing the setup of the disruption simulations in Sec. 2.1 and the Bayesian optimization in Sec. 2.2. The results are presented in Sec. 3: first we map out the optimization landscape with radially constant injected densities in Sec. 3.1, followed by a detailed analysis of some representative scenarios in Sec. 3.2; then we present optimization results allowing radially varying injection in Sec. 3.3. Finally, we study the parametric sensitivities of the optima and reflect on the beneficial effects of radial profile variations in Sec. 3.4, before we conclude and discuss our findings in Sec. 4.

2 Bayesian optimization of simulated disruptions

We employ an open-source Bayesian optimization routine that treats the disruption simulations as a black-box function that produces a single scalar output, the cost function, and accepts inputs for the injected material densities and deposition profiles in specified ranges; these are the input parameters that we want to optimize. In the following we discuss the disruption simulations, and provide details of the optimization algorithm.

2.1 Simulation setup

The disruption simulations assume an initially (t < 0) pure, fully-ionized deuterium-tritium (D-T) plasma with 50-50% isotope concentrations. Specifically, the initial electron density is a spatially constant 10²⁰ m⁻³, the temperature profile is parabolic with 20 keV on-axis, and the total plasma current is 15 MA. The simulations use an ITER-like magnetic geometry with major radius R₀ = 6 m, minor radius a = 2 m, wall radius b = 2.833 m, on-axis toroidal magnetic field B(r = 0) = 5.3 T, and a resistive wall time of τ_w = 0.5 s, as well as a Miller model equilibrium (Miller et al.
1998) with realistic, radially varying shaping parameters; further information is given in Appendix A. The simulations are performed with the Dream (Disruption Runaway Electron Analysis Model) code, which captures the particle acceleration and energy dissipation processes following a disruption (Hoppe et al. 2021). It solves a set of coupled transport equations describing the evolution of temperature, ion charge-state densities, current density and electric field in arbitrary axisymmetric geometry. The temperature evolution includes ohmic heating, radiated power using atomic rate coefficients, collisional energy transfer from hot electrons and ions, as well as dilution cooling. Dream allows modelling of the REs at different degrees of approximation, ranging from fluid to fully kinetic. As we do not require kinetic outputs, we limit our modelling to the least computationally expensive, fluid treatment of the plasma. This means that the thermal bulk of cold electrons and the small runaway population are modelled as two separate fluid species. The former is characterized by a density n_e, a temperature T_e and an ohmic current density j_ohm, and the REs are described by their density n_RE. It is assumed that the REs move with the speed of light parallel to the magnetic field, hence their associated current density is j_RE = e c n_RE. The simulations include Dreicer, hot-tail and avalanche sources, as well as REs generated by Compton scattering of γ photons and by tritium decay. These are modelled as quasi-stationary sources feeding electrons into the runaway population (Fülöp et al. 2020). The runaway generation rates used in the simulations have been benchmarked against the corresponding kinetic results (Hoppe et al. 2021). Further details on the simulations are given in Appendix A. Neutral neon and deuterium are introduced with zero temperature at the start of the simulation (t = 0).
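As a rough, illustrative cross-check (not part of Dream itself), the initial thermal energy implied by the stated plasma parameters (flat 10²⁰ m⁻³ density, parabolic 20 keV temperature, ITER-like geometry) can be estimated in a circular-cross-section approximation, assuming the ions carry the same temperature profile as the electrons:

```python
import numpy as np

# Assumed ITER-like initial state from the text (circular cross-section approximation)
R0, a = 6.0, 2.0            # major and minor radius [m]
n_e = 1e20                  # electron density [m^-3], radially flat
T0 = 20e3 * 1.602e-19       # on-axis temperature, 20 keV in joules

r = np.linspace(0.0, a, 2000)
T = T0 * (1.0 - (r / a) ** 2)        # parabolic temperature profile
dV = 4.0 * np.pi**2 * R0 * r         # torus volume element, dV = 4*pi^2*R0*r dr

# Total thermal energy of electrons + ions (ions assumed at T_e): W = integral of 3*n*T dV
dr = r[1] - r[0]
W_th = np.sum(3.0 * n_e * T * dV) * dr
print(f"thermal energy ~ {W_th/1e6:.0f} MJ")   # of order a couple hundred MJ
```

The result is of the same order as the ∼ 400 MJ magnetic energy quoted in the introduction, underlining why both the thermal and magnetic energy releases matter in ITER disruptions.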
At the same time, an elevated transport of electron heat and energetic electrons is activated, using a Rechester-Rosenbluth-type model (Rechester & Rosenbluth 1978) with a radially constant normalized magnetic perturbation amplitude δB/B. This is done to emulate the break-up of flux surfaces during the TQ, and leads to heat losses with a heat diffusivity proportional to R₀ v_te (δB/B)², where v_te = √(2T_e/m_e) is the local electron thermal speed; the full expression is given by Eq. (B.5) of Hoppe et al. (2021). In four optimization runs δB/B is scanned over the range 0.2%-0.5%, which falls within the range of values observed in magnetohydrodynamic simulations of the TQ (Hu et al. 2021). During the TQ we also account for a diffusive transport of REs using a diffusion coefficient of similar form, but assuming parallel streaming along the perturbed field lines at the speed of light, D_RE = πR₀c(δB/B)². This approach neglects the momentum-space variation of the transport coefficients (Särkimäki et al. 2020), as well as the form of the RE distribution function, which would reduce the effect of runaway transport. Thus, using this expression provides an upper bound on the effect of runaway transport for a given magnetic perturbation amplitude (Svensson et al. 2021). We employ here the same δB/B as for electron heat transport, for consistency. The injected material is ionized through its interaction with the plasma, and cools it by radiation and dilution. When the average electron temperature falls below 10⁻³ times the maximum initial temperature (here 20 eV), we assume that the TQ is complete and the flux surfaces reform. After the TQ the transport of energetic electrons is switched off, and a significantly reduced, but finite, electron heat diffusivity is used (δB/B = 0.04%). This is to avoid the development of non-physical narrow hot ohmic channels during the CQ. Such ohmic channels are soliton-like solutions of the problem (Putvinski et al.
1997) without sufficient heat diffusivity. In a physical system the corresponding excessive temperature and current gradients would be expected to destabilize these formations well before they could fully form. Note that the diffusive heat transport is subdominant compared to radiative heat losses at the low post-TQ temperatures, thus this heat transport has no effect besides not allowing hot channels to form.

2.2 Optimization

The optimization problem involves multiple objectives, i.e. multiple quantities need to be within certain limits simultaneously. The maximum value of the total RE current and the fraction of transported heat losses must be small, while the CQ time should be within certain limits. These quantities are normalized and combined into a single scalar cost function L ≥ 0, which is to be minimized. Denoting the control vector containing the parameters by x, we wish to find the x* that minimizes L, where x resides in a specified volume V ⊂ Rᵈ of the control space (where d is the dimensionality of the optimization). We employ Bayesian optimization (Brochu et al. 2010) with Gaussian process regression (Rasmussen & Williams 2005), using the Bayesian Optimization (Nogueira 2014-) Python package. A Gaussian process is fitted to the already sampled points {xᵢ}ᵢ₌₁ⁿ, and the Expected Improvement acquisition strategy (described in Appendix B) is used to choose the next point to be sampled, x_{n+1}. The Gaussian process contains information on both the expected value µ(x) and the uncertainty of the estimate of L, quantified in terms of the covariance k(x, x′) between any two points x and x′. In this process there is a balance between exploration and exploitation, i.e. searching within regions of high uncertainty as well as in regions that are most likely to host the global optimum.
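The exploration-exploitation loop described above can be sketched in a few dozen lines. The following is a minimal, self-contained illustration (not the Dream-coupled setup, nor the internals of the actual package): a zero-mean RBF-kernel Gaussian process is fitted to samples of a toy one-dimensional cost function, standing in for an expensive simulation, and the next sample is chosen by maximizing the expected improvement on a grid:

```python
import numpy as np
from math import erf, sqrt, pi

def toy_cost(x):
    """Cheap stand-in for an expensive disruption simulation; true minimizer x = 0.6."""
    return (x - 0.6) ** 2

def gp_posterior(Xs, ys, Xq, ell=0.1, s2n=1e-6):
    """Zero-mean GP with RBF kernel: posterior mean and std at query points Xq."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)
    Kinv = np.linalg.inv(k(Xs, Xs) + s2n * np.eye(len(Xs)))
    Kq = k(Xq, Xs)
    mu = Kq @ Kinv @ ys
    var = 1.0 - np.einsum("ij,jk,ik->i", Kq, Kinv, Kq)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI for minimization: E[max(best - f, 0)] under the GP posterior."""
    z = (best - mu) / sd
    Phi = np.array([0.5 * (1.0 + erf(v / sqrt(2))) for v in z])  # normal CDF
    phi = np.exp(-0.5 * z**2) / sqrt(2 * pi)                     # normal PDF
    return (best - mu) * Phi + sd * phi

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 4)               # random initial samples
y = toy_cost(X)
grid = np.linspace(0, 1, 401)
for _ in range(15):                    # sequential acquisition loop
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, toy_cost(x_next))

print(f"best sampled x = {X[np.argmin(y)]:.2f}")  # close to the true minimizer 0.6
```

Early iterations tend to spread samples across regions where the posterior standard deviation is large (exploration), while later ones cluster near the current best value (exploitation), mirroring the behaviour described in the text.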
The cost function we use is of the form
$$L = \frac{I_{\rm RE}^{\rm max}}{I_{\rm RE}^{\rm tol}} + \frac{I_{\rm ohm}^{\rm fin}}{I_{\rm ohm}^{\rm tol}} + 10\,\frac{\eta_{\rm cond}}{\eta_{\rm cond}^{\rm tol}} + 100\,\theta(t_{\rm CQ}), \quad (2.1)$$
where I_RE^max is the maximum RE current in the simulation, I_RE^tol = 150 kA represents the tolerable RE current in ITER, and I_ohm^fin is the ohmic current at the end of the (150 ms long) simulation. A significant remnant ohmic current may be the sign of an incomplete TQ, and it can still potentially be converted into a RE current; thus it is treated on an equal footing with the RE current, so we also set I_ohm^tol = 150 kA. η_cond^tol = 0.1 is the tolerable transported heat-loss fraction. The prefactor 10 in the η_cond term is used to get a penalty for non-tolerable transported heat losses comparable to typical penalties obtained for mega-ampere (MA) size currents. Finally, to penalize CQ times t_CQ below t_L = 50 ms and above t_U = 150 ms, we use the penalty function
$$\theta(t_{\rm CQ}) = \tilde{\Theta}(t_L - t_{\rm CQ}) + \tilde{\Theta}(t_{\rm CQ} - t_U), \quad (2.2)$$
where Θ̃(t) = ½[1 + tanh(t/∆t)] is a function similar to a step function but smooth, with a transition width set by ∆t = 3.3 ms. Values of t_CQ outside the tolerable range yield a penalty as high as the maximum achievable penalty for any of the other terms in (2.1), due to the prefactor 100 in front of θ. We calculate the CQ time as t_CQ = [t(I_ohm = 0.2 I_p⁰) − t(I_ohm = 0.8 I_p⁰)]/0.6 (Hender et al. 2007), where I_ohm(t) is the total ohmic current and I_p⁰ is the initial plasma current. In addition, we set L = 500 for simulations where the TQ is not complete within 20 ms; our condition for completion is that the average temperature falls below 10⁻³ T_e(r = 0, t = 0). Finally, as it is difficult to completely avoid simulations that fail due to numerical issues, we use L = 500 for these as well. Independently of their dimensionality, the optimizations use 400 samples, chosen according to the acquisition function through sequential function evaluations, following 20 randomly selected initial samples.
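A direct transcription of this cost function is straightforward. The sketch below is our own illustrative reading of Eqs. (2.1)-(2.2), with the term weights taken from the prose above (currents in amperes, times in milliseconds):

```python
import math

T_L, T_U, DT = 50.0, 150.0, 3.3      # CQ-time window [ms] and transition width [ms]
I_TOL, ETA_TOL = 150e3, 0.1          # tolerable current [A] and heat-loss fraction

def smooth_step(t):
    """Smoothed step: (1 + tanh(t/Dt)) / 2, i.e. ~0 for t << 0 and ~1 for t >> 0."""
    return 0.5 * (1.0 + math.tanh(t / DT))

def cq_penalty(t_cq):
    """theta(t_CQ) of Eq. (2.2): ~1 outside the 50-150 ms window, ~0 inside it."""
    return smooth_step(T_L - t_cq) + smooth_step(t_cq - T_U)

def cost(I_re_max, I_ohm_fin, eta_cond, t_cq):
    """Scalar cost combining the normalized figures of merit, cf. Eq. (2.1)."""
    return (I_re_max / I_TOL + I_ohm_fin / I_TOL
            + 10.0 * eta_cond / ETA_TOL + 100.0 * cq_penalty(t_cq))

# Tolerable currents and heat loss, CQ time inside the window:
print(round(cost(100e3, 50e3, 0.05, 80.0), 2))   # -> 6.0
# A 5 MA runaway beam dominates the cost:
print(round(cost(5e6, 50e3, 0.05, 80.0), 2))     # -> 38.67
```

Note that with these normalizations each term saturates near 100 (a full 15 MA current conversion, a fully transported heat loss, or a CQ time far outside the window), which is consistent with the remark that the prefactor 100 makes the θ penalty as high as the maximum achievable penalty of any other term.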
As the parameter space of injected quantities ranges across multiple orders of magnitude, the logarithms of the injected quantities are used as the optimization parameters.

3 Bayesian optimization of disruption mitigation with material injection

The goal is to identify the densities of injected neon and deuterium that produce the most favourable outcomes in a disruption mitigation, corresponding to the minimum of the cost function. Modelling the details of the material injection is outside the scope of the present work; instead we assume the material to be instantaneously deposited in the form of neutrals, either uniformly distributed over the magnetic flux surfaces (described in Secs. 3.1-3.2) or with a radially varying distribution (described in Sec. 3.3).

3.1 Optimization landscape with constant concentrations

First we perform the optimization in the two-dimensional (2D) parameter space of radially constant injected deuterium and neon densities, n_inj,D and n_inj,Ne. The ranges of injected densities we consider are n_inj,D ∈ [10¹⁸, 3.16 × 10²²] m⁻³ and n_inj,Ne ∈ [10¹⁶, 10²⁰] m⁻³. Figure 1 shows the estimated mean of the cost function µ on a logarithmic contour plot for four different values of δB/B, with blue shades representing favourable and red shades unfavourable values. Each subplot used 420 samples, indicated by gray dots, while the optima are indicated by black stars. The area of favourable values (blue shades) decreases with increasing δB/B; this is mostly due to the increasing transported heat fraction, and to a lesser degree to an increasing RE current, to be discussed further in relation to Fig. 5. In general, the lower left corner of the plots is occupied by cases with an incomplete TQ. In these cases the plasma tends to get reheated after the prescribed transport event, leading to long CQ times (i.e. t_CQ > 150 ms). With increasing δB/B the incomplete-TQ region shrinks somewhat.
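The log-scaling of the control parameters mentioned above can be sketched as follows. This is an illustrative wrapper with our own naming (not the paper's code): the optimizer works with log₁₀ densities within the stated bounds, the sign of the cost is flipped because the package maximizes, and a toy stand-in replaces the Dream simulation so the sketch is runnable:

```python
import math

# log10 bounds corresponding to n_inj,D in [1e18, 3.16e22] m^-3 and n_inj,Ne in [1e16, 1e20] m^-3
PBOUNDS = {"log_nD": (18.0, math.log10(3.16e22)), "log_nNe": (16.0, 20.0)}

def run_disruption_sim(n_D, n_Ne):
    """Toy stand-in for a Dream run + cost evaluation, with a minimum placed
    near the reported delta-B/B = 0.3% optimum (9.4e21, 2.9e18) m^-3."""
    return (math.log10(n_D) - 21.97) ** 2 + (math.log10(n_Ne) - 18.46) ** 2

def objective(log_nD, log_nNe):
    """What the optimizer sees: densities recovered as 10**log_n, and -L returned
    because the Bayesian Optimization package maximizes its objective."""
    n_D, n_Ne = 10.0**log_nD, 10.0**log_nNe
    return -run_disruption_sim(n_D, n_Ne)

print(PBOUNDS)
```

In the real workflow the placeholder would launch a Dream simulation and evaluate the cost function of Eq. (2.1) on its output.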
Another general feature is a relatively narrow corridor of favourable parameters in the vicinity of n_inj,D = 10^22 m^-3, extending from the lowest n_inj,Ne values plotted to a bit above n_inj,Ne = 10^19 m^-3. The optima also reside in these corridors, at n_inj,Ne values of a few times 10^18 m^-3. Additionally, a wider corridor of moderate values of L extends to the left of the optima, between n_inj,Ne ≈ 3 × 10^18 and 2 × 10^19 m^-3, which is most pronounced in the δB/B = 0.3% and 0.4% cases. At n_inj,Ne values above that, L increases, and decreases again around the highest n_inj,Ne values included in this optimization. Before analyzing the characteristic behaviors in these different regions of the current optimization landscape in Sec. 3.2, we shall discuss the detailed dynamics at the optima. We consider the behavior of the representative optimum obtained in the δB/B = 0.3% case (indicated by the black star in Fig. 1b), located at n_inj,D = 9.4 × 10^21 m^-3 and n_inj,Ne = 2.9 × 10^18 m^-3, shown in Fig. 2. Following the instantaneous material injection at t = 0, the temperature profile drops by a factor of ≈ 100 within a µs, due to dilution. This is followed by an approximately exponential cooling with a characteristic time of τ ≈ 1.5 ms. After the initial exponential cooling, a cold front starts to propagate radially inward from the edge. This inward propagating cooling is seen in the t = 2 ms curve in Fig. 2b. This cooling proceeds until almost the entire plasma settles at around 5 eV (see the t = 10 ms curve), representing the equilibrium between ohmic heating and radiation corresponding to the ion composition and current density of the plasma. Then there is another inward propagating cooling happening over the next 50 ms. As the ohmic current density drops during the CQ, the equilibrium temperature falls from ≈ 5 eV to ≈ 1.2 eV. The temperature is radially uniform at this level at 60 ms (black curve), and remains there until the end of the simulation.
The ohmic current density gradually decreases in the core and it drops rapidly across the cold front at the edge; compare the 10 ms curves in Fig. 2b and c. This front propagates inward in the first 40 ms, after which the ohmic current gets rapidly replaced by RE current; see the process in terms of the total current components in Fig. 2a, and the RE current density at 60 ms in Fig. 2c (dashed line). The electric field exceeds the effective critical field E_c^eff - calculated as in Appendix C2 of (Hoppe et al. 2021) - first at the edge, then it grows to an approximately radially constant value around 30 V/m, where it stays until the macroscopic RE conversion starts. Then it drops into the vicinity of E_c^eff, such that in the core E is pinned to E_c^eff, and it takes radially decreasing values at the edge; compare E (solid black curve) to E_c^eff (dotted) in Fig. 2d. The electric field then remains like that until most of the RE current dissipates away. Physically, the dissipation of the RE current, in the absence of transport losses, is caused by a collisional slowing down and thermalization of the REs when E < E_c^eff. In the employed fluid RE model it is technically accounted for by allowing the avalanche growth rate to become negative for E < E_c^eff. The corresponding decay of the RE current is quite pronounced in this case.

3.2. Characteristic cases with constant concentrations

In order to understand the typical dynamics in various regions of the n_D,inj-n_Ne,inj space, we consider six representative cases in the δB/B = 0.3% optimization, with case C1 being the optimum discussed above. The cases are indicated in Fig. 1b, and the corresponding injected quantities and figures of merit are listed in Tab. 1. Cases C2 and C3 are taken in the high n_D,inj region of the space; C2 is located in the favourable channel at low n_Ne,inj, and C3 at even higher n_D,inj than the optimum. Cases C4 to C6 are taken at a fixed n_D,inj = 10^20 m^-3, at respectively increasing values of n_Ne,inj. We discuss C1-C3 and C4-C6 in the following subsections.
3.2.1. Representative cases at high n_D,inj

In the RE plateau the electric field tends to stay close to the effective field E_c^eff, as expected (Breizman 2014). In particular, following the RE conversion, all E/E_c^eff values (taken at mid-radius) settle around unity, as shown in Fig. 3c. The high n_D,inj cases are all characterized by a significant decay rate of the RE current after it reaches its maximum value; see Fig. 3a. This is consistent with their E/E_c^eff being lower than unity towards the edge, as we have seen for C1 in Fig. 2d. This is due to the relatively high value of E_c^eff typical at these high n_D,inj values. The dynamics of the RE current in C1 and C2 are fairly similar, as seen in Fig. 3a. It may be surprising that E/E_c^eff is almost all the time higher in C1 than in C2 - shown in Fig. 3c - but the maximum RE current in C2 is still higher, and is reached a bit earlier. The reason for this is that the temperature drops to 1.09 eV in C2 already at t ≈ 10 ms (Fig. 3b), a temperature where 44% of the hydrogenic species is recombined, thereby increasing the total-to-free electron density ratio and the avalanche growth rate in proportion. Meanwhile, in C1 the temperature does not drop this low until the RE conversion is over. The effect of hydrogen recombination is even more pronounced in C3, where the temperature drops to 1.02 eV within a millisecond. The reason for this fast cooling is the very high dilution, which brings the temperature down to a range where radiative losses are strong and can effectively (and rapidly) cool the plasma further. That the temperature drops immediately to its final value, without stopping at some higher, intermediate value, can be explained by how the temperature dependence of the total radiative losses (P) is affected by the very high hydrogen content.
Depending on the hydrogen (including D and T) and neon densities, the curve P(T_e) can exhibit a local minimum in the few eV range, between a low-T_e peak caused by hydrogen and a higher-T_e peak from neon. The large hydrogen density in C3 leads to an elevated value of P at this minimum, thereby effectively eliminating the bottleneck this minimum represents concerning the cooling. While 1.02 eV is just slightly cooler than the final temperature in C2, now 70% of the hydrogenic species are recombined, which, in combination with the early high value of E/E_c^eff, leads to an extremely fast RE conversion and the highest RE current among these three cases. In terms of figures of merit, C3 is not only problematic due to a high I_RE^max value, but also because of the extremely short t_CQ ≈ 5 ms. While I_RE^max is not too much higher in C2 than in C1, C2 has η_cond ≈ 44%, exceeding the tolerable 10%, unlike C1 and C3. This is due to the small neon content in C2. The remarkably short cooling times, of the order of 2 ms, observed at large deuterium injections, such as C3, may be partly due to our simplifying assumption of instantaneous deposition. However, in realistic material injection scenarios the cooling at a given flux surface can be comparably short to the time observed here, even if the time needed for pellet shards flying at 500 m/s to travel between the edge and the center of an ITER plasma is longer (≈ 4 ms). As the local cooling time is the crucial factor in getting a large hot-tail seed, and furthermore the rapid avalanche rate depends on the final temperature, similar behaviour is also observed in shattered pellet injection simulations (Vallhagen et al. 2022). Ion convection timescales across the radius in a TQ can also be in the ms range. The excessive runaway generation is thus not an artefact of the instantaneous deposition; however, the detailed temperature evolution is expected to be different once the injection dynamics is resolved.
3.2.2. Representative cases at low n_D,inj

The cases at n_D,inj = 10^20 m^-3 - C4 to C6 - are not affected by hydrogen recombination, as their temperature never drops below 2 eV. They reach much higher values of E/E_c^eff than the high n_D,inj cases, as they have low E_c^eff; compare Figs. 3c and 4c. The timing and magnitude of their RE conversion correlate well with when the peak of E/E_c^eff is reached, and with its magnitude. This, in turn, depends on the first equilibrium temperature reached, varying between approximately 5 and 11 eV; see Fig. 4b. This temperature decreases monotonically with increasing injected neon quantity, while the magnitude of the final RE current increases, and the time of RE conversion shifts earlier. Once the conversion is complete, the temperature falls further into the 2-4 eV range. Note that in these low n_D,inj cases the dissipation rate of the RE current in the RE plateau is negligible during the simulation, due to the lower E_c^eff values. Only in C4 does t_CQ fall in the acceptable range; in the other two cases it is too short due to the early RE conversion. The non-monotonic dependence of L on increasing n_Ne,inj - i.e. it is higher for C5 than for C6 - is caused by the reduction in the transported heat loss fraction from the 70-80% range to 23% (which is still not acceptably low though).

3.3. Radially varying material injection

Next, we relax the assumption of spatially homogeneous injection, and allow profile variations with a simple model for the injected densities, where the inward or outward peaking of the profile is set by a single parameter c_i per species i:

ñ_i,inj ∝ 1 + tanh[c_i(r/a − 1/2)], (3.1)

where the tilde indicates that ñ_i,inj is a radially varying quantity. The notation n_i,inj is reserved for the scalar parameter that appears in the optimization. The factor multiplying the expression in Eq.
(3.1) is determined such that the total number of injected particles in the plasma is the same as in an injection of a constant density n_i,inj. Negative/positive values of c_i correspond to densities peaked in the plasma center/edge, and in the optimization we allow values in the [−10, 10] range. Figure 5 shows the I_RE^max and η_cond figures of merit, along with the cost function L, at the optima found for different δB/B values, when radially constant injection is employed (dotted line, referred to as 2D) and when profile variation is allowed (dashed, 4D). In the latter case, the additional degrees of freedom allow us to find optima with better properties. Since in all cases the remaining ohmic current is much smaller than I_RE^max (in the 300-400 kA range), and t_CQ is also in the tolerable range, L is dominated by the two figures of merit plotted. In none of the cases considered is I_RE^max tolerably small; it is around 4 MA independently of δB/B in the 2D optimization, and it reduces by almost a factor of 2 in 4D (without any clear trend with δB/B), as seen in Fig. 5a. There are two main reasons for obtaining such high values even in the optimal cases. We consider D-T plasmas, and we include RE seed sources relevant for activated operation - tritium decay and Compton scattering of γ photons - in addition to Dreicer and hot-tail RE generation. The tritium decay and Compton sources can provide a significant RE seed even after the TQ, during which the transport due to magnetic perturbations decimates the initial hot-tail and Dreicer seed population. This circumstance also explains the weak sensitivity of I_RE^max to δB/B in the 2D simulations. A simulation identical to the 2D optimum at δB/B = 0.3%, but without activated seed sources (i.e. only Dreicer, hot-tail and avalanche sources active), yields a negligibly small I_RE^max = 4.1 kA instead of 4.2 MA.
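The profile parametrization of Eq. (3.1) and its particle-conserving normalization can be sketched as follows. This is a minimal illustration assuming a cylindrical volume element (dV ∝ r dr), whereas the actual simulations use the flux-surface geometry; the function name is ours:

```python
import numpy as np

def injected_profile(r, a, n_inj, c):
    """Eq. (3.1): n(r) ∝ 1 + tanh[c (r/a - 1/2)], scaled so that the total
    number of injected particles equals that of a flat profile of density
    n_inj (here with a cylindrical volume element, dV ∝ r dr)."""
    shape = 1.0 + np.tanh(c * (r / a - 0.5))
    # volume-weighted average of the shape function
    avg = np.trapz(shape * r, r) / np.trapz(r, r)
    return n_inj * shape / avg
```

Here c < 0 peaks the profile in the plasma center and c > 0 at the edge, while c = 0 recovers the flat profile of density n_inj.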
A similarly important factor is the realistic radius of the conducting wall, which is chosen to match the energy in the poloidal magnetic field due to the plasma current within the conducting wall to that observed in JOREK simulations. If in the 2D optimum at δB/B = 0.3% we reduce the wall radius from 2.833 m to 2.15 m, which was used in previous work, e.g. by Vallhagen et al. (2020), the RE current reduces to the - non-negligible, but still significantly lower - value of 1 MA. The fraction of transported heat losses, shown in Fig. 5b, increases strongly with δB/B in the 2D cases, which is not surprising, since the heat transport during the TQ is then increasing, while the radiated losses are not directly impacted by δB/B. However, when profile variation is allowed, η_cond is almost independent of δB/B; the reason for this will be explained in relation to Fig. 7.

3.4. Sensitivity of the optima

To gauge the sensitivity of the optima to the input parameters, we investigate the regions occupied by samples within some range of L above the optimal values. The location of the optima in the optimization space is marked in Fig. 6 (⊗ markers). In the 2D optimization study we also scatter-plot all samples in the 10% vicinity of the optimum, Fig. 6a; this is such a narrow range in L that any point in this point cloud can be considered to perform equally well as the optimum itself. In the 4D optimization study we show points in the 25% vicinity of the optima, Fig. 6b-d. As the total number of samples is the same in both the 2D and 4D optimization studies, the higher dimensionality in 4D implies a sparser exploration in the vicinity of the optimum compared to 2D; hence the lower number of points in spite of the wider relative range included.

Table 2. Total hydrogenic (including the background) and neon densities at the plasma center (r = 0) and at the edge (r = a) in the 4D optimization in the various δB/B cases.

First, considering the 2D optimization, Fig.
6a, we find that the relative extent of the point clouds is significantly larger in the n_Ne,inj direction than in the n_D,inj direction; for instance, in the δB/B = 0.2% case n_Ne,inj spans more than an order of magnitude, while n_D,inj spans only a bit more than a factor of two. In practice this translates to the need for a higher precision concerning the injected amount of deuterium than that of neon. The negative correlation between n_D,inj and n_Ne,inj seen from the arrangement of the point cloud indicates that there are similarities in the effects of these two injected species. These features are also reflected in the favourable valleys (blue tone regions) seen in Fig. 1. The favourable parameter range indicated by the point clouds shrinks with δB/B. Note that the region covered by the optima at different δB/B values is even smaller than the smallest (black) point cloud; thus we should not read much into how the actual location of the optima varies with δB/B. In the 4D optimization, the resulting point clouds are more scattered when projected into the n_D,inj-n_Ne,inj subspace; see Fig. 6b. If anything, there is still a weak anticorrelation between the injected quantities, but the poor statistics makes it less clear. Similarly to the 2D optimization, the range covered in n_D,inj is smaller than that in n_Ne,inj. We can also see that there are no cases within a relative range of 25% of the optimum for δB/B = 0.3%. In addition, the optimum itself appears far in the parameter space from the other three overlapping clouds; namely, it appears at the highest n_D,inj and lowest n_Ne,inj values. We omit this outlier case in the following discussion, but will return to it at the end of this section. The point clouds occupy the relatively narrow c_D ∈ [−1.5, 1.2] range, as seen in Fig. 6c, corresponding to modest profile variation. We find a positive correlation between n_D,inj and c_D.
It means that a higher injected content corresponds to a more edge-localized peaking. In particular, the injected densities at the plasma center occupy a narrower range than at the edge (see Table 2); apparently, the deuterium density value at the edge is less important. We also observe that lower δB/B corresponds to higher c_D and n_D,inj values. For the injected neon profiles, a strong outward peaking is preferred, with values of c_Ne ∈ [5, 10], as seen in Fig. 6d. The total injected quantities are typically higher than those in the 2D optimization, covering mostly the n_Ne,inj ∈ [10^19, 10^20] m^-3 range - an order of magnitude higher than in 2D. It is interesting to note that, similarly to deuterium, there is a positive correlation between c_Ne and n_Ne,inj. To understand why the optima in the 4D optimization perform better than those of 2D, we compare the respective δB/B = 0.5% cases, where the figures of merit are most disparate. The hydrogenic (blue curves) and neon (red) density profiles of the 2D (dashed curves) and 4D (solid) optima are shown in Fig. 7a. For deuterium, the 4D optimization finds a moderate inward peaking (c_D = −0.92), while the neon profile is strongly peaked at the edge (c_Ne = 8.67), covering a density range of over three orders of magnitude. The neon content has two major effects on our figures of merit. An increasing neon concentration corresponds to a lower quasi-equilibrium temperature during the RE conversion, typically leading to higher final RE currents. This is the same trend that we have witnessed moving from C4 to C6.
At the same time, a higher neon concentration can help increase the radiated fraction of heat losses (this was also clear when comparing C5 to C6). However, the neon concentration affects the final RE current most strongly where the RE growth is strongest. This happens to be the plasma core in the parameter region of interest, without a radial variation of the neon density. In addition, to achieve a low η_cond value it is sufficient to have enough radiating impurities at the edge. Both requirements can be satisfied by an outward peaking neon concentration, which is indeed what the 4D optima tend to develop. We find that the 2D optimum produces a centrally peaked RE current, as seen in Fig. 7b (dashed curve), while the 4D optimum has a RE profile peaked off-axis (solid curve), as expected for the low core concentration of neon. We note that in this case the runaway and the centrally peaked ohmic currents decay together after the RE current reaches its maximum, and only towards the end of the simulation (≈ 140 ms) does the total current profile become truly hollow. The time evolution of the volume-integrated heat losses in the first millisecond is shown in Fig. 7c. This is when the vast majority of the thermal energy content of the plasma is lost, while the fraction of magnetic-to-thermal energy conversion is still negligible.

Figure 7. Comparison of the optimal cases in the 2D (dashed curves) and the 4D (solid curves) optimization for δB/B = 0.5%. a) Radial total hydrogenic density, n_D+T+D,inj (blue), and neon density, n_Ne (red). b) RE current density profiles taken at the time point when the total RE current takes its maximum, t = 42 ms (50 ms) in the 2D (4D) case. c) Time evolution of the heat loss power in the first millisecond, when most of the thermal energy is lost from the plasmas (note the log-scale). Blue curves represent the transported heat losses, red curves the radiated losses.
Again, the dashed curves correspond to the 2D optimum; in this case the transported losses (dashed blue) reach values comparable to the radiated losses (dashed red). The entire energy loss process varies relatively smoothly over the plotted timescale. In contrast, in the 4D optimum case the transported heat losses (solid blue) are approximately two orders of magnitude lower than the radiated losses (solid red), and both of these channels have a strong peak at t = 0, related to the ionization and equilibration of the injected material. After having discussed the representative behavior at the optima, we return to the analysis of the outlier case, the 4D optimum at δB/B = 0.3%. In this case the injected neon density is roughly three orders of magnitude lower than in the other three cases, and as such, it exhibits reheating following the TQ in the plasma center. This reheated region supports a relatively slowly decaying ohmic current, hence the CQ time is on the long side, t_CQ = 123 ms (while still tolerable). The slowly decaying ohmic current and the high value of the effective critical electric field E_c^eff, owing to the high n_D,inj, mean that the RE growth stops just before the RE current grows to macroscopic values. The strong dilution is able to rapidly reduce the temperature at the edge to sufficiently low values that, even in the presence of a low neon content, the cooling can continue to ≈ 1 eV. As most of the heat transported to the edge is then radiated away by the recombined deuterium, the resulting transported heat loss also remains small in this case. This is nevertheless a fragile case; indeed, there is no sample within 25% of the L value reached by this optimum. Some parameter combinations in the vicinity of this optimum yield a behavior reminiscent of C3, with an extremely rapid RE conversion and then a strongly decaying RE current.
Thus, even though this optimum performs better than the other three cases in 4D, it should not be targeted in an experiment, due to the lack of robustness. Finally, we comment on the numerical efficiency of the Bayesian approach. We estimate that to achieve a similar level of resolution in the regions that contain samples within 25% of the optima would require more than 12 000 points in 2D and 800 000 points in 4D, should we decide to use equidistant scans over the entire search domains. These estimates are based on the average minimum distance between samples (in the search space mapped to the unit hypercube). As a reference, we use only 420 samples in both the 2D and the 4D optimizations. In uninteresting regions with high cost function values the resolution is much lower. The Bayesian results can be confirmed with calculations on a uniform grid. In a detailed study of a similar problem presented by Bergström & Halldestam (2022), it was shown that the mean function obtained by the Gaussian process regression accurately recovered the cost function calculated on a uniform grid in the vicinity of the optimum, and showed good agreement even in regions with high cost values. In terms of finding the global optimum, the Bayesian method outperformed Powell's method (Powell 1964).

4. Discussion and conclusions

We have used Bayesian optimization to find optimal parameters characterizing massive material injection. This is a multi-objective problem where the cost function we aim to minimize accounts for the maximum RE current, the transported heat loss fraction, the CQ time, and the final ohmic current. Bayesian optimization is well suited for this problem, as it is a computationally efficient method for finding global optima, providing also uncertainty quantification. In the disruption context, it has also been used recently for validation of simulations of a CQ in a JET plasma discharge with an argon induced disruption (Järvinen et al. 2022).
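One plausible reading of the resolution estimate quoted above is sketched below: take the mean nearest-neighbour distance d of the samples in the unit hypercube, and note that an equidistant scan with the same spacing would need roughly (1/d)^D points. The estimator and its name are our assumptions, not taken from the text:

```python
import numpy as np

def equivalent_grid_points(samples):
    """Rough grid-size estimate from the mean nearest-neighbour distance of
    samples in the unit hypercube: a uniform grid with the same local
    spacing d needs about (1/d)^D points."""
    samples = np.asarray(samples, dtype=float)
    n, dim = samples.shape
    # pairwise Euclidean distances; exclude the zero self-distance
    diff = samples[:, None, :] - samples[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    d_mean = dist.min(axis=1).mean()
    return (1.0 / d_mean) ** dim
```

Applied to a regular 10 × 10 lattice in the unit square (spacing 1/9), this returns 81, i.e. it recovers the order of magnitude of the grid size, which is the intended use.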
We find that RE currents of several megaamperes are predicted even in the optimal case. Magnetic perturbations strongly affect the RE dynamics by inducing transport losses of heat and of seed REs. The optimization is then, to a large degree, searching for a balance between a sufficiently low transported heat loss - typically favoring large injected impurity quantities and low magnetic perturbation amplitudes - and a tolerable final RE current - favoring the opposite conditions. The importance of such a balance has previously been pointed out by Svenningsson et al. (2021). In each optimization we kept the normalized magnetic perturbation level constant, in the range 0.2-0.5%. This range of magnetic perturbation levels is motivated by MHD simulations. We note that higher values are also reached in some recent studies (Nardon et al. 2021; Särkimäki et al. 2020), which, based on the trends we observe in Fig. 5, is not expected to have a significant effect on the final RE current, while it would impact the transported heat loss fraction negatively. The optimum is generally found at a rather high injected deuterium density, n_inj,D ≈ 10^22 m^-3, and at a lower neon density, n_inj,Ne ≈ 3 × 10^18 m^-3. The sensitivity of the optimum to an inaccuracy in the injected deuterium quantity is much stronger than to that of the injected neon. The strong sensitivity to the deuterium quantity is due to the possibility of extremely rapid cooling through dilution and subsequent radiation at sufficiently high deuterium densities, which leads to an effective seed generation. In addition, deuterium recombination increases steeply above a certain deuterium density, allowing the already large seed to avalanche more effectively. We also find that neon deposited at the edge is advantageous, where it can produce sufficient radiative heat losses without making the avalanche RE generation problem more severe, for which the conditions are typically more favorable in the core.
Whether an outward peaking impurity density can be sustained long enough to see these benefits can only be answered using higher fidelity simulations. In this sense, our 4D optimization results can be considered as optimistic bounds. We point out the importance of choosing the wall radius carefully, as it determines the magnetic energy reservoir for RE generation; a tightly fitted conducting wall may lead to too optimistic results concerning the maximum RE current (yielding 1 MA instead of 4 MA in our example). As we allow for activated RE seed generation mechanisms, we cannot find parameter regions where all objectives fall within their respective tolerable ranges; we see, however, that this may not need to be the case with non-activated seed sources only. The megaampere-scale RE currents predicted even in the optimal scenarios are concerning, thus these results should prompt further studies accounting for additional effects that can impact RE current generation. The most important effects to consider are: 1) magnetohydrodynamic and kinetic instabilities, 2) vertical displacement and the associated interaction of the current-carrying plasma column with the wall, 3) the possibility of magnetic surface re-healing taking place significantly later than the end of the TQ, and 4) the possible disappearance of closed flux surfaces below a finite - still megaampere-level - plasma current. In addition, the dynamics of the injection - which is not resolved here - has a direct impact on the transported heat fraction, and more generally it may affect the temperature evolution and in turn the RE dynamics (mostly the Dreicer and hot-tail seed generation; as such, it is expected to be more consequential in non-activated operation). Employing this Bayesian framework for the optimization of the more directly accessible parameters describing the injection (for instance the composition and timing of the injected pellets in shattered pellet injection) is thus a natural next step to pursue.
The results are quite robust with respect to the choice of the cost function. The most important trade-off between the various figures of merit appears between achieving a low runaway current and a low transported heat fraction. For instance, in the δB/B = 0.3% 2D case, changing the weight of η_cond in the cost function by ±10% moves the optimum by ±1.5% in n_Ne,inj, and by 0.4% in n_D,inj. These figures are calculated relative to the extent of the 10% neighborhood of the optimum on a logarithmic scale (i.e., the size of the corresponding point cloud in Fig. 6a). The lower bound of the 10% neighborhood of the optimum changes by ±5%, while the other bounds change by 1% or less. The functional form and weight of the various components in the cost function are ultimately chosen by the user. Currently this arbitrariness of the weights cannot be fully eliminated, partly because detailed knowledge about the (monetary) cost of a given value of a figure of merit is lacking, and such estimated figures may never be available. In addition, the current modelling provides too coarse information on the outcome of a given scenario. Indeed, RE beams with the same RE current may cause serious damage, or no detectable effect at all, depending on how the beam is lost to the wall. Recent results indicate that a combination of a low impurity concentration bulk plasma and large-scale magnetohydrodynamic instabilities may enable termination of megaampere-level RE currents without damage to the wall (Reux et al. 2021; Paz-Soldan et al. 2021).

Declaration of Interests

The authors report no conflict of interest.

Appendix A. Simulation details

The magnetic geometry and the initial plasma temperature and current density profiles are shown in figure 8a and b-c, respectively. The parallel current density component j is taken at the outboard mid-plane. The magnetic geometry uses a model equilibrium parametrization similar to the Miller equilibrium (Miller et al.
1998), with the profiles of elongation, triangularity, Shafranov shift and toroidal magnetic field variation being identical to those shown in Appendix A of (Pusztai et al. 2022). The on-axis value is B_0 = 5.3 T. The magnetic equilibrium is not evolved self-consistently in the simulation; instead, these shaping parameters, as well as the plasma position, are held fixed throughout the simulation. The Dream simulations are performed in fluid mode. The Dreicer RE generation rate is calculated using a neural network (Hesslow et al. 2019b), which takes effects of partial screening into account. Compton scattering and tritium decay seed sources are accounted for as in (Vallhagen et al. 2020). The hot-tail seed is calculated using the model described in Appendix C.4 of (Hoppe et al. 2021). The avalanche growth rate accounts for partial screening (Hesslow et al. 2019a). Trapping effects are accounted for in the conductivity, through the model by Redl et al. (2021), and in the avalanche and hot-tail RE generation rates. The bulk electron temperature evolution is calculated from the time-dependent energy balance throughout the simulation, according to Eq. (43) in (Hoppe et al. 2021), accounting for ohmic heating, line and recombination radiation and bremsstrahlung, as well as radial heat transport. Since the RE population is not resolved in momentum space, the kinetic term - in Eq. (44) of (Hoppe et al. 2021) - describing heating by REs is zero. However, the latter process is approximately accounted for by a term j_RE E_c, with E_c = e^3 n_e ln Λ_c/(4π ε_0^2 m_e c^2) the critical electric field, ε_0 the vacuum permittivity, and m_e the electron mass. We evolve the temperatures of the ion charge states separately, according to Eq. (45) in (Hoppe et al. 2021), which accounts for collisional heat exchange among the various charge states as well as with the electrons. We neglect the current density profile flattening (Pusztai et al. 2022) associated with the flux surface breakup.
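To illustrate the scale of the critical field quoted above, a minimal evaluation of the Connor-Hastie expression can be written as follows; the function name is ours, and the default Coulomb logarithm is a typical assumed value, not taken from the text:

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg
C = 299792458.0              # speed of light, m/s

def critical_field(n_e, coulomb_log=15.0):
    """Connor-Hastie critical electric field,
    E_c = e^3 n_e ln(Lambda) / (4 pi eps0^2 m_e c^2), in V/m."""
    return (E_CHARGE ** 3 * n_e * coulomb_log
            / (4.0 * math.pi * EPS0 ** 2 * M_E * C ** 2))
```

For a typical post-injection free electron density of n_e = 10^20 m^-3 this gives E_c of the order of 0.1 V/m, far below the tens of V/m reached by the parallel electric field during the CQ.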
Opacity effects have been shown to have a significant effect on the post-TQ plasma temperature and indirectly on the avalanche gain (Vallhagen et al. 2022). These effects are taken into account by using ionisation, recombination and radiation rates for the hydrogen isotopes that are based on the assumption of the plasma being opaque to Lyman radiation. The simulations use 20 radial grid cells. During the TQ, which takes a few milliseconds, the solver uses adaptive time stepping with time steps estimated from the relative change of the free electron density within a time step (referred to as the ionization-based adaptive time stepping), with allowed minimum and maximum time steps of 10^-11 s and 2 × 10^-6 s. The rest of the 150 ms long simulation uses 2 × 10^4 to 2 × 10^5 equidistant time steps, as needed for convergence.

Appendix B. Details of Bayesian optimization

After n steps our sample data D_n := (X_n, Y_n) is a collection of control vectors X_n = {x_i} and the corresponding function outputs Y_n = {f(x_i)}, where the function f runs Dream to obtain the four objectives and combines them using the cost function L. The basic idea of Bayesian optimization is that f(x) is a random variable for each x and that, given the observations D_n, the joint distribution of all these random variables is a Gaussian process. The corresponding mean and covariance functions are defined as the expected values

µ(x) = E[f(x)], (B1)

k(x, x′) = E[(f(x) − µ(x))(f(x′) − µ(x′))]. (B2)

In our case the Dream simulation runs are deterministic, which means that the function µ will exactly coincide with f on the samples observed so far. In other points the Gaussian

Figure 1. The estimated mean of the cost function of Bayesian optimizations in the n_D,inj-n_Ne,inj space for various normalized magnetic perturbation amplitudes. The color code varies from blue to red tones, representing favourable and unfavourable values of µ. a) δB/B = 0.2%, b) 0.3%, c) 0.4%, d) 0.5%. Black stars indicate the locations of the optima.
Gray dots show the samples taken; note that these are more numerous in the vicinity of the optima. Circles with case identifiers in panel b) indicate the cases discussed in Sec. 3.2.

…2a, and RE current density at 60 ms in Fig. 2c (dashed line). The electric field exceeds the effective critical field E_c^eff, calculated as in Appendix C2 of (Hoppe et al. 2021), first in the edge, then it grows to an approximately radially constant value around 30 V/m, …

Figure 2. The best performing case for the optimization in the n_D,inj–n_Ne,inj space, for δB/B = 0.3%. a) The time evolution of the total plasma current (dashed), and its ohmic (solid) and RE (dash-dotted) components. b)–d) show radial profiles of quantities at a few time points, indicated by the respective figure legends, with increasing time corresponding to darker colors. b) Electron temperature. c) Ohmic (solid) and RE (dashed) current density. d) Parallel electric field (solid). The effective critical electric field is also indicated for t = 60 ms (dotted); note that it does not vary appreciably over time.

Figure 3. Time evolution of quantities of interest for the high n_D,inj representative cases: C1–C3. Line color darkens and dashing shortens with increasing case number, and case numbers are indicated with callouts. a) Runaway electron current. b) Electron temperature at mid-radius. c) Electric field normalized to the critical electric field at mid-radius. (Note the longer time range plotted in panel a.)

Figure 4. Time evolution of quantities of interest for the low n_D,inj representative cases: C4–C6. Line color darkens and dashing shortens with increasing case number, and case numbers are indicated with callouts. a) Runaway electron current. b) Electron temperature at mid-radius. c) Electric field normalized to the critical electric field at mid-radius. (Note the longer time range plotted in panel a.)

Figure 5.
Variation of (a) the maximum RE current, (b) the transported heat loss fraction, and (c) the corresponding cost function in optimizations, for a range of δB/B values, when optimizing only for injected densities (circle markers, blue short-dashed curve) and when including profile variation as well in the optimization (squares, red long-dashed). In panels a) and b), below the thin solid line the values are considered tolerable. In panel a), simulations with the parameters corresponding to the 2D optimum at δB/B = 0.3% but without activated sources are indicated with a black rectangle marker, and a simulation with a reduced wall radius of 2.15 m is shown with a black asterisk.

Figure 6. Scatter plot of input parameters for samples with the lowest L values in each optimization case. When (a) optimizing only for injected densities (2D) they represent an additional 10% range above the optimum, and when (b–d) including profile variation as well in the optimization (4D), they represent a 25% range. Darkening color indicates increasing value of δB/B, as given in panel b), and the optima are indicated by ⊗ markers. (a–b) Concentration space, (c–d) correlating concentration with the profile parameter of an injected species. Note that in the 4D, δB/B = 0.3% case there is no sample within the 25% range above the optimum.

Figure 8. a) Magnetic geometry with flux surfaces (gray curves); the outermost modeled flux surface r = a is indicated by the thick blue line, and the effective wall is shown in red. The rest of the panels show initial plasma parameter profiles. b) Electron temperature. c) Current density.

Case ID | n_D,inj [10^20 m^−3] | n_Ne,inj [10^18 m^−3] | I_RE^max [MA] | I_ohm^fin [MA] | t_CQ [ms] | η_cond [%] | L (column headings of Table 1)

Table 1.
Characteristic cases from the n_D,inj–n_Ne,inj optimization landscape for δB/B = 0.3%, their four figures of merit and the corresponding cost function values. The cases are marked in Fig. 1b. C1 is the optimum.

Case | n_D,inj | n_Ne,inj | I_RE^max | I_ohm^fin | t_CQ | η_cond | L
C1   | 93.9    | 2.88     | 4.2      | 0.33      | 59   | 8.9    | 39
C2   | 160     | 0.032    | 4.8      | 0.33      | 54   | 43     | 88
C3   | 316     | 2.88     | 8.2      | 0.0007    | 5    | 1.4    | 156
C4   | 1       | 5.01     | 6.3      | 0.059     | 88   | 80     | 122
C5   | 1       | 31.6     | 8.1      | 0.092     | 26   | 72     | 226
C6   | 1       | 100      | 8.9      | 0.159     | 15   | 23     | 163

Funding. This work was supported by the Swedish Research Council (Dnr. 2018-03911 and Dnr. 2022-02862) and in part by the Swiss National Science Foundation. The work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 – EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.

† The magnetohydrodynamic stability of the current density is not monitored in the Dream simulations; a hollow current profile might well be unstable to macroscopic plasma instabilities; this aspect of the simulated current evolution is outside the scope of this study.

Acknowledgements. The authors are grateful to N. Botta, N. Smallbone, E. Berger, S. Newton, E. Nardon, J. Artola and M. Lehnen for fruitful discussions.

The Gaussian process model provides a smooth interpolation of the cost at points not yet sampled (something we used to visualize the cost function in Fig. 1). The covariance between two points is modeled by the Matérn kernel (Matérn 1986; Stein 1999)

k(x, x′) = (2^(1−ζ)/Γ(ζ)) (√(2ζ) d(x, x′))^ζ K_ζ(√(2ζ) d(x, x′)),

where Γ denotes the Gamma function and K_ζ is the modified Bessel function of the second kind. We use a fixed smoothness parameter of ζ = 5/2.
The distance between two points in the D-dimensional parameter space is calculated as

d(x, x′) = [ Σ_{i=1}^{D} (x_i − x′_i)² / θ_i² ]^{1/2},

with the correlation length parameters θ_i (which are updated after each new sampling to maximize the marginal likelihood of D_n).

We use the expected improvement EI_n(x) acquisition function to find the most promising next point to sample. The following thought experiment (Frazier 2018) illustrates this acquisition strategy. Let f*_n be the minimal value of f based on the current sample, and let x*_n be the corresponding input. If the optimization procedure is terminated at this sample size, x*_n would be returned as the best estimate of the actual optimum location x*. Suppose that an additional evaluation is to be performed at any point x, yielding f(x). After this, the minimal observed value of f is either f(x), if f(x) < f*_n, or remains f*_n otherwise. We might define the improvement gained by this additional evaluation to be f*_n − f(x) in the former case (the amount by which the best value found so far could be decreased) and 0 in the latter. We aim to maximize this improvement, but f(x) is, as of yet, still unknown. Instead, the next sample location is chosen to maximize the expectation value of the improvement, given the information at hand, that is,

EI_n(x) = E_n[max(f*_n − f(x), 0)],

where E_n[·] should be understood as the expectation under the posterior distribution, given the previously evaluated D_n.

REFERENCES

Bergström, Hannes & Halldestam, Peter 2022 Optimization of tokamak disruption scenarios: Avoidance of runaway electrons and excessive wall loads. Master's thesis, Chalmers University of Technology.
Boozer, Allen H. 2012 Theory of tokamak disruptions.
Physics of Plasmas 19 (5), 058101.
Breizman, Boris N. 2014 Marginal stability model for the decay of runaway electron current. Nuclear Fusion 54 (7), 072002.
Breizman, Boris N., Aleynikov, Pavel, Hollmann, Eric M. & Lehnen, Michael 2019 Physics of runaway electrons in tokamaks. Nuclear Fusion 59 (8), 083001.
Brochu, Eric, Cora, Vlad M. & de Freitas, Nando 2010 A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. https://arxiv.org/abs/1012.2599.
Frazier, Peter I. 2018 A tutorial on Bayesian optimization. https://arxiv.org/abs/1807.02811.
Fülöp, T., Helander, P., Vallhagen, O., Embréus, O., Hesslow, L., Svensson, P., Creely, A. J., Howard, N. T. & Rodriguez-Fernandez, P. 2020 Effect of plasma elongation on current dynamics during tokamak disruptions. Journal of Plasma Physics 86 (1), 474860101.
Hender, T.C., Wesley, J.C., Bialek, J., Bondeson, A., Boozer, A.H., Buttery, R.J., Garofalo, A., Goodman, T.P., Granetz, R.S., Gribov, Y., Gruber, O., Gryaznevich, M., Giruzzi, G., Günter, S., Hayashi, N., Helander, P., Hegna, C.C., Howell, D.F., Humphreys, D.A., Huysmans, G.T.A., Hyatt, A.W., Isayama, A., Jardin, S.C., Kawano, Y., Kellman, A., Kessel, C., Koslowski, H.R., La Haye, R.J., Lazzaro, E., Liu, Y.Q., Lukash, V., Manickam, J., Medvedev, S., Mertens, V., Mirnov, S.V., Nakamura, Y., Navratil, G., Okabayashi, M., Ozeki, T., Paccagnella, R., Pautasso, G., Porcelli, F., Pustovitov, V.D., Riccardo, V., Sato, M., Sauter, O., Schaffer, M.J., Shimada, M., Sonato, P., Strait, E.J., Sugihara, M., Takechi, M., Turnbull, A.D., Westerhof, E., Whyte, D.G., Yoshino, R., Zohm, H. & the ITPA MHD, Disruption and Magnet Group 2007 Chapter 3: MHD stability, operational limits and disruptions. Nuclear Fusion 47 (6), S128–S202.
Hesslow, L., Embréus, O., Vallhagen, O. & Fülöp, T. 2019a Influence of massive material injection on avalanche runaway generation during tokamak disruptions.
Nuclear Fusion 59 (8), 084004.
Hesslow, L., Unnerfelt, L., Vallhagen, O., Embréus, O., Hoppe, M., Papp, G. & Fülöp, T. 2019b Evaluation of the Dreicer runaway growth rate in the presence of high-Z impurities using a neural network. Journal of Plasma Physics 85, 475850601.
Hollmann, E. M., Aleynikov, P. B., Fülöp, T., Humphreys, D. A., Izzo, V. A., Lehnen, M., Lukash, V. E., Papp, G., Pautasso, G., Saint-Laurent, F. & Snipes, J. A. 2015 Status of research toward the ITER disruption mitigation system. Physics of Plasmas 22 (2), 021802.
Hoppe, Mathias, Embreus, Ola & Fülöp, Tünde 2021 DREAM: A fluid-kinetic framework for tokamak disruption runaway electron simulations. Computer Physics Communications 268, 108098.
Hu, D., Nardon, E., Hoelzl, M., Wieschollek, F., Lehnen, M., Huijsmans, G.T.A., van Vugt, D. C., Kim, S.-H., JET contributors & JOREK team 2021 Radiation asymmetry and MHD destabilization during the thermal quench after impurity shattered pellet injection. Nuclear Fusion 61 (2), 026015.
Järvinen, A.E., Fülöp, T., Hirvijoki, E., Hoppe, M., Kit, A. & Åström, J. 2022 Bayesian approach for validation of runaway electron simulations. Journal of Plasma Physics 88 (6), 905880612.
Lehnen, M., Aleynikova, K., Aleynikov, P.B., Campbell, D.J., Drewelow, P., Eidietis, N.W., Gasparyan, Yu., Granetz, R.S., Gribov, Y., Hartmann, N., Hollmann, E.M., Izzo, V.A., Jachmich, S., Kim, S.-H., Kočan, M., Koslowski, H.R., Kovalenko, D., Kruezi, U., Loarte, A., Maruyama, S., Matthews, G.F., Parks, P.B., Pautasso, G., Pitts, R.A., Reux, C., Riccardo, V., Roccella, R., Snipes, J.A., Thornton, A.J. & de Vries, P.C. 2015 Disruptions in ITER and strategies for their control and mitigation. Journal of Nuclear Materials 463, 39–48.
Lehnen, M. & the ITER DMS task force 2021 The ITER disruption mitigation system: design progress and design validation. Presented at Theory and Simulation of Disruptions Workshop, PPPL.
Matérn, Bertil 1986 Spatial Variation. Springer New York.
Miller, R. L., Chu, M. S., Greene, J. M., Lin-Liu, Y. R. & Waltz, R. E. 1998 Noncircular, finite aspect ratio, local equilibrium model. Physics of Plasmas
5 (4), 973–978.
Nardon, E., Hu, D., Artola, F. J., Bonfiglio, D., Hoelzl, M., Boboc, A., Carvalho, P., Gerasimov, S., Huijsmans, G., Mitterauer, V., Schwarz, N., Sun, H. & the JOREK team 2021 Thermal quench and current profile relaxation dynamics in massive-material-injection-triggered tokamak disruptions. Plasma Physics and Controlled Fusion 63 (11), 115006.
Nogueira, Fernando 2014 Bayesian Optimization: Open source constrained global optimization tool for Python. https://github.com/fmfn/BayesianOptimization.
Paz-Soldan, C., Reux, C., Aleynikova, K., Aleynikov, P., Bandaru, V., Beidler, M., Eidietis, N., Liu, Y.Q., Liu, C., Lvovskiy, A., Silburn, S., Bardoczi, L., Baylor, L., Bykov, I., Carnevale, D., del-Castillo-Negrete, D., Du, X., Ficker, O., Gerasimov, S., Hoelzl, M., Hollmann, E., Jachmich, S., Jardin, S., Joffrin, E., Lasnier, C., Lehnen, M., Macusova, E., Manzanares, A., Papp, G., Pautasso, G., Popovic, Z., Rimini, F., Shiraki, D., Sommariva, C., Spong, D., Sridhar, S., Szepesi, G., Zhao, C., the DIII-D Team & JET Contributors 2021 A novel path to runaway electron mitigation via deuterium injection and current-driven MHD instability. Nuclear Fusion 61 (11), 116058.
Powell, M. J. D. 1964 An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal 7 (2), 155–162.
Pusztai, István, Hoppe, Mathias & Vallhagen, Oskar 2022 Runaway dynamics in tokamak disruptions with current relaxation. Journal of Plasma Physics 88 (4), 905880409.
Putvinski, S., Fujisawa, N., Post, D., Putvinskaya, N., Rosenbluth, M.N. & Wesley, J. 1997 Impurity fueling to terminate tokamak discharges. Journal of Nuclear Materials 241–243, 316–321.
Rasmussen, Carl Edward & Williams, Christopher K. I. 2005 Gaussian Processes for Machine Learning. The MIT Press.
Rechester, A. B. & Rosenbluth, M. N.
1978 Electron heat transport in a tokamak with destroyed magnetic surfaces. Phys. Rev. Lett. 40, 38–41.
Redl, A., Angioni, C., Belli, E. & Sauter, O. 2021 A new set of analytical formulae for the computation of the bootstrap current and the neoclassical conductivity in tokamaks. Physics of Plasmas 28 (2), 022502.
Reux, Cédric, Paz-Soldan, Carlos, Aleynikov, Pavel, Bandaru, Vinodh, Ficker, Ondrej, Silburn, Scott, Hoelzl, Matthias, Jachmich, Stefan, Eidietis, Nicholas, Lehnen, Michael, Sridhar, Sundaresan & JET contributors 2021 Demonstration of safe termination of megaampere relativistic electron beams in tokamaks. Phys. Rev. Lett. 126, 175001.
Stein, Michael L. 1999 Interpolation of Spatial Data. Springer New York.
Svenningsson, Ida, Embreus, Ola, Hoppe, Mathias, Newton, Sarah L. & Fülöp, Tünde 2021 Hot-tail runaway seed landscape during the thermal quench in tokamaks. Physical Review Letters 127, 035001.
Svensson, P., Embreus, O., Newton, S. L., Särkimäki, K., Vallhagen, O. & Fülöp, T.
2021 Effects of magnetic perturbations and radiation on the runaway avalanche. Journal of Plasma Physics 87, 905870207.
Särkimäki, Konsta, Embreus, Ola, Nardon, Eric, Fülöp, Tünde & JET contributors 2020 Assessing energy dependence of the transport of relativistic electrons in perturbed magnetic fields with orbit-following simulations. Nuclear Fusion 60 (12), 126050.
Vallhagen, O., Embreus, O., Pusztai, I., Hesslow, L. & Fülöp, T. 2020 Runaway dynamics in the DT phase of ITER operations in the presence of massive material injection. Journal of Plasma Physics 86, 475860401.
Vallhagen, O., Pusztai, I., Hoppe, M., Newton, S.L. & Fülöp, T. 2022 Effect of two-stage shattered pellet injection on tokamak disruptions. Nuclear Fusion 62 (11), 112004.
Max-Information, Differential Privacy, and Post-Selection Hypothesis Testing

Ryan Rogers (Department of Applied Mathematics and Computational Science, University of Pennsylvania)
Aaron Roth (Department of Computer and Information Sciences, University of Pennsylvania)
Adam Smith (Computer Science and Engineering Department, Pennsylvania State University)
Om Thakkar (Computer Science and Engineering Department, Pennsylvania State University)

arXiv:1604.03924 · DOI: 10.1109/focs.2016.59

Abstract. In this paper, we initiate a principled study of how the generalization properties of approximate differential privacy can be used to perform adaptive hypothesis testing, while giving statistically valid p-value corrections. We do this by observing that the guarantees of algorithms with bounded approximate max-information are sufficient to correct the p-values of adaptively chosen hypotheses, and then by proving that algorithms that satisfy (ε, δ)-differential privacy have bounded approximate max-information when their inputs are drawn from a product distribution. This substantially extends the known connection between differential privacy and max-information, which previously was only known to hold for (pure) (ε, 0)-differential privacy. It also extends our understanding of max-information as a partially unifying measure controlling the generalization properties of adaptive data analyses. We also show a lower bound, proving that (despite the strong composition properties of max-information), when data is drawn from a product distribution, (ε, δ)-differentially private algorithms can come first in a composition with other algorithms satisfying max-information bounds, but not necessarily second if the composition is required to itself satisfy a nontrivial max-information bound. This, in particular, implies that the connection between (ε, δ)-differential privacy and max-information holds only for inputs drawn from product distributions, unlike the connection between (ε, 0)-differential privacy and max-information.
April 14, 2016
Introduction

Adaptive Data Analysis refers to the reuse of data to perform analyses suggested by the outcomes of previously computed statistics on the same data. It is the common case when exploratory data analysis and confirmatory data analysis are mixed together, and both are conducted on the same dataset. It models both well-defined, self-contained tasks, like selecting a subset of variables using the LASSO and then fitting a model to the selected variables, and also much harder-to-specify sequences of analyses, such as those that occur when the same dataset is shared and reused by multiple researchers. Recently, two lines of work have arisen, in statistics and computer science respectively, aimed at a rigorous statistical understanding of adaptive data analysis. By and large, the goal in the statistical literature (often called "selective" or "post-selection" inference [BBB+13]) is to derive valid hypothesis tests and tight confidence intervals around parameter values that arise from very specific analyses, such as LASSO model selection followed by least squares regression (see e.g. [FST14, LSST13]). In contrast, the second line of work has aimed for generality (at the possible expense of giving tight application-specific bounds). This second literature imposes conditions on the algorithms performing each stage of the analysis, and makes no other assumptions on how, or in what sequence, the results are used by the data analyst. Two algorithmic constraints that have recently been shown to guarantee that future analyses will be statistically valid are differential privacy [DFH+15b, BNS+16] and bounded output description length, which are partially unified by a measure of information called max-information [DFH+15a].
This paper falls into the second line of research: specifically, we extend the connection made in [DFH+15b, BNS+16] between differential privacy and the adaptive estimation of low-sensitivity queries to a more general setting that includes adaptive hypothesis testing with statistically valid p-values. Our main technical contribution is a quantitatively tight connection between differential privacy and max-information. Max-information is a measure of correlation, similar to Shannon's mutual information, which allows bounding the change in the conditional probability of events relative to their a priori probability. Specifically, we extend a bound on the max-information of (ε, 0)-differentially private algorithms, due to [DFH+15a], to the much larger class of (ε, δ)-differentially private algorithms.

Post-Selection Hypothesis Testing

To illustrate an application of our results, we consider a simple model of one-sided hypothesis tests on real-valued test statistics. Let X denote a data domain. A dataset x consists of n elements in X: x ∈ X^n. A hypothesis test is defined by a test statistic φ_i : X^n → R, where we use i to index different test statistics. Given an output a = φ_i(x), together with a distribution P over the data domain, the p-value associated with a and P is simply the probability of observing a value of the test statistic that is at least as extreme as a, assuming the data was drawn independently from P: p_i^P(a) = Pr_{X∼P^n}[φ_i(X) ≥ a]. Note that there may be multiple distributions P over the data that induce the same distribution over the test statistic. With each test statistic φ_i, we associate a null hypothesis H_0^(i) ⊆ ∆(X), which is simply a collection of such distributions. The p-values are always computed with respect to a distribution P ∈ H_0^(i), and hence from now on, we elide the dependence on P and simply write p_i(a) to denote the p-value of a test statistic φ_i evaluated at a.
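The p-value p_i^P(a) = Pr_{X∼P^n}[φ_i(X) ≥ a] can be estimated by simulating the null distribution. A minimal sketch follows; the mean test statistic and the standard-normal null used here are illustrative choices, not examples from the paper.

```python
import random
import statistics

def p_value(phi, a, sample_null, n, trials=20_000, seed=0):
    """Monte Carlo estimate of Pr[phi(X) >= a] for X ~ P^n under the null."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = [sample_null(rng) for _ in range(n)]
        if phi(x) >= a:
            hits += 1
    return hits / trials

phi = statistics.mean                              # illustrative test statistic
p = p_value(phi, 0.0, lambda rng: rng.gauss(0, 1), n=50)
# the mean of 50 standard normals exceeds 0 about half the time
assert 0.45 < p < 0.55
```

For a continuous test statistic, repeating this with a drawn from the null itself would produce p-values uniform on [0, 1], which is the validity property used below.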
The goal of a hypothesis test is to reject the null hypothesis if the data is not likely to have been generated from the proposed model, that is, if the underlying distribution from which the data were drawn was not in H_0^(i). By definition, if X truly is drawn from P^n for some P ∈ H_0^(i), then p_i(φ_i(X)) is uniformly distributed over [0, 1]. A standard approach to hypothesis testing is to pick a significance level α ∈ [0, 1] (often α = 0.05), compute the value of the test statistic a = φ_i(X), and then reject the null hypothesis if p_i(a) ≤ α. Under this procedure, the probability of incorrectly rejecting the null hypothesis, i.e., of rejecting the null hypothesis when X ∼ P^n for some P ∈ H_0^(i), is at most α. An incorrect rejection of the null hypothesis is called a false discovery. The discussion so far presupposes that φ_i, the test statistic in question, was chosen independently of the dataset X. Let T denote a collection of test statistics, and suppose that we select a test statistic using a data-dependent selection procedure A : X^n → T. If φ_i = A(X), then rejecting the null hypothesis when p_i(φ_i(X)) ≤ α may result in a false discovery with probability much larger than α (indeed, this kind of naive approach to post-selection inference is suspected to be a primary culprit behind the prevalence of false discovery in empirical science [GL14, WL16, SNS11]). This is because even if the null hypothesis is true (X ∼ P^n for some P ∈ H_0^(i)), the distribution on X conditioned on φ_i = A(X) having been selected need not be P^n. Our goal in studying valid post-selection hypothesis testing is to find a valid p-value correction function γ : [0, 1] → [0, 1], which we define as follows:

Definition 1.1 (Valid p-value Correction Function). A function γ : [0, 1] → [0, 1] is a valid p-value correction function for a selection procedure A : X^n → T if for every significance level α ∈ [0, 1], the procedure:

1.
Select a test statistic φ_i = A(X) using the selection procedure A. 2. Reject the null hypothesis H_0^(i) if p_i(φ_i(X)) ≤ γ(α). This two-step procedure has probability at most α of resulting in a false discovery. Necessarily, to give a nontrivial correction function γ, we will need to assume that the selection procedure A satisfies some useful property. In this paper, we focus on differential privacy, which is a measure of algorithmic stability, and more generally, max-information, which is defined in the next subsection. Differential privacy is of particular interest because it is closed under postprocessing and satisfies strong composition properties. This means that, if the test statistics in T are themselves differentially private, then the selection procedure A can represent the decisions of a worst-case data analyst, who chooses which hypothesis tests to run in arbitrary ways as a function of the outcomes of previously selected tests. Finally, we note that, despite the fact that previous works [DFH+15b, BNS+16] are explicitly motivated by the problem of false discovery in empirical science, most of the technical results to date have been about estimating the means of adaptively chosen predicates on the data (i.e., answering statistical queries) [DFH+15b], and more generally, estimating the values of low-sensitivity (i.e., Lipschitz continuous) functions on the dataset [BNS+16, RZ16, WLF16]. These kinds of results do not apply to the problem of adaptively performing hypothesis tests while generating statistically valid p-values, because p-values are by definition not low-sensitivity statistics. See Appendix B for a detailed discussion.
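The false-discovery inflation from naive selection is easy to quantify in the simplest case: if an analyst runs m independent tests whose null p-values are uniform on [0, 1] and reports only the most significant one, the selected p-value falls below α with probability 1 − (1 − α)^m, far above α. A small simulation of this effect (our own illustration, not from the paper):

```python
import random

def naive_selection_rate(m, alpha, trials=5000, seed=1):
    """Fraction of trials in which the smallest of m independent uniform
    (i.e., null) p-values falls below the significance level alpha."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if min(rng.random() for _ in range(m)) <= alpha)
    return hits / trials

r1 = naive_selection_rate(m=1, alpha=0.05)    # close to 0.05
r20 = naive_selection_rate(m=20, alpha=0.05)  # close to 1 - 0.95**20, about 0.64
```

With m = 20 candidate tests, reporting the best one yields a "significant" result under the null roughly 64% of the time, which is the phenomenon a valid p-value correction function must counteract.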
There is one constraint on the selection procedure A that does allow us to give nontrivial p-value corrections: that A should have bounded max-information. (A condition of bounded mutual information has also been considered [RZ16], but as we discuss in Appendix C, it is possible to obtain a strictly stronger guarantee by instead reasoning via max-information.) Max-information is a measure introduced by [DFH+15a], which we discuss next.

Max-Information (and p-values)

Given two (arbitrarily correlated) random variables X, Z, we let X ⊗ Z denote a random variable (in a different probability space) obtained by drawing independent copies of X and Z from their respective marginal distributions. We write log to denote logarithms base 2.

Definition 1.2 (Max-Information [DFH+15a]). Let X and Z be jointly distributed random variables over the domain (X, Z). The max-information between X and Z, denoted by I_∞(X; Z), is the minimal value of k such that for every x in the support of X and z in the support of Z, we have Pr[X = x | Z = z] ≤ 2^k Pr[X = x]. Alternatively,

I_∞(X; Z) = log sup_{(x,z) ∈ (X,Z)} Pr[(X, Z) = (x, z)] / Pr[X ⊗ Z = (x, z)].

The β-approximate max-information between X and Z is defined as

I_∞^β(X; Z) = log sup_{O ⊆ X×Z : Pr[(X,Z) ∈ O] > β} (Pr[(X, Z) ∈ O] − β) / Pr[X ⊗ Z ∈ O].

We say that an algorithm A : X^n → Y has β-approximate max-information of k, denoted I_∞^β(A, n) ≤ k, if for every distribution S over elements of X^n, we have I_∞^β(X; A(X)) ≤ k when X ∼ S. We say that an algorithm A : X^n → Y has β-approximate max-information of k over product distributions, written I_{∞,P}^β(A, n) ≤ k, if for every distribution P over X, we have I_∞^β(X; A(X)) ≤ k when X ∼ P^n.

It follows immediately from the definition that if an algorithm has bounded max-information, then we can control the probability of "bad events" that arise as a result of the dependence of A(X) on X: for every event O, we have Pr[(X, A(X)) ∈ O] ≤ 2^k Pr[X ⊗ A(X) ∈ O] + β.
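For random variables with small finite support, Definition 1.2 can be evaluated exactly from the joint probability mass function. A sketch (our own illustration, not from the paper):

```python
import math
from collections import defaultdict

def max_information(joint):
    """I_inf(X; Z) in bits, given a finite joint pmf {(x, z): probability}."""
    px, pz = defaultdict(float), defaultdict(float)
    for (x, z), p in joint.items():
        px[x] += p
        pz[z] += p
    # log of the worst-case ratio between joint and product-of-marginals mass
    return max(math.log2(p / (px[x] * pz[z]))
               for (x, z), p in joint.items() if p > 0)

indep = {(x, z): 0.25 for x in (0, 1) for z in (0, 1)}  # independent bits
copied = {(0, 0): 0.5, (1, 1): 0.5}                     # Z is an exact copy of X
```

Independent variables have max-information 0, while a uniform bit together with an exact copy of itself has max-information 1 bit, matching the intuition that I_∞ bounds how much observing Z can re-weight events concerning X.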
For example, if A is a data-dependent selection procedure for selecting a test statistic, we can derive a valid p-value correction function γ from a max-information bound on A:

Theorem 1.3. Let A : X^n → T be a data-dependent algorithm for selecting a test statistic such that I_{∞,P}^β(A, n) ≤ k. Then the following function γ is a valid p-value correction function for A:

γ(α) = max((α − β)/2^k, 0).

Proof. Fix a distribution P^n from which the dataset X is drawn. If (α − β)/2^k ≤ 0, then the theorem is trivial, so assume otherwise. Define O ⊆ X^n × T to be the event that A selects a test statistic for which the null hypothesis is true, but whose p-value is at most γ(α):

O = {(x, φ_i) : P ∈ H_0^(i) and p_i(φ_i(x)) ≤ γ(α)}.

Note that the event O represents exactly those outcomes for which using γ as a p-value correction function results in a false discovery. Note also that, by definition of the null hypothesis, Pr[X ⊗ A(X) ∈ O] ≤ γ(α) = (α − β)/2^k. Hence, by the guarantee that I_{∞,P}^β(A, n) ≤ k, we have that Pr[(X, A(X)) ∈ O] is at most 2^k · (α − β)/2^k + β = α.

Because of Theorem 1.3, we are interested in methods for usefully selecting test statistics using data-dependent algorithms A for which we can bound their max-information. It was shown in [DFH+15a] that algorithms which satisfy pure differential privacy also have a guarantee of bounded max-information:

Theorem 1.4 (Pure Differential Privacy and Max-Information [DFH+15a]). Let A : X^n → Y be an (ε, 0)-differentially private algorithm. Then for every β ≥ 0:

I_∞(A, n) ≤ log(e) · εn  and  I_{∞,P}^β(A, n) ≤ log(e) · (ε²n/2 + ε√(n ln(2/β)/2)).

This connection is powerful, because there is a vast collection of data analyses for which we have differentially private algorithms, including a growing literature on differentially private hypothesis tests [JS13, USF13, YFSU14, KS16, DSZ15, She15, WLK15, GLRV16].
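Theorems 1.3 and 1.4 combine into a concrete recipe: bound the max-information k of an (ε, 0)-differentially private selection procedure, then shrink the rejection threshold to γ(α). The sketch below is our own illustration; it assumes the product-distribution bound of Theorem 1.4 has the form log2(e)·(ε²n/2 + ε√(n ln(2/β)/2)) bits, and all parameter values are merely examples.

```python
import math

def pure_dp_max_info_bits(eps, n, beta):
    """Theorem 1.4 product-distribution bound (in bits), assuming the form
    log2(e) * (eps^2 * n / 2 + eps * sqrt(n * ln(2/beta) / 2))."""
    return math.log2(math.e) * (eps ** 2 * n / 2
                                + eps * math.sqrt(n * math.log(2 / beta) / 2))

def corrected_threshold(alpha, k, beta):
    """gamma(alpha) = max((alpha - beta) / 2**k, 0) from Theorem 1.3."""
    return max((alpha - beta) / 2 ** k, 0.0)

beta = 1e-6
k = pure_dp_max_info_bits(eps=0.05, n=1000, beta=beta)
gamma = corrected_threshold(alpha=0.05, k=k, beta=beta)
```

A test selected by the private procedure would then be rejected only when its p-value falls below gamma, rather than below the nominal level 0.05.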
However, there is an important gap: Theorem 1.4 holds only for pure (ε, 0)-differential privacy, and not for approximate (ε, δ)-differential privacy. Many statistical analyses can be performed much more accurately subject to approximate differential privacy, and it can be easier to analyze private hypothesis tests that satisfy approximate differential privacy, because the approximate privacy constraint is amenable to perturbations using Gaussian noise (rather than Laplace noise) [GLRV16]. Most importantly, for pure differential privacy, the privacy parameter ε degrades linearly with the number of analyses performed, whereas for approximate differential privacy, ε need only degrade with the square root of the number of analyses performed [DRV10]. Hence, if the connection between max-information and differential privacy held also for approximate differential privacy, it would be possible to perform quadratically more adaptively chosen statistical tests without requiring a larger p-value correction factor.

Our Results

In addition to the framework just described for reasoning about adaptive hypothesis testing, our main technical contribution is to extend the connection between differential privacy and max-information to approximate differential privacy. We show the following (see Section 3 for a complete statement):

Theorem 3.1 (Informal). Let A : X^n → Y be an (ε, δ)-differentially private algorithm. Then, I_{∞,P}^β(A, n) = O(ε²n + n√δ) for β = O(n√δ).

It is worth noting several things. First, this bound nearly matches the bound for max-information over product distributions from Theorem 1.4, except that Theorem 3.1 extends the connection to the substantially more powerful class of (ε, δ)-differentially private algorithms.
The bound is qualitatively tight in the sense that despite its generality, it can be used to nearly recover the tight bound on the generalization properties of differentially private mechanisms for answering low-sensitivity queries that was proven using a specialized analysis in [BNS+16] (see Appendix D for a comparison). We also only prove a bound on the max-information for product distributions on the input, and not for all distributions (that is, we bound I_{∞,P}^β(A, n) and not I_∞^β(A, n)). A bound for general distributions would be desirable, since such bounds compose gracefully [DFH+15a]. Unfortunately, a bound for general distributions based solely on (ε, δ)-differential privacy is impossible: a construction of De [De12] implies the existence of (ε, δ)-differentially private algorithms for which the max-information between input and output on arbitrary distributions is much larger than the bound in Theorem 3.1. One might nevertheless hope that bounds on the max-information under product distributions can be meaningfully composed. Our second main contribution is a negative result, showing that such bounds do not compose when algorithms are selected adaptively. Specifically, we analyze the adaptive composition of two algorithms, the first of which has a small finite range (and hence, by [DFH+15a], small bounded max-information), and the second of which is (ε, δ)-differentially private. We show that the composition of the two algorithms can be used to exactly recover the input dataset, and hence the composition does not satisfy any nontrivial max-information bound.

Further Interpretation

Although our presentation thus far has been motivated by p-values, an algorithm A with bounded max-information allows a data analyst to treat any event that is a function of the output A(x) of the algorithm "as if" it is independent of the dataset x, up to a correction factor determined by the max-information bound.
Our results thus substantially broaden the class of analyses for which approximate differential privacy promises generalization guarantees; this class was previously limited to estimating the values of low-sensitivity numeric-valued queries (and more generally, the outcomes of low-sensitivity optimization problems) [BNS+16]. Our result also further develops the extent to which max-information can be viewed as a unifying information-theoretic measure controlling the generalization properties of adaptive data analysis. Dwork et al. [DFH+15a] previously showed that algorithms that satisfy bounded output description length, and algorithms that satisfy pure differential privacy (two constraints both known individually to imply adaptive generalization guarantees), both have bounded max-information. Because bounded max-information satisfies strong composition properties, this connection implies that algorithms with bounded output description length and pure differentially private algorithms can be composed in arbitrary order and the resulting composition will still have strong generalization properties. Our result brings approximate differential privacy partially into this unifying framework. In particular, when the data is drawn from a product distribution, if an analysis that starts with an (arbitrary) approximate differentially private computation is followed by an arbitrary composition of algorithms with bounded max-information, then the resulting composition will satisfy a max-information bound. However, unlike with compositions consisting solely of bounded description length mechanisms and pure differentially private mechanisms, which can be composed in arbitrary order, in this case it is important that the approximate differentially private computation come first.
This is because, even if the dataset x is initially drawn from a product distribution, the conditional distribution on the data that results after observing the outcome of an initial computation need not be a product distribution any longer. In fact, the lower bound we prove in Section 4 is an explicit construction in which the composition of a bounded description length algorithm, followed by an approximate differentially private algorithm, can be used to exactly reconstruct a dataset drawn from a product distribution (which can in turn be used to arbitrarily overfit that dataset).

Other Related Work

Differential privacy is an algorithmic stability condition introduced by Dwork et al. [DMNS06]. Its connection to adaptive data analysis was made by Dwork et al. [DFH+15b]. A third method, compression schemes, can also guarantee validity in adaptive data analysis in the context of learning. Computational and information-theoretic lower bounds for adaptively estimating means in this framework were proven by Hardt and Ullman [HU14], and Steinke and Ullman [SU15]. Russo and Zou [RZ16] show how to bound the bias of sub-gaussian statistics selected in a data-dependent manner, in terms of the mutual information between the selection procedure and the value of the statistics. In particular (using our terminology), they show how to give a valid p-value correction function in terms of this mutual information. In Appendix C, we demonstrate that if a bound on the mutual information between the dataset and the output of the selection procedure is known, then it is possible to substantially improve on the p-value correction function given by [RZ16] by instead using the mutual information bound to prove a max-information bound on the selection procedure. [WLF16] study adaptive data analysis in a similar framework to [RZ16], and give a minimax analysis in a restricted setting. McGregor et al.
[MMP+11], and De [De12] also study (among other things) information-theoretic bounds satisfied by differentially private algorithms. Together, they prove a result that is analogous to ours for mutual information: while pure differentially private algorithms have bounded mutual information between their inputs and their outputs, a similar bound holds for (approximate) (ε, δ)-differentially private algorithms only if the data is drawn from a product distribution.

Preliminaries

We will use the following vector notation throughout:

x = (x_1, ..., x_n),  x_a^b = (x_a, x_{a+1}, ..., x_b),  (x_{−i}, t) = (x_1, ..., x_{i−1}, t, x_{i+1}, ..., x_n).

We denote the distribution of a random variable X as p(X). In our analysis, jointly distributed random variables (X, Z) will typically be of the form (X, A(X)), where X ∼ P^n is a dataset of n elements sampled from domain X, and A : X^n → Y is a (randomized) algorithm that maps a dataset to some range Y. We denote by A(X) the random variable that results when A is applied to a dataset X ∼ P^n (note that here, the randomness is both over the choice of dataset and the internal coins of the algorithm). When the input variable is understood, we will sometimes simply write A. It will be useful in our analysis to compare the distributions of two random variables. In the introduction, we defined (approximate) max-information, and we now give some other measures between distributions. We first define indistinguishability, and then differential privacy.

Definition 2.1 (Indistinguishability [KS14]). Two random variables X, Y taking values in a set D are (ε, δ)-indistinguishable, denoted X ≈_{ε,δ} Y, if for all O ⊆ D, Pr[X ∈ O] ≤ e^ε · Pr[Y ∈ O] + δ and Pr[Y ∈ O] ≤ e^ε · Pr[X ∈ O] + δ.

Definition 2.2 (Point-wise indistinguishability [KS14]). Two random variables X, Z taking values in a set D are point-wise (ε, δ)-indistinguishable if with probability 1 − δ over a ∼ p(X): e^{−ε} Pr[Z = a] ≤ Pr[X = a] ≤ e^ε Pr[Z = a].
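For distributions over a finite set, the indistinguishability condition of Definition 2.1 can be verified exactly: the event O maximizing Pr[X ∈ O] − e^ε · Pr[Y ∈ O] is simply the set of outcomes where the mass of X exceeds e^ε times the mass of Y. A sketch (our own illustration, not from the paper):

```python
import math

def indistinguishable(p, q, eps, delta):
    """Check (eps, delta)-indistinguishability of finite distributions
    given as dicts {outcome: probability}."""
    support = set(p) | set(q)

    def worst_gap(a, b):
        # max over events O of Pr_a[O] - e^eps * Pr_b[O], attained on the
        # set of outcomes where a's mass exceeds e^eps times b's mass
        return sum(a.get(o, 0.0) - math.exp(eps) * b.get(o, 0.0)
                   for o in support
                   if a.get(o, 0.0) > math.exp(eps) * b.get(o, 0.0))

    return worst_gap(p, q) <= delta and worst_gap(q, p) <= delta

p = {0: 0.5, 1: 0.5}
q = {0: 0.6, 1: 0.4}
```

With eps = 0 the worst gap is the total variation distance (here 0.1), while any eps with e^eps above the largest pointwise mass ratio (here 1.25) makes the pair (eps, 0)-indistinguishable.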
Before we define differential privacy, we say that two databases x, x ∈ X n are neighboring if they differ in at most one entry. We now define differential privacy in terms of indistinguishability: Definition 2.3 (Differential Privacy [DMNS06, DKM + 06]). A randomized algorithm A : X n → Y is ( , δ)-differentially private if for all neighboring datasets x, x ∈ X n , we have A(x) ≈ ,δ A(x ). In the appendix, we give several useful connections between these definitions along with other more widely known measures between distributions, e.g., KL-divergence, and total-variation distance. Max-Information for ( , δ)-Differentially Private Algorithms In this section, we prove a bound on approximate max-information for ( , δ)-differentially private algorithms over product distributions. Theorem 3.1. Let A : X n → Y be an ( , δ)-differentially private algorithm for ∈ (0, 1/2] and δ ∈ (0, ). For β = e − 2 n + O n δ , we have I β ∞,P (A, n) = O 2 n + n √ δ . We will prove Theorem 3.1 over the course of this section, using a number of lemmas. We first set up some notation. We will sometimes abbreviate conditional probabilities of the form Pr [X = x|A = a] as Pr [X = x|a] when the random variables are clear from context. Further, for any x ∈ X n and a ∈ Y, we define Z(a, x) def = log Pr [A = a, X = x] Pr [A = a] · Pr [X = x] = n i=1 log Pr X i = x i |a, x i−1 1 Pr [X i = x i ](1) If we can bound Z(a, x) with high probability over (a, x) ∼ p(A(X), X), then we can bound the approximate max-information by using the following lemma: Lemma 3.2 ([DFH + 15a, Lemma 18]). If Pr [Z(A(X), X) ≥ k] ≤ β, then I β ∞ (A(X); X) ≤ k. We next define each term in the sum of Z(a, x) as Z i (a, x i 1 ) def = log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] .(2) The plan of the proof is simple: our goal is to apply Azuma's inequality (Theorem A.8) to the sum of the Z i 's to achieve a bound on Z with high probability. 
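The decomposition in (1) is just the chain rule: the joint log-ratio log(Pr[A = a, X = x]/(Pr[A = a]·Pr[X = x])) telescopes into the per-coordinate terms of (2). A small numeric sanity check (our own, on a toy joint distribution where A(X) = X_1 ⊕ X_2 for two uniform bits):

```python
import math
from itertools import product

# Toy example: X = (X1, X2) uniform bits, A(X) = X1 XOR X2 (deterministic).
joint = {(x[0] ^ x[1], x): 0.25 for x in product((0, 1), repeat=2)}

def pr(pred):
    return sum(p for (a, x), p in joint.items() if pred(a, x))

def Z(a, x):
    """log Pr[A=a, X=x] / (Pr[A=a] Pr[X=x]), base 2."""
    return math.log2(pr(lambda b, y: (b, y) == (a, x))
                     / (pr(lambda b, y: b == a) * pr(lambda b, y: y == x)))

def Z_i(a, x, i):
    """log Pr[X_i = x_i | a, x_1^{i-1}] / Pr[X_i = x_i], base 2."""
    cond = pr(lambda b, y: b == a and y[:i + 1] == x[:i + 1]) \
         / pr(lambda b, y: b == a and y[:i] == x[:i])
    return math.log2(cond / pr(lambda b, y: y[i] == x[i]))

total = Z(0, (1, 1))
parts = Z_i(0, (1, 1), 0) + Z_i(0, (1, 1), 1)
```

Here observing A = X1 ⊕ X2 reveals nothing about X1 in isolation (the first term is 0) but fully determines X2 given X1 (the second term is 1 bit), and the terms sum to Z as claimed in (1).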
Applying Azuma's inequality requires both understanding the expectation of each term Z i (a, x i 1 ), and being able to argue that each term is bounded. Unfortunately, in our case, the terms are not always bounded -however, we will be able to show that they are bounded with high probability. This plan is somewhat complicated by the conditioning in the definition of Z i (a, x i 1 ). First, we argue that we can bound each Z i with high probability. This argument takes place over the course of Claims 3.3, 3.4, 3.5 and 3.6. Claim 3.3. If A is ( , δ)-differentially private and X ∼ P n , then for each i ∈ [n] and each prefix x i−1 1 ∈ X i−1 , we have: (A, X i )| x i−1 1 ≈ ,δ A| x i−1 1 ⊗ X i . Proof. Fix any set O ⊆ Y × X and prefix x i−1 1 ∈ X i−1 . We then define the set O x i = {a ∈ Y : (a, x i ) ∈ O}. Now, we have that: Pr (A(X), X i ) ∈ O|x i−1 1 = x i ∈X Pr [X i = x i ] Pr A(X) ∈ O x i |x i−1 1 , x i ≤ x i ∈X Pr [X i = x i ] e Pr A(X) ∈ O x i |x i−1 1 , t i + δ ∀t i ∈ X Thus, we can multiply both sides of the inequality by Pr [X i = t i ] and sum over all t i ∈ X to get: Pr (A(X), X i ) ∈ O|x i−1 1 = t i ∈X Pr [X i = t i ] Pr (A(X), X i ) ∈ O|x i−1 1 ≤ x i ∈X t i ∈X Pr [X i = x i ] Pr [X i = t i ] e Pr A(X) ∈ O x i |x i−1 1 , t i + δ ≤ e x i ∈X Pr [X i = x i ] Pr A(X) ∈ O x i |x i−1 1 + δ = e Pr A(X) ⊗ X i ∈ O|x i−1 1 + δ. We follow a similar argument to prove: Pr A(X) ⊗ X i ∈ O|x i−1 1 ≤ e Pr (A(X), X i ) ∈ O|x i−1 1 + δ. We now define the following set of "good outcomes and prefixes" for anyδ > 0: E i (δ) = (a, x i−1 1 ) : X i ≈ 3 ,δ X i | a,x i−1 1 (3) We use a technical lemma from [KS14] (stated in the appendix, Lemma A.7), and Claim 3.3 to derive the following result: Claim 3.4. If A is ( , δ)-differentially private and X ∼ P n , then for each i ∈ [n] and each prefix x i−1 1 ∈ X i−1 we have forδ > 0: Pr (A, X i−1 1 ) ∈ E i (δ)|x i−1 1 ≥ 1 − δ , where δ def = 2δ δ + 2δ 1 − e − . Proof. 
This follows directly from Lemma A.7: Pr (A, X i−1 1 ) ∈ E i (δ)|x i−1 1 = Pr X i ≈ 3 ,δ X i | A,x i−1 1 |x i−1 1 = Pr a∼p A| x i−1 1 X i ≈ 3 ,δ X i | a,x i−1 1 ≥ 1 − δ We now define the set of outcome/dataset prefix pairs for which the quantities Z i are not large: F i = (a, x i 1 ) : |Z i (a, x i 1 )| ≤ 6 .(4) Using another technical lemma from [KS14] (which we state in Lemma A.6 in the appendix), we prove: Claim 3.5. Given (a, x i−1 1 ) ∈ E i (δ), we have: Pr (A, X i 1 ) ∈ F i |a, x i−1 1 ≥ 1 − δ , where δ def = 2δ 1 − e −3 . Proof. Since (a, x i−1 1 ) ∈ E i (δ), we know that X i is (3 ,δ)-indistinguishable from X i | a,x i−1 1 . Using Lemma A.6, we know that X i and X i | a,x i−1 1 are pointwise (6 , δ )-indistinguishable. Thus, by definition of F i and Z i , we have: Pr (A, X i 1 ) ∈ F i |a, x i−1 1 = Pr Z i (A, X i 1 ) ≤ 6 |a, x i−1 1 = Pr x i ∼p X i | a,x i−1 1 ) log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] ≤ 6 ≥ 1 − δ We now define the "good" tuples of outcomes and databases as G i (δ) = (a, x i 1 ) : (a, x i−1 1 ) ∈ E i (δ) & (a, x i 1 ) ∈ F i ,(5)G ≤i (δ) = (a, x i 1 ) : (a, x 1 ) ∈ G 1 (δ), · · · , (a, x i 1 ) ∈ G i (δ)(6) Claim 3.6. If A is ( , δ)-differentially private and X ∼ P n , then Pr (A, X i 1 ) ∈ G i (δ) ≥ 1 − δ − δ for δ and δ given in Claim 3.4 and Claim 3.5, respectively. Proof. We have: Pr (A, X i 1 ) / ∈ G i (δ) = Pr (A, X i−1 1 ) / ∈ E i (δ) or (A, X i 1 ) / ∈ F i ) = 1 − Pr (A, X i−1 1 ) ∈ E i (δ) and (A, X i 1 ) ∈ F i ) = 1 − (a,x i−1 1 )∈E i (δ) Pr (A, X i−1 1 ) = (a, x i−1 1 ) Pr (A, X i 1 ) ∈ F i |a, x i−1 1 ≤ 1 − (a,x i−1 1 )∈E i (δ) Pr (A, X i−1 1 ) = (a, x i−1 1 ) · (1 − δ ) = 1 − (1 − δ )Pr (A, X i−1 1 ) ∈ E i (δ) ≤ 1 − (1 − δ )(1 − δ ) = δ + δ − δ δ where the last two inequalities follow from Claim 3.5 and Claim 3.4, respectively. Having shown a high probability bound on the terms Z i , our next step is to bound their expectation so that we can continue towards our goal of applying Azuma's inequality. 
Note a complicating factor -throughout the argument, we need to condition on the event (A, X i 1 ) ∈ F i to ensure that Z i has bounded expectation. We will use the following shorthand notation for conditional expectation: E Z i (A, X i 1 )|a, x i−1 1 , F i def = E Z i (A, X i 1 )|A = a, X i−1 1 = x i−1 1 , (A, X i 1 ) ∈ F i , with similar notation for sets G i (δ), G ≤i (δ). Lemma 3.7. Let A be ( , δ)-differentially private and X ∼ P n . Given (a, x i−1 1 ) ∈ E i (δ), for all ∈ (0, 1/2] andδ ∈ (0, /15], E Z i (A, X i 1 )|a, x i−1 1 , F i = O( 2 +δ). More precisely, E Z i (A, X i 1 )|a, x i−1 1 , F i ≤ ν(δ), where ν(δ) is defined in (7). Proof. Given an outcome and prefix (a, x i−1 1 ) ∈ E i (δ), we define the set of data entries X (a, x i−1 1 ) = {x i ∈ X : (a, x i 1 ) ∈ F i }. We then have: E Z i (A, X i 1 )|a, x i−1 1 , F i = x i ∈X (a,x i−1 1 ) Pr X i = x i |a, x i−1 1 , F i log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] Here, our goal is to mimic the proof of the "advanced composition theorem" of [DRV10] by adding a term that looks like a KL divergence term (see Definition A.4). In our case, however, the sum is not over the entire set X , and so it is not a KL-divergence, which leads to some additional complications. Consider the following term: x i ∈X (a,x i−1 1 ) Pr [X i = x i ] log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] = Pr X i ∈ X (a, x i−1 1 ) x i ∈X (a,x i−1 1 ) Pr [X i = x i ] Pr X i ∈ X (a, x i−1 1 ) log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] ≤ log Pr X i ∈ X (a, x i−1 1 )|a, x i−1 1 Pr X i ∈ X (a, x i−1 1 ) = log 1 − Pr X i / ∈ X (a, x i−1 1 )|a, x i−1 1 1 − Pr X i / ∈ X (a, x i−1 1 ) where the inequality follows from Jensen's inequality. Note that, because (a, x i−1 1 ) ∈ E i (δ), we have forδ > 0: Pr X i / ∈ X (a, x i−1 1 ) ≤ e 3 Pr X i / ∈ X (a, x i−1 1 )|a, x i−1 1 +δ. We now focus on the term Pr X i / ∈ X (a, x i−1 1 )|a, x i−1 1 . Note that x i / ∈ X (a, x i−1 1 ) ⇔ (a, x i 1 ) / ∈ F i . 
Thus, Pr X i / ∈ X (a, x i−1 1 )|a, x i−1 1 = Pr (A, X i 1 ) / ∈ F i |a, x i−1 1 def = q Note that q ≤ δ by Claim 3.5. Now, we can bound the following: x i ∈X (a,x i−1 1 ) Pr [X i = x i ] log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] ≤ log(1 − q) − log(1 − (e 3 q +δ)) ≤ log(e) · (−q + e 3 q +δ + 2(e 3 q +δ) 2 ) = log(e) · ((e 3 − 1)q +δ + 2(e 3 q +δ) 2 ) ≤ log(e) · (e 3 − 1) 2δ 1 − e −3 +δ + 2δ 2 · 2e 3 1 − e −3 + 1 2 =δ(log(e)(2e 3 + 1)) +δ 2 2 log(e) 4e 12 + 4e 9 − 3e 6 − 2e 3 + 1 e 6 − 2e 3 + 1 def = τ (δ) where the second inequality follows by using the inequality (−x−2x 2 ) log(e) ≤ log(1−x) ≤ −x log(e) for 0 < x ≤ 1/2, and as (e 3 q +δ) ≤ 1/2 for andδ bounded as in the lemma statement. We then use this result to upper bound the expectation we wanted: E Z i (A, X i 1 )|a, x i−1 1 , F i ≤ x i ∈X (a,x i−1 1 ) Pr X i = x i |a, x i−1 1 , F i log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] − x i ∈X (a,x i−1 1 ) Pr [X i = x i ] log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] + τ (δ) = x i ∈X (a,x i−1 1 ) Pr X i = x i |a, x i−1 1 , F i − Pr [X i = x i ] log Pr X i = x i |a, x i−1 1 Pr [X i = x i ] + τ (δ) ≤ 6 x i ∈X (a,x i−1 1 ) |Pr X i = x i |a, x i−1 1 , F i − Pr [X i = x i ] | + τ (δ) ≤ 6 x i ∈X (a,x i−1 1 ) Pr [X i = x i ] max e 6 Pr (A, X i 1 ) ∈ F i |a, x i−1 1 − 1, 1 − e −6 Pr (A, X i 1 ) ∈ F i |a, x i−1 1 + τ (δ) ≤ 6   e 6 1 − 2δ 1−e −3 − 1   + τ (δ) ≤ 6 2 e 9 − 9 + 6δ 3 − 2δ(1 + 3 ) +δ 12 3 − 2δ(1 + 3 ) + log(e)(2e 3 + 3) −δ 2 log(e) (4e 12 + 4e 9 − 24) + 4 (e 3 − 3) def = ν(δ)(7) where the third inequality follows from the definition of F i , the fourth inequality follows from Claim 3.5 and the last inequality follows by substituting the value of τ (δ), and using the inequalities 1 + y ≤ e y and e ky ≤ 1 + e k y for y ∈ (0, 0.5], k > 1. Finally, we need to apply Azuma's inequality (stated in Theorem A.8) to a set of variables that are bounded with probability 1, not just with high probability. 
Towards this end, we define variables T i that will match Z i for "good events", and will be zero otherwise-and hence, are always bounded: T i (a, x i 1 ) = Z i (a, x i 1 ) if (a, x i 1 ) ∈ G ≤i (δ) 0 otherwise (8) The next lemma verifies that the variables T i indeed satisfy the requirements of Azuma's inequality: Lemma 3.8. Let A be ( , δ)-differentially private and X ∼ P n . The variables T i defined in (8) are bounded by 6 with probability 1, and for any (a, x i−1 1 ) ∈ Y × X i−1 andδ > 0, E T i (A, X i 1 )|a, x i−1 1 = O( 2 +δ),(9) where the bound does not depend on n or i. More precisely, E T i (A, X i 1 )|a, x i−1 1 ≤ ν(δ), where ν(δ) is defined in (7). Proof. By definition, T i (A, X i 1 ) takes values only in [−6 , 6 ]. Thus, Pr |T i (A, X i 1 )| ≤ 6 = 1. Now, given (a, x i−1 1 ) ∈ E i (δ) ∩ G ≤i−1 (δ) , we can see that: E T i (A, X i 1 ) a, x i−1 1 , G c ≤i (δ) = 0. Further, given (a, x i−1 1 ) ∈ E i (δ) ∩ G ≤i−1 (δ), we have: E T i (A, X i 1 )|a, x i−1 1 , G ≤i (δ) = x i :(a,x i 1 )∈F i T i (a, x i 1 )Pr X i = x i |a, x i−1 1 , G ≤i (δ) = x i :(a,x i 1 )∈F i Z i (a, x i 1 )Pr X i = x i |a, x i−1 1 , G ≤i (δ) = x i :(a,x i 1 )∈F i Z i (a, x i 1 )Pr X i = x i |a, x i−1 1 , F i = E Z i (A, X i 1 )|a, x i−1 1 , F i = O( 2 +δ) where the second equality follows from (8), and the last equality follows from Lemma 3.7. For any (a, x i−1 1 ) / ∈ E i (δ) ∩ G ≤i−1 (δ), we have that the conditional expectation is zero. This proves the lemma. We are now ready to prove our main theorem. Proof of Theorem 3.1. 
For any constant ν, we have: Pr n i=1 Z i (A, X i 1 ) > nν + 6t √ n ≤ Pr n i=1 Z i (A, X i 1 ) > nν + 6t √ n ∩ (A, X) ∈ G ≤n (δ) + Pr (A, X) / ∈ G ≤n (δ) = Pr n i=1 T i (A, X i 1 ) > nν + 6t √ n ∩ (A, X) ∈ G ≤n (δ) + Pr (A, X) / ∈ G ≤n (δ) We then substitute ν by ν(δ) as defined in Equation (7), and apply a union bound on Pr (A, X) / ∈ G ≤n (δ) using Claim 3.6 to get Pr n i=1 Z i (A, X i 1 ) > nν(δ) + 6t √ n ≤ Pr n i=1 T i (A, X i 1 ) > nν(δ) + 6t √ n + n(δ + δ ) ≤ e −t 2 /2 + n(δ + δ ) where the two inequalities follow from Claim 3.6 and Theorem A.8, respectively. Therefore, Pr Z(A(X), X) > nν(δ) + 6t √ n ≤ e −t 2 /2 + n(δ + δ ) def = β(t,δ) From Lemma 3.2, we have I β(t,δ) ∞ (X; A(X)) ≤ nν(δ) + 6t √ n. We set the parameters t = √ 2n andδ = √ δ/15 to obtain our result. Note that settingδ = √ δ/15 does not violate the bounds on it stated in the statement of Lemma 3.7. A Counterexample to Nontrivial Composition and a Lower Bound for Non-Product Distributions It is known that algorithms with bounded description length have bounded approximate maxinformation [DFH + 15a]. In section 3, we showed that ( , δ)-differentially private algorithms have bounded approximate max-information when the dataset is drawn from a product distribution. In this section, we show that although approximate max-information composes adaptively [DFH + 15a], one cannot always run a bounded description length algorithm, followed by a differentially private algorithm, and expect the resulting composition to have strong generalization guarantees. In particular, this implies that ( , δ)-differentially private algorithms cannot have any nontrivial bounded max-information guarantee over non-product distributions. 
Specifically, we give an example of a pair of algorithms A and B such that A has output description length o(n) for inputs of length n, and B is ( , δ)-differentially private, but the adaptive composition of A followed by B can be used to exactly reconstruct the input database with high probability. In particular, it is easy to overfit to the input X given B(X; A(X)), and hence, no nontrivial generalization guarantees are possible. Note that this does not contradict our results on the max-information of differentially private algorithms for product distributions: even if the database used as input to A is drawn from a product distribution, the distribution on the database is no longer a product distribution once conditioned on the output of A. The distribution of B's input violates the hypothesis that is used to prove a bound on the max-information of B. Theorem 4.1. Let X = {0, 1} and Y = {X n ∪ {⊥}}. Let X be a uniformly distributed random variable over X n . For n > 64e, for every ∈ 0, 1 2 , δ ∈ 0, 1 4 , there exists an integer r > 0 and randomized algorithms A : X n → {0, 1} r , and B : X n × {0, 1} r → Y, such that: 1. r = O log(1/δ) log n and I β ∞ (X; A(X)) ≤ r + log( 1 β ) for all β > 0; 2. for every a ∈ {0, 1} r , B(X, a) is ( , δ)-differentially private and I β ∞ (X; B(X, a)) ≤ 1 for all β ≥ 2δ; 3. for every x ∈ X n , with probability at least 1 − δ, we have that B(x; A(x))) = x. In particular, I β ∞ (X, B(X; A(X))) ≥ n − 1 for all 0 < β ≤ 1 2 − δ. [De12] showed that the mutual information of ( , δ)-differentially private protocols can be large: if 1 log 1 δ = O(n), then there exists an ( , δ)-differentially private algorithm B and a distribution S such that for X ∼ S, I(X; B(X)) = Ω(n), where I denotes mutual information. De's construction also has large approximate max-information. By the composition theorem for approximate max-information (given in the appendix in Lemma A.3), our construction implies a similar bound: Corollary 4.2. 
There exists an ( , δ)-differentially private mechanism C : X n → Y such that I β 2 ∞ (C, n) ≥ n − 1 − r − log(1/β 1 ) for all β 1 ∈ (0, 1/2 − δ) and β 2 ∈ (0, 1/2 − δ − β 1 ), where r = O log(1/δ) log(n) . We adapt ideas from De's construction in order to prove Theorem 4.1. In De's construction, the input is not drawn from a product distribution-instead, the support of the input distribution is an error-correcting code, meaning that all points in the support are far from each other in Hamming distance. For such a distribution, De showed that adding the level of noise required for differential privacy does not add enough distortion to prevent decoding of the dataset. Our construction adapts De's idea. Given as input a uniformly random dataset x, we show a mechanism A which outputs a short description of a code that contains x. Because this description is short, A has small max-information. The mechanism B is then parameterized by this short description of a code. Given the description of a code and the dataset x, B approximates (privately) the distance from x to the nearest codeword, and outputs that codeword when the distance is small. When B is composed with A, we show that it outputs the dataset x with high probability. Before getting into the details of our result, we present some preliminaries for linear codes. Preliminaries for Linear Codes For the current and the next subsections, we limit our scope to F 2 , i.e., the finite field with 2 elements. First, we define linear codes: denotes the Hamming distance between binary vectors p and q. We now define parity check matrices, which can be used to construct linear codes. Every linear code has a parity-check matrix corresponding to it. Thus, given a parity-check matrix, one can reconstruct the corresponding linear code. Definition 4.4 (Parity-check matrix). 
For a linear code C ⊆ {0, 1} n of length n and rank k, H ∈ {0, 1} (n−k)×n is a parity-check matrix of C iff H is a matrix whose null space is C, i.e., c ∈ C iff Hc = 0, where 0 represents the zero vector. Now, we state a theorem which shows the existence of high-rank linear codes when the minimum distance is less than half the code length: Theorem 4.5 (From Theorem 5.1.8 in [Lin99]). For every t ∈ (0, n 2 ), there exists a linear code of rank k such that k ≥ n − 3t log(n). Next, we will define an affine code, which is a translation of a linear code by a fixed vector in the vector space of the linear code: Definition 4.6 (Affine Code). Let C ⊆ {0, 1} n be a linear code of length n, rank k and minimum distance t. For any vector b ∈ {0, 1} n , the code defined by C a = {c + b : c ∈ C}, where a = Hb, is called an affine code. Lemma 4.7. If C is a linear code with parity check matrix H and minimum distance t, then the affine code C a also has minimum distance t. Further, for all c ∈ C a , we have Hc = a. Proof. Let c ∈ C a . We know that there exists a c ∈ C such that Hc = H(c + b) = 0 + Hb = a. Lastly, we define the concept of a Hamming ball around a point, which is helpful in understanding the point's neighborhood -i.e., the points close to it with respect to Hamming distance. The volume of a Hamming ball, denoted by V ol(B r ), is independent of the point around which the ball is centered, i.e., for any point p ∈ {0, 1} n : V ol(B r ) = B r (p) = r i=0 {x ∈ {0, 1} n : dist Hamm (x, p) = i} = r i=0 n i .(10) Proof of Theorem 4.1 In this section, we define the mechanisms A and B from the theorem statement, and then prove our result in three parts: First, we show that the first bullet in the theorem statement directly follows from setting the parameters appropriately and from [DFH + 15a]. Next, we show the proof of the second bullet in two pieces. 
We start by showing that the algorithm B that we define is differentially private, and then we show that the approximate max-information of B is small when its inputs are chosen independently. Lastly, we prove the third bullet by first showing that the adaptive composition of A followed by B results in the reconstruction of the input with high probability. Subsequently, we show that such a composition has large approximate max-information.

Before we define the mechanisms A and B, we must set up some notation. We fix t such that t = 8 log(1/δ)/ε + 1. We know that t ≥ 33 because ε ∈ (0, 1/2] and δ ∈ (0, 1/4]. Now, fix an ((n−k) × n) parity-check matrix H for a linear code C ⊆ {0, 1}^n of rank k over F₂, where t is the minimum distance of C and k = n − 3t log n, and let r = n − k = 3t log n. We can ensure the existence of C from Theorem 4.5. We define the mechanisms A and B from the theorem statement in Algorithm 1 and Algorithm 2, respectively.

Brief description of A: For any input x ∈ X^n, mechanism A returns a vector a_x ∈ {0, 1}^r such that x ∈ C_{a_x}, where C_{a_x} is an affine code with minimum distance t. This follows as a_x = A(x) = Hx, and from Lemma 4.7, as C_{a_x} = {c ∈ X^n : Hc = a_x}.

    Input: x ∈ {0, 1}^n
    Output: a_x ∈ {0, 1}^r
      1. Return Hx (multiplication in F₂).
    Algorithm 1: A

Brief description of B_{ε,δ}: For any input x ∈ X^n and a ∈ {0, 1}^r, mechanism B_{ε,δ} first computes d_x, which is the distance of x from f(x), i.e., the nearest codeword to x in the code C_a. Next, it sets d̂_x to be d_x perturbed with Laplace noise L ∼ Lap(1/ε). It returns f(x) if d̂_x is below a threshold w := (t−1)/4 − log(1/δ)/ε, and ⊥ otherwise.

    Input: x ∈ {0, 1}^n (private) and a ∈ {0, 1}^r (public)
    Output: b ∈ Y
      1. Compute the distance of x to the nearest codeword in the code C_a. Let d_x = min_{c ∈ C_a} dist_Hamm(x, c) and f(x) = argmin_{c ∈ C_a} dist_Hamm(x, c) (breaking ties arbitrarily).
2 Letd x = d x + L, where L ∼ Lap(1/ ), and Lap(c) denotes a random variable having Laplace(0,c) distribution. 3 ifd x < t − 1 4 − log(1/δ) then 4 Return f (x). 5 else 6 Return ⊥. 7 end Algorithm 2: B ,δ Now, we present the proof of our theorem. Proof of Theorem 4.1, part 1. Observe that r = O log(1/δ) log n from the value assigned to t. We know that the second statement holds by the max-information bound for mechanisms with bounded description length from [DFH + 15a]. where the first inequality follows as f (x) = f (x ) implies d Thus, for any set O ⊆ Y, we can bound the following difference in terms of the total variation distance T V (B ,δ (x, a), B ,δ (x , a)) (defined in the appendix) δ (x, a), B ,δ (x , a)) Pr [B ,δ (x, a) ∈ O] − Pr B ,δ (x , a) ∈ O ≤ T V (B ,= (p − 0) + (p − 0) + |(1 − p) − (1 − p )| 2 = p + p + |p − p| 2 = max{p, p } ≤ δ 2. f (x) = f (x ): Observe that for every x ∈ {0, 1} n , the value of d x can change by at most 1 if exactly one coordinate is changed in x . Computingd x is then just an instantiation of the Laplace mechanism, given in the appendix (Theorem A.1). Therefore,d x satisfies ( , 0)-differential privacy. Notice that determining whether to output f (x) = f (x ) or ⊥ is a post-processing function of the ( , 0)-differentially privated x , and thus, by Theorem A.2, B ,δ (·, a) is ( , 0)differentially private for such inputs. Therefore, from the above two cases, for any set O ⊆ Y, we have that: Pr [B ,δ (x, a) ∈ O] ≤ e Pr B ,δ (x , a) ∈ O + δ. Thus, we can conclude that B ,δ (·, a) is ( , δ)-differentially private for every a ∈ {0, 1} r . Next, we look at the outcome of B ,δ (X, a) when X is drawn uniformly over X n and a is a fixed r-bit string. Note that B ,δ (X, a) outputs either ⊥ or a codeword of C a . Thus, Pr [B ,δ (X, a) = ⊥] = Pr d X < w = Pr d X + L < t − 1 4 − log(1/δ)(11) Now, let us define the set D = x ∈ X n : d x < t − 1 4 . If d X + L < t − 1 4 − log(1/δ) , then either X ∈ D, or L < − log(1/δ) , or both. 
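As an aside, the two mechanisms of Algorithms 1 and 2 can be exercised end to end on a toy instance. The sketch below is ours, not the paper's: the paper's regime requires a code of minimum distance t ≥ 33, too large to search by brute force, so the [7,4] Hamming code stands in for the code of Theorem 4.5 and the threshold w is passed in by hand just to exercise both branches.

```python
import math
import random
from itertools import product

random.seed(0)

n = 7
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(x):
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def A(x):
    """Algorithm 1: output a_x = Hx; x then lies in the affine code C_{a_x}."""
    return syndrome(x)

def laplace(scale):
    """One Lap(0, scale) sample, as a difference of two exponentials."""
    e1 = -math.log(1.0 - random.random())
    e2 = -math.log(1.0 - random.random())
    return scale * (e1 - e2)

def B(x, a, eps, w):
    """Algorithm 2 (sketch): noisy distance-to-code test with threshold w."""
    code_a = [c for c in product((0, 1), repeat=n) if syndrome(c) == a]
    d_x, f_x = min(
        (sum(ci != xi for ci, xi in zip(c, x)), c) for c in code_a
    )
    d_hat = d_x + laplace(1.0 / eps)
    return f_x if d_hat < w else None  # None plays the role of ⊥

x = tuple(random.randrange(2) for _ in range(n))
a_x = A(x)
# x is itself a codeword of C_{a_x}, so d_x = 0 and a generous threshold
# recovers x (except with negligible probability over the Laplace noise):
assert B(x, a_x, eps=0.5, w=1000.0) == x
# ... while an impossible threshold always outputs ⊥:
assert B(x, a_x, eps=0.5, w=-1000.0) is None
```

This mirrors the composition analyzed in part 3 of the proof: since x ∈ C_{A(x)}, the noisy distance is just the Laplace sample, and B(x, A(x)) returns x with high probability.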
Thus, Pr [B ,δ (X, a) = ⊥] ≤ Pr [X ∈ D] + Pr L < − log(1/δ)(12) From the tail bound of the Laplace distribution, Pr L < − log(1/δ) ≤ δ(13) Next, we will calculate the probability of X ∈ D. We then assign s def = t−1 4 . Notice that as the minimum distance of C a is t, the Hamming balls B 2s of radius 2s around the codewords of C a are disjoint and thus we can bound the volume (defined in (10)) of each, |C a | · V ol(B 2s ) ≤ 2 n(14) Therefore, |C a | · V ol(B s ) ≤ 2 n · V ol(B s ) V ol(B 2s ) = 2 n · s i=0 n i 2s j=0 n j ≤ 2 n · s · n s n 2s ≤ 2 n · s · ne s s n 2s 2s = 2 n s 4es n s(15) where the first inequality follows from equation (14), and the last inequality follows as n k k ≤ n k ≤ ne k k for k ≥ 1 (from Appendix C.1 in [CLRS09]). Thus, Pr [X ∈ D] = |C a | · V ol(B s ) 2 n ≤ s 4es n s(16) where the inequality follows from equation (15). Hence, Pr [B ,δ (X, a) = ⊥] ≤ s 4es n s + δ < s · 2 −s + δ(17) where the first inequality follows from equations (12),(13) and (16), and the last inequality follows from the fact that n > 8es = 2e(t − 1). Bounding the term s · 2 −s from above, we have s · 2 −s = t − 1 4 · 2 (1−t)/4 = 2 log(1/δ) · 2 −2 log(1/δ)/ = 2 log(1/δ) · δ 2/ = (δ log(1/δ)) 2 · δ (2/ )−2 δ ≤ δ where the inequality follows as δ log(1/δ) ≤ 1 for δ ∈ 0, 1 4 , and 2 · δ (2/ )−2 ≤ 1 for ∈ 0, 1 2 , δ ∈ 0, 1 4 . From equations (17) and (18), Pr [B ,δ (X, a) = ⊥] > 1 − 2δ(19) Now, for any x ∈ X n , log Pr [(X, B ,δ (X, a)) = (x, ⊥)] Pr [X ⊗ B ,δ (X, a) = (x, ⊥)] = log Pr [B ,δ (X, a) = ⊥|X = x] Pr [B ,δ (X, a) = ⊥] < log 1 1 − 2δ ≤ log 1 1 − 0.5 = 1(20) where the first inequality follows from equation (19), and the second inequality follows from the fact that δ ≤ 1 4 . We then apply Lemma 3.2 using (19) and (20) to get, I β ∞ (X; B ,δ (X, a)) ≤ 1, for β ≥ 2δ. Proof of Theorem 4.1, part 3. Let us look at the outcome of B ,δ (x, A(x)). First, as x ∈ C A(x) , f (x) = x and d x = 0. Thus, B ,δ (x, A(x)) will either return x or ⊥. 
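The counting steps in (14)–(17) use only exact binomial sums, so they can be spot-checked numerically. The sketch below (ours) verifies the resulting bound Vol(B_s)/Vol(B_{2s}) ≤ s·(4es/n)^s ≤ s·2^{−s} at a sample point with n > 8es, as the proof requires.

```python
import math

def vol(n, r):
    """Vol(B_r) = sum_{i=0}^{r} C(n, i), as in equation (10)."""
    return sum(math.comb(n, i) for i in range(r + 1))

# Sample point satisfying the proof's requirement n > 8 e s.
n, s = 64, 2
assert n > 8 * math.e * s

# After applying (14), Pr[X in D] <= Vol(B_s)/Vol(B_{2s}).
lhs = vol(n, s) / vol(n, 2 * s)
rhs = s * (4 * math.e * s / n) ** s      # closed-form bound from (16)
assert lhs <= rhs <= s * 2 ** (-s)       # and the final simplification in (17)
```

Disjointness of the radius-2s balls around codewords (a code of minimum distance t > 4s) is what licenses (14); the check above covers only the purely combinatorial part of the chain.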
Furthermore, we can show the probability of outputting x is high:

    Pr_{coins of B_{ε,δ}}[B_{ε,δ}(x, A(x)) = x] = Pr[d̂_x < w] ≥ Pr[d̂_x < log(1/δ)/ε] = Pr[Lap(1/ε) < log(1/δ)/ε] ≥ 1 − δ,

where the first inequality follows from the fact that (t−1)/4 − log(1/δ)/ε ≥ log(1/δ)/ε, the equality after it follows since d_x = 0, and the last inequality follows from a tail bound of the Laplace distribution. Thus, for every x ∈ X^n,

    Pr[B_{ε,δ}(x, A(x)) = x] ≥ 1 − δ.    (21)

Consider the event D_X = {(x, x) : x ∈ X^n}. From equation (21),

    Pr[(X, B_{ε,δ}(X, A(X))) ∈ D_X] ≥ 1 − δ.    (22)

Also, for b ∈ Y, if b = ⊥, then Pr[X = b] = 0, and if b ∈ X^n, then Pr[X = b] = 2^{−n}, as X is drawn uniformly over X^n. Thus, for all b ∈ Y, Pr[(X, b) ∈ D_X] ≤ 2^{−n}. Hence,

    Pr[(X ⊗ B_{ε,δ}(X, A(X))) ∈ D_X] = Σ_{b∈Y} Pr[(X, b) ∈ D_X] · Pr[B_{ε,δ}(X, A(X)) = b] ≤ 2^{−n}.    (23)

Therefore, for β ≤ 1/2 − δ,

    I_∞^β(X; B_{ε,δ}(X, A(X)))
      = log max_{O ⊆ (X^n × Y) : Pr[(X, B_{ε,δ}(X, A(X))) ∈ O] > β} ( Pr[(X, B_{ε,δ}(X, A(X))) ∈ O] − β ) / Pr[(X ⊗ B_{ε,δ}(X, A(X))) ∈ O]
      ≥ log ( Pr[(X, B_{ε,δ}(X, A(X))) ∈ D_X] − β ) / Pr[(X ⊗ B_{ε,δ}(X, A(X))) ∈ D_X]
      ≥ log (1 − δ − β) / 2^{−n}
      = n + log(1 − δ − β)
      ≥ n − 1,

where the first inequality follows from equation (22) and as (1 − δ) > β, the second inequality follows from equations (22) and (23), and the last inequality follows from the fact that β ≤ 1/2 − δ.

A Other Useful Probabilistic Tools

We use this section to give an overview of some useful probabilistic tools and their connections with one another. We start by giving a commonly used differentially private mechanism, called the Laplace mechanism, which releases an answer to a query on the dataset with appropriately scaled Laplace noise. We use this mechanism in the proof of Theorem 4.1.

Theorem A.1 (Laplace Mechanism [DMNS06]). Let φ : X^n → R be a function such that for any pair of points x and x' with dist_Hamm(x, x') = 1, we have |φ(x) − φ(x')| ≤ 1. Then the mechanism M(x) = φ(x) + L, where L ∼ Lap(1/ε), is (ε, 0)-differentially private.
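Theorem A.1 is easy to state in code. The sketch below (ours) releases a sensitivity-1 count query with Lap(1/ε) noise, and also checks the Laplace tail fact used repeatedly in Section 4: Pr[Lap(1/ε) < −ln(1/δ)/ε] = δ/2 ≤ δ. (Natural log is used here; the base of the paper's log does not affect the ≤ δ conclusion.)

```python
import math
import random

random.seed(1)

def laplace_noise(scale):
    """One Lap(0, scale) sample, as a difference of two exponentials."""
    e1 = -math.log(1.0 - random.random())
    e2 = -math.log(1.0 - random.random())
    return scale * (e1 - e2)

def laplace_mechanism(x, query, eps):
    """Theorem A.1: release query(x) + Lap(1/eps) for a sensitivity-1 query."""
    return query(x) + laplace_noise(1.0 / eps)

# A count query has sensitivity 1: changing one record moves it by at most 1.
count_ones = lambda data: sum(data)
x, x_neighbor = [0, 1, 1, 0, 1], [0, 1, 1, 0, 0]
assert abs(count_ones(x) - count_ones(x_neighbor)) <= 1

noisy = laplace_mechanism(x, count_ones, eps=0.5)

# Closed-form Laplace tail: Pr[Lap(1/eps) < -c] = exp(-eps * c) / 2, so with
# c = ln(1/delta)/eps the tail is exactly delta/2 <= delta.
eps, delta = 0.5, 0.05
tail = 0.5 * math.exp(-eps * (math.log(1 / delta) / eps))
assert abs(tail - delta / 2) < 1e-12
```

The same tail bound is what drives equation (13) and the 1 − δ success probability of B composed with A.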
A very useful fact about differentially private mechanisms is that one cannot take the output of a differentially private mechanism and perform any modification to it that does not depend on the input itself and make the output any less private. Theorem A.2 (Post Processing [DMNS06]). Let M : X n → Y be ( , δ)-differentially private and ψ : Y → Y be any function mapping to arbitrary domain Y . Then ψ • M is ( , δ)-differentially private. We have been focusing on approximate max-information throughout this paper, which has the following strong composition guarantee: Theorem A.3 (Composition [DFH + 15a]) . Let A 1 : X n → Y and A 2 : X n × Y → Z be such that I β 1 ∞ (A 1 , n) ≤ k 1 and I β 2 ∞ (A 2 (·, y), n) ≤ k 2 for every y ∈ Y. Then the composition of A 1 and A 2 , defined to be A(X) = A 2 (X, A 1 (X)) satisfies: I β 1 +β 2 ∞ (A, n) ≤ k 1 + k 2 . In our analysis, in addition to max-information, we will use more familiar measures between two random variables, which we give here: Definition A.4 (KL Divergence). The KL Divergence between random variables X and Z, denoted as D KL (X||Z) over domain D is defined as D KL (X||Z) = x∈D Pr [X = x] ln Pr [X = x] Pr [Z = x] Definition A.5 (Total Variation Distance). The total variation distance between two random variables X and Z, denoted as T V (X; Z), over domain D is defined as T V (X, Z) = 1 2 · x∈D | Pr [X = x] − Pr [Z = x] |. In the following lemma, we state some basic connections between max-information, total variation distance, differential privacy, and indistinguishability: Lemma A.6. Let X, Z be two random variables over the same domain. We then have: 1. [DFH + 15a] I β ∞ (X; Z) ≤ k ⇔ (X, Z) ≈ (k ln 2),β X ⊗ Z. 2. [KS14] If X ≈ ,δ Y then X and Y are pointwise 2 , 2δ 1−e − -indistinguishable. Another useful result is from [KS14], which we use in the proof of our main result in Theorem 3.1: Lemma A.7 (Conditioning Lemma). Suppose that (X, Z) ≈ ,δ (X , Z ). 
Then for every δ' > 0, the following holds:

    Pr_{t ∼ p(Z)}[ X|_{Z=t} ≈_{3ε, δ'} X'|_{Z'=t} ] ≥ 1 − 2δ/δ' − 2δ/(1 − e^{−ε}).

The proof of our main result in Theorem 3.1 also makes use of the following standard concentration inequality:

Theorem A.8 (Azuma's Inequality). Let C₁, …, C_n be a sequence of random variables such that for every i ∈ [n], we have Pr[|C_i| ≤ α] = 1, and for every fixed prefix C₁^{i−1} = c₁^{i−1}, we have E[C_i | c₁^{i−1}] ≤ γ. Then for all t ≥ 0, we have

    Pr[ Σ_{i=1}^{n} C_i > nγ + t√n·α ] ≤ e^{−t²/2}.

We say that a real-valued function f : X^n → R has sensitivity ∆ if for all i ∈ [n], x ∈ X^n and x_i' ∈ X, |f(x_{−i}, x_i) − f(x_{−i}, x_i')| ≤ ∆. One important use of max-information bounds is that they imply strong generalization bounds for low-sensitivity functions when paired with McDiarmid's inequality.

Theorem A.9 (McDiarmid's Inequality). Let X₁, …, X_n be independent random variables with domain X. Further, let f : X^n → R be a function of sensitivity ∆ > 0. Then for every τ > 0 and µ = E[f(X₁, …, X_n)], we have

    Pr[ f(X₁, …, X_n) − µ ≥ τ ] ≤ exp(−2τ²/(n∆²)).

Proof (of Lemma B.1). Note that if X ∼ P^n, then p∘φ(X) is uniform on [0, 1], and thus has mean 1/2. From Theorem A.9, we know that if p∘φ has sensitivity ∆, then for any 0 < δ < 1/2, we have:

    Pr[ p∘φ(X) ≥ 1/2 + ∆·√((n/2)·ln(1/δ)) ] ≤ δ.

However, we also know that p∘φ(X) is uniform, so that Pr[p∘φ(X) ≥ 1 − δ] = δ. Hence, if ∆ < (1/2 − δ)/√((n/2)·ln(1/δ)), we obtain a contradiction:

    δ ≥ Pr[ p∘φ(X) ≥ 1/2 + ∆·√((n/2)·ln(1/δ)) ] > Pr[p∘φ(X) ≥ 1 − δ] = δ.

We then set δ = 0.08 to get our stated bound on sensitivity.

Thus, the sensitivity ∆ of the p-value for any test statistic and any null hypothesis must be at least 0.37/√n. This is too large for the following theorem, proven in [BNS+16], to give a nontrivial guarantee:

Theorem B.2 ([BNS+16]). Let ε ∈ (0, 1/3), δ ∈ (0, ε/4), and n ≥ ln(4ε/δ)/ε².
Let Y denote the class of ∆-sensitive functions f : X n → R, and let A : X n → Y be an algorithm that is ( , δ)-differentially private. Let X ∼ P n for some distribution P over X , and let q = A(X). Then: Pr X,A [|q(P n ) − q(X)| ≥ 18 ∆n] < δ . When we attempt to apply this theorem to a p-value, we see that the error it guarantees, by Lemma B.1, is at least: 18 ∆n ≥ 18 (.37) √ n. However, the theorem is only valid for n ≥ 1 2 ln 4 δ . Plugging this in, we see that 18 (.37) √ n ≥ 1, which is a trivial error guarantee for p-values (which take values in [0, 1]). C Adjustment of p-values Given a Mutual Information Bound, and a Comparison to [RZ16] In the introduction, we proved a simple theorem about how to correct p-values (using a valid p-value correction function -Definition 1.1) given a bound on the max-information between the input dataset and the test-statistic selection procedure A (Theorem 1.3). We note that we can easily extend the definition of null hypotheses given in the introduction (and hence p-values and correction functions), to allow for distributions S over X n that need not be product distributions. In fact, we can restate Theorem 1.3 in terms of non-product distributions: Theorem C.1. Let A : X n → T be a data-dependent algorithm for selecting a test statistic such that I β ∞ (A, n) ≤ k. Then the following function γ is a valid p-value correction function for A: γ(α) = max α − β 2 k , 0 . Proof. The proof is exactly the same as the proof of Theorem 1.3, except we fix an arbitrary (perhaps non-product) distribution S from which the dataset X is drawn. Previously, Russo and Zou [RZ16] have given a method to correct p-values given a bound on the mutual information between the input data and the test-statistic selection procedure. 
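Theorem C.1's correction function, γ(α) = max{(α − β)/2^k, 0}, is a one-liner. The sketch below (ours) also checks the guarantee it is designed for: if the selection procedure has β-approximate max-information at most k, then testing the selected statistic at level γ(α) keeps the false-discovery probability at most 2^k·γ(α) + β ≤ α.

```python
def gamma(alpha, k, beta):
    """Theorem C.1's valid p-value correction: max((alpha - beta)/2^k, 0)."""
    return max((alpha - beta) / 2 ** k, 0.0)

alpha, k, beta = 0.05, 3.0, 0.01
g = gamma(alpha, k, beta)
assert abs(g - (alpha - beta) / 8) < 1e-15

# Unwinding the max-information guarantee: for any event O,
# Pr[O] <= 2^k * Pr_product[O] + beta, so rejecting when p <= g gives
assert 2 ** k * g + beta <= alpha + 1e-12

# gamma clips at zero once beta >= alpha (no level can be certified):
assert gamma(0.005, 10.0, 0.01) == 0.0
```

In this toy setting g = 0.005: a max-information budget of k = 3 bits shrinks the usable significance level by a factor of 2³ (after paying the β slack).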
2 In this section, we observe that if we had a bound on the mutual information between the input data and the test-statistic procedure, this would imply a bound on the max-information that would be sufficiently strong so that our Theorem C.1 would give a strictly improved p-value correction function than the bound given by [RZ16]. First, we state the relevant theorem from Russo and Zou, stated using our terminology: Theorem C.2 ([RZ16] Proposition 7). Let A : X n → T be a test-statistic selection procedure such that I(X; A(X)) ≤ m (where I denotes mutual information). If we define φ i = A(X), then for every γ ∈ [0, 1]: Pr [p i (φ i (X)) ≤ γ] ≤ γ + m ln(1/2γ) . If we want to set parameters so that the probability of a false discovery is at most α, then in particular, we must pick γ such that m ln(1/2γ) ≤ α. Equivalently, solving for α, the best valid p-value correction function implied by the bound of [RZ16] must satisfy: γ RZ (α) ≤ 1 2 · 2 − log(e)m/α 2 . We can obtain a better bound by instead arguing via max-information. First, we prove that a bound on the mutual information between the data and selection procedure implies a bound on the max-information between the data and selection procedure: Theorem C.3. Let A : X n → T be a selection rule. If I(X; A(X)) ≤ m and X ∼ S for any distribution S over X n , then for any k > 0, I Also, we can apply Jensen's inequality to get: By rearranging terms, we obtain: where the last inequality follows from the fact that the function (1 − x) log(1/(1 − x)) is maximized at x = (e − 1)/e (and takes value < 0.54 ). Solving for β(k) gives the claimed bound. Combining Theorem C.3 with Theorem C.1, we can derive a valid p-value correction function given a bound on the mutual information between the data and the selection procedure: Theorem C.4. Let A : X n → T be a test-statistic selection procedure such that I(X; A(X)) ≤ m. Then γ(α) is a valid p-value correction function, where: γ(α) = α 2 · 2 −2 α (m+0.54) . Proof. 
From Theorem C.3, we know that for any k > 0, I β(k) ∞ (A, n) ≤ k, where β(k) ≤ m+0.54 k . Hence, from Theorem C.1, we know that for any choice of k > 0, γ(α) is a valid p-value correction function where: γ(α) = α − m+0.54 k 2 k . Choosing k = 2(m+0.54) α gives our claimed bound. Comparing the p-value correction function γ(α) derived above, with the function γ RZ (α) that arises from [RZ16], we see that the version above has an exponentially improved dependence on 1/α Moreover, it almost always gives a better correction factor in practice: for any value of α ≤ 0.05, the function γ(α) derived above improves over γ RZ (α) whenever the mutual information bound m ≥ 0.05 (whereas, we would naturally expect the mutual information to be m 1, and to scale with n). D Rederiving Generalization for Low Sensitivity Queries, and a Comparison with the Bounds of [BNS + 16] In this section, we use the bound from our main theorem (Theorem 3.1) to rederive known results for the generalization properties of differentially private algorithms which select low sensitivity queries. Our bounds do not exactly -but nearly -match the tight bounds for this problem, given in [BNS + 16]. This implies a limit on the extent to which our main theorem can be quantitatively improved, despite its generality. The goal of generalization bounds for low sensitivity queries is to argue that with high probability, if a low sensitivity function f : X n → R = A(x) is chosen in a data-dependent way when x is sampled from a product distribution P n , then the value of the function on the realized data f (x) is close to its expectation f (P n ) def = E X∼P n [f (X)]. If f were selected in a data-independent manner, this would follow from McDiarmid's inequality. No bound like this is true for arbitrary selection procedures A, but a tight bound by [BNS + 16] is known when A is ( , δ)-differentially private -we have already quoted it, as Theorem B.2. 
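The comparison with [RZ16] can also be made numerically. Taking γ_RZ(α) = (1/2)·2^{−log₂(e)·m/α²} (our reading of the bound above) and γ(α) = (α/2)·2^{−(2/α)(m + 0.54)} from Theorem C.4, the sketch below (ours) confirms that for α = 0.05 and every mutual-information bound m ≥ 0.05 tried, the max-information route permits a larger, i.e. less conservative, corrected threshold.

```python
import math

def gamma_maxinfo(alpha, m):
    """Theorem C.4: (alpha/2) * 2^{-(2/alpha)(m + 0.54)}."""
    return (alpha / 2.0) * 2.0 ** (-(2.0 / alpha) * (m + 0.54))

def gamma_rz(alpha, m):
    """Correction implied by [RZ16] (our reading): (1/2) * 2^{-log2(e) * m / alpha^2}."""
    return 0.5 * 2.0 ** (-math.log2(math.e) * m / alpha ** 2)

alpha = 0.05
for m in (0.05, 1.0, 5.0):
    # gamma_rz decays doubly fast in m/alpha^2; already at m = 5 it underflows
    # to 0.0 in double precision, while gamma_maxinfo stays strictly positive.
    assert gamma_maxinfo(alpha, m) > gamma_rz(alpha, m)
```

This reflects the exponentially improved dependence on 1/α claimed in the text: the exponent of γ is linear in 1/α, while that of γ_RZ is quadratic.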
Using our main theorem (Theorem 3.1), together with McDiarmid's inequality (Theorem A.9), we can derive a comparable statement to Theorem B.2: Theorem D.1. Let ∈ (0, 1), δ = O( 3 ), and n = Ω log( /δ) 2 . Let Y denote the class of ∆sensitive functions f : X n → R, and let A : X n → Y be an algorithm that is ( , δ)-differentially private. Let X ∼ P n for some distribution P over X , and let q = A(X). Then there exists a constant C such that: Pr X,A [|q(P n ) − q(X)| ≥ C ∆n] < n δ Proof. If A satisfied I β ∞ (X; A(X)) ≤ k, then McDiarmid's inequality (Theorem A.9) paired with Definition 1.2 would imply: Pr X,A [|q(P n ) − q(X)| ≥ C ∆n] < 2 k exp −2C 2 2 n + β Because A is ( , δ)-differentially private, Theorem 3.1 implies that indeed I β ∞ (X; A(X)) ≤ k for k = O 2 n + √ δ n = O 2 n and β = O n δ + e − 2 n = O n δ , and the claimed bound follows. Because Theorem B.2 is asymptotically tight, this implies a limit on the extent to which Theorem 3.1 can be quantitatively improved. Definition 4. 3 ( 3Linear Code). A code C ⊆ {0, 1} n of length n and rank k is called linear iff it is a k dimensional linear subspace of the vector space F n 2 . The vectors in C are called codewords.The minimum distance t of a linear code C is t = min c 1 ,c 2 ∈C dist Hamm (c 1 , c 2 ), where, dist Hamm (p, q) Definition 4 . 8 ( 48Hamming ball). A Hamming ball of radius r around a point p ∈ {0, 1} n , denoted by B r (p), is the set of strings x ∈ {0, 1} n such that dist Hamm (x, p) ≤ r. = Pr [B ,δ (x , a) = f (x )] ≤ δ, and consequently, Pr [B ,δ (x , a) = ⊥] = 1 − p . We define the set G(k) = {(a, x) : Z(a, x) ≤ k} where Z(a, x) = log Pr[(A,X)=(a,x)] Pr[A⊗X=(a,x)]as before, and we define β(k) to be the quantity such that Pr [(A, X) ∈ G(k)] = 1 − β(k).Pr [(A, X) = (a, x)] Z(a, x) + kβ(k) Pr [(A, X) = (a, x)] Pr [(A, X) ∈ G(k)] Z(a, x) ≤ log Pr [A ⊗ X ∈ G(k)] Pr [(A, X) ∈ G(k)] Pr [(A, X) = (a, x)] · Z(a, x) ≥ −(1 − β(k)) et al. 
[DFH+15b] and both strengthened and generalized by Bassily et al. [BNS+16]. Dwork et al. [DFH+15a] showed that algorithms with bounded description length outputs have similar guarantees for adaptive data analysis, and introduced the notion of max-information. Cummings et al. [CLN+16] studied adaptive learning with robust generalization guarantees.

B Sensitivity of p-values

In this section, we demonstrate that the theorem from [BNS+16], which shows that differentially private algorithms which select low-sensitivity queries cannot overfit, does not give nontrivial generalization bounds for p-values. We first show that p-values cannot have sensitivity smaller than 0.37/√n.

Lemma B.1. Let φ : X^n → R be a test statistic with null hypothesis H₀, and p : R → [0, 1], where p(a) = Pr_{x∼P^n}[φ(x) ≥ a], and P ∈ H₀. The sensitivity of p∘φ must be larger than 0.37/√n.

² Actually, [RZ16] do not explicitly model the dataset, and instead give a bound in terms of the mutual information between the test statistics themselves and the test-statistic selection procedure. We could also prove bounds with this dependence, by viewing our input data to be the value of the given test statistics; however, for consistency, we will discuss all bounds in terms of the mutual information between the data and the selection procedure.
Now, for any pair of points x and x such that dist Hamm (x, x ) = 1, there are two possible cases:In this case,For comparison, note that the generalization bound for ( , δ)-differentially private mechanisms given in Theorem B.2 differs by only constants from the generalization bound proven via the maxinformation approach for ( , δ )-differentially private mechanisms, where:Note that in most applications (including the best known mechanism for answering large numbers of low sensitivity queries privately-the median mechanism[RR10]as analyzed in[DR14]Theorem 5.10), the accuracy of a differentially private algorithm scales with log(1/δ) n (ignoring other relevant parameters). In such cases, using the bound derived from the max-information approach yields an accuracy that is worse than the bound from Theorem B.2 by an additive termE Omitted ProofsProof of Corollary 4.2. We use the same algorithms A and B from Theorem 4.1. Suppose that for all a ∈ {0, 1} r and 0 < β 2 < 1/2 − δ − β 1 for any β 1 ∈ (0, 1/2 − δ), we have:Note that because A has bounded description length r, we can bound I β 1 ∞ (A, n) ≤ r + log(1/β 1 ) for any β 1 > 0 [DFH + 15a]. We then apply the composition theorem for max-info mechanisms, Theorem A.3, to obtain:However, this contradicts Theorem 4.1, because for any β < 1/2 − δ,Thus, we know that there exists some a * ∈ {0, 1} r and (non-product) distribution X ∼ S such that: I β 2 ∞ (X; B(X, a * ) ≥ n − 1 − r − log(1/β 1 ). We then define C : X n → Y to be C(x) = B(x, a * ). Hence, I β 2 ∞ (C, n) ≥ I β 2 ∞ (X; C(X)) ≥ n − 1 − r − log(1/β 1 ) which completes the proof. Valid post-selection inference. Bbb + 13] Richard, Lawrence Berk, Andreas Brown, Kai Buja, Linda Zhang, Zhao, The Annals of Statistics. 412BBB + 13] Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang, and Linda Zhao. Valid post-selection inference. The Annals of Statistics, 41(2):802-837, 2013. Algorithmic stability for adaptive data analysis. 
[BNS+16] Raef Bassily, Kobbi Nissim, Adam D. Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. Algorithmic stability for adaptive data analysis. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, STOC, 2016.

[CLN+16] Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, and Zhiwei Steven Wu. Adaptive learning with robust generalization guarantees. arXiv preprint arXiv:1602.07726, 2016.

[CLRS09] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. The MIT Press, 3rd edition, 2009.

[De12] Anindya De. Lower bounds in differential privacy. In Proceedings of the 9th International Conference on Theory of Cryptography, TCC'12, pages 321-338, Berlin, Heidelberg, 2012. Springer-Verlag.

[DFH+15a] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toni Pitassi, Omer Reingold, and Aaron Roth. Generalization in adaptive data analysis and holdout reuse. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2341-2349. Curran Associates, Inc., 2015.

[DFH+15b] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Leon Roth. Preserving statistical validity in adaptive data analysis. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC '15, pages 117-126, New York, NY, USA, 2015. ACM.

[DKM+06] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology - EUROCRYPT 2006, 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques, St. Petersburg, Russia, May 28 - June 1, 2006, Proceedings, pages 486-503, 2006.

[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference, pages 265-284. Springer, 2006.

[DR14] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.

[DRV10] Cynthia Dwork, Guy N. Rothblum, and Salil P. Vadhan. Boosting and differential privacy. In 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, October 23-26, 2010, Las Vegas, Nevada, USA, pages 51-60, 2010.

[DSZ15] Cynthia Dwork, Weijie Su, and Li Zhang. Private false discovery rate control. arXiv preprint arXiv:1511.03803, 2015.

[FST14] William Fithian, Dennis Sun, and Jonathan Taylor. Optimal inference after model selection. arXiv preprint arXiv:1410.2597, 2014.

[GL14] Andrew Gelman and Eric Loken. The statistical crisis in science. American Scientist, 102(6):460, 2014.

[GLRV16] Marco Gaboardi, Hyun Lim, Ryan Rogers, and Salil Vadhan. Differentially private chi-squared hypothesis testing: Goodness of fit and independence testing. arXiv preprint arXiv:1602.03090, 2016.

[HU14] Moritz Hardt and Jonathan Ullman. Preventing false discovery in interactive data analysis is hard. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 454-463. IEEE, 2014.

[JS13] Aaron Johnson and Vitaly Shmatikov. Privacy-preserving data exploration in genome-wide association studies. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, pages 1079-1087, New York, NY, USA, 2013. ACM.

[KS14] S. P. Kasiviswanathan and A. Smith. On the 'Semantics' of Differential Privacy: A Bayesian Formulation. Journal of Privacy and Confidentiality, 6(1), Article 1, 2014. Available at http://repository.cmu.edu/jpc/vol6/iss1/1. The theorem numbers and exact statements refer to the arXiv version (v3).

[KS16] Vishesh Karwa and Aleksandra Slavković. Inference using noisy degrees: Differentially private beta-model and synthetic graphs. The Annals of Statistics, 44(1):87-112, 2016.

[Lin99] Jacobus Hendricus van Lint. Introduction to Coding Theory. Springer, Berlin, 3rd edition, 1999.

[LSST13] Jason D. Lee, Dennis L. Sun, Yuekai Sun, and Jonathan E. Taylor. Exact post-selection inference, with application to the lasso. arXiv preprint arXiv:1311.6238, 2013.

[MMP+11] Andrew McGregor, Ilya Mironov, Toniann Pitassi, Omer Reingold, Kunal Talwar, and Salil P. Vadhan. The limits of two-party differential privacy. Electronic Colloquium on Computational Complexity (ECCC), 18:106, 2011.

[RR10] Aaron Roth and Tim Roughgarden. Interactive privacy via the median mechanism. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 765-774. ACM, 2010.

[RZ16] Daniel Russo and James Zou. Controlling bias in adaptive data analysis using information theory. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS, 2016.

[She15] Or Sheffet. Differentially private least squares: Estimation, confidence and rejecting the null hypothesis. arXiv preprint arXiv:1507.02482, 2015.

[SNS11] J. P. Simmons, L. D. Nelson, and U. Simonsohn. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, October 2011.
[]
[ "An Epipolar Line from a Single Pixel", "An Epipolar Line from a Single Pixel" ]
[ "Tavi Halperin \nSchool of Computer Science and Engineering\nThe Hebrew University of Jerusalem\nIsrael\n", "Michael Werman \nSchool of Computer Science and Engineering\nThe Hebrew University of Jerusalem\nIsrael\n" ]
[ "School of Computer Science and Engineering\nThe Hebrew University of Jerusalem\nIsrael", "School of Computer Science and Engineering\nThe Hebrew University of Jerusalem\nIsrael" ]
[]
We exploit the following observation to directly find epipolar lines. For a pixel p in Image A all pixels corresponding to p in Image B are on the same epipolar line, or equivalently the image of the line spanning A's center and p is an epipolar line in B. Computing the epipolar geometry from feature points between cameras with very different viewpoints is often error prone as an object's appearance can vary greatly between images. This paper extends earlier work based on the dynamics of the scene which was successful in these cases. The algorithms introduced here for finding corresponding epipolar lines accelerate and robustify previous methods for computing the epipolar geometry in dynamic scenes.
10.1109/wacv.2018.00113
[ "https://arxiv.org/pdf/1703.09725v1.pdf" ]
9,645,204
1703.09725
8c94da1f4bd52c37758fdf721ab7790b2b83e8e7
An Epipolar Line from a Single Pixel
Tavi Halperin, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
Michael Werman, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel

We exploit the following observation to directly find epipolar lines. For a pixel p in Image A, all pixels corresponding to p in Image B are on the same epipolar line; equivalently, the image of the line spanning A's center and p is an epipolar line in B. Computing the epipolar geometry from feature points between cameras with very different viewpoints is often error prone, as an object's appearance can vary greatly between images. This paper extends earlier work based on the dynamics of the scene, which was successful in these cases. The algorithms introduced here for finding corresponding epipolar lines accelerate and robustify previous methods for computing the epipolar geometry in dynamic scenes.

Introduction
The fundamental matrix is a basic building block of multiple view geometry, and its computation is the first step in many vision tasks. The computation is usually based on pairs of corresponding points. Matching points across images is error prone, especially between cameras with very different viewpoints, and many subsets of points need to be sampled until a good solution is found. In this paper, we address the problem of robustly estimating the fundamental matrix from line correspondences in dynamic scenes. The fundamental matrix is a 3 × 3 homogeneous rank-two matrix with seven degrees of freedom. The best-known algorithm for computing the fundamental matrix is the eight-point algorithm by Longuet-Higgins [11], which was made practical by Hartley [5]. The overall method is based on normalization of the data, solving a set of linear equations, and enforcing the rank-2 constraint [13]. The requirement of eight point correspondences can be relaxed to seven.
This results in a cubic equation with one or three real solutions. The estimation from 7 points is very sensitive to noise. These methods are often followed by a non-linear optimization step. Usually, the first step for calibrating cameras from moving objects is feature tracking, using e.g. deep features [1]. Khan and Shah [8] tracked features on a plane (people viewed from multiple surveillance cameras), and used their trajectories to compute a planar homography between the cameras. They further assumed long videos and the occasional entrance of a single person into the FOV of each camera, from every side. Similarly to them, we assume temporally synchronized cameras. Meingast et al. [14] used the tracks from a multi-target tracking algorithm as features for correspondences. We, as them, use centroids of detected foreground areas as a proxy for an object's location. Theoretically, estimating geometric properties based on fuzzy measurements such as areas resulting from foreground segmentation, or their centroids, is error prone. But, as shown by [14], and also by our experiments, this method is robust, and when followed by a global optimization step it is also accurate. The fundamental matrix can also be computed from three corresponding pairs of epipolar lines [6]. The one-dimensional homography between the lines can be recovered as the epipolar lines in each of the images intersect at the epipoles. The 3 degrees of freedom of the 1D homography together with the 4 degrees of freedom of the epipoles yield the needed 7 parameters. There are only a few papers using corresponding epipolar lines to compute the epipolar geometry: [2] treats the case of still images, and [17, 3, 7] are applicable to videos of dynamic scenes.
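The normalization–linear-solve–rank-2 pipeline mentioned above can be sketched as follows. This is a generic, textbook-style NumPy sketch of the normalized eight-point algorithm, not code from the paper, and all function names are ours:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (pts - c) * s, T

def eight_point(x1, x2):
    """Normalized eight-point algorithm: x1, x2 are (n >= 8, 2) arrays of
    corresponding points; returns a rank-2 F (unit Frobenius norm) with
    x2h^T F x1h = 0 for homogeneous points x1h, x2h."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)            # least-squares solution: last row of Vt
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)            # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                      # undo the normalization
    return F / np.linalg.norm(F)
```

With noise-free synthetic correspondences the algebraic residuals of the epipolar constraint vanish up to floating-point precision.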
Sinha and Pollefeys [17] used the silhouette of a single moving object to find corresponding epipolar lines to calibrate a network of cameras. Ben-Artzi et al. [3] accelerated Sinha's method using a similarity measure for epipolar lines. The similarity measure is a generalization of motion barcodes defined in [4, 15]. This line motion barcode was also used in [7] to find corresponding epipolar lines, and that is the most relevant paper to ours. In that paper, they found corresponding epipolar lines by matching all pairs of lines between the images using the motion barcode. This paper proposes to drastically reduce the search space for matching epipolar lines by utilizing pixels which record multiple depths. The use of corresponding epipolar lines instead of corresponding points stems from: a) the exponent in RANSAC execution time depends on the size of the minimal sets needed, 3 for epipolar lines as opposed to 7 for points; b) line pairs can be filtered with motion barcodes even in very disparate views where points cannot. As in previous methods, we assume that cameras are relatively stationary and that moving objects have been extracted using background subtraction.

Motion Barcodes
Motion barcodes of lines are used in the case of synchronized stationary cameras viewing a scene with moving objects. Following background subtraction we get a binary video, where "0" represents static background and "1" moving objects. Given such a video of N binary frames, the motion barcode of a given image line l [3] is a binary vector b_l in {0, 1}^N. b_l(i) = 1 iff a silhouette of a foreground object intersects at least one pixel of line l at the i-th frame. An example of a motion barcode is shown in Figure 1.

Figure 2: Illustration of a scene with a moving object viewed by two video cameras. The lines l and l′ are corresponding epipolar lines, and π is the 3D epipolar plane that projects to l and l′. At time t = 1 the object does not intersect the plane π, and thus does not intersect l or l′ in the video. At times t = 2, 3 the object intersects the plane π, so the projections of this object on the cameras do intersect the epipolar lines l and l′. The motion barcodes of both l and l′ are (0, 1, 1).

The case of a moving object seen by two cameras is illustrated in Figure 2. If the object intersects the epipolar plane π at frame i, and does not intersect the plane π at frame j, the motion barcodes of lines l and l′ will be 1, 0 at frames i, j respectively. Corresponding epipolar lines therefore have highly correlated motion barcodes. The similarity measure between motion barcodes b and b′ is their normalized cross correlation [4]. To improve reliability, foreground objects are taken to be only some small disc around the centroid of the full computed foreground element.

Epipolar Lines
Corresponding epipolar lines are projections of epipolar planes, 3D planes that go through both camera centers. Pixels are projections of 3D rays through a camera center. The search for corresponding epipolar lines in this paper is based on finding two different correspondences of a pixel. These two correspondences are necessarily on an epipolar line. The cue to matching, if there is no auxiliary information such as color or reliable shape features, is the existence/non-existence of movement at time t. The following notation is used throughout the paper: p, q, r denote pixels; p_A^t is the pixel p imaged in Camera A at time t; and q_B ∨ r_B is the line between two pixels. Given a pixel p in Image A imaged at two times t and s, the corresponding pixels in Image B, q_B^t and r_B^s, are on the epipolar line q_B ∨ r_B. Likewise, p_A is a point on the epipolar line corresponding to the epipolar line q_B ∨ r_B.
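The motion-barcode construction and its normalized-cross-correlation similarity can be sketched as follows. This is a minimal NumPy sketch under our own naming; the paper rasterizes actual image lines, and a simple sampled segment stands in here:

```python
import numpy as np

def line_pixels(p, q, shape):
    """Integer pixel coordinates on the segment from p to q (simple sampling);
    p, q are (x, y), shape is (H, W)."""
    n = int(max(abs(q[0] - p[0]), abs(q[1] - p[1]))) + 1
    xs = np.linspace(p[0], q[0], n).round().astype(int)
    ys = np.linspace(p[1], q[1], n).round().astype(int)
    keep = (xs >= 0) & (xs < shape[1]) & (ys >= 0) & (ys < shape[0])
    return ys[keep], xs[keep]

def motion_barcode(frames, p, q):
    """frames: (N, H, W) binary foreground masks; the barcode entry b_l(i) is 1
    iff the line through p and q meets a foreground pixel in frame i."""
    ys, xs = line_pixels(p, q, frames.shape[1:])
    return frames[:, ys, xs].any(axis=1).astype(float)

def barcode_similarity(b1, b2):
    """Normalized cross-correlation of two motion barcodes."""
    b1 = b1 - b1.mean()
    b2 = b2 - b2.mean()
    denom = np.linalg.norm(b1) * np.linalg.norm(b2)
    return float(b1 @ b2 / denom) if denom else 0.0
```

On the three-frame toy scene of Figure 2, a line crossed by the object only in frames 2 and 3 yields the barcode (0, 1, 1), and its NCC with itself is 1.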
The algorithm to find corresponding epipolar lines thus has two main steps: (i) finding (at least) two different pixels in B corresponding to a single pixel p in A, which results in a single epipolar line in B, and (ii) finding a corresponding epipolar line in A from the pencil of lines through p, which gives a corresponding pair of epipolar lines.

Algorithm
Our algorithm assumes background subtraction. The only pixels we use are the centers of mass of the detected objects.

Point to Line
For pixels p_A from Camera A, which occur in frames t_1, t_2, we take all the pixels from frames t_1 and t_2 from Camera B, {q_B^{t_1}, r_B^{t_1}, ...}, {u_B^{t_2}, v_B^{t_2}, ...}. Each pair of pixels gives an epipolar line candidate, L = {q_B ∨ u_B, q_B ∨ v_B, ..., r_B ∨ v_B, ...}, Figure 3(b). For each of the resulting lines, l ∈ L, we find a third pixel q_B^{t_3} on l; such points usually exist in real videos. Let Λ be the lines in Camera A between p_A and all the pixels in Camera A at time t_3, Figure 3(c). The λ ∈ Λ whose motion barcode has the highest normalized cross-correlation to l's barcode is chosen as l's partner, and the partners with high normalized cross-correlation are considered possible corresponding epipolar lines. This basic building block is not symmetric with respect to the two cameras. Its result, a pair of candidate epipolar lines, is symmetric. We need to perform this step at least twice in order to proceed, and the cameras may or may not switch roles each time. When there is enough motion in the scene, using pixels in A that have more than 2 correspondences in B produces even better matches with fewer false positives, as it can also be checked whether the 3 correspondences in B are co-linear.

Third Line
We use RANSAC to estimate the location of the epipoles. We sample two pairs of putative corresponding epipolar lines from the previous step, with the probability of sampling a pair proportional to its matching score. The intersection of the two epipolar lines suggests epipole locations, e_A and e_B (Figure 4).
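In homogeneous coordinates, both intersecting two epipolar lines to get an epipole candidate and fitting the 1D homography from three line correspondences reduce to small linear-algebra steps. A minimal sketch with our own helper names, where the lines of a pencil are parametrized by a scalar (e.g. the angle of the line through the epipole):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line l with l . (x, y, 1) = 0 through image points p, q."""
    l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    return l / np.linalg.norm(l[:2])   # now |l . (x, y, 1)| is point-line distance

def intersect(l1, l2):
    """Intersection of two homogeneous lines, e.g. a candidate epipole."""
    e = np.cross(l1, l2)
    return e[:2] / e[2]                # assumes the lines are not parallel

def homography_1d(a, b):
    """2x2 matrix H of the 1D projective map sending the three pencil
    parameters a = (a1, a2, a3) to b = (b1, b2, b3), via the standard
    'map to 0, 1, infinity' construction."""
    def canon(x1, x2, x3):
        return np.array([[x2 - x3, -x1 * (x2 - x3)],
                         [x2 - x1, -x3 * (x2 - x1)]])
    H = np.linalg.inv(canon(*b)) @ canon(*a)
    return H / np.linalg.norm(H)
```

Two lines of a pencil pin down the epipole as their intersection; three parameter correspondences pin down the 1D homography between the two pencils, up to scale.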
In order to compute the 1D line homography, a third pair of lines is required. If a third pair of lines is available to us, we skip the following step. We pick a random frame t and connect all foreground objects to the epipoles with lines T_A = {p_A^t ∨ e_A} and T_B = {q_B^t ∨ e_B}; the third correspondence is found by matching the barcodes of lines from T_A and T_B. These three pairs determine the 1D line homography, which together with the epipoles is sufficient to compute the fundamental matrix.

Validation
The validation step is carried out in each RANSAC iteration, to evaluate the quality of the estimated epipoles and the homography. Similarly to [2], we compute the 1D line homography between the 3 pairs of lines, sample uniformly 10 lines from the pencil around e_A, transform them to the pencil around e_B, and compute the barcode correlation between the 10 pairs of lines. The epipole and homography with the highest score are used to compute the fundamental matrix between the cameras. The recovered parameters may be iteratively optimized via bundle adjustment.

Planar motion
Our algorithm does not work on pure planar motion, since it requires two points with different depths on a ray from the camera. However, in the special configuration of one camera on the plane and the other off it, the location of the epipole in the off-plane camera frame may be recovered. In this variant of the algorithm, the first half is the same, with the on-plane camera playing the role of Camera A, with point p. Then, given candidate lines Λ_p from Camera B, we exploit the following facts: (i) there is no motion outside the plane, and (ii) Camera B is off the plane. It follows that all the motion visible on the epipolar line through p in A is concentrated around p. We then sample p's barcode from a disc around it, instead of sampling from a line, and use NCC with the barcodes of Λ_p. The one with the highest score is kept. We only recover epipolar lines in B, which is not enough to run the validation step to choose the correct epipole among all intersections of lines. Instead, we ignore lines with matching scores under a certain threshold, and vote for the epipole by maximal consensus voting. This step is carried out using RANSAC, where two lines are drawn in every iteration, their intersection yields the candidate epipole, and the number of lines which agree with the epipole is counted. The candidate with the maximal set of inliers is chosen as the epipole. One definition of inliers is the one used in [7], but a simpler approach, which works well when the epipole is inside the image boundaries, is measuring whether the perpendicular distance between the epipole and a line is below a certain threshold. As a side effect, this process allows Camera A to be wide-angle with extreme lens distortion. We are not interested in image lines (in A), thus we are not worried about lens distortion, because from the point of view of Camera A we are only interested in rays through pixels, and those are not altered by lens distortion. This even improves accuracy, with more possible epipolar lines (in B) and larger angles between them.

Static objects
In some cases, features of multiple objects projected to the same point may be extracted. A dynamic object can occlude a static one, for which a different kind of feature (e.g. SIFT [12]) can be detected. The scene can even be fully static with multiple objects detected at the same image point, such as semi-transparent surfaces. Various algorithms exist to separate reflections from transmitted light (for example [9, 10, 16]). Two features extracted from the separated layers, matched to their corresponding points in the other camera, will produce an epipolar line.

Coupling with other features
Other features can be used in addition to motion barcodes to guide the search. For example, two objects imaged on p having certain colors (identifiable from other viewing points) will constrain the search in B for objects with matching colors. More complex features, such as deep features, could be used; for example, in a natural scene with a high number of moving objects, we can isolate one kind of moving object, e.g. butterflies, and process only their locations.

Experiments
We evaluated our algorithm on real and simulated video streams. Since this approach is novel, there are no existing suitable real datasets. The authors of [7] provided us with their synthetic datasets cubes and thin cubes. We adopted their area measure and used the same threshold for the definition of inliers. The main difference is in the first step of the algorithm, where we need to find putative corresponding epipolar line pairs. In our case we need to compute the normalized cross correlation between about 10,000 pairs of motion barcodes, whereas in [7] they needed to compute the normalized cross correlation between about 100,000,000 pairs of motion barcodes. The number of barcode correlations which were calculated is four orders of magnitude less, and the inlier fraction was reasonable (49.7% in thin cubes and 58.1% in cubes, on average). With our relatively tiny number of candidates, the whole algorithm took a small fraction of a second as opposed to minutes.

Examples
To validate our method on real video examples we filmed several scenes with various types of motion. Figure 5 shows an example from a real video with planar motion. A wide-angle Camera A (GoPro Hero 3+) is mounted at a height of about a meter above the ground facing towards a busy square (right image). Another camera, B, captured the same scene from a typical surveillance angle from a nearby roof (left image). An example of images of a static scene with a semi-transparent surface is shown in Figure 10. Behind the flat window, part of a corridor with two doors and a painting on the wall is visible.
The reflection on the glass consists of the two cameras with tripods, and buildings behind. The difference in colors between the cameras is due to different white balance. The two red dots marked on the left image (A) are points where two corner points were detected on different surfaces (one behind the glass and one reflected on it); the two layers have been separated and shown individually. The two black boxes show a detected corner point on a door and a point on the tripod of Camera A. The same points are marked with red dots on the right image (B). Since the reflecting surface is flat, the virtual location of the reflected tripod is the same for A and B. Thus, its projection on B must lie on the same epipolar line as the corner of the door. A second line is obtained by applying the same to a second point, and their intersection yields the epipole. For visualization, the reflections of the two camera centers, which of course share an epipolar line, have also been marked and connected by a line. Representative samples from other real video experiments are shown in Figures 6, 7, 8, and 9.

Conclusion
We introduced a method for finding corresponding epipolar lines from multiple correspondences to a single pixel. We conducted experiments with real and synthetic videos, where our method was shown to calibrate cameras similarly to the state of the art but with much less computation.

Figure 1: A motion barcode b_l of a line l is a vector in {0, 1}^N. The value of b_l(i) is "1" when a moving object intersects the line in frame i (black entries) and "0" otherwise (white entries).
Figure 3: Basic building blocks of our algorithm. (a) Co-temporality is the main feature used. (b) The correspondences of a single pixel lie on an epipolar line. (c) Matching epipolar lines have similar motion barcodes.
Figure 4: Recovering epipoles from two pairs of epipolar lines.
Figure 5: An example from the Square sequence. (a) An image from the off-plane camera (B), with recovered epipolar lines overlayed. The small box zooms in on the cameraman of the on-plane camera (A). (b) A frame from the on-plane wide-angle camera, taken at the same time. The area around the other camera is enlarged for convenience.
Figure 6: A pair of representative frames from the Threads sequence. Recovered pairs of epipolar lines share the same color. Note that although part of the background is visible in both videos, the epipoles cannot be recovered using only corresponding points from the background, since it is essentially planar.
Figure 7: A pair of representative frames from the Fish sequence with overlaid corresponding epipolar lines. Notice that the camera visible in each of the images is not the other camera, but its reflection on the aquarium wall. The two camera reflections are located on corresponding epipolar lines (turquoise). Best viewed in color.
Figure 8: A representative pair of frames from the Balls sequence. When an object is a perfect sphere, its 3D centroid projects exactly to the center-of-mass of the detected silhouette (up to the precision of the foreground detection).
Figure 9: A representative pair of frames from the Drones sequence. Each camera is visible in the other's frame.
Figure 10: In still images of semi-transparent surfaces such as windows, multiple objects may be visible at the same image location. (a) Separating the reflections from the transmitted light results in two images (highlighted black boxes); features extracted from these images will correspond to (different) points on an epipolar line in the right image.
(b) The two corresponding epipolar lines are shown, and a third one, namely the line connecting the reflections of both camera centers.

Acknowledgement: This research was supported by the Israel Ministry of Science, by the Israel Science Foundation, and by the DFG.

References
[1] G. Amato, F. Falchi, C. Gennaro, and F. Rabitti. YFCC100M-HNfc6: A large-scale deep features benchmark for similarity search. In Similarity Search and Applications: 9th International Conference, SISAP, pages 196-209. Springer International Publishing, Cham, 2016.
[2] G. Ben-Artzi, T. Halperin, M. Werman, and S. Peleg. Epipolar geometry based on line similarity. In ICPR, 2016.
[3] G. Ben-Artzi, Y. Kasten, S. Peleg, and M. Werman. Camera calibration from dynamic silhouettes using motion barcodes. In CVPR, 2016.
[4] G. Ben-Artzi, M. Werman, and S. Peleg. Event retrieval using motion barcodes. In ICIP, pages 2621-2625, 2015.
[5] R. Hartley. In defense of the eight-point algorithm. IEEE Trans. PAMI, 19(6):580-593, 1997.
[6] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[7] Y. Kasten, G. Ben-Artzi, S. Peleg, and M. Werman. Fundamental matrices from moving objects using line motion barcodes. In ECCV, pages 220-228. Springer International Publishing, 2016.
[8] S. Khan and M. Shah. Consistent labeling of tracked objects in multiple cameras with overlapping fields of view. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10):1355-1360, 2003.
[9] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9), 2007.
[10] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2752-2759, 2014.
[11] H. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133-135, 1981.
[12] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[13] Q.-T. Luong and O. Faugeras. The fundamental matrix: Theory, algorithms, and stability analysis. IJCV, 17(1):43-75, 1996.
[14] M. Meingast, S. Oh, and S. Sastry. Automatic camera network localization using object image tracks. In ICCV, pages 1-8. IEEE, 2007.
[15] D. Pundik and Y. Moses. Video synchronization using temporal signals from epipolar lines. In ECCV, pages 15-28. Springer, 2010.
[16] Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman. Reflection removal using ghosting cues. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3193-3201, 2015.
[17] S. Sinha and M. Pollefeys. Camera network calibration and synchronization from silhouettes in archived video. IJCV, 87(3):266-283, 2010.
[]
[ "Boolean algebras, Morita invariance, and the algebraic K-theory of Lawvere theories", "Boolean algebras, Morita invariance, and the algebraic K-theory of Lawvere theories" ]
[ "Anna Marie Bohmann ", "Markus Szymik " ]
[]
[]
The algebraic K-theory of Lawvere theories is a conceptual device to elucidate the stable homology of the symmetry groups of algebraic structures such as the permutation groups and the automorphism groups of free groups. In this paper, we fully address the question of how Morita equivalence classes of Lawvere theories interact with algebraic K-theory. On the one hand, we show that the higher algebraic K-theory is invariant under passage to matrix theories. On the other hand, we show that the higher algebraic K-theory is not fully Morita invariant because of the behavior of idempotents in non-additive contexts: We compute the K-theory of all Lawvere theories Morita equivalent to the theory of Boolean algebras.
10.1017/s0305004123000105
[ "https://export.arxiv.org/pdf/2011.11755v3.pdf" ]
245,144,952
2011.11755
1bf02f993f2730ee16f48b707add336a39edf086
Boolean algebras, Morita invariance, and the algebraic K-theory of Lawvere theories
Anna Marie Bohmann, Markus Szymik
January 2023

The algebraic K-theory of Lawvere theories is a conceptual device to elucidate the stable homology of the symmetry groups of algebraic structures such as the permutation groups and the automorphism groups of free groups. In this paper, we fully address the question of how Morita equivalence classes of Lawvere theories interact with algebraic K-theory. On the one hand, we show that the higher algebraic K-theory is invariant under passage to matrix theories. On the other hand, we show that the higher algebraic K-theory is not fully Morita invariant because of the behavior of idempotents in non-additive contexts: We compute the K-theory of all Lawvere theories Morita equivalent to the theory of Boolean algebras.

Quillen's seminal work [Qui72] used algebraic K-theory to organize our thinking about the stable homology of general linear groups. This initiated generalizations to contexts far broader than that of rings. In this paper, we restrict our attention to Lawvere's algebraic theories. These structures provide a happy medium between rings and symmetric monoidal categories: no higher-categorical language is required, and they are much more flexible than rings. For instance, the stable homology of the symmetric groups and of the automorphism groups of free groups [Gal11] fit into this context as well. Our results are motivated by such stable homology computations, the starting point being the following fact (see Theorem 2.6): For every Lawvere theory T, there is an isomorphism

colim_r H_*(Aut(T^r)) ≅ H_*(Ω^∞_0 K(T))

between the stable homology of the automorphism groups of finitely generated free objects of the theory T and the homology of the zero component Ω^∞_0 K(T) of the algebraic K-theory space Ω^∞ K(T).
The surprising power of this observation comes from two sources. First, the K-theory space or spectrum is often easier to describe than its homology. This happens, for instance, for the symmetric groups. Second, algebraic K-theory can sometimes be computed without explicitly using the groups Aut(T^r) (see [SW19], for example). Here, we present a new stable homology computation, for the theory of Boolean algebras, phrased once again in terms of algebraic K-theory.

Theorem A. For the algebraic K-theory of the Lawvere theory Boole of Boolean algebras, we have

K_*(Boole) ≅ π_*(S)/(2-power torsion).

In this result, the groups π_*(S) are the stable homotopy groups of spheres. These groups are the K-groups of the initial Lawvere theory of sets, but the resulting homomorphism to K_*(Boole) is not surjective (see Proposition 5.3). Theorem A is a consequence of the following spectrum-level result, proved as Theorem 5.1, which is also a generalization from Boolean algebras to many-valued logics as modeled by the Lawvere theories Post_v of Post algebras of valence v. The superscript in R^× refers to the units of a ring (spectrum) R.

Theorem B. For every integer v ⩾ 2, there is a homotopy pullback square of spectra

K(Post_v) ──→ S[1/v]^×
    │              │
    ▼              ▼
  HZ^×  ────→ HZ[1/v]^×

These results can be conceptualized in terms of Morita invariance. Two rings are called Morita equivalent if they have equivalent categories of modules. Morita equivalent rings must have isomorphic higher algebraic K-groups (see [Wei13, IV Ex. 1.21, IV 6.3.5]). More generally, two Lawvere theories are called Morita equivalent if their categories of models are equivalent. This is the case if and only if one of them is an idempotent modification of a matrix theory of the other; see the brief review in Section 4. We first prove a positive result (see Theorem 4.1), which we expect to be a useful tool in stable homology computations.

Theorem C.
The higher algebraic K-theory of Lawvere theories is invariant under passage to matrix theories. Because we define the algebraic K-theory of Lawvere theories in terms of free models, there is no hope of extending this result to K_0: there are even Morita equivalent rings that have non-isomorphic K_0's when these K-groups are defined using free modules only. This is due, of course, to the presence of projectives that are not free. Arguably, the ability to detect those non-free projectives is one desirable feature of lower K-theory. For rings, we could have built that feature into our theory by completing idempotents. In an additive category, all retracts have complements, and this completion does not change the higher algebraic K-theory, only K_0. However, for general Lawvere theories, this fix for K_0 is not possible without changing the higher algebraic K-theory: we show that completing at idempotents can change the higher algebraic K-groups. In fact, since the Lawvere theories Post_v are all Morita equivalent, our computations in Theorem B show the following:

Theorem D. The higher algebraic K-theory of Lawvere theories is not Morita invariant.

We can rephrase this result in terms of the "syntax" of a Lawvere theory, which is defined by the free models, and its "semantics," which comprises all models: the higher algebraic K-theory of an algebraic theory depends essentially on the syntax of the theory, rather than merely its semantics. We refer to Lawvere's writings [Law63, Law69, Law75] for the distinction between syntax and semantics in this context. From the perspective of mathematical logic and topos theory [Car79], different notions of equivalence of theories, both semantic and syntactical, have recently been discussed and compared in [BH16, Tse17].
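To indicate how a spectrum-level statement like Theorem B yields a graded computation like Theorem A, one can use the Mayer–Vietoris sequence of a homotopy pullback square. The following display is our own summary sketch (with P the pullback of X → Z ← Y), not taken verbatim from the paper:

```latex
% Mayer--Vietoris sequence of a homotopy pullback square of spectra:
\cdots \longrightarrow \pi_n(P) \longrightarrow \pi_n(X) \oplus \pi_n(Y)
       \longrightarrow \pi_n(Z) \longrightarrow \pi_{n-1}(P) \longrightarrow \cdots
% By Serre's finiteness theorem, \pi_n(\mathbb{S}) is a finite abelian group
% for n \geq 1, so inverting v kills exactly the v-power torsion:
\pi_n(\mathbb{S})[1/v] \;\cong\; \pi_n(\mathbb{S})/(\text{$v$-power torsion}),
\qquad n \geq 1 .
```

For v = 2, the right-hand side is the graded abelian group appearing in Theorem A.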
We refer to the paper [R-WW17] by Randal-Williams and Wahl, which discusses the homological stability problem in a more general framework than ours. Nonetheless, the specific setting of Lawvere theories balances rigidity and flexibility in a way that suggests it to be particularly amenable to homological stability questions as well. Additional motivation for the algebraic K-theory of Lawvere theories, in the form of multiplicative matters and applications to assembly maps, is discussed in [BS]. Outline. Section 1 recalls Lawvere's categorical approach to universal algebra and sets up the notation that we use. In Section 2, we define the K-theory of algebraic theories and show that it encodes the stable homology of the automorphism groups of the free models. A plethora of examples that do not come from rings and modules are presented in Section 3 before we start our discussion of Morita invariance with our theorem for matrix theories in Section 4. The final Section 5 contains the computation for the theory of Boolean algebras and all theories equivalent to them. Lawvere theories We need to review the basic notions and set up our notation for Lawvere theories [Law63]. Some textbook references are [Par69,Sch70,Bor94,ARV11]. Choose a skeleton E of the category of finite sets and (all) maps between them. For each integer r ≥ 0, such a category has a unique object with precisely r elements, and there are no other objects. For the sake of explicitness, let us choose the model r = {a ∈ Z | 1 ≤ a ≤ r} for such a set. A set with r + s elements is the (categorical) sum, or co-product, of a set with r elements and a set with s elements. Definition 1.1. A Lawvere theory T = (F T , F T ) is a pair consisting of a small category F T together with a functor F T : E → F T that is bijective on sets of objects and that preserves sums. This means that the canonical map F T (r) + F T (s) → F T (r + s) induced by the canonical injections is an isomorphism for all sets r and s in E.
The image of the set r with r elements under the functor F T : E → F T will be written T r , so that the object T r is the sum in the category F T of r copies of the object T 1 . We recall two of the most important classes of examples of Lawvere theories. Example 1.2. Let A be a ring. Let F A be the full subcategory of the category Mod A of A-modules spanned by the modules A ⊕r for r ≥ 0. This category is a skeleton of the category of finitely generated, free A-modules. The functor F A : E → F A that sends the set with r elements to the free module A ⊕r with r generators is a Lawvere theory, called the theory of A-modules. Note that A ⊕0 = 0 is the 0 module. In particular, for the initial ring A = Z, we have the Lawvere theory of abelian groups. Rings can be very complicated, and this is even more true for Lawvere theories, which are significantly more general. Example 1.3. Let G be a group. Let F G be (a skeleton of) the full subcategory of the category of G-sets on the free G-sets with finitely many orbits: those of the form ∏ r G. The functor F G : E → F G sending r to ∏ r G is a Lawvere theory, called the theory of G-sets. In particular, for the trivial group G = {e}, we have the Lawvere theory E of sets. Remark 1.4. Some authors prefer to work with the opposite category F op T , so that the object T r is the product (rather than the co-product) of r copies of the object T 1 . For example, this was Lawvere's convention when he introduced this notion in [Law63]. Our convention reflects the point of view that the object T r should be thought of as the free T -model (or T -algebra) on r generators, covariantly in r (or rather in E). To make this precise, recall the definition of a model (or algebra) for a theory T . Definition 1.5. Given a Lawvere theory T , a T -model (or T -algebra) is a presheaf X (of sets) on the category F T that sends (categorical) sums in F T to (categorical, i.e. Cartesian) products of sets.
(This means that the canonical map X(T r + T s ) → X(T r ) × X(T s ) induced by the injections is a bijection for all sets r and s in E.) We write M T for the category of T -models, and we write M T (X,Y ) to denote the set of morphisms X → Y between T -algebras. Such a morphism is defined to be a map of presheaves, i.e., a natural transformation, so that M T is a full subcategory of the category of presheaves on F T . The values of a T -model are determined up to isomorphism by the value at T 1 , and we often use the same notation for a model and its value at T 1 . Example 1.6. The categories of models for the Lawvere theories of Examples 1.2 and 1.3 are the categories of A-modules and G-sets, respectively. For example, the action of G on itself from the right gives for each g ∈ G a G-map g : ∏ 1 G → ∏ 1 G in the category F G of Example 1.3. Given a model X : F op G → Sets, the set maps X(g) : X( ∏ 1 G) → X( ∏ 1 G) combine to produce the action of the group G on the set X( ∏ 1 G). Example 1.7. The co-variant Yoneda embedding F T → Pre(F T ) sends the object T r of F T to the presheaf T s → F T (T s , T r ) represented by it. Such a presheaf is readily checked to be a T -model. We refer to a T -model of this form as free. The definitions unravel to give natural bijections M T (T r , X) ∼ = X r for T -models X, so that T r is indeed a free T -model on r generators. We can summarize the situation as follows. The Yoneda embedding of F T into presheaves on F T factors F T → M T → Pre(F T ) through the category M T of T -models. Both functors are fully faithful, and the free T -models are those in the (essential) image of the first functor. Definition 1.8. A morphism S → T between Lawvere theories is a functor L : F S → F T that (strictly) preserves sums. This is equivalent to the condition that F T ∼ = L • F S , i.e., that L is a map under E.
It is common to describe a morphism S → T between two Lawvere theories by giving a functor R : M T → M S that is compatible with the forgetful functors to the category M E of sets. In this case, R has a left adjoint by Freyd's adjoint functor theorem, and L is induced by the restriction of that left adjoint to free models. For any Lawvere theory T , the category M T of T -models is complete and cocomplete. Limits are constructed pointwise, and the existence of colimits follows from the adjoint functor theorem. The category M T becomes symmetric monoidal with respect to the (categorical) sum, and the unit object T 0 for this structure is also an initial object in the category M T . Algebraic K-theory and stable homology In this section, we define the algebraic K-theory spectrum K(T ) of a Lawvere theory T , show how it encodes the stable homology of the automorphism groups of free T -models, and prove our positive results on Morita invariance. We first specify the constructions of K-theory we use in this paper. Our primary approach is to view Lawvere theories as a special case of symmetric monoidal categories and apply the classic constructions of K-theory for the latter. There are several ways of approaching these constructions; we begin with a brief overview. Let S denote a symmetric monoidal groupoid. For the following to make sense, S needs to satisfy an additional assumption, but we show in Proposition 2.4 that this is always the case for the categories we are interested in. We can then pass to Quillen's categorification S −1 S of the Grothendieck construction. The canonical morphism BS → BS −1 S between the classifying spaces is a group completion, and the target is an infinite loop space. We refer to [Gra76] and Thomason's particularly brief and enlightening discussion [Tho80] for details.
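At the level of π 0 , group completion is the classical Grothendieck construction on the commutative monoid of isomorphism classes. As a small illustration (the helper names below are ours, not the paper's), the following sketch computes the Grothendieck group of a finite commutative monoid by brute force, using formal differences (a, b) with the usual translation-invariant equivalence relation:

```python
from itertools import product

def grothendieck_group_order(elements, add):
    """Brute-force order of the Grothendieck group of a finite commutative
    monoid: formal differences (a, b), with (a, b) ~ (c, d) if and only if
    a + d + k = c + b + k for some k in the monoid."""
    def equivalent(p, q):
        (a, b), (c, d) = p, q
        return any(add(add(a, d), k) == add(add(c, b), k) for k in elements)
    classes = []  # one representative pair per equivalence class
    for pair in product(elements, repeat=2):
        if not any(equivalent(pair, rep) for rep in classes):
            classes.append(pair)
    return len(classes)

# Toy monoid N/(x ~ x + 4 for x >= 1), realized on {0, 1, 2, 3, 4}:
elements = range(5)
def add(x, y):
    s = x + y
    return s if s <= 4 else (s - 1) % 4 + 1

print(grothendieck_group_order(elements, add))  # 4, i.e. Z/4
```

The toy monoid is, in fact, the monoid of isomorphism classes of finitely generated free models for the Cantor theory of arity 5 (Example 3.7 below), so the computation recovers K 0 (Cantor 5 ) = Z/4.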
To build a K-theory spectrum K(S) with underlying infinite loop space Ω ∞ K(S) ≃ BS −1 S, we can use Segal's definition of the algebraic K-theory of a symmetric monoidal category in terms of Γ-spaces. The equivalence comes from [Seg74,§4], where he shows that Ω ∞ K(S) is also a group completion of BS. Definition 2.1. Let T be a Lawvere theory. The algebraic K-theory of T is the spectrum K(T ) = K(F × T ), (2.1) that is, the spectrum corresponding to the symmetric monoidal groupoid F × T of isomorphisms in the symmetric monoidal category F T of finitely generated free T -models, where the monoidal structure is given by the categorical sum. Since the category F T can be identified with the symmetric monoidal category of finitely generated free T -models, Definition 2.1 concerns the algebraic K-theory of finitely generated free T -models. In particular, the group K 0 (T ) = π 0 K(T ) is the Grothendieck group of isomorphism classes of finitely generated free T -models. This group is always cyclic, generated by the isomorphism class [ T 1 ] of the free T -model on one generator. However, the group K 0 (T ) does not have to be infinite cyclic, as the Examples 3.7 and 4.2 below show. Remark 2.2. A morphism S → T of Lawvere theories (as in Definition 1.8) induces, via the left-adjoint functor F S → F T , a morphism K(S) → K(T ) of algebraic K-theory spectra. The left adjoint F S → F T sends the free S-model S 1 on one generator to the free T -model T 1 on one generator. It follows that the induced homomorphism K 0 (S) → K 0 (T ) between cyclic groups is surjective, being the identity on representatives. One reason for interest in the algebraic K-theory of Lawvere theories is the relation to the stable homology of the sequence of automorphism groups attached to a Lawvere theory. We now make this relation precise. Let T be a Lawvere theory. The automorphism groups of the free algebras T r often turn out to be very interesting (see the examples in Section 3 below).
We use the notation Aut(T r ) for these groups. Given integers r, s ≥ 0, there is a stabilization homomorphism Aut(T r ) −→ Aut(T r+s ) (2.2) that 'adds' the identity of the object T s in the sense of the categorical sum +, and we use additive notation for this operation. More precisely, stabilization sends an automorphism u of T r to the automorphism of T r+s that makes the diagram

T r+s ───────────→ T r+s
  ↑ ≅                 ↑ ≅
T r + T s ──u+T s──→ T r + T s

commute. By abuse of notation, this automorphism of the object T r+s will sometimes also be denoted by u + T s . Remark 2.3. The alert reader will have noticed that we have not specified our choice of isomorphism T r +T s ∼ = T r+s in the preceding diagram. While the requirement that T r+s be the sum of T r and T s provides a canonical identification here, we could in fact use any choice of isomorphism. All such choices obviously differ by conjugation by an automorphism of T r+s , so that they induce the same map in homology, which is all that matters for the purposes of this section. Proposition 2.4. For every Lawvere theory T , the stabilization maps Aut(T r ) → Aut(T r+1 ) are injective. Proof. It is enough to show that the kernels are trivial. This is clear for r = 0, since T 0 is initial, so that Aut(T 0 ) is the trivial group. For positive r we can choose a retraction ρ of the canonical embedding σ : T r → T r+1 . If u is in the kernel of the stabilization map, then we have the following commutative diagram.

T r ────u────→ T r
 σ↓              ↓σ
T r+1 ──id──→ T r+1 ──ρ──→ T r

It implies u = id. Stabilization leads to a diagram Aut(T 0 ) −→ Aut(T 1 ) −→ Aut(T 2 ) −→ Aut(T 3 ) −→ · · · (2.3) of groups for every Lawvere theory T . We write colim r Aut(T r ) for the colimit of the diagram (2.3) with respect to the stabilization maps. This is the stable automorphism group for the Lawvere theory T . Let us record the following group theoretical property of the stable automorphism groups.
This is presumably well-known already in more or less this generality. We nevertheless include an argument here for completeness' sake. Proposition 2.5. For every Lawvere theory T , the commutator subgroup of the stable automorphism group colim r Aut(T r ) is perfect. Proof. Given a commutator in the group colim r Aut(T r ), we can represent it as [u, v] for a pair u, v of automorphisms in the group Aut(T r ) for some r. Allowing us thrice the space, in the group Aut(T 3r ) we have the identity [u, v] + id(T 2r ) = [u + u −1 + id(T r ), v + id(T r ) + v −1 ]. It therefore suffices to prove that each element of the form w + w −1 is a commutator. This is a version of Whitehead's lemma that holds in every symmetric monoidal category: whenever there are automorphisms w 1 , . . . , w n of an object such that their composition w 1 · · · w n is the identity, then w 1 + · · · + w n is a commutator. We apply this to the category F T with respect to the monoidal product given by categorical sum +. After these preliminaries, we now move on to give another model for the algebraic K-theory space of a Lawvere theory T , one that uses the Quillen plus construction. This construction led to Quillen's historically first definition of the algebraic K-theory of a ring [Qui71] (see also [Wag72] and [Lod76]). The plus construction can be applied to connected spaces X for which the fundamental groups have perfect commutator subgroups. It produces a map X → X + into another connected space X + with the same integral homology, and such that the induced map on fundamental groups is the abelianization. In fact, these two properties characterize the plus construction. By Proposition 2.5, the commutator subgroup of colim r Aut(T r ) is perfect. Therefore, the plus construction can be applied to the classifying space Bcolim r Aut(T r ) in order to produce another space Bcolim r Aut(T r ) + . Theorem 2.6.
For every Lawvere theory T , there is an equivalence Ω ∞ K(T ) ≃ K 0 (T ) × Bcolim r Aut(T r ) + (2.4) of spaces. Proof. Quillen, in his proof that the plus construction of K-theory agrees with the one obtained from the Q-construction, takes an intermediate step (see [Gra76,p. 224]): he shows that the plus construction, together with K 0 , gives a space that is equivalent to the classifying space of his categorification S −1 S of the Grothendieck construction of a suitable symmetric monoidal category S. This part of his argument applies here to show that there is an equivalence K 0 (T ) × Bcolim r Aut(T r ) + ≃ B((F × T ) −1 F × T ) of spaces for every Lawvere theory T . The claim follows because we already know that the right-hand side has the homotopy type of Ω ∞ K(T ). In general, there seems to be no reason to believe that an artificial product such as the one in (2.4) would form a meaningful whole (see [Sch11, Warning 2.2.9]). The present case is special because K 0 (T ) is generated by the isomorphism class of the free T -algebra T 1 of rank 1. Other constructions of the same homotopy type do not separate the group K 0 (T ) of components from the rest of the space. One way or another, note that all components of the algebraic K-theory space K(T ) are equivalent; the group K 0 (T ) of components acts transitively on the infinite loop space Ω ∞ K(T ) up to homotopy. Since the plus construction does not change homology, the definition of the algebraic K-theory space immediately gives the following result. Theorem 2.7. For every Lawvere theory T , there is an isomorphism colim r H * (Aut(T r )) ∼ = H * (Ω ∞ 0 K(T )) between the stable homology of the automorphism groups of finitely generated free objects of the theory T and the homology of the zero component Ω ∞ 0 K(T ) of the algebraic K-theory space Ω ∞ K(T ).
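For the theory E of sets, where Aut(T r ) = Σ(r) and the categorical sum + is realized by block sum of permutations, both the stabilization maps of (2.2) and the commutator identity from the proof of Proposition 2.5 can be checked concretely. The sketch below is our own illustration; all helper names are ours:

```python
def compose(u, v):
    """(u o v)(i) = u[v[i]] for permutations given as tuples of images."""
    return tuple(u[v[i]] for i in range(len(u)))

def inverse(u):
    w = [0] * len(u)
    for i, ui in enumerate(u):
        w[ui] = i
    return tuple(w)

def block(*perms):
    """Block sum of permutations; models the categorical sum + in F_E."""
    out, offset = [], 0
    for p in perms:
        out.extend(offset + i for i in p)
        offset += len(p)
    return tuple(out)

def commutator(a, b):
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

u, v = (1, 2, 0), (1, 0, 2)        # elements of Sigma(3) = Aut(T_3)
e = (0, 1, 2)                      # identity of Sigma(3)

# Stabilization u -> u + T_3 is a homomorphism, and injective (Prop. 2.4):
assert block(compose(u, v), e) == compose(block(u, e), block(v, e))
assert block(u, e) != tuple(range(6))

# The identity from the proof of Proposition 2.5, checked in Aut(T_9):
# [u, v] + id(T_6) = [u + u^{-1} + id(T_3), v + id(T_3) + v^{-1}]
lhs = block(commutator(u, v), e, e)
rhs = commutator(block(u, inverse(u), e), block(v, e, inverse(v)))
assert lhs == rhs
print(lhs)  # (2, 0, 1, 3, 4, 5, 6, 7, 8)
```

The blockwise computation makes the proof transparent: both automorphisms on the right preserve the three blocks, and the commutator is the identity on the last two.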
Ideally, the algebraic K-theory spectrum K(T ) is more accessible and easier to understand and describe than the stable automorphism group colim r Aut(T r ). This is not at all plausible from the definition; only the now-classical methods of algebraic K-theory, which have been developed over half a century, allow us to take this stance. From this perspective, Theorem 2.7 should be thought of as a computation of the group homology, once the spectrum K(T ) is identified. The examples in Sections 3 and 5 give a taste of the flavor of some non-trivial (and non-linear) cases. Some non-linear examples The goal of this section is to demonstrate the interest in the algebraic K-theory K(T ) of Lawvere theories T beyond what are arguably the most fundamental examples, the theories of modules over rings: Example 3.1. Consider the theory of modules over a ring A, as in Example 1.2. The automorphism group of the free A-module A r of rank r is the general linear group Aut(A r ) = GL r (A). The algebraic K-theory spectrum K(A) is Quillen's algebraic K-theory (actually, the 'free' version). In particular K(Z) is the K-theory spectrum of the Lawvere theory of abelian groups, in the guise of Z-modules. We can now move on to discuss non-linear examples: theories that are not given as modules over a ring. Example 3.2. Consider the initial theory E of sets. The automorphisms are just the permutations, and the automorphism group Aut{1, . . . , r} = Σ(r) is the symmetric group on r symbols. The algebraic K-theory is the sphere spectrum: K(E) ≃ S. This is one version of the Barratt-Priddy theorem [Pri71,BP72]. We go into detail so that we can use the same notation later as well: Let Q = Ω ∞ S denote the infinite loop space of stable self-maps of the spheres. The path components of the space Q are indexed by the degree of the stable maps, as a reflection of π 0 (S) = Z, and we will write Q(r) for the component of maps of degree r.
There are maps BΣ(r) → Q(r) which are homology isomorphisms in a range that increases with r by Nakaoka stability [Nak60]. These maps fit together to induce a homology isomorphism BΣ(∞) → Q(∞) (3.1) between the colimits. The stabilization Q(r) → Q(r + 1) is always an equivalence, so that all the maps Q(r) → Q(∞) to the colimit are equivalences as well. Passing to group completions, the map (3.1) induces an equivalence Ω ∞ 0 K(E) ≃ Ω ∞ 0 S of infinite loop spaces, so that K(E) ≃ S as spectra. We refer to Morava's notes [Mor] for more background and for relations to the algebraic K-theory of the finite fields F q when the number q of elements goes to 1. Example 3.3. More generally, for any discrete group G, we can consider the Lawvere theory of G-sets. The algebraic K-theory spectrum of the Lawvere theory of G-sets is K(G-Sets) ≃ Σ ∞ + (BG), the suspension spectrum of the classifying space BG (with a disjoint base point +). This observation is attributed to Segal. In particular, for the Lawvere theory Z-sets, this gives K(Z-Sets) ≃ Σ ∞ + (BZ) ≃ Σ ∞ + (S 1 ) ≃ S ∨ ΣS. The theory Z-sets is the theory of permutations [Szy18]: a model is a set together with a permutation of that set. Example 3.4. Consider the theory Groups of (all) groups. In this case, the automorphism groups Aut(F r ) are the automorphism groups of the free groups F r on r generators. The algebraic K-theory space has been shown to be the infinite loop space underlying the sphere spectrum by Galatius [Gal11]: the unit S ≃ K(Sets) → K(Groups) is an equivalence. The theory of abelian groups has been dealt with in Example 3.1. Example 3.5. Interpolating between all groups and abelian groups, the theories of nilpotent groups of a fixed class give a tower K(Groups) −→ · · · −→ K(Nil c ) −→ · · · −→ K(Nil 1 ) ≃ K(Z) of algebraic K-theory spectra. This tower has been studied from the point of view of homological stability and stable homology in [Szy14] and [Szy19], respectively. Example 3.6.
In contrast to groups, the algebraic K-theory of the Lawvere theory Monoids of (associative) monoids (with unit) is easy to compute: the free monoid on a set X is modeled on the set of words with letters from that set, and it has a unique basis: the subset of words of length one, which can be identified with X. This implies that the automorphism group of the free monoid on r generators is isomorphic to the symmetric group Σ(r), so that the map K(E) → K(Monoids) from the algebraic K-theory of the initial theory E of sets is an equivalence. By Example 3.2, we get an equivalence K(Monoids) ≃ S of spectra. It follows, again from Galatius's theorem (see Example 3.4), that the canonical morphism K(Monoids) → K(Groups) is an equivalence. It would be interesting to see a proof of this fact that does not depend on his result. Example 3.7. Let a ≥ 2 be an integer. A Cantor algebra of arity a is a set X together with a bijection X a → X. The Cantor algebras of arity a are the models for a Lawvere theory Cantor a , and its algebraic K-theory has been computed in [SW19]: K(Cantor a ) ≃ S/(a − 1), (3.2) the mod (a − 1) Moore spectrum. In particular, the spectrum K(Cantor 2 ) is contractible. Note that the definition makes sense for a = 1 as well. In that case, we have an isomorphism between Cantor 1 and the Lawvere theory Z-Sets of permutations, and the equivalence (3.2) is still true by Example 3.3. Example 3.8. Lawvere theories can be presented by generators and relations. The 'generators' of a theory are specified in terms of a graded set P = ( P a | a ≥ 0 ), where P a is a set of operations of arity a. There is a free Lawvere theory functor P → T P that is left adjoint to the functor that assigns to a theory the graded set of operations. For instance, let [a] be the graded set that only has one element, and where the degree of that element is a. Then T [a] is the free theory generated by one operation of arity a. For instance, the Lawvere theory T [0] is the theory of pointed sets.
The Lawvere theory T [1] is the theory of self-maps (or N-sets): sets together with a self-map, and T [2] is the theory of magmas: sets equipped with a multiplication that does not have to satisfy any axioms. The free T [a] -model on a set X is given by the set of all trees of arity a with leaves colored in X. This model has a unique basis: the trees of height 1, and we can argue as in Example 3.6 that K(T [a] ) ≃ S. Example 3.9. There is a theory such that all models are either empty or singletons. It has no operations aside from the projections X n → X, and the relations require that all these projections are equal, so that x 1 = x 2 for all elements x j in a set X that is a model. Morita equivalences and invariance for matrix theories Given a Lawvere theory T and an integer n ≥ 1, the matrix theory M n (T ) is the Lawvere theory such that the free M n (T )-model on a set X is the free T -model on the set n × X (see [Wra71,Sec. 4]). In other words, the category F M n (T ) is the full subcategory of the category F T consisting of the objects T nr for r ≥ 0. More diagrammatically, we may view n × − as a strong monoidal endofunctor of F T , which takes an object T r to the n-fold sum of T r with itself. The underlying category of the Lawvere theory M n (T ) is the image of n × − and the structure functor that defines M n (T ) as a Lawvere theory is the composite E −→ F T −→ F T , where the first functor is the structure functor of T and the second is n × −. It is easy to describe all M n (T )-models up to isomorphism: given a T -model X, we can construct an M n (T )-model on the n-th cartesian power X n of X; the r-ary M n (T )-operations (X n ) r → X n are the maps such that all components (X n ) r → X are nr-ary T -operations on X. In particular, we get a unary operation X n → X n for each self-map of the set n, and so the monoid End(n) of self-maps of the set n acts on every M n (T )-model. There is a canonical morphism T −→ M n (T ) (4.1) of Lawvere theories. We now show that the higher algebraic K-theory of a Lawvere theory T is invariant under passage to matrix theories M n (T ). Theorem 4.1.
For every Lawvere theory T , there is an equivalence Ω ∞ 0 K(M n (T )) ≃ Ω ∞ 0 K(T ) of infinite loop spaces. Proof. We may use that the existence of isomorphisms M n (T ) r ∼ = T n×r of models implies that we have isomorphisms Aut(M n (T ) r ) ∼ = Aut(T n×r ) between the automorphism groups. Therefore, when we compare the diagrams (2.3), the one with the groups Aut(M n (T ) r ) for M n (T ) naturally embeds as a cofinal subdiagram of the diagram with the groups Aut(T r ) for T . We only see every n-th term, but the colimits can be identified, of course, and this proves the statement on the level of spaces. To see that we have an equivalence of infinite loop spaces, we show that this map is induced by a map of spectra. However, the equivalence is not induced by the morphism K(T ) → K(M n T ) of spectra that comes from the canonical morphism (4.1) of theories. A remedy is to leave the world of Lawvere theories for the rest of the proof and use the general context of symmetric monoidal categories. Then we see that the equivalence does come from a morphism K(M n T ) → K(T ) of spectra in the other direction. This morphism of spectra is obtained from the symmetric monoidal functor F M n (T ) → F T given by the inclusion of F M n (T ) into F T as the image of the functor n × −. This functor is defined by M n (T ) r ∼ = T n×r → T n×r and so, while it is essentially the identity on morphisms, it is not necessarily surjective on objects. In particular, it need not be surjective on the level of components, as is required for a map of Lawvere theories according to Remark 2.2. On the component of zero, however, it has the effect described in the first part of the proof, showing that we have an equivalence of infinite loop spaces. In fact, as tempting as it might be to hope for an equivalence K(M n T ) ≃ K(T ) of K-theory spectra, we cannot have that, in general, because of the difference in the groups K 0 of components: Example 4.2. As explained in [SW19, Rem.
5.3] and Example 3.7 of the preceding section, the Cantor theories Cantor a of arity a ≥ 2 have K 0 (Cantor a ) = Z/(a − 1) finite. But by construction, the matrix theory M n (Cantor a ) only involves the elements represented by multiples of n in the group Z/(a−1). Therefore, if n is not coprime to a−1, then K 0 (M n Cantor a ) will be strictly smaller than K 0 (Cantor a ). In particular, the morphisms between K(Cantor a ) and K(M n Cantor a ) described in Remark 2.2 and the proof of Theorem 4.1 are not equivalences in this case. Theorem 4.1 might suggest that the higher algebraic K-theory of Lawvere theories is Morita invariant, but we show in the rest of the paper that this is not the case. We start with the definition. Definition 4.3. Two Lawvere theories S and T are Morita equivalent if their categories M S and M T of models are equivalent. For instance, if S is the Lawvere theory of modules over a ring A, then T is also a Lawvere theory of modules over a ring B, and this ring B is Morita equivalent to A in the usual sense; see [ASS06, Ex. 3.1]. Thus, Definition 4.3 is in agreement with the established terminology for Lawvere theories that are given by rings. In general, it turns out that the Morita equivalence relation is generated by two processes, one of which we have already seen. Since the behavior of algebraic K-theory on passage to matrix theories is already fully described by our results above, we now turn to idempotent modifications. Let T be a Lawvere theory with an idempotent endomorphism u : T 1 → T 1 of the free T -model T 1 on one generator. We write u n : T n → T n for the n-fold sum, so that u 1 = u. An idempotent u is pseudo-invertible if, for some fixed k, there are morphisms T 1 → T k and T k → T 1 such that their composition around u k : T k → T k is the identity on T 1 . Lemma 4.5. Consider the following properties for a morphism f : T r → T s in F T with respect to a fixed idempotent u. (1) f = u s gu r for some g : T r → T s . (2) f = u s f u r . (3) u s f = f u r . Conditions (1) and (2) are equivalent to each other, and they imply condition (3). We define F u T ⊆ F T to be the subcategory (!) consisting of the morphisms that satisfy condition (3) in Lemma 4.5 above.
Note that (1) and (2) do not define a subcategory in general, because the identity morphisms satisfy (3), but not necessarily (1) or (2). However, we can define a new category structure on the subsets of F T (T r , T s ) consisting of morphisms satisfying conditions (1) and (2): these subsets are closed under composition, and the u r 's act as new identities. This gives another category F uTu and another Lawvere theory, the idempotent modification uTu of T with respect to the idempotent u. There is a functor F u T → F uTu defined by f → u f = u f u = f u, and we can, in principle, compare the new Lawvere theory uTu to T using the zigzag F uTu ←− F u T −→ F T of functors defined above, all of which are the identities on objects. These functors then induce a comparison zigzag of K-theory spectra. However, this zigzag of K-theory spectra is not generally an equivalence. In the following section, we provide examples of Lawvere theories that are Morita equivalent but have different higher algebraic K-theory. This also shows that [Wei13, Ex. IV.4.13(a)], which suggests that the inclusion of a symmetric monoidal category into its idempotent completion should always be cofinal, is lacking an additivity assumption. Theories equivalent to the theory of Boolean algebras In this section, we present new computations: we determine the algebraic K-theory of the Lawvere theory of Boolean algebras. Our methods allow us to deal more generally with the Lawvere theories of v-valued Post algebras. Boolean algebras form the case v = 2. The Lawvere theories of v-valued Post algebras are all Morita equivalent to each other. In fact, these form the set of all the Lawvere theories that are Morita equivalent to the theory of Boolean algebras. As a consequence of our computations, we show that algebraic K-theory is not Morita invariant in general. Boolean algebras and their relationship to set theory and logic are fundamental for mathematics and well-known.
Post algebras were introduced by Rosenbloom [Ros42]. They are named after Post's work [Pos21] on non-classical logics with v truth values. Later references are Wade [Wad45], Epstein [Eps60], as well as the surveys by Serfati [Ser73] and Dwinger [Dwi77], to which we refer for defining equations and explicit models of the free algebras. In the following, we will only recall their definition as a Lawvere theory and what is necessary for our purposes. We write Map(R, S) for the set of all maps from a set R to a set S. As before, we build on the specific finite sets r = {a ∈ Z | 1 ≤ a ≤ r}. For a fixed integer v ≥ 2, we now consider the category whose objects are the finite sets of the form Map(r, v), where r ranges over all integers r ≥ 0, and whose morphisms are all maps between these sets. By construction, this category has finite products, and every object Map(r, v) is the r-th power of the object Map(1, v) = v. Therefore, the opposite category has finite co-products, and every object is a multiple of one object, the one corresponding to the set Map(1, v). This opposite category defines the Lawvere theory Post v of v-valued Post algebras. For v = 2, Post's v-valued logic specializes to the 2-valued Boolean logic, and we have Post 2 = Boole, the Lawvere theory of Boolean algebras. Using our description above, this is a well-known consequence of Stone duality: the set of subsets of Map(r, 2) is a free Boolean algebra on r generators, with 2^(2^r) elements in total. Dukarm [Duk88,Sec. 3] notes that the Lawvere theories Post v are all Morita equivalent to each other. After all, for any given integer v ≥ 2, any finite set is a retract of a set of the form Map(r, v) for r large enough. There is no need for us to choose such a retraction.
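The count 2^(2^r) can be checked by brute force: the free Boolean algebra on r generators is the set of all Boolean functions {0, 1}^r → {0, 1}, equivalently the set of subsets of Map(r, 2), and its automorphism group is the full symmetric group on the v^r points (cf. the proof of Theorem 5.1 below). A small sanity check of ours:

```python
from itertools import product
from math import factorial

r, v = 2, 2
points = list(product(range(v), repeat=r))    # Map(r, v): v^r points
# Free Post_v algebra of rank r: all maps Map(r, v) -> v (truth tables).
functions = list(product(range(v), repeat=len(points)))

print(len(functions))                 # 16 = 2^(2^2) elements
assert len(functions) == v ** (v ** r)

# Its automorphism group is Sigma(v^r), here Sigma(4) of order 24:
print(factorial(len(points)))         # 24 = (2^2)!
```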
(The situation is comparable to the abstract existence of isomorphisms Q̄ p ∼ = C of fields between the algebraic closure Q̄ p of the field Q p of p-adic numbers and the field C of complex numbers, showing that the isomorphism type of Q̄ p is independent of p.) In any event, it follows from the existence of such retractions that the idempotent completions of the categories of free v-valued Post algebras are equivalent to the category of non-empty finite sets, regardless of v. Since these idempotent completions are independent of the integer v, so is the Morita equivalence class of Post v , by the results recalled in Section 4. The following theorem shows that, in contrast, higher algebraic K-theory detects the number v of truth values, and K-theory is therefore not fully Morita invariant. In order to state the result, we need the spectrum R × of units of a commutative ring spectrum R (see [May77]). This spectrum is defined so that its underlying infinite loop space Ω ∞ R × is the union of the components of Ω ∞ R that represent units, i.e., are invertible in the ring π 0 R. The inclusion Ω ∞ R × → Ω ∞ R then induces an isomorphism on higher homotopy groups. The inclusion is not, however, a morphism of infinite loop spaces. Instead, the delooping R × of Ω ∞ R × comes from the E ∞ multiplication of R. We need the units for the localization R = S[1/v] of the sphere spectrum S away from v and its 0-truncation, the Eilenberg-Mac Lane spectrum R = HZ[1/v]. The truncation induces a morphism S[1/v] × → HZ[1/v] × of spectra of units. There is also a homomorphism Z → Z[1/v] × of abelian groups that sends the generator 1 to the unit v, which induces a map of Eilenberg-Mac Lane spectra. Theorem 5.1. For every integer v ≥ 2, there is a homotopy pullback square

K(Post v ) ────→ S[1/v] ×
    ↓                ↓
   HZ ─────v────→ HZ[1/v] ×

of spectra. In particular, we have K * (Post v ) ∼ = π * (S)/v-power torsion, where the π * (S) are the stable homotopy groups of spheres.
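In low degrees, the computation K * (Post v ) ∼ = π * (S)/v-power torsion can be made explicit using the well-known orders of the first stable stems, which are cyclic for n = 1, . . . , 7. The helper below is our own illustration, with those stem orders hard-coded as input data:

```python
from math import gcd

# Orders of the stable homotopy groups pi_n(S) for n = 1..7 (all cyclic in
# this range): Z/2, Z/2, Z/24, 0, 0, Z/2, Z/240.
stem_orders = {1: 2, 2: 2, 3: 24, 4: 1, 5: 1, 6: 2, 7: 240}

def strip_v_power_torsion(n, v):
    """Order of (Z/n)/(v-power torsion): remove every prime factor of v."""
    while gcd(n, v) > 1:
        n //= gcd(n, v)
    return n

# K_n(Boole) = pi_n(S)/2-power torsion (Theorem A) for n = 1..7:
print([strip_v_power_torsion(stem_orders[n], 2) for n in range(1, 8)])
# [1, 1, 3, 1, 1, 1, 15]
```

So, for instance, K 3 (Boole) ∼ = Z/3 and K 7 (Boole) ∼ = Z/15, the odd-torsion parts of Z/24 and Z/240.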
We single out the case v = 2 for emphasis:

Corollary 5.2. We have K_*(Boole) ≅ π_*(S)/(2-power torsion) for the algebraic K-theory of the Lawvere theory of Boolean algebras.

While Boolean algebras form a comparatively well-known algebraic structure, the v-valued Post algebras are certainly non-standard, and it might come as a surprise that we can prove such results without even revealing their defining operations, let alone the axioms that these operations are required to satisfy. However, as we hope the following proof makes clear, the ability to do so is precisely one of the benefits of our categorical methods.

Proof of Theorem 5.1. By definition, the category of free v-valued Post algebras is equivalent to the opposite of the full subcategory of the category of sets spanned by the sets of the form Map(r, v). Since these have different cardinalities for different values of r, the isomorphism type of the free v-valued Post algebra of rank r determines the rank r. Passing to group completion, we get K_0(Post_v) ≅ Z ≅ π_0(S), as claimed.

For the higher algebraic K-theory, we turn toward the automorphism groups. If X is an object in a category C, we have Aut_{C^op}(X) ≅ Aut_C(X)^op ≅ Aut_C(X). Applied to our situation, this shows that the automorphism group of the free v-valued Post algebra of rank r is isomorphic to the group of permutations of the set Map(r, v) of cardinality v^r, and therefore to the symmetric group Σ(v^r) acting on a set of v^r elements. Stabilization leads us to the colimit of the diagram

    Σ(1) → Σ(v) → Σ(v^2) → ⋯ → Σ(v^r) → ⋯,    (5.1)

where the morphisms are given by multiplication with v: a permutation σ of v^r is sent to the permutation σ × id_v of v^r × v = v^{r+1}, which looks just like v copies of the permutation σ acting on v disjoint copies of v^r. In other words, the permutation σ × id_v is a block sum of v copies of σ. The diagram (5.1) has been studied before by McDuff–Segal [McDS76, Ex. (iv)], and the following identification of its colimit does not come with any claim of originality.

Picking up our notation from Example 3.2, we have maps BΣ(d) → Q(d) that fit together to form a commutative diagram as follows:

    BΣ(1) ──×v──→ BΣ(v) ──×v──→ BΣ(v^2) ──×v──→ ⋯
      │              │               │
      ↓              ↓               ↓
     Q(1) ──×v──→  Q(v) ──×v──→  Q(v^2) ──×v──→ ⋯

This diagram can be used to compute the group completion of the upper colimit, which is the infinite loop space Ω^∞_0 K(Post_v) by Theorem 2.6. This time, in contrast to Example 3.2, the maps in the lower row are not equivalences, but multiplication by v in the infinite loop space structure on the Q(v^r) ≃ Q(∞) ≃ Q(0). In other words, there is a homology isomorphism from the colimit BΣ(v^∞) to the localization Q(0)[1/v] away from v. This homology isomorphism gives, after group completion, an equivalence Ω^∞_0 K(Post_v) ≃ Ω^∞_0 S[1/v]^× of infinite loop spaces. Noting that the stable homotopy groups of the sphere spectrum in positive degrees are finite, and A[1/v] = A/(v-power torsion) for finite abelian groups A, we obtain the identification of the higher homotopy groups in the statement of the theorem. In other words, we have a morphism K(Post_v) → S[1/v]^× of spectra that induces an isomorphism on stable homotopy groups in positive degrees.

To complete the identification of the spectrum K(Post_v), we need to describe what it does on components. However, the description above shows that 1 ∈ Z ≅ π_0 K(Post_v) is sent to v ∈ Z[1/v]^× ≅ π_0 S[1/v]^×, and this observation translates immediately into the homotopy pullback diagram in the statement of the theorem.

We end this section with an observation which indicates that the relationship between the K-theories of the Lawvere theory E of sets and of Boolean algebras, or more generally v-valued Post algebras, is not as simple as Theorem 5.1 might suggest.

Proposition 5.3.
For each prime p, the homomorphism

    π_n(S) ≅ K_n(E) → K_n(Post_p) ≅ π_n(S)/(p-power torsion),

induced by the universal arrow E → Post_p of Lawvere theories, is not surjective. In particular, it is not the canonical surjection.

Proof. Every Boolean algebra has a natural structure of an F_2-vector space. The addition is given by the symmetric difference

    x + y = (x ∨ y) ∧ ¬(x ∧ y) = (x ∧ ¬y) ∨ (¬x ∧ y).

In fact, the category of Boolean algebras is isomorphic to the category of Boolean rings, which are commutative rings where every element is idempotent. If 2 is idempotent, we have 4 = 2^2 = 2, so that 2 = 0, and the underlying abelian group is 2-torsion. More generally, if p is a prime number, every p-Post algebra admits a natural structure of an F_p-algebra in which every element x satisfies x^p = x (see [Wad45] or [Ser73]). It follows that the canonical morphism S ≃ K(E) → K(Post_p) of algebraic K-theory spectra factors through the algebraic K-theory K(F_p) of the field F_p:

    S ≃ K(E) → K(F_p) → K(Post_p).

On the level of automorphism groups, these morphisms correspond to embeddings

    Σ(r) → GL_r(F_p) → Σ(p^r)

of groups, with images given by the subgroup of F_p-linear bijections and the subgroup thereof given by the permutation matrices. Quillen [Qui72, Thm. 8(i)] has shown that K_{2j−1}(F_q) ≅ Z/(q^j − 1) for all j ≥ 1 and for all prime powers q. It follows that the p-torsion of the higher algebraic K-groups K_n(F_p) of F_p is trivial. On the other hand, his computations [Qui76] showed that most of the stable homotopy of the spheres is contained in the kernel of the canonical morphisms S → K(Z) → K(F_p) of spectra: what is detected in the algebraic K-theory of finite fields is essentially the image of Whitehead's J-homomorphism. In particular, the kernel contains much more than just the p-power torsion.

Remark 5.4.
Morava, in his 2008 Vanderbilt talk [Mor], highlighted "the apparent fact that the spectrum defined by the symmetric monoidal category of finite pointed sets under Cartesian product has not been systematically studied." This spectrum can be modeled as the algebraic K-theory of a many-sorted Lawvere theory, where the sorts correspond to the prime numbers. It is not worth the effort to develop our theory in more generality just to cover that one example. Instead, we have contented ourselves with demonstrating how the theory we have developed so far suffices for us to deal with the local factors corresponding to each prime.

Example 3.5. There is an interpolation between the theory of all groups and the theory of all abelian groups by the theories Nil_c of nilpotent groups of a certain class c, with 1 ≤ c ≤ ∞.

Finally, we mention the two trivial (or inconsistent, in Lawvere's terminology) examples of theories where the free model functor is not faithful (see Lawvere's thesis [Law04, II.1, Prop. 3]).

Example 3.10. There is a theory such that all models are singletons. It has a 0-ary operation (constant) e, and the relation x = e has to be satisfied for all x in a model X. Another way of describing the same Lawvere theory: this is the theory of modules over the trivial ring, where 0 = 1. From this perspective, the theory is not so exotic after all! For both of these examples, the algebraic K-theory spectra are obviously contractible.

(n) acts on the model X^n. Every model arises this way, up to isomorphism. Every M_n(T)-model of the form X^n has an underlying T-model consisting of the operations that are themselves n-th powers, which gives a forgetful functor M_{M_n(T)} → M_T. Equivalently, there is a morphism

    T → M_n(T)    (4.1)

of Lawvere theories. From the diagrammatic perspective, this morphism is simply the above functor n × −: F_T → F_{M_n(T)} ⊂ F_T, which by construction is a functor under E and hence a map of Lawvere theories.
We readily observe that there are isomorphisms M_1(T) ≅ T and M_m(M_n(T)) ≅ M_{mn}(T). If T is the theory of modules over a ring A as in Example 1.2, then M_n(T) is the theory of modules over the matrix ring M_n(A). The Lawvere theory M_n(E) is the theory of End(n)-sets.

Definition 4.3. Two Lawvere theories S and T are called Morita equivalent if their categories M_S and M_T of models are equivalent.

Proposition 4.4 ([Duk88], [McK96]). A Lawvere theory is Morita equivalent to a given Lawvere theory T if and only if it is an idempotent modification of a matrix theory of T for some pseudo-invertible idempotent of the matrix theory.

(2) u_s f = f = f u_r
(3) u_s f = f u_r

Then (1) ⇔ (2) ⇒ (3). We have (2) ⇐ (3) if and only if u = id.

Acknowledgments. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the program 'Homotopy harnessing higher structures' where work on this paper was undertaken. This work was supported by EPSRC grant no EP/K032208/1. The first author was partially supported by the United States National Science Foundation under DMS Grants No. 1710534 and 2104300. The authors also thank the anonymous referees for their insightful feedback, which has improved the paper.

References

[ARV11] J. Adámek, J. Rosický, E.M. Vitale. Algebraic theories. A categorical introduction to general algebra. Cambridge Tracts in Mathematics 184. Cambridge University Press, Cambridge, 2011.
[ASS06] J. Adámek, M. Sobral, L. Sousa. Morita equivalence of many-sorted algebraic theories. J. Algebra 297 (2006) 361–371.
[BP72] M. Barratt, S. Priddy.
On the homology of non-connected monoids and their associated groups. Comment. Math. Helv. 47 (1972) 1–14.
[BH16] T.W. Barrett, H. Halvorson. Morita equivalence. Rev. Symb. Log. 9 (2016) 556–582.
[BS] A.M. Bohmann, M. Szymik. Generalizations of Loday's assembly maps for Lawvere's algebraic theories. J. Inst. Math. Jussieu (to appear). arXiv:2112.07003
[Bor94] F. Borceux. Handbook of categorical algebra. 2. Categories and structures. Encyclopedia of Mathematics and its Applications 51. Cambridge University Press, Cambridge, 1994.
[Car79] P. Cartier. Logique, catégories et faisceaux [d'après F. Lawvere et M. Tierney]. Séminaire Bourbaki, 30e année (1977/78), Exp. No. 513, 123–146. Lecture Notes in Math. 710. Springer, Berlin, 1979.
[Duk88] J.J. Dukarm. Morita equivalence of algebraic theories. Colloq. Math. 55 (1988) 11–17.
[Dwi77] P. Dwinger. A survey of the theory of Post algebras and their generalizations. Modern uses of multiple-valued logic (Fifth Internat. Sympos., Indiana Univ., Bloomington, Ind., 1975) 51–75. Reidel, Dordrecht, 1977.
[Eps60] G. Epstein. The lattice theory of Post algebras. Trans. Amer. Math. Soc. 95 (1960) 300–317.
[Gal11] S.
Galatius. Stable homology of automorphism groups of free groups. Ann. of Math. 173 (2011) 705–768.
[Gra76] D. Grayson. Higher algebraic K-theory II (after Daniel Quillen). Algebraic K-theory (Proc. Conf., Northwestern Univ., Evanston, Ill., 1976) 217–240. Lecture Notes in Math. 551. Springer, Berlin, 1976.
[Law63] F.W. Lawvere. Functorial semantics of algebraic theories. Proc. Nat. Acad. Sci. U.S.A. 50 (1963) 869–872.
[Law69] F.W. Lawvere. Adjointness in Foundations. Dialectica 23 (1969) 281–296.
[Law75] F.W. Lawvere. Introduction. Model theory and topoi, 3–14. Lecture Notes in Math. 445. Springer, Berlin, 1975.
[Law04] F.W. Lawvere. Functorial semantics of algebraic theories and some algebraic problems in the context of functorial semantics of algebraic theories. Repr. Theory Appl. Categ. 5 (2004) 1–121.
[Lod76] J.-L. Loday. K-théorie algébrique et représentations de groupes. Ann. Sci. Ecole Norm. Sup. 9 (1976) 309–377.
[May77] J.P. May. E∞ ring spaces and E∞ ring spectra. With contributions by F. Quinn, N. Ray, and J. Tornehave. Lecture Notes in Mathematics 577. Springer-Verlag, Berlin-New York, 1977.
[McDS76] D. McDuff, G. Segal.
Homology fibrations and the "group-completion" theorem. Invent. Math. 31 (1976) 279–284.
[McK96] R. McKenzie. An algebraic version of categorical equivalence for varieties and more general algebraic categories. Logic and algebra (Pontignano, 1994) 211–243. Lecture Notes in Pure and Appl. Math. 180. Dekker, New York, 1996.
[Mor] J. Morava. Some background for Manin's theorem K(F_1) ≃ S. Talk at the Vanderbilt conference on Noncommutative Geometry and Geometry over the Field with One Element, 2008. www.alainconnes.org/docs/Morava.pdf
[Nak60] M. Nakaoka. Decomposition theorem for homology groups of symmetric groups. Ann. of Math. 71 (1960) 16–42.
[Par69] B. Pareigis. Kategorien und Funktoren. Mathematische Leitfäden. B.G. Teubner, Stuttgart, 1969.
[Pos21] E.L. Post. Introduction to a general theory of propositions. Amer. J. Math. 41 (1921) 165–185.
[Pri71] S.B. Priddy. On Ω∞S∞ and the infinite symmetric group. Algebraic topology (Proc. Sympos. Pure Math., Vol. XXII, Univ. Wisconsin, Madison, Wis., 1970) 217–220. Amer. Math. Soc., Providence, R.I., 1971.
[Qui71] D. Quillen. Cohomology of groups.
Actes du Congrès International des Mathématiciens (Nice, 1970), Tome 2, 47–51. Gauthier-Villars, Paris, 1971.
[Qui72] D. Quillen. On the cohomology and K-theory of the general linear groups over a finite field. Ann. Math. 96 (1972) 552–586.
[Qui76] D. Quillen. Letter from Quillen to Milnor on Im(π_i O → π_i^s → K_i Z). Algebraic K-theory (Proc. Conf., Northwestern Univ., Evanston, Ill., 1976) 182–. Lecture Notes in Math. 551. Springer, Berlin, 1976.
[RWW17] O. Randal-Williams, N. Wahl. Homological stability for automorphism groups. Adv. Math. 318 (2017) 534–626.
[Ros42] P.C. Rosenbloom. Post algebras. I. Postulates and general theory. Amer. J. Math. 64 (1942) 167–188.
[Sch11] M. Schlichting. Higher algebraic K-theory. Topics in algebraic and topological K-theory, 167–241. Lecture Notes in Math. 2008. Springer, Berlin, 2011.
[Sch70] H. Schubert. Kategorien. II. Heidelberger Taschenbücher 66. Springer-Verlag, 1970.
[Seg74] G. Segal. Categories and cohomology theories. Topology 13 (1974) 293–312.
[Ser73] M. Serfati. Introduction aux algèbres de Post et à leurs applications. Cahiers du Bureau Universitaire de Recherche Opérationnelle, Série Recherche 21 (1973) 3–100.
[Szy14] M. Szymik. Twisted homological stability for extensions and automorphism groups of free nilpotent groups. J. K-Theory 14 (2014) 185–201.
[Szy18] M. Szymik. Permutations, power operations, and the center of the category of racks. Comm. Algebra 46 (2018) 230–240.
[Szy19] M. Szymik. The rational stable homology of mapping class groups of universal nil-manifolds. Ann. Inst. Fourier 69 (2019) 783–803.
[SW19] M. Szymik, N. Wahl. The homology of the Higman–Thompson groups. Invent. Math. 216 (2019) 445–518.
[Tho80] R.W. Thomason. Beware the phony multiplication on Quillen's A⁻¹A. Proc. Amer. Math. Soc. 80 (1980) 569–573.
[Tse17] D. Tsementzis. A syntactic characterization of Morita equivalence. J. Symb. Log. 82 (2017) 1181–1198.
[Wad45] L.I. Wade. Post algebras and rings. Duke Math. J. 12 (1945) 389–395.
[Wag72] J.B. Wagoner. Delooping classifying spaces in algebraic K-theory. Topology 11 (1972) 349–370.
[Wei13] C.A. Weibel. The K-book. An introduction to algebraic K-theory. Graduate Studies in Mathematics 145. American Mathematical Society, Providence, RI, 2013.
[Wra71] G.C. Wraith. Algebras over theories. Colloq. Math. 23 (1971) 181–190.
Room-temperature terahertz anomalous Hall effect in Weyl antiferromagnet Mn3Sn thin films

Takuya Matsuda, Natsuki Kanda, Tomoya Higo, N.P. Armitage, Satoru Nakatsuji, and Ryusuke Matsunaga

Affiliations: The Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan; CREST, Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012, Japan; The Institute of Quantum Matter, Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Physics, The University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan; PRESTO, Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan. Correspondence: [email protected]

DOI: 10.1038/s41467-020-14690-6. arXiv:1912.12668

Abstract. Antiferromagnetic spin motion at terahertz (THz) frequencies attracts growing interest for fast spintronics; however, the small response of antiferromagnets to external fields inhibits device application. Recently the noncollinear antiferromagnet Mn3Sn, a Weyl semimetal candidate, was reported to show a large anomalous Hall effect (AHE) at room temperature, comparable to ferromagnets. The dynamical aspect of such large responses is an important issue to be clarified for future THz data processing. Here the THz anomalous Hall conductivity in Mn3Sn thin films is investigated by polarization-resolved spectroscopy. A large anomalous Hall conductivity Re σ_xy(ω) ≈ 20 Ω⁻¹ cm⁻¹ at THz frequencies is clearly observed as polarization rotation. A peculiar temperature dependence corresponding to the breaking/recovery of symmetry in the spin texture is also discussed. Observation of the THz AHE at room temperature demonstrates ultrafast readout for antiferromagnetic spintronics using Mn3Sn, and will also open a new avenue for studying nonequilibrium dynamics in Weyl antiferromagnets.
The control of magnetism has been a key issue for modern data processing and recording in spintronic devices. From the viewpoint of manipulation speed, antiferromagnets are promising materials, since spin precession typically occurs at terahertz (THz) frequencies, a few orders of magnitude higher than in ferromagnets [1]. Therefore, optical control of antiferromagnets has been intensively investigated over the past few decades [2]. Nonlinear interaction with THz magnetic fields has also been studied [3], which could lead to direct spin manipulation for future data processing in nonvolatile memory devices. Readout of the magnetization information in antiferromagnets is, however, still very difficult, since they are generally insensitive to external fields because of their much smaller net magnetization compared with ferromagnets, which has made their practical application challenging.

On the other hand, the recent discovery of Weyl semimetals with broken inversion or time-reversal symmetry has attracted tremendous interest [4], not only for fundamental physics, as a condensed-matter analog of massless Weyl fermions, but also for highly intriguing response functions that reflect the broken symmetry, such as nonreciprocal or second-order nonlinear responses [5,6].
In particular, recent reports on the noncollinear antiferromagnets Mn3X (X = Sn, Ge) [7–9], candidate Weyl semimetals with broken time-reversal symmetry, have opened new pathways for utilizing these materials in functional spin-based devices at room temperature. In spite of the vanishingly small net magnetization in the antiferromagnetic phase, these materials show a large anomalous Hall effect (AHE) [7–9], anomalous Nernst effect [10,11], magneto-optical Kerr effect [12], and magnetic spin Hall effect [13], in stark contrast to conventional antiferromagnets. These large responses, comparable to ferromagnets, have been attributed to the Berry curvature in momentum space [14–18], which is particularly enhanced at the Weyl nodes. Such Weyl semimetals, or Weyl (antiferro)magnets, are of great interest since the magnetic ordering can be controlled by temperature or external field; a small coercive field of 0.05 T has been reported for bulk Mn3Sn [7]. A recent neutron scattering experiment has also revealed THz spin excitations in Mn3Sn [19]. Therefore, a deep understanding of the dynamical properties of Mn3Sn in the THz frequency range would contribute to developing novel devices based on the fast motion of antiferromagnetic spins and the large response to external fields.

Previously, the anomalous Hall conductivity spectrum σ_xy(ω) at THz frequencies has been investigated in ferromagnets with the aim of revealing the microscopic origin of the AHE [20–22]. In general, the AHE can arise from the intrinsic Berry curvature determined by the electronic band structure [23,24] or from extrinsic skew scattering or side jump with impurities [25,26]. If it is intrinsic, the AC anomalous Hall conductivity spectrum shows peculiar resonant structures due to interband transitions, as presented in infrared or THz spectroscopy [20–22].
The AHE at THz frequencies has also been observed in a ferromagnetic semiconductor [27] and in the quantum anomalous Hall state of a topological insulator [28]. For Weyl semimetals, however, the THz anomalous Hall conductivity remains unexplored experimentally, although a number of theoretical efforts have been devoted to investigating the AC AHE across the Weyl nodes [29–33]. A recent angle-resolved photoemission spectroscopy (ARPES) study of Mn3Sn has captured the Weyl-like dispersion several meV above the Fermi energy E_F [34]. Therefore, low-energy THz polarimetry will provide deep insight into the microscopic picture of the large AHE in this Weyl semimetal.

In this work, using Mn3Sn thin films showing a large AHE comparable to the bulk [35], we perform THz time-domain spectroscopy (THz-TDS) with precise polarization resolution to reveal the THz anomalous Hall conductivity. Polarization rotation is clearly observed at room temperature and zero magnetic field, in quantitative agreement with the large AHE previously reported in DC resistivity measurements. The results also demonstrate small dissipation in the AHE up to THz frequencies, consistent with an intrinsic origin. The temperature dependence of the Hall conductivity is also discussed with regard to macroscopic time-reversal symmetry in the spin order.

Results

Sample. The samples used in this work are polycrystalline Mn_{3+x}Sn_{1−x} thin films (x = 0.00, 0.02, and 0.08) deposited on SiO2 or thermally oxidized Si substrates by DC magnetron sputtering [35]. Figure 1a, b shows a 3D schematic view of the atomic configuration and the top view along the c-axis of the magnetic structure, respectively. Figure 1c presents the X-ray diffraction (XRD) patterns of the 50-nm-thick film of Mn_{3+x}Sn_{1−x} (x = 0.02) on the SiO2 substrate obtained by grazing-angle XRD measurement.
All the peaks can be indexed by the hexagonal D0_19 Mn3Sn structure, and no additional peaks from plausible impurity phases are observed, which is consistent with the films on a thermally oxidized Si substrate [35]. Figure 1d provides a schematic diagram of our sample configuration. Below the Néel temperature of 420 K, the spins on the Mn atoms order in an inverse triangular spin structure, where the spins form a 120° order with negative vector chirality in the ab plane. Such a noncollinear antiferromagnetic spin arrangement in the kagome bilayer is characterized by cluster magnetic octupole moments [18], which macroscopically break time-reversal symmetry and are expected to result in the appearance of Weyl nodes [14–18]. A small net magnetization arises in the ab plane from a slight canting of the 120° order. Note that the large AHE is not explained by the small net magnetization but rather attributed to the cluster magnetic octupole [12]. An external magnetic field is applied normal to the film surface to align the cluster magnetic octupole along the z-direction, as well as the small magnetization vector. Thus, the electric field parallel to the sample surface (x-direction) induces the anomalous Hall current J_y = σ_xy E_x^in as well as the longitudinal current J_x = σ_xx E_x^in, both of which are measured in this work at THz frequencies.

THz spectroscopy. By conventional transmission THz-TDS (see Methods), THz longitudinal conductivity spectra σ_xx(ω) from 2 to 10 meV (ω/2π = 0.5 to 2.5 THz) were obtained in the thin-film approximation. Figure 1e, f shows the real and imaginary parts of σ_xx(ω) in Mn_{3+x}Sn_{1−x} thin films at various temperatures (x = 0.02). We fit the data using the Drude model σ_xx(ω) = σ_0/(1 − iωτ), where σ_0 is the DC conductivity and τ is the carrier scattering time. The solid curves in Fig. 1e, f represent the fitting results. τ as a function of temperature is shown in the inset to Fig. 1f.
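The Drude form used in the fit can be evaluated directly. In the sketch below (ours), the scattering time is a placeholder rather than the fitted value; it illustrates why, with ωτ ≪ 1 at THz frequencies, the real part of σ_xx(ω) stays close to the DC value.

```python
import math

def drude_sigma(omega, sigma0, tau):
    """Drude conductivity sigma(omega) = sigma0 / (1 - i*omega*tau)."""
    return sigma0 / (1 - 1j * omega * tau)

sigma0 = 3.8e5               # DC conductivity in S/m (= 3800 S/cm, the value quoted later)
tau = 1e-14                  # carrier scattering time in s (placeholder, not the fitted value)
omega = 2 * math.pi * 1e12   # angular frequency at 1 THz

s = drude_sigma(omega, sigma0, tau)
# with omega*tau << 1, the THz conductivity stays close to the DC value
assert abs(s.real - sigma0) / sigma0 < 0.01
assert s.imag > 0
```

Here ωτ ≈ 0.06, so the real part deviates from σ_0 by well under a percent, consistent with the statement below that σ_xx(ω) coincides with the DC value.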
The scattering time is sensitive to temperature and becomes shorter with increasing temperature, as is usual in metals. Above 250 K, however, the temperature dependence weakens and almost saturates, which well reproduces a previous THz-TDS measurement on a Mn 3 Sn thin film 36 and is discussed later. Note that, although the THz longitudinal conductivity σ xx (ω) in Fig. 1e coincides with the DC value because of the short scattering time, this does not mean that the THz anomalous Hall conductivity σ xy (ω) is also in the DC limit, because the microscopic mechanisms behind σ xx (ω) and σ xy (ω) are different. σ xx (ω) is well described by the Drude model, in which all kinds of momentum scattering are involved. On the other hand, σ xy arises without any scattering process or only with impurity-induced scattering, depending on whether its origin is intrinsic or extrinsic. In particular, in addition to the Weyl semimetallic bands, Mn 3 Sn also has other metallic bands 34 which dominate the longitudinal conductivity. Since the intrinsic AHE originates from the Berry curvature of the occupied states integrated over the entire Brillouin zone, the frequency dependence of σ xy is not obvious even if the scattering time in σ xx is known, and therefore has to be investigated experimentally. Figure 2a schematically shows our polarization-resolved THz-TDS setup with a set of freestanding wire-grid polarizers (WGP) [37][38][39][40] . WGP1 defines the polarization of the incident THz electric field along the x direction, and WGP3 is set to block the x-component of the THz field before detection. By using two configurations for WGP2 (configs. 1 and 2 in the inset of Fig. 2a), both E x (ω) and E y (ω) are obtained. In the small-angle limit, the rotation-angle and ellipticity-angle spectra, θ(ω) and η(ω), are expressed as θ(ω) + iη(ω) ≈ E y (ω)/E x (ω). Here, WGP3 and the ZnTe crystal are set so that the detection efficiency of the y-component electric field is maximized 41 (see details in Methods).
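The two-configuration WGP2 scheme can be checked numerically. The Python sketch below is ours, not from the paper: it forward-simulates the EO signals for WGP2 at ϕ = 0° and 45° with hypothetical complex transmittances t ∥ , t ⊥ and a hypothetical test field, then inverts them to recover E x , E y and the polarization angles, following Eqs. (3)-(7) of the Methods; the rotation-matrix sign convention is chosen so that the inversion matrix matches Eq. (5).

```python
import numpy as np

def R(phi):
    """Rotation matrix; sign convention chosen to match Eq. (5)."""
    return np.array([[np.cos(phi),  np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

def measure(E, phi, t_par, t_perp):
    """Eq. (3): EO-sampling signal (y component) for WGP2 rotated by phi."""
    T = np.diag([t_par, t_perp]).astype(complex)  # Jones matrix, Eq. (4)
    return (R(phi) @ T @ R(-phi) @ E)[1]

def reconstruct(F1, F2, t_par, t_perp):
    """Eq. (5): recover (Ex, Ey) from config 1 (phi=0) and config 2 (phi=45 deg)."""
    M = np.array([[0.0, t_perp],
                  [(t_perp - t_par) / 2, (t_perp + t_par) / 2]], dtype=complex)
    return np.linalg.inv(M) @ np.array([F1, F2])

# Hypothetical test field: ~5 mrad rotation, small ellipticity.
E = np.array([1.0 + 0j, 5e-3 + 1e-3j])
t_par, t_perp = 0.02 + 0j, 0.95 + 0j  # assumed finite extinction ratio of WGP2
F1 = measure(E, 0.0, t_par, t_perp)
F2 = measure(E, np.pi / 4, t_par, t_perp)
Ex, Ey = reconstruct(F1, F2, t_par, t_perp)
theta = np.arctan(np.real(np.conj(Ex) * Ey) / (abs(Ex)**2 - abs(Ey)**2))  # Eq. (6)
eta = -np.arcsin(np.imag(np.conj(Ex) * Ey) / (abs(Ex)**2 + abs(Ey)**2))  # Eq. (7)
print(theta, eta)
```

The finite t ∥ in the Jones matrix is exactly the extinction-ratio correction the Methods describe; with an ideal polarizer (t ∥ = 0) the inversion collapses to a simple ratio.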
To evaluate the precision of our measurement, without samples we performed 1 scan and 10 scans of THz-TDS with configs. 1 and 2, respectively, which gives one data set of the θ(ω) and η(ω) spectra within 60 s. With an increasing number of data sets, the standard deviation of θ(ω) is plotted as a function of frequency in Fig. 2b. Even for one data set, the standard deviation is smaller than 0.5 mrad in the frequency range from 0.5 to 2.0 THz (2 to 8 meV), and it can be improved to as small as several tens of μrad with 20-min accumulation time. This ensures high-precision spectroscopy of the polarization rotation in this frequency window.

Large THz AHE at room temperature. The polarization-resolved THz-TDS measurements for Mn 3+x Sn 1−x thin films were performed at zero magnetic field after the samples were magnetized under a field of 5 T, where the magnetization vector M is normal to the film surface. Figure 2c, d shows the θ(ω) and η(ω) spectra, respectively, for film thicknesses of d = 50, 200, and 400 nm at room temperature (x = 0.02). With a bare SiO 2 substrate as a reference, a polarization rotation of ~4 mrad is observed for the 50-nm sample, and the rotation angle increases as the film thickness increases. The broken curves in Fig. 2c, d show the data taken upon flipping the sample to obtain an oppositely directed magnetization vector, which corresponds to the reversal of the cluster octupole moment. Upon flipping, the sign of the rotation angle is reversed. The filled circles in Fig. 2e show the thickness dependence of θ(ω) averaged over this energy range, which is calculated by the following form

θ = σ xy Z 0 d / (1 + n s + σ xx Z 0 d),   (1)

where the vacuum impedance Z 0 = 377 Ω and the substrate refractive index n s = 1.92. With the longitudinal conductivity σ xx = 3800 Ω −1 cm −1 and the Hall conductivity σ xy = 19 Ω −1 cm −1 , evaluated for a 200-nm-thick film in DC resistivity measurements, the calculation reasonably reproduces our THz experimental data (solid line). In the small-thickness regime (d ≪ 50 nm), θ ≈ σ xy Z 0 d/(1 + n s ), so θ is proportional to the film thickness d. In the large-thickness regime (d ≫ 50 nm), θ saturates to ~σ xy /σ xx .
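Equation (1) can be evaluated numerically to check the thickness dependence quoted above (~4 mrad at 50 nm, saturation toward σ xy /σ xx = 5 mrad). A minimal Python sketch of ours, using the DC values given in the text (Z 0 = 377 Ω, n s = 1.92, σ xx = 3800 Ω −1 cm −1 , σ xy = 19 Ω −1 cm −1 , converted to SI units):

```python
import numpy as np

Z0 = 377.0   # vacuum impedance (Ohm)
ns = 1.92    # substrate refractive index
sxx = 3.8e5  # longitudinal conductivity: 3800 Ohm^-1 cm^-1 in S/m
sxy = 1.9e3  # Hall conductivity: 19 Ohm^-1 cm^-1 in S/m

def theta(d):
    """Rotation angle of Eq. (1) for film thickness d in meters, in rad."""
    return sxy * Z0 * d / (1 + ns + sxx * Z0 * d)

for d in (50e-9, 200e-9, 400e-9):
    print(f"d = {d * 1e9:.0f} nm: theta = {theta(d) * 1e3:.2f} mrad")
# Thin limit:  theta ~ sxy*Z0*d/(1 + ns), linear in d.
# Thick limit: theta -> sxy/sxx = 5 mrad, independent of d.
```

This reproduces the crossover around d ~ 50 nm where the σ xx Z 0 d term in the denominator becomes comparable to 1 + n s .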
The slight difference in the magnitude of θ could be attributed to the difference in geometry: the DC resistivity is measured with electrodes attached to the film surface, while the THz conductivity is measured in transmission. These results clearly indicate that the large AHE in Mn 3 Sn also appears in the THz frequency range as a polarization rotation. It is also noteworthy that the DC AHE is usually measured under a magnetic field whose sign is reversed, both to avoid the creation of magnetic domains at zero field and to extract the difference of the Hall resistivity between positive and negative fields, removing artifacts such as contact resistance. In the present experiment, the Hall conductivity is clearly observed as a light-polarization rotation in a noncontact fashion without a magnetic field. We also confirmed that the polarization rotation was unchanged even several months after sample fabrication and magnetization, exhibiting the robustness of magnetic information in the Mn 3 Sn thin films 35 . Figure 2d shows that η(ω) is negligibly small for all the samples at room temperature, which is consistent, through the Kramers-Kronig relation, with the fact that θ(ω) is flat at the value expected from DC. Using Eq. (1), we obtained the real and imaginary parts of the THz anomalous Hall conductivity spectra σ xy (ω), plotted in Fig. 2f as solid curves. Im σ xy (ω) at ω/2π = 1 THz (~4 meV) is as small as ~0.4 Ω −1 cm −1 and becomes much smaller at lower frequency, in contrast to the large anomalous Hall conductivity Re σ xy (ω) ≈ 20 Ω −1 cm −1 at THz frequency. Since Im σ xy (ω) represents the dissipative part of the Hall current 24 , this result provides direct evidence for the dissipationless nature of the anomalous Hall current in Mn 3 Sn up to the THz frequency scale, which is also consistent with the intrinsic nature of the AHE driven by the large Berry curvature in momentum space.
A recent study of thermal transport in Mn 3 Sn has also reported negligible inelastic scattering with phonons in the AHE 42 . To investigate a broader frequency range, we also performed the same polarization-resolved THz-TDS for a 50-nm Mn 3+x Sn 1−x (x = 0.08) film on a Si substrate at room temperature, using (110) GaP crystals and 20-fs laser pulses with 1-kHz repetition rate (see Methods). The precision of our broadband measurement was evaluated in the same way as in Fig. 2b (see Methods). The open circles in Fig. 2f show the σ xy (ω) spectra up to ~7 THz, for which data with a flipped sample were used as a reference. The broadband data connect smoothly to the low-frequency measurement. The gradual growth of the dissipative-part Hall conductivity Im σ xy (ω) with increasing photon energy can be attributed to a broad optical transition across the band-crossing (or anticrossing) points, as discussed for ferromagnets 20,22 . While the onset of the interband transition coincides with twice the chemical potential in the model proposed for a ferromagnet 22 , Im σ xy (ω) in Weyl semimetals is expected to show a different spectrum associated with the anisotropic cone dispersion 31,32 inherent to a pair of Weyl nodes. A recent ARPES study of Mn 3+x Sn 1−x (x = 0.03) has revealed Weyl-like dispersions at ~8 meV above E F with strong band renormalization, by a factor of 5 in comparison with the first-principles calculation 34 . Notably, these Weyl nodes are type-II 17,34 , where both electron and hole pockets exist with a strongly anisotropic dispersion, as illustrated in Fig. 2g. The asymmetric dispersion indicates that a broad absorption could occur even below 8 meV 32 . In the present experiment, the onset of the interband transition was not clearly discerned, perhaps because it is obscured by the thermal energy at room temperature.

Temperature dependence of THz AHE.
We also measured the temperature dependence of the THz AHE for Mn 3+x Sn 1−x films (x = 0.02) on a SiO 2 substrate. The sample was magnetized at 300 K in advance and cooled down under zero magnetic field. Figure 3a-d shows Re σ xy (ω) and Im σ xy (ω) at higher temperatures (300-200 K) and lower temperatures (160-10 K). Figure 3e shows Re σ xy (ω) averaged between 2 and 10 meV as a function of temperature. As the temperature decreases, Re σ xy (ω) is sharply suppressed around 250 K. A similarly sharp reduction of the AHE, as well as of the magnetization, has also been observed in DC measurements 35 . Such a spin-reorientation phase transition has been studied by neutron scattering experiments, which revealed that the 120° spin order in each ab plane rotates to form a helical spin ordering along the c-axis below 250 K 43 . Interestingly, although the inverse triangular spin structure in the ab plane of the Kagome bilayer below 420 K breaks macroscopic time-reversal (T) symmetry, the helical spin ordering that develops along the c-axis below 250 K recovers the macroscopic T-symmetry, which results in the disappearance of the net Berry curvature. Therefore, the drastic reduction of the THz AHE at low temperature, like that of the DC AHE, is ascribed to the spin-reorientation phase transition to the helical structure [44][45][46] . In addition to the polarization rotation, a peculiar temperature dependence is also found in the scattering time τ obtained from σ xx (ω) in the inset of Fig. 1f. The scattering time decreases with increasing temperature but saturates at around 250 K, in good agreement with a recent report 36 . This result might also be related to the appearance of the Weyl nodes, around which backscattering could be suppressed. Figure 3e shows that Re σ xy (ω) increases slightly below 150 K. The Re σ xy (ω) spectra at the low temperatures in Fig.
3c show an upward slope toward lower frequency, which is clearly distinct from the flat spectra at higher temperatures in Fig. 3a. Correspondingly, the Im σ xy (ω) spectra in Fig. 3b, d also show qualitatively different behaviors. Previous studies have reported the emergence of a spin-glass state at low temperature with weak ferromagnetism due to spin canting towards the c-axis 47,48 . Note that our THz measurement was performed at zero magnetic field on cooling after a demagnetization process at 300 K (we first applied a field of 5 T perpendicular to the film surface and then decreased the field to 0 T). In such a situation, the macroscopic magnetic moment is much smaller than in the case of field cooling, owing to the random orientation of ferromagnetic domains. The peculiar Hall conductivity spectra at low temperatures in Fig. 3c, d would reflect this spatial inhomogeneity and might be described by an effective medium theory. Nevertheless, the substantial dissipative part, comparable to the real part at low temperatures in Fig. 3d, is in contrast to the room-temperature Weyl semimetal phase, which implies that a different mechanism involving substantial dissipation might be at work in the AHE in the spin-glass phase. In summary, using polarization-resolved THz spectroscopy on Mn 3 Sn thin films, we observed the large anomalous Hall conductivity of Mn 3 Sn at THz frequencies with negligibly small dissipation at room temperature. Such a large THz response in antiferromagnets is desirable and paves the way for ultrafast readout of spin information with THz currents in spintronic devices. The far-field THz polarimetry presented in this work is possible only for large areas due to the diffraction limit. For more practical applications, the THz anomalous Hall conductivity must be measured in much smaller regions.
Recently, the light-induced THz AHE of a tiny graphene flake has been detected via contacted electrodes combined with optical pulses and on-chip THz current generation 49 . Our demonstration of the THz AHE will lead to such a readout of spin information on integrated devices. Importantly, our all-optical approach, noncontact and with picosecond time resolution, can be extended to pump-probe measurements for the study of nonequilibrium dynamics. THz control of the cluster magnetic octupole in Mn 3 Sn is highly desirable for ultrafast writing. If a strong external magnetic field is applied in-plane, opposite to the ground-state direction, all of the spins on the Kagome bilayer change their directions simultaneously, resulting in a flip of the cluster magnetic octupole. In terms of the spin precession, this corresponds to the damping of the acoustic collective mode, which could occur on a picosecond timescale due to the exchange interaction in the antiferromagnetic metal. From a fundamental point of view, optical control of Weyl antiferromagnets is also highly intriguing, as THz-field control of Weyl nodes has recently been demonstrated in a noncentrosymmetric Weyl semimetal 50 . Even in the static regime, further investigation of THz responses with suppressed thermal energy in another noncollinear antiferromagnetic compound, Mn 3 Ge, where the inverse-triangular spin ordering survives at low temperature 8,9 , will clarify the interband transitions around the Weyl nodes. Higher-frequency infrared polarimetry would also be important for direct comparison with first-principles calculations to identify the energies of the multiple Weyl nodes from the Fermi surface.

Methods
Sample preparation and characterization. Mn 3 Sn polycrystalline films (50-400 nm) were fabricated as reported in ref.
35 on thermally oxidized Si (Si/SiO 2 ) substrates and quartz (SiO 2 ) substrates by DC magnetron sputtering from a Mn 2.7 Sn target in a chamber with a base pressure of <5 × 10 −7 Pa. The Mn 3 Sn layer was deposited at room temperature and subsequently annealed at 500 °C for 1 h. The sputtering power and Ar gas pressure were 60 W and 0.4-0.6 Pa, respectively. The compositions of the Mn 3 Sn films were determined by scanning electron microscopy-energy dispersive X-ray spectrometry. A hexagonal D0 19 Mn 3 Sn phase was confirmed by XRD measurements in the Mn 3+x Sn 1−x thin films (x = 0.00, 0.02, and 0.08); the data for the 50-nm-thick Mn 3+x Sn 1−x film (x = 0.02) on a SiO 2 substrate are shown in Fig. 1c. These samples show a large AHE at room temperature comparable to the reported bulk single crystal 7 , ensuring the quality of the thin films.

NATURE COMMUNICATIONS | (2020) 11:909 | https://doi.org/10.1038/s41467-020-14690-6 | www.nature.com/naturecommunications

THz time-domain spectroscopy. As a light source for THz-TDS, we used a mode-locked Ti:Sapphire laser with 800-nm central wavelength, 100-fs pulse duration, and 76-MHz repetition rate (TSUNAMI, Spectra Physics). THz pulses were generated from an interdigitated photoconductive antenna on a GaAs substrate with a 50-V bias voltage and 50-kHz modulation frequency. The transmitted THz pulses were detected by electro-optical (EO) sampling in a 2-mm-thick (110) ZnTe crystal with another sampling pulse. For the broadband THz-TDS in Fig. 2f, we also used a regenerative amplified Ti:Sapphire laser system with 800-nm central wavelength, 50-fs pulse duration, and 1-kHz repetition rate (Spitfire Pro, Spectra Physics). The spectrum of the laser pulse is broadened by self-phase modulation in SiO 2 plates, and the pulse width is then compressed down to 20 fs by chirped mirrors.
Broadband THz-TDS up to 7 THz (~28 meV) is realized by optical rectification and EO sampling with 300-μm-thick (110) GaP crystals. In our experiment, we measured the sample and a bare substrate as a reference and obtained the complex amplitude transmittance t(ω). In the thin-film approximation, the THz longitudinal conductivity spectrum σ xx (ω) is obtained from the relation

t(ω) = [1/(1 + n s + σ xx (ω)Z 0 d)] × [4n s /(1 + n s )] e^{iΦ(ω)},   (2)

where Z 0 is the vacuum impedance, d is the film thickness, n s is the refractive index of the substrate, and Φ(ω) = (n s d s − d − d s )ω/c, where d s is the substrate thickness.

Polarization-resolved THz-TDS. A schematic of our polarization-resolved spectroscopy setup is shown in Fig. 2a. Before the sample, the incident THz electric-field polarization is set along the x-direction by the first polarizer, WGP1. The x- and y-components of the THz field just after transmitting the sample are defined as E x and E y , respectively. After the sample, two more wire-grid polarizers (WGP2 and WGP3) are inserted before the EO sampling. The angle of WGP3 is fixed to block the x-component of the THz field, and the ZnTe crystal is set to maximize the detection efficiency of the y-component field. Such sensitive detection of the tiny y-component THz field is important for high polarization resolution 41 . The rotational angle of WGP2 is controllable during the measurement. We define ϕ as the angle from the x-direction to the transmission axis of WGP2. The signal F measured in the EO sampling is expressed as

F = [0 1] R(ϕ) T WGP2 R(−ϕ) [E x ; E y ],   (3)

where R(ϕ) is the rotation matrix and T WGP2 is the complex Jones matrix of WGP2,

T WGP2 = [[t ∥ , 0], [0, t ⊥ ]],   (4)

where t ∥ and t ⊥ are the complex transmittances of WGP2 for THz field polarization parallel and perpendicular to the wires.
The Jones matrix T WGP2 is used to account for the finite extinction ratio of WGP2 for accurate polarization measurement. As shown in the inset of Fig. 2a, we used two configurations of WGP2, config. 1 (ϕ = 0°) and config. 2 (ϕ = 45°), and obtained the signals F 1 (t) and F 2 (t) and their Fourier components F 1 (ω) and F 2 (ω), respectively. By solving Eq. (3), E x (ω) and E y (ω) are obtained as

[E x (ω); E y (ω)] = [[0, t ⊥ (ω)], [(t ⊥ (ω) − t ∥ (ω))/2, (t ⊥ (ω) + t ∥ (ω))/2]] −1 [F 1 (ω); F 2 (ω)].   (5)

From E x (ω) and E y (ω), the rotation-angle and ellipticity-angle spectra, θ(ω) and η(ω), are expressed as

θ(ω) = tan −1 [ Re{E x *(ω)E y (ω)} / (|E x (ω)| 2 − |E y (ω)| 2 ) ],   (6)

η(ω) = −sin −1 [ Im{E x *(ω)E y (ω)} / (|E x (ω)| 2 + |E y (ω)| 2 ) ].   (7)

In the small-angle limit, θ(ω) and η(ω) are simply described as

θ(ω) + iη(ω) ≈ E y (ω)/E x (ω).   (8)

In the thin-film approximation, the anomalous Hall conductivity spectrum σ xy (ω) is obtained from the relation

σ xy (ω) = [θ(ω) + iη(ω)] (1 + n s + σ xx (ω)Z 0 d) / (Z 0 d),   (9)

which reduces to Eq. (1) in the DC limit (ω → 0). The σ xy (ω) spectra of the thin film at low temperature in Fig. 3a-d show a small oscillation, which could be ascribed to interference of THz pulses reflected from the cryostat window.

Fig. 1 Crystal and magnetic structures of Mn 3 Sn and the THz longitudinal conductivity spectra.
a, b A 3D schematic view of the atomic configuration (a) and the top view along the c-axis (b) of the magnetic structure of Mn 3 Sn at room temperature, where the magnetic moments form an inverse triangular spin structure in the ab plane. c The X-ray diffraction measurement for the 50-nm-thick film of Mn 3+x Sn 1−x (x = 0.02) on a SiO 2 substrate. d A schematic of our sample configuration. E in x : the incident electric field polarized along the x direction; J x : the longitudinal current; J y : the Hall current. e, f The real and imaginary parts of the THz longitudinal conductivity spectra of Mn 3+x Sn 1−x thin films (x = 0.02) on a SiO 2 substrate at various temperatures. The solid curves are the results of the Drude-model fitting. The inset shows the temperature dependence of the scattering time obtained from the fitting.

Fig. 2 THz anomalous Hall effect at room temperature with polarization-resolved spectroscopy. a A schematic of our polarization-resolved measurement setup. WGP: wire-grid polarizer. b Frequency dependence of the precision of the polarization rotation angle in this measurement, evaluated by the standard deviation of the rotation angle. The precision can be further improved by using a larger number (#) of data sets; for example, the precision for 20 data sets (#20) can be as small as several tens of μrad between 0.5 and 2.0 THz (see details in text and Methods). c, d The rotation-angle and ellipticity-angle spectra in Mn 3+x Sn 1−x films (x = 0.02) with different film thicknesses at room temperature. The broken curves correspond to the data with a flipped sample for the opposite magnetization vector. e Filled circles are the averaged rotation angle as a function of the film thickness. The solid line is the calculation of Eq. (1) with the DC longitudinal and Hall conductivities fixed to the values for the 200-nm-thick sample. f The real- and imaginary-part Hall conductivity spectra for Mn 3+x Sn 1−x films.
The solid curves show the low-frequency THz-TDS for x = 0.02 on a SiO 2 substrate and the open circles show the broadband spectrum for x = 0.08 on a Si substrate. g A schematic of the interband transition across the type-II Weyl nodes. The error bars in f indicate the standard deviations of the statistical fluctuation over repeated measurements.

Fig. 3 Temperature dependence of THz anomalous Hall conductivity spectra. a-d The real and imaginary parts of the Hall conductivity for a sample (x = 0.02) from 300 to 200 K (a, b) and from 160 to 10 K (c, d). The filled circles are the DC Hall conductivities at each temperature. e Temperature dependence of the real-part THz Hall conductivity. The lower panel shows the top views along the c-axis of the magnetic structure in each phase.

© The Author(s) 2020

Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.

Author contributions
R.M., S.N., and N.P.A. conceived this project. T.H. and S.N. performed the sample growth, characterization, and DC transport measurement. T.M., N.K., and R.M. developed the THz spectroscopy system, performed the experiments, and analyzed the data. All coauthors discussed the results. T.M. and R.M. wrote the paper with feedback from all the coauthors.

Competing interests
The authors declare no competing interests.

Additional information
Supplementary information is available for this paper at https://doi.org/10.1038/s41467-020-14690-6. Correspondence and requests for materials should be addressed to R.M. Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Peer reviewer reports are available. Reprints and permission information is available at http://www.nature.com/reprints.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Kirilyuk, A., Kimel, A. V. & Rasing, T. Ultrafast optical manipulation of magnetic order. Rev. Mod. Phys. 82, 2731-2784 (2010).
2. Němec, P., Fiebig, M., Kampfrath, T. & Kimel, A. V. Antiferromagnetic opto-spintronics. Nat. Phys. 14, 229-241 (2018).
3. Mukai, Y., Hirori, H., Yamamoto, T., Kageyama, H. & Tanaka, K. Nonlinear magnetization dynamics of antiferromagnetic spin resonance induced by intense terahertz magnetic field. N. J. Phys. 18, 013045 (2016).
4. Armitage, N. P., Mele, E. J. & Vishwanath, A. Weyl and Dirac semimetals in three-dimensional solids. Rev. Mod. Phys. 90, 015001 (2018).
5. Wu, L. et al. Giant anisotropic nonlinear optical response in transition metal monopnictide Weyl semimetals. Nat. Phys. 13, 350-355 (2017).
6. Tokura, Y. & Nagaosa, N. Nonreciprocal responses from noncentrosymmetric quantum materials. Nat. Commun. 9, 3740 (2018).
7. Nakatsuji, S., Kiyohara, N. & Higo, T. Large anomalous Hall effect in a non-collinear antiferromagnet at room temperature. Nature 527, 212-215 (2015).
8. Kiyohara, N., Tomita, T. & Nakatsuji, S. Giant anomalous Hall effect in the chiral antiferromagnet Mn 3 Ge. Phys. Rev. Appl. 5, 064009 (2016).
9. Nayak, A. K. et al. Large anomalous Hall effect driven by a nonvanishing Berry curvature in the noncolinear antiferromagnet Mn 3 Ge. Sci. Adv. 2, e1501870 (2016).
10. Ikhlas, M. et al. Large anomalous Nernst effect at room temperature in a chiral antiferromagnet. Nat. Phys. 13, 1085-1090 (2017).
11. Li, X. et al. Anomalous Nernst and Righi-Leduc effects in Mn 3 Sn: Berry curvature and entropy flow. Phys. Rev. Lett. 119, 056601 (2017).
12. Higo, T. et al. Large magneto-optical Kerr effect and imaging of magnetic octupole domains in an antiferromagnetic metal. Nat. Photon. 12, 73-78 (2018).
13. Kimata, M. et al. Magnetic and magnetic inverse spin Hall effects in a non-collinear antiferromagnet. Nature 565, 627-630 (2019).
14. Kübler, J. & Felser, C. Non-collinear antiferromagnets and the anomalous Hall effect. EPL 108, 67001 (2014).
15. Chen, H., Niu, Q. & MacDonald, A. H. Anomalous Hall effect arising from noncollinear antiferromagnetism. Phys. Rev. Lett. 112, 017205 (2014).
16. Zhang, Y. et al. Strong anisotropic anomalous Hall effect and spin Hall effect in the chiral antiferromagnetic compounds Mn 3 X (X = Ge, Sn, Ga, Ir, Rh, and Pt). Phys. Rev. B 95, 075128 (2017).
17. Yang, H. et al. Topological Weyl semimetals in the chiral antiferromagnetic materials Mn 3 Ge and Mn 3 Sn. N. J. Phys. 19, 015008 (2017).
18. Suzuki, M.-T., Koretsune, T., Ochi, M. & Arita, R. Cluster multipole theory for anomalous Hall effect in antiferromagnets. Phys. Rev. B 95, 094406 (2017).
19. Park, P. et al. Magnetic excitations in non-collinear antiferromagnetic Weyl semimetal Mn 3 Sn. npj Quantum Mater. 3, 63 (2018).
20. Fang, Z. et al. The anomalous Hall effect and magnetic monopoles in momentum space. Science 302, 92-95 (2003).
21. Kim, M.-H. et al. Infrared anomalous Hall effect in SrRuO 3 : exploring evidence for crossover to intrinsic behavior. Phys. Rev. B 81, 235218 (2010).
22. Shimano, R. et al. Terahertz Faraday rotation induced by an anomalous Hall effect in the itinerant ferromagnet SrRuO 3 . EPL 95, 17002 (2011).
23. Karplus, R. & Luttinger, J. M. Hall effect in ferromagnetics. Phys. Rev. 95, 1154-1160 (1954).
24. Nagaosa, N., Sinova, J., Onoda, S., MacDonald, A. H. & Ong, N. P. Anomalous Hall effect. Rev. Mod. Phys. 82, 1539-1592 (2010).
25. Smit, J. The spontaneous Hall effect in ferromagnetics II. Physica 24, 39-51 (1958).
26. Berger, L. Side-jump mechanism for the Hall effect of ferromagnets. Phys. Rev. B 2, 4559-4566 (1970).
27. Huisman, T. J. et al. Terahertz magneto-optics in the ferromagnetic semiconductor HgCdCr 2 Se 4 . Appl. Phys. Lett. 106, 132411 (2015).
28. Okada, K. N. et al. Terahertz spectroscopy on Faraday and Kerr rotations in a quantum anomalous Hall state. Nat. Commun. 7, 12245 (2016).
29. Burkov, A. A. Anomalous Hall effect in Weyl metals. Phys. Rev. Lett. 113, 187202 (2014).
30. Carbotte, J. P. Dirac cone tilt on interband optical background of type-I and type-II Weyl semimetals. Phys. Rev. B 94, 165111 (2016).
31. Steiner, J. F., Andreev, A. V. & Pesin, D. A. Anomalous Hall effect in type-I Weyl metals. Phys. Rev. Lett. 119, 036601 (2017).
32. Mukherjee, S. P. & Carbotte, J. P. Absorption of circular polarized light in tilted type-I and type-II Weyl semimetals. Phys. Rev. B 96, 085114 (2017).
33. Mukherjee, S. P. & Carbotte, J. P. Imaginary part of Hall conductivity in a tilted doped Weyl semimetal with both broken time-reversal and inversion symmetry. Phys. Rev. B 97, 035144 (2018).
34. Kuroda, K. et al. Evidence for magnetic Weyl fermions in a correlated metal. Nat. Mater. 16, 1090-1095 (2017).
35. Higo, T. et al. Anomalous Hall effect in thin films of the Weyl antiferromagnet Mn 3 Sn. Appl. Phys. Lett. 113, 202402 (2018).
36. Cheng, B. et al. Terahertz conductivity of the magnetic Weyl semimetal Mn 3 Sn films. Appl. Phys. Lett. 115, 012405 (2019).
37. Spielman, S. et al. Observation of the quasiparticle Hall effect in superconducting YBa 2 Cu 3 O 7−δ . Phys. Rev. Lett. 73, 1537-1540 (1994).
38. Ikebe, Y. & Shimano, R. Characterization of doped silicon in low carrier density region by terahertz frequency Faraday effect. Appl. Phys. Lett. 92, 012111 (2008).
39. Morris, C. M., Aguilar, R. V., Stier, A. V. & Armitage, N. P. Polarization modulation time-domain terahertz polarimetry. Opt. Express 20, 12303-12317 (2012).
40. Shimano, R. et al. Quantum Faraday and Kerr rotations in graphene. Nat. Commun. 4, 1841 (2013).
41. Kanda, N., Konishi, K. & Kuwata-Gonokami, M. Terahertz wave polarization rotation with double layered metal grating of complimentary chiral patterns. Opt. Express 15, 11117-11125 (2007).
42. Sugii, K. et al. Anomalous thermal Hall effect in the topological antiferromagnetic state. Preprint at https://arxiv.org/abs/1902.06601 (2019).
43. Cable, J. W., Wakabayashi, N. & Radhakrishna, P. A neutron study of the magnetic structure of Mn 3 Sn. Solid State Commun. 88, 161-166 (1993).
44. Ohmori, H., Tomiyoshi, S., Yamauchi, H. & Yamamoto, H. Spin structure and weak ferromagnetism of Mn 3 Sn. J. Magn. Magn. Mater. 70, 249 (1987).
45. Duan, T. F. et al. Magnetic anisotropy of single-crystalline Mn 3 Sn in triangular and helix-phase states. Appl. Phys. Lett. 107, 082403 (2015).
46. Sung, N. H., Ronning, F., Thompson, J. D. & Bauer, E. D. Magnetic phase dependence of the anomalous Hall effect in Mn 3 Sn single crystals. Appl. Phys. Lett. 112, 132406 (2018).
47. Tomiyoshi, S., Abe, S., Yamaguchi, Y., Yamauchi, H. & Yamamoto, H. Triangular spin structure and weak ferromagnetism of Mn 3 Sn at low temperature. J. Magn. Magn. Mater. 54-57, 1001-1002 (1986).
48. Brown, P. J., Nunez, V., Tasset, F., Forsyth, J. B. & Radhakrishna, P. Determination of the magnetic structure of Mn 3 Sn using generalized neutron polarization analysis. J. Phys. Condens. Matter 2, 9409-9422 (1990).
49. McIver, J. W. et al. Light-induced anomalous Hall effect in graphene. Nat. Phys. 16, 38-41 (2019).
50. Sie, E. J. et al. An ultrafast symmetry switch in a Weyl semimetal. Nature 565, 61-66 (2019).
[]
Projective structures on moduli spaces of compact complex hypersurfaces*

Sergey Merkulov (School of Mathematics and Statistics, Department of Mathematics and Computer Science, University of Plymouth, Plymouth PL4 8AA, Devon, United Kingdom)
Henrik Pedersen (Department of Mathematics and Computer Science, Odense University, Campusvej 55, 5230 Odense M, Denmark)

* 1980 Mathematics Subject Classification.

Abstract. It is shown that moduli spaces of complete families of compact complex hypersurfaces in complex manifolds often come equipped canonically with projective structures satisfying some natural integrability conditions.

DOI: 10.1090/s0002-9939-97-03408-4. arXiv: dg-ga/9503015 (https://export.arxiv.org/pdf/dg-ga/9503015v1.pdf)
1. Projective connections.

Let M be a complex manifold. Consider the following equivalence relation on the set of torsion-free affine connections on M: two connections Γ̃ and Γ are said to be projectively equivalent if they have the same geodesics, considered as unparameterized paths. In a local coordinate chart {t^α}, α = 1, ..., dim M, on M, where Γ̃ and Γ are represented by Christoffel symbols Γ̃^γ_{αβ} and Γ^γ_{αβ} respectively, this equivalence relation reads [H]

    Γ̃ ∼ Γ  iff  Γ̃^γ_{αβ} = Γ^γ_{αβ} + b_β δ^γ_α + b_α δ^γ_β

for some 1-form b = b_α dt^α. An equivalence class of torsion-free affine connections under this relation is called a projective structure or a projective connection.

Let M be a complex manifold with a projective structure. A complex submanifold P ⊂ M is called totally geodesic if, for each point t ∈ P and each direction tangent to P at t, the corresponding geodesic of the projective connection is contained in P, at least locally.

2. Moduli spaces of compact complex hypersurfaces.

Let X be a compact complex hypersurface in a complex manifold Y with normal bundle N such that H^1(X, N) = 0. According to Kodaira [K-1], such a hypersurface X belongs to a complete analytic family {X_t | t ∈ M} of compact complex hypersurfaces X_t in Y, with the moduli space M being a dim_C H^0(X, N)-dimensional complex manifold.
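As a side remark (our addition; a standard computation not spelled out in the paper), the equivalence relation of Section 1 can be checked directly on the geodesic equation. If Γ̃^γ_{αβ} = Γ^γ_{αβ} + b_β δ^γ_α + b_α δ^γ_β, then along any curve t^γ(s),

```latex
\ddot t^{\gamma} + \tilde\Gamma^{\gamma}_{\alpha\beta}\,\dot t^{\alpha}\dot t^{\beta}
  \;=\;
\ddot t^{\gamma} + \Gamma^{\gamma}_{\alpha\beta}\,\dot t^{\alpha}\dot t^{\beta}
  \;+\; 2\,\bigl(b_{\alpha}\dot t^{\alpha}\bigr)\,\dot t^{\gamma},
```

so the extra term is proportional to the velocity: a Γ-geodesic satisfies the Γ̃-geodesic equation after a reparameterization, and the two connections therefore have the same geodesics as unparameterized paths.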
Moreover, there is a canonical isomorphism k_t : T_t M → H^0(X_t, N_t) which associates a global section of the normal bundle N_t of X_t ↪ Y to any tangent vector at the corresponding point t ∈ M. Consider F = {(y, t) ∈ Y × M | y ∈ X_t} and denote by μ : F → Y and ν : F → M the two natural projections,

    Y ←μ− F −ν→ M.    (1)

The space F is a submanifold of Y × M. If N_F is the normal bundle of F ↪ Y × M, then, for any point t ∈ M, we have N_F|_{ν^{-1}(t)} ≃ N_{X_t|Y}, where N_{X_t|Y} is the normal bundle of the submanifold μ∘ν^{-1}(t) = X_t ↪ Y. By Kodaira's theorem, there is an isomorphism k : TM → ν^0_*(N_F), where ν^0_*(N_F) denotes the direct image. Let us denote the point in the moduli space M corresponding to X by t_0, i.e. X = μ∘ν^{-1}(t_0). It is easy to show that, for each y ∈ Y' ≡ ∪_{t∈M} X_t, the set ν∘μ^{-1}(y) is a complex analytic subspace of M. We denote by P_y its manifold content, i.e. P_y = ν∘μ^{-1}(y) \ {singular points}. If the natural evaluation map

    H^0(X_t, N_{X_t|Y}) → N_z,    φ ↦ φ(z),

where N_z is the fibre of N at a point z ∈ X_t and φ(z) is the value of the global section φ ∈ H^0(X_t, N_{X_t|Y}) at z, is surjective at all points z ∈ X_t and for all t ∈ M, then P_y = ν∘μ^{-1}(y).

3. The main theorem.

The idea of studying differential geometry on the moduli space of compact complex submanifolds of a given ambient complex manifold goes back to Penrose [Pe], who discovered self-dual conformal structures automatically induced on 4-dimensional moduli spaces of rational curves with normal bundle N = C^2 ⊗ O(1). In this section we show that moduli spaces of compact complex hypersurfaces often come equipped canonically with induced projective structures satisfying some natural integrability conditions. Other manifestations of general and strong links between complex analysis and differential geometry can be found in Merkulov's survey [M].
Theorem 1. Let X ↪ Y be a compact complex submanifold of codimension 1 with normal bundle N such that H^1(X, N) = 0, and let M be the associated complete moduli space of relative deformations of X inside Y. If H^1(X, O_X) = 0, then a sufficiently small neighbourhood M_0 ⊂ M of the point t_0 ∈ M corresponding to X comes equipped canonically with a projective structure such that, for every point y ∈ Y' ≡ ∪_{t∈M} X_t, the associated submanifold P_y ⊆ ν∘μ^{-1}(y) ∩ M_0 is totally geodesic.

Proof. An open neighbourhood of the submanifold X ↪ Y can always be covered by a finite number of coordinate charts {W_i} with local coordinate functions (w_i, z_i^a), a = 1, ..., n = dim X, on each neighbourhood W_i, such that X ∩ W_i coincides with the subspace of W_i determined by the equation w_i = 0. On the intersection W_i ∩ W_j the coordinates w_i, z_i^a are holomorphic functions of w_j and z_j^b,

    w_i = f_ij(w_j, z_j^b),    z_i^a = g_ij^a(w_j, z_j^b),

with f_ij(0, z_j^b) = 0. Here z_j = (z_j^1, ..., z_j^n). Let U ⊂ M be a coordinate neighbourhood of the point t_0 with coordinate functions t^α, α = 1, ..., m = dim M. Then the coordinate domains U × W_i with coordinate functions (w_i, z_i^a, t^α) cover an open neighbourhood of X × U in the manifold Y × U. For a sufficiently small U, the submanifold F_U ≡ ν^{-1}(U) ↪ Y × U is described in each coordinate chart W_i × U by an equation of the form [K-1]

    w_i = φ_i(z_i, t),

where φ_i(z_i, t) is a holomorphic function of z_i^a and t^α which satisfies the boundary condition φ_i(z_i, t_0) = 0. For each fixed t ∈ U this equation defines a submanifold X_t ∩ W_i ↪ W_i. By construction, F_U is covered by a finite number of coordinate neighbourhoods {V_i ≡ W_i × U|_F} with local coordinate functions (z_i^a, t^α) which are related to each other on the intersections V_i ∩ V_j as follows:

    z_i^a = g_ij^a(φ_j(z_j, t), z_j).
Obviously we have φ i (g ij (φ j (z j , t), z j ) , t) = f ij (φ j (z j , t), z j ). The Kodaira map k : T M | U −→ν 0 * ( N F | F U ) can be described in the following way: take any vector field v on U and apply the corresponding 1st-order differential operator V α ∂ α , where ∂ α = ∂/∂t α , to each function φ i (z i , t). The result is a collection of holomorphic functions σ i (z i , t) = V α ∂ α φ i (z i , t) defined respectively on V i . On the intersection V i ∩ V j one has σ i (z i , t)| z i =g ij (φ j ,z j ) = F ij (z j , t) σ j (z j , t), where F ij ≡ ∂f ij ∂w j w j =φ j (z j ,t) − ∂φ i ∂z a i z i =g ij (φ j ,z j ) ∂g a ij ∂w j w j =φ j (z j ,t) , is the transition matrix of the normal bundle N F | F U on the overlap F U ∩ V i ∩ V j . Therefore the 0-cochain {σ i (z i , t)} is aČech 0-cocycle representing a global section k(v) of the normal bundle N F over F U . Let us investigate how second partial derivatives of {φ i (z i , t)} and {φ j (z j , t)} are related on the intersection V i ∩ V j . Since ∂φ i (z i , t) ∂t α z i =g ij (φ j ,z j ) = F ij ∂φ j (z j , t) ∂t α we find ∂ 2 φ i ∂t α ∂t β z i =g ij (φ j ,z j ) = F ij ∂ 2 φ j ∂t α ∂t β + E ij ∂φ j ∂t α ∂φ j ∂t β − G ij α ∂φ j ∂t β − G ij β ∂φ j ∂t α ,(2) where E ij = ∂ 2 f ij ∂w j ∂w j w j =φ(z j ,t) − ∂φ i ∂z a i z i =g ij (φ j ,z j ) ∂ 2 g a ij ∂w j ∂w j w j =φ j (z j ,t) − ∂ 2 φ i ∂z a i ∂z b i z i =g ij (φ j ,z j ) ∂g a ij ∂w j ∂g b ij ∂w j w j =φ j (z j ,t) , and G ij α = ∂ 2 φ i ∂z a i ∂t α z i =g ij (φ j ,z j ) ∂g a ij ∂w j w j =φ j (z j ,t) . The collections {E ij } and {G ij α } form 1-cochains with coefficients in N * F and ν * (Ω 1 M ), respectively. Straightforward calculations reveal the obstructions for these two 1-cochains to be 1-cocycles, δ {E ik } = 2 ∂F ij (z j , t) ∂z a j ∂g a jk ∂w k w k =φ k (z k ,t) δ {G ik α } = ∂F ij (z j , t) ∂z a j ∂g a jk ∂w k w k =φ k (z k ,t) ∂φ j (z j , t) ∂t α . 
From these equations we conclude that the 1-cochain {τ ik α }, where τ ik α ≡ 1 2 E ik ∂φ k ∂t α − G ik α , is actually a 1-cocycle with values in ν * (Ω 1 M ). Since H 1 (X, O X ) = ′, the semi-continuity principle [K-2] implies H 1 (X t , O X⊔ ) = ′ for all points in some Stein neighbourhood M 0 ⊆ U . Hence, by the Leray spectral sequence H 1 ν −1 (M 0 ), ν * (Ω 1 M ) = 0. Therefore, the 1-cocycle {τ ik α } is always a coboundary {τ ij α } = δ {θ i α }, or more explicitly, τ ij α (z j , t) = F ij (z j , t) − θ i α (z i , t)| z i =g ij (φ j ,z j ) + θ j α (z j , t) ,(3)for some 0-cochain {θ i α (z i , t)} on ν −1 (M 0 ) with values in ν * (Ω 1 M ). However, this 0-cochain is defined non-uniquely -for any global section ξ = ξ α dt α of ν * (Ω 1 M ) over ν −1 (M 0 ) the 0-cochainθ i α (z i , t) = θ i α (z i , t) + ξ α (t)| ν −1 (M 0 )∩V i(4) splits the same 1-cocycle {τ ij α }. Note that, due to the compactness of the complex sub- manifolds ν −1 (t) ⊂ F for all t ∈ M 0 the components ξ α of the global section ξ ∈ H 0 (ν −1 (M 0 ), ν * (Ω 1 M )) are constant along the fibers, i.e. ξ α ∈ ν −1 (O M′ ). If we rewrite equation (2) in the form ∂ 2 φ i (z i , t) ∂t α ∂t β z i =g ij (φ j ,z j ) = F ij (z j , t) ∂ 2 φ j (z j , t) ∂t α ∂t β + τ ij α (z j , t) ∂φ j (z j , t) ∂t β + τ ij β (z j , t) ∂φ j (z j , t) ∂t α and take equation (3) into account, we obtain the equality ∂ 2 φ i ∂t α ∂t β + θ i α ∂φ i ∂t β + θ i β ∂φ i ∂t α z i =g ij (φ j ,z j ) = ∂ 2 φ j ∂t α ∂t β + θ j α ∂φ j ∂t β + θ j β ∂φ j ∂t α which implies that, for each value of α and β, the holomorphic functions, Φ i αβ (z i , t) ≡ ∂ 2 φ i (z i , t) ∂t α ∂t β + θ i α (z i , t) ∂φ i (z i , t) ∂t β + θ i β (z i , t) ∂φ i (z i , t) ∂t α , represent a global section of the normal bundle N F over ν −1 (M 0 ). 
Since the collections of functions {∂ α φ i (z i , t)} form aČech representation of a basis for the free O M′ -module ν 0 * N F | ν −1 (M 0 ) , the equality Φ iαβ (z i , t) = Γ γ αβ (t) ∂ α φ i (z i , t)(5) must hold for some global holomorphic functions Γ γ αβ on ν −1 (M 0 ). Since all the fibers ν −1 (t), t ∈ M 0 , are compact complex manifolds, these functions are actually pull-backs of some holomorphic functions on M 0 . A coordinate system {t α } on M 0 was used in the construction of Γ γ αβ (t). However from (5) it immediately follows that under general coordinate transformations t α −→ t α ′ = t α ′ (t β ) these functions transform according to Γ γ ′ α ′ β ′ = ∂t γ ′ ∂t δ Γ δ µν ∂t µ ∂t α ′ ∂t ν ∂t β ′ + ∂ 2 t δ ∂t α ′ ∂t β ′ . Thus from any given splitting {τ ij α } = δ {θ i α } of the 1-cocycle {τ ij α } we extract a symmetric affine connection Γ γ αβ (t). It is straightforward to check that this connection is independent of the choice of the (w i , z a i )-coordinate system used in the construction and thus is well-defined except for the arbitrariness in its construction described by the transformations (4) which, as one can easily check, change the connection as follows θ i α (z i , t) −→ θ i α (z i , t) + ξ α (t) Γ γ αβ (t) −→ Γ γ αβ (t) + ξ α (t) δ γ β + ξ β (t) δ γ α . Therefore we conclude that the neighbourhood M 0 of the point t 0 in the moduli space comes equipped canonically with a projective structure. Let us now prove that for each point y 0 ∈ Y ′ = ∪ t∈M 0 X t , the associated submanifold P y ⊆ ν • µ −1 (y) ⊂ M 0 is totally geodesic relative to the canonical projective connection in M 0 . Suppose that y 0 ∈ W i for some i. Then y 0 = (w i 0 , z a i 0 ) and the submanifold P y 0 is given locally by the equations w i 0 − φ i (z i 0 , t) = 0, where t ∈ ν • µ −1 (y 0 ) \ {singular points}. 
Then a vector field v(t) = V α ∂ α | Py 0 is tangent to P y 0 if and only if it satisfies the simultaneous equations V α ∂ α φ i (z i 0 , t) = 0.(6) In order to prove that the submanifold P y 0 for arbitrary y 0 ∈ Y ′ is totally geodesic relative to the canonical projective connection, we have to show that, for any vector fields v(t) = V α ∂ α and w(t) = W α ∂ α on P y 0 , the equation W β ∂ β V α + Γ α βγ V γ W β mod T P y 0 = 0.(7) holds. Since v(t) and w(t) are tangent to P y 0 ⊂ M , we have the equation W β (t) ∂ ∂t β (V α ∂ α φ i (z i 0 , t) = 0. (8) V α W β ∂ 2 φ i (z i 0 , t) ∂t α ∂t β = V α W β Γ γ αβ ∂φ i (z i 0 , t) ∂t γ . From the latter equation and equation (8) it follows that W β ∂ β V α + Γ α βγ V γ W β ∂φ i (z i 0 , t) ∂t α = 0. By (6) this means that W β ∂ β V α + Γ α βγ V γ W β ∂ α ∈ T P y 0 , and thus equation (7) holds. The proof is completed. ✷ We may have a moduli space even if the condition H 1 (X, N ) = 0 is not satisfied. Given a moduli space, the proof above provides a projective structure so we have the following global result. Corollary 2 Let {X t ֒→Y | t ∈ M } be a complete analytic family of compact complex hypersurfaces such that H 1 (X t , O X⊔ ) = ′ for all t ∈ M . Then the moduli space M comes equipped canonically with a projective structure such that, for every point y ∈ Y ′ , the associated submanifold P y = ν • µ −1 (y) ⊂ M is totally geodesic. We conclude this section with a brief geometric interpretation of geodesics canonically induced on moduli spaces of compact complex hypersurfaces. Any complex curve (immersed connected complex 1-manifold) in a complex manifold M has a canonical lift to a complex curve in the projectivized tangent bundle P M (T M ) -one simply associates to each point of the curve its tangent direction. Then a projective structure on M defines a family of lifted curves in P M (T M ) which foliates the projectivized bundle holomorphically [H, L]. 
Then, for geodesically convex M, the quotient space of this foliation, Z, is a (2n − 2)-dimensional manifold, where n = dim M. There is a double fibration

    Z ←τ− P_M(TM) −σ→ M    (9)

such that, for each z ∈ Z, σ∘τ^{-1}(z) ⊂ M is a geodesic of the projective structure, and, for each t ∈ M, τ∘σ^{-1}(t) ⊂ Z is a projective space CP^{n−1} embedded into Z with normal bundle T_{CP^{n−1}}(−1) [L].

Let X_0 ↪ Y be a compact complex submanifold of codimension 1 such that H^1(X_0, N) = H^1(X_0, O_{X_0}) = 0, and let M be a geodesically convex domain in the associated complete moduli space of relative deformations of X_0 inside Y. The space of geodesics Z can be identified in this case with the family of intersections X_s ∩ X_t ⊂ Y, s, t ∈ M. From the explicit coordinate description of the submanifolds X_t ⊂ Y given in the proof of Theorem 1 one can easily see that, for each t ∈ M, the intersection X_t ∩ X_0 is a divisor of a holomorphic line bundle on X_0 which is a holomorphic deformation of the normal bundle N. Since H^1(X_0, O_{X_0}) = 0, any holomorphic deformation of N must be isomorphic to N [K-3]. Therefore each intersection X_t ∩ X_0 is a divisor of the normal bundle on X_0, and, by completeness of the family {X_t ↪ Y | t ∈ M}, all divisors of N arise in this way. If t_0 ∈ M is the point associated to X_0 ⊂ Y via the double fibration (1), then the set of all intersections X_0 ∩ X_t is a projective space CP^{dim M − 1} ⊂ Z associated to t_0 via the double fibration (9). A geodesic through the point t_0 ∈ M is thus a family of X_t which all have the same intersection with X_0.

4. Applications and examples.

One of the immediate applications of the theorem on projective connections is in the theory of 3-dimensional Einstein-Weyl manifolds.
Hitchin [H] proved that there is a one-to-one correspondence between local solutions of the Einstein-Weyl equations in 3 dimensions and pairs (X, Y), where Y is a complex 2-fold and X is a projective line CP^1 embedded into Y with normal bundle N ≃ O(2). However, the corresponding twistor techniques allowed one to compute only part of the canonical Einstein-Weyl structure induced on the complete moduli space M of relative deformations of X in Y, namely the conformal structure on M. Although the geodesics were formally described and the existence of a connection with special curvature was proved, no explicit formula for the connection was obtained. The theorem on projective connections fills this gap and provides one with a technique which is capable of decoding the full Einstein-Weyl structure from the holomorphic data of the embedding X ↪ Y. We shall use Theorem 1 in some examples to compute explicitly the canonical projective connection, and then the canonical Einstein-Weyl structure, on the complete moduli space of rational curves embedded into a 2-dimensional complex manifold with normal bundle N ≃ O(2).

Consider a non-singular curve X of bidegree (1, n) in the quadric CP^1 × CP^1. Then X is rational and has normal bundle O(2n) [P]. The space M of such curves can be described as follows. Let (ζ, η) be affine coordinates on CP^1 × CP^1 and consider the graph of a rational function of degree n:

    η = P(ζ)/Q(ζ),
    P(ζ) = a_n ζ^n + a_{n−1} ζ^{n−1} + ... + a_0,    (10)
    Q(ζ) = b_n ζ^n + b_{n−1} ζ^{n−1} + ... + b_0.

The family of such (1, n)-curves is parameterized by CP^{2n+1}, and the space M of non-singular curves is CP^{2n+1} \ R, where R is the manifold of codimension 1 and degree 2n given by the resultant of P and Q. The geodesics of the projective connection are again given by projective lines in CP^{2n+1} \ R.
We may of course choose to describe the induced structure on the hypersurface given by R = 1, and for n = 1 this corresponds to the standard projective structure on SL(2, C) or on one of its real slices H 3 , S 3 . In order to obtain less trivial examples we consider branched coverings. Consider a complex curve C contained in a complex surface S. We want to construct a branched covering of a neighborhood of C branched along C. Choose coordinates (x i , y i ) on neighborhoods O i along C such that O i ∩ C is given by x i = 0. Then on overlaps we have x i = x j H ij (x j , y j ) and y i = K ij (x j , y j ). Now, we look for an n-fold cover branched along C: take patches W i with coordinates (w i , z i ) and define the covering map (w i , z i ) → (x i , y i ) = (w n i , z i ). This is a branched cover of O i branched along O i ∩ C. We want to identify the neighborhoods W i along the curve C to obtain a surface Y with a map π : Y → S which locally has the form above. We get w n i = x i = x j H ij (z j , y j ) = w n j H ij (w n j , z j ) If we make a choice of the n-th root and put ∼ Hij= H 1 n ij we get w i = w j ∼ Hij (w n j , z j ) = f ij (w j , z j ). The obstruction for this to work along the curve is the class ∼ Hij ∼ Hjk ∼ Hki∈ H 2 (C, Z/n). We can identify this obstruction to be the self-intersection number of C modulo n: since dx i = H ij (0, y j )dx j we see that H ij (0, y j ) represents the normal bundle N in H 1 (C, O * ). From the long exact sequence associated with 0 → Z → O exp → O * → 0 we see that the degree of N is equal to log H ij + log H jk + logH ki . Thus, the obstruction to obtain Y is equal to the self-intersection of C, modulo n. Each choice of H 1 n ij corresponds to an element in H 1 (C, Z/n). Unless the homology class of C in H 2 (S, Z) is divisible by n this local construction along the curve cannot be extended to work globally on S [A]. Now, let us return to the case where C is a (1, n)-curve in CP 1 × CP 1 . 
In this case C ∼ = CP 1 , so there is a unique n-fold covering Y branched along C which we cannot extend to all of CP 1 × CP 1 . The branch locus X ⊆ Y is a copy of C but deg N X = 1 n deg N C = 2 so we may describe an Einstein-Weyl structure on the moduli space of curves in Y [H] and contrary to earlier attempts we are now able to get the connection Γ γ αβ explicitly. Let us concentrate on (1, 2) curves and let C be the curve η = ζ 2 . The projection π maps the curves in Y onto those (1, 2)-curves which meet C in two points to second order. These curves may be given as in (10) with P (ζ) = ζ 2 − 2t 0 t 1 ζ − t 2 0 Q(ζ) = t 2 2 ζ 2 + 2t 1 t 2 ζ + 1 + 2t 0 t 2 + t 2 1 (see [P]). In order to describe the lifted curves we introduce the coordinates x 1 = η − ζ 2 x 2 = ∼ η − ∼ ζ 2 y 1 = ζ y 2 = ∼ ζ where ( ∼ ζ , ∼ η ) = ( 1 ζ , 1 η ) . Then C is given by x i = 0. Making the coordinate transformation (x 1 , y 1 ) −→ (w, z) = ( √ x 1 , y 1 ) (x 2 , y 2 ) −→ (ŵ,ẑ) = ( √ x 1 , y 1 ) we arrive at a covering of Y by two coordinate charts W andŴ which is exactly of the type used in the proof of Theorem 1 and has the transition functionŝ w = f (w, z),ẑ = g(z), given by f (w, z) = w z √ w 2 + z 2 g(z) = z −1 . The complete maximal family of relative deformations of C is described in this chart by the equations (in the notation of the proof of Theorem 1) w = φ(z, t) andŵ =φ(ẑ, t), with φ(z, t) = i R(z) Q(z) −1/2 ,φ(z, t) = i R(z) P (z) −1/2 , where R(z) = t 2 z 2 + t 1 z + t 0 . Note that a useful identity P = z 2 Q − R 2 holds [P]. Now we have all the data to apply the machinery developed in the proof of Theorem 1. 
Following that scenario one finds that the canonical projective structure on M can be represented by the following torsion-free affine connection:

    Γ^0_{00} = t_2(1 + t_0 t_2) Δ^{-1},            Γ^1_{00} = −t_1 t_2^2 Δ^{-1},
    Γ^0_{01} = t_1(1 + 3 t_0 t_2)(2Δ)^{-1},        Γ^1_{01} = t_2(2 + t_1^2 + 2 t_0 t_2)(2Δ)^{-1},
    Γ^0_{02} = t_0(1 + t_0 t_2 + t_1^2)(2Δ)^{-1},  Γ^1_{02} = −t_1(1 + t_1^2)(2Δ)^{-1},
    Γ^0_{11} = −t_0(1 + t_0 t_2) Δ^{-1},           Γ^1_{11} = t_0 t_1 t_2 Δ^{-1},
    Γ^0_{12} = −t_0^2 t_1 (2Δ)^{-1},               Γ^1_{12} = −t_0(1 + t_0 t_2 + t_1^2)(2Δ)^{-1},

with all other Christoffel symbols being zero. Here Δ = (1 + t_0 t_2)^2 + t_1^2(1 + 2 t_0 t_2). Note that Δ^2 = R, where R is the resultant of the polynomials in (10). The conformal structure [g] on M is given by the condition for the curves to meet to second order. Thus we may choose the following metric in the conformal structure [P]:

    g = t_1^2 t_2^2 dt_0^2 + (1 + t_0 t_2)^2 dt_1^2 + 4 t_0^2 (1 + t_1^2) dt_2^2
        + 2 t_1 t_2 (1 + t_0 t_2) dt_0 dt_1 − 4(1 + t_1^2)(1 + t_0 t_2) dt_0 dt_2 − 4 t_0^2 t_1 t_2 dt_1 dt_2.

Since our connection ∇ is projectively equivalent to the Weyl connection D, it satisfies

    (∇g)_{αβγ} = a_α g_{βγ} + b_β g_{αγ} + b_γ g_{αβ}

for some 1-forms a = Σ a_α dt^α and b = Σ b_α dt^α. We may solve these equations and present the Weyl connection D in terms of the Levi-Civita connection ∇^g and the 1-form ω = a − 2b = Σ ω_α dt^α,

    D = ∇^g + (1/2) ω^# g − ω ⊙ I,

see [PT]. We get

    a_0 = 3 t_1^2 t_2 (2Δ)^{-1},   a_1 = −3 t_1 (1 + t_0 t_2)(4Δ)^{-1},   a_2 = −3 t_0 (1 + t_0 t_2 + t_1^2)(2Δ)^{-1},
    b_0 = −3 t_1^2 t_2 (4Δ)^{-1},  b_1 = −3 t_1 (1 + t_0 t_1)(4Δ)^{-1},   b_2 = −3 t_0 (1 + t_0 t_2 + t_1^2)(2Δ)^{-1}.

Thus, using only the methods of the relative deformation theory of compact hypersurfaces, we have computed the full Einstein-Weyl structure on the moduli space. Suppose we blow up a point s on the quadric and take a (1, n)-curve passing through the point.
Then in the blown-up surface the curve will have self-intersection number 2n − 1, and this corresponds to considering all the (1, n)-curves passing through s. We may combine this with the branched covering construction. In [PT] we considered the Einstein-Weyl structure associated to the (1, 3)-curves: first we considered the 2-fold branched cover, which reduced the degree of the normal bundle from 6 to 3, and then we blew up a point on the branch locus to get self-intersection equal to 2. Again we may compute the Weyl connection, or compute the connection associated to any combination of blow-up and branched cover. This will give non-trivial examples with normal bundle O(n) for any n.

Acknowledgments. It is a pleasure to thank Paul Tod for many valuable discussions and comments. Thanks are also due to Stephen Huggett, Yat Sun Poon and the anonymous referees for helpful remarks. One of the authors (SM) is grateful to the Department of Mathematics and Computer Science of Odense University for hospitality and financial support.

References

[A] M. F. Atiyah, The signature of fibre bundles, in: Global Analysis, Papers in Honor of K. Kodaira (D. C. Spencer and S. Iyanaga, eds.), Princeton Univ. Press, Princeton, 1969, pp. 73-84.
[H] N. Hitchin, Complex manifolds and Einstein's equations, in: H. D. Doebner et al. (eds.), Twistor Geometry and Non-Linear Systems, Lect. Notes Math. 970, Springer-Verlag, Berlin Heidelberg New York, 1982, pp. 73-99.
[K-1] K. Kodaira, A theorem of completeness of characteristic systems for analytic families of compact submanifolds of complex manifolds, Ann. Math. 75 (1962), 146-162.
[K-2] K. Kodaira, Complex Manifolds and Deformations of Complex Structures, Springer-Verlag, New York Berlin Heidelberg Tokyo, 1986.
[K-3] K. Kodaira and D. C. Spencer, On deformations of complex analytic structures, I, Ann. Math. 67 (1958), 328-401.
[L] C. LeBrun, Spaces of complex geodesics and related structures, D. Phil. Thesis, Oxford University, 1980.
[M] S. A. Merkulov, Relative deformation theory and differential geometry, in: S. A. Huggett (ed.), Twistor Theory, Marcel Dekker, New York, to appear 1994.
[P] H. Pedersen, Einstein-Weyl spaces and (1, n)-curves in the quadric surface, Ann. Global Anal. Geom. 4 (1986), 89-120.
[PT] H. Pedersen and K. P. Tod, Three-dimensional Einstein-Weyl geometry, Adv. Math. 97 (1992), 74-109.
[Pe] R. Penrose, Non-linear gravitons and curved twistor theory, Gen. Rel. Grav. 7 (1976), 31-52.
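As a numerical aside (our addition, not part of the paper): the identity Δ^2 = R used in Section 4, with Δ = (1 + t_0 t_2)^2 + t_1^2(1 + 2 t_0 t_2) and R the resultant of P(ζ) = ζ^2 − 2 t_0 t_1 ζ − t_0^2 and Q(ζ) = t_2^2 ζ^2 + 2 t_1 t_2 ζ + 1 + 2 t_0 t_2 + t_1^2, can be sanity-checked with the standard closed form for the resultant of two quadratics. The helper name below is ours.

```python
from fractions import Fraction
import random

def res_quadratics(a, b, c, d, e, f):
    """Resultant of a*z^2 + b*z + c and d*z^2 + e*z + f,
    via the classical closed form for two quadratics."""
    return (a * f - c * d) ** 2 - (a * e - b * d) * (b * f - c * e)

random.seed(1)
for _ in range(100):
    # random rational parameters t0, t1, t2 (exact arithmetic, no roundoff)
    t0, t1, t2 = (Fraction(random.randint(-9, 9), random.randint(1, 9))
                  for _ in range(3))
    res = res_quadratics(1, -2 * t0 * t1, -t0 ** 2,
                         t2 ** 2, 2 * t1 * t2, 1 + 2 * t0 * t2 + t1 ** 2)
    delta = (1 + t0 * t2) ** 2 + t1 ** 2 * (1 + 2 * t0 * t2)
    assert res == delta ** 2
print("Delta^2 equals the resultant of P and Q for all sampled parameters")
```

The check passes identically, in agreement with the claim Δ^2 = R made in the text.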
[ "The nuclear reaction network WinNet", "The nuclear reaction network WinNet" ]
[ "M R ", "C W ", "O K ", "A A ", "J B ", "M E ", "U F ", "C F ", "R H ", "M J ", "J K ", "G M -P \nDepartament d'Astonomia i Astrofísica\nUniversitat de València\nEdifici d'Investigatció Jeroni Munyoz, C/Dr. Moliner, 50E-46100BurjassotValència)Spain\n\nDepartment of Physics\nUniversity of Basel\nKlingelbergstrasse 82CH-4056BaselSwitzerland\n\nCenter for Theoretical Astrophysics\nLos Alamos National Laboratory\n87545Los AlamosNMUSA\n\nInstitut für Kernphysik (Theoriezentrum)\nTechnische Universität Darmstadt\nSchlossgartenstr. 2D-64289DarmstadtGermany\n\nGSI Helmholtzzentrum für Schwerionenforschung GmbH\nPlanckstr. 1D-64291DarmstadtGermany\n\nDepartment of Physics\nNorth Carolina State University\n27695RaleighNCUSA\n\nAstrophysics Group\nLennard-Jones Laboratories\nKeele University\nST5 5BGKeeleUK\n\nInstitute for the Physics and Mathematics of the Universe (WPI)\nUniversity of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan\n\nCentre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUnited Kingdom\n", "D M ", "D M ", "T R ", "F.-K T " ]
[ "Departament d'Astonomia i Astrofísica\nUniversitat de València\nEdifici d'Investigatció Jeroni Munyoz, C/Dr. Moliner, 50E-46100BurjassotValència)Spain", "Department of Physics\nUniversity of Basel\nKlingelbergstrasse 82CH-4056BaselSwitzerland", "Center for Theoretical Astrophysics\nLos Alamos National Laboratory\n87545Los AlamosNMUSA", "Institut für Kernphysik (Theoriezentrum)\nTechnische Universität Darmstadt\nSchlossgartenstr. 2D-64289DarmstadtGermany", "GSI Helmholtzzentrum für Schwerionenforschung GmbH\nPlanckstr. 1D-64291DarmstadtGermany", "Department of Physics\nNorth Carolina State University\n27695RaleighNCUSA", "Astrophysics Group\nLennard-Jones Laboratories\nKeele University\nST5 5BGKeeleUK", "Institute for the Physics and Mathematics of the Universe (WPI)\nUniversity of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan", "Centre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUnited Kingdom" ]
[]
We present the state-of-the-art single-zone nuclear reaction network WinNet that is capable of calculating the nucleosynthetic yields of a large variety of astrophysical environments and conditions. This ranges from the calculation of the primordial nucleosynthesis, where only a few nuclei are considered, to the ejecta of neutron star mergers with several thousands of involved nuclei. Here we describe the underlying physics and implementation details of the reaction network. We additionally present the numerical implementation of two different integration methods, the implicit Euler method and Gear's method, along with their advantages and disadvantages. We furthermore describe basic example cases of thermodynamic conditions that we provide together with the network and demonstrate the reliability of the code by using simple test cases. Once the manuscript has been accepted for publication, WinNet will be publicly available and open source.
null
[ "https://export.arxiv.org/pdf/2305.07048v1.pdf" ]
258,676,551
2305.07048
cb8ff32b0f850356419b0ed23519b7b506166e05
The nuclear reaction network WinNet

M R, C W, O K, A A, J B, M E, U F, C F, R H, M J, J K, G M-P, D M, D M, T R, F.-K T

Draft version May 15, 2023. Typeset using LaTeX twocolumn style in AASTeX631.

Keywords: methods: numerical - nuclear reactions - nucleosynthesis - abundances

We present the state-of-the-art single-zone nuclear reaction network WinNet that is capable of calculating the nucleosynthetic yields of a large variety of astrophysical environments and conditions. This ranges from the calculation of the primordial nucleosynthesis, where only a few nuclei are considered, to the ejecta of neutron star mergers with several thousands of involved nuclei. Here we describe the underlying physics and implementation details of the reaction network. We additionally present the numerical implementation of two different integration methods, the implicit Euler method and Gear's method, along with their advantages and disadvantages.
We furthermore describe basic example cases of thermodynamic conditions that we provide together with the network and demonstrate the reliability of the code by using simple test cases. Once the manuscript has been accepted for publication, WinNet will be publicly available and open source.

INTRODUCTION

Nuclear reaction networks are crucial to investigate the synthesis of elements and their isotopes in astrophysical events. While the events can vastly differ in their conditions, the procedure to derive their ejecta composition is always similar. The foundation of the understanding of the origin of elements has been outlined already in Alpher et al. (1948), the so-called αβγ-paper. The field of nucleosynthetic calculations encompasses the production of the light elements during the Big Bang (e.g., Peebles 1966; Wagoner et al. 1967; Yang et al. 1984; Boesgaard & Steigman 1985; Kawano et al. 1988; Olive et al. 1990; Walker et al. 1991; Smith et al. 1993; Cyburt et al. 2016; Coc & Vangioni 2017; Pitrou et al. 2018, 2021; Fields & Olive 2022), the element production during the lifetime of stars (see e.g., Kippenhahn et al. 2013; Karakas & Lattanzio 2014; Karakas & Lugaro 2016; Bisterzo et al. 2017; Kobayashi et al. 2020; Busso et al. 2021; Doherty et al. 2017; Gil-Pons et al. 2018; Leung & Nomoto 2018; Leung et al. 2020; Arnett 1977; Woosley & Weaver 1995; Heger et al. 2003; Heger & Woosley 2010; Maeder & Meynet 2012; Frischknecht et al. 2016; Thielemann et al. 2018a; Limongi & Chieffi 2018; Arnett et al. 2019; Kaiser et al. 2020; Eggenberger et al. 2021), and more violent explosive events such as classical novae (e.g., Arnould et al. 1980; Wiescher et al. 1986; José et al. 2004; Jose 2016; Vasini et al. 2022), x-ray bursts (e.g., Wiescher et al. 1986; Rembges et al. 1997; Schatz et al. 1998; Cyburt et al. 2010; Jose 2016; Meisel et al. 2020), type Ia supernovae (e.g., Arnett 1969; Arnett et al. 1971; Iben & Tutukov 1984; Nomoto et al. 1984; Woosley et al. 1986; Mueller & Arnett 1986; Thielemann et al.
1986; Khokhlov et al. 1993; Höflich et al. 1998; Röpke et al. 2012; Hillebrandt et al. 2013; Pakmor et al. 2013; Dan et al. 2015; Maeda & Terada 2016; García-Senz et al. 2016; Jiang et al. 2017; Röpke & Sim 2018; Thielemann et al. 2018b; Shen et al. 2018; Leung & Nomoto 2018; Gronow et al. 2021; Lach et al. 2022), core-collapse supernovae (e.g., Kotake et al. 2012; Burrows 2013; Janka et al. 2016; Müller 2016; Radice et al. 2018; Müller 2020; Vartanyan et al. 2022) with a focus on nucleosynthesis (e.g., Woosley & Weaver 1995; Thielemann et al. 1996; Woosley & Heger 2006; Heger & Woosley 2010; Perego et al. 2015; Sukhbold et al. 2016; Wanajo et al. 2018; Curtis et al. 2019; Ghosh et al. 2022), or a focus on r-process or neutrino driven winds in supernovae (e.g., Qian & Woosley 1996; Cardall & Fuller 1997; Hoffman et al. 1997; Otsuki et al. 2000; Thompson et al. 2001; Wanajo et al. 2001; Fröhlich et al. 2006a; Kratz et al. 2008; Bliss et al. 2020), magnetorotational supernovae (Nishimura et al. 2006; Winteler et al. 2012; Nishimura et al. 2015, 2017b; Mösta et al. 2018; Reichert et al. 2021; Powell et al. 2022; Reichert et al. 2023), collapsars (MacFadyen & Woosley 1999; Surman & McLaughlin 2004; McLaughlin & Surman 2005; Fujimoto et al. 2008; Siegel et al. 2019; Miller et al. 2020; Zenati et al. 2020; Barnes & Metzger 2022; Just et al. 2022a), and neutron star mergers (e.g., Freiburghaus et al. 1999; Korobkin et al. 2012; Martin et al. 2015; Wu et al. 2016a, 2019; Holmbeck et al. 2019; Wanajo et al. 2021; Rosswog & Korobkin 2022; Kullmann et al. 2022a,b). Without nucleosynthesis calculations a whole layer of information and observables would remain inaccessible. Some applications require a complex modelling that takes diffusion and nuclear burning simultaneously into account (such as e.g., the oxygen burning phase of a star or rapidly accreting white dwarfs, Hix & Thielemann 1999; Denissenkov et al.
2019) and the nuclear reaction network has therefore to be included into a hydrodynamical simulation. This often has the consequence that only a restricted number of nuclei are considered in the calculation (from the 13 or 14 alpha-nuclei networks developed by Thielemann and used e.g. in Mueller 1986; Benz et al. 1989; Livne & Arnett 1995; Garcia-Senz et al. 2013; García-Senz et al. 2016), over small quasi-equilibrium networks (as e.g. Hix et al. 1998; Timmes et al. 2000; Hix et al. 2007, named QE-reduced or iso7), to slightly enlarged networks beyond the alpha chains - like net21 - which include additional neutron-rich isotopes in the Fe-group in order to be able to follow Y_e below 0.5 (for a comparison of these approaches see Bravo 2020). Recently such methods have been extended to networks which contain up to the order of hundred nuclei (Harris et al. 2017; Sandoval et al. 2021; Navó et al. 2022). These so called in-situ networks have the advantage of providing an accurate nuclear energy production as well as more precise nucleon abundances that imply more realistic neutrino opacities for the feedback to the simulation (e.g., Mueller 1986; Nakamura et al. 2014; Harris et al. 2017; Navó et al. 2022). On the other hand, simplifying assumptions within the nuclear reaction network equations, artificial numerical diffusion (e.g., Fryxell et al. 1991; Hix & Thielemann 1999; Plewa & Müller 1999), and the reduced set of nuclei in such energy generation networks can make the predicted ejecta composition, even with extended postprocessing networks, more uncertain (this is nicely shown in Bravo 2020). For astrophysical scenarios with a much larger diffusion timescale compared to the nuclear burning timescale, one can trace the ejecta with passively advected particles whose movements are influenced by the velocity field of the fluid (e.g., Nagataki et al. 1997; Seitenzahl et al. 2010; Nishimura et al. 2015; Harris et al. 2017; Bovard & Rezzolla 2017; Sieverding et al. 2022).
These so called tracer particles record the thermodynamic conditions as well as the neutrino fluxes in time. In case that the impact of diffusion on the composition is negligible compared to the burning, each tracer can be calculated individually and the total ejected matter is obtained as the (possibly weighted) average over all individual tracer particles. Reaction networks that are based on individual tracers (or zones) that are unable to interact with each other are called single-zone nuclear reaction networks. The advantage of those codes is that they can include a much more complete set of nuclei and reactions. This enables the calculation of the synthesis of the heaviest known elements with typically ∼ 7000 nuclei and ∼ 90000 reactions involved. The compilation of a consistent reaction database is especially challenging. Nuclear reactions are often provided in different formats and in different databases that are individually complete. Among others, the largest and publicly available databases are the JINA Reaclib database (Cyburt et al. 2010), Bruslib (Aikawa et al. 2005), the Starlib database (Sallaska et al. 2013), NACRE (Xu et al. 2013), and KADoNiS (Dillmann et al. 2006). However, none of the aforementioned libraries provides a complete set of electron/positron captures as well as β+/β− decays at stellar conditions (Fuller et al. 1982, 1985; Oda et al. 1994; Langanke & Martínez-Pinedo 2001; Pruet & Fuller 2003; Suzuki et al. 2016), neutrino reactions (e.g., Bruenn 1986; Langanke & Kolbe 2002; Fröhlich et al. 2006a; Sieverding et al. 2018, 2019), or fission reactions and fragment distributions (Panov et al. 2005; Goriely et al. 2009; Panov et al. 2010; Petermann et al. 2012; Eichler et al. 2015; Vassh et al. 2019). For an almost complete survey of all these resources see the JINAWEB collected list.
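Merging several of these individually complete libraries without double counting, as described above, amounts to identifying each reaction by its reactants and products and picking one source per reaction. A minimal sketch (the data layout, source names, and priority scheme here are illustrative assumptions, not WinNet's actual implementation):

```python
# Sketch: merge reaction rates from several sources, keeping at most one rate
# per reaction channel and respecting a source priority, so that no reaction
# is counted twice or lost. Source labels and priorities are made up.
PRIORITY = {"tabulated": 0, "theory": 1}  # lower value wins

def merge_rates(*sources):
    merged = {}
    for src in sources:
        for rate in src:
            # A reaction channel is identified by its sorted reactants/products,
            # so (n, fe56) and (fe56, n) map to the same key.
            key = (tuple(sorted(rate["reactants"])),
                   tuple(sorted(rate["products"])))
            if key not in merged or \
               PRIORITY[rate["source"]] < PRIORITY[merged[key]["source"]]:
                merged[key] = rate
    return merged

lib_a = [{"reactants": ("n", "fe56"), "products": ("fe57",), "source": "theory"}]
lib_b = [{"reactants": ("fe56", "n"), "products": ("fe57",), "source": "tabulated"},
         {"reactants": ("p", "fe56"), "products": ("co57",), "source": "theory"}]
merged = merge_rates(lib_a, lib_b)
```

The same neutron-capture channel appears in both toy libraries but survives only once, with the higher-priority (tabulated) entry winning.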
Therefore, nuclear reaction networks always have to perform a certain amount of merging of the reaction rates if one wants to use an as complete as possible set of reaction rates. Doing this rigorously can be a major task, as the consistency depends not only on not adding reactions twice or leaving them out, but also on adding reactions with the same underlying nuclear input, such as the same nuclear masses, which, far from stable nuclei, are theoretically calculated. From a numerical point of view, reaction networks can be challenging as well. The huge differences in timescales of the reaction rates (e.g., weak decays versus strong reactions) introduce a stiffness into the differential equations (the JINAWEB collected list of resources is available at https://www.jinaweb.org/science-research/scientific-resources/data). As a consequence, explicit integration methods become unstable and implicit methods have to be applied. A full implicit implementation was first achieved by Truran et al. (1966b), Truran et al. (1967), Arnett & Truran (1969), Woosley et al. (1973), Arnould (1976), and Thielemann et al. (1979). While nowadays usually the first order implicit Euler scheme is used within large nuclear reaction networks, tests with higher order implicit schemes such as the Gear scheme have been performed as well (e.g., Timmes 1999; Longland et al. 2014). There exist a variety of single-zone reaction networks with fully implicit schemes in the literature, e.g., the SantaCruz code by the Woosley group, going back to Woosley et al. (1973), which followed Arnett (1969) and Truran et al. (1966a) introducing a complete Newton-Raphson scheme, BasNet (Thielemann et al. 1979; for an early comparison of the two codes and the implemented reaction rate libraries see Hoffman et al. 1999), XNet (Hix & Thielemann 1999), NET (Wanajo et al. 2001), CFNet (Fröhlich et al. 2006b), NucNet (Meyer & Adams 2007), the network of Kostka et al. (2014), MESA (Paxton et al. 2015), the GSI network (Mendoza-Temis et al. 2015), SkyNet (Lippuner & Roberts 2017), PRISM (Mumpower et al. 2018; Sprouse et al.
2021), and other unnamed reaction networks (e.g., Timmes 1999; Iliadis et al. 2002; Otsuki et al. 2003; Koike et al. 2004; Goriely et al. 2011). However, only a small subset of them is publicly available, among them MESA, NucNet, XNet, and SkyNet. Here we present the single-zone nuclear reaction network code WinNet, an updated version of the reaction network that has been first used in the context of Big Bang nucleosynthesis in Vonlanthen et al. (2009) and later for calculating the synthesis of heavy elements in Winteler et al. (2012). WinNet has a common origin with many other previously mentioned reaction networks such as XNet, CFNet, and the GSI network, as all of them were influenced by BasNet, which served as an initial template. WinNet has been already used for different astrophysics problems, however it was not publicly available. The code has been entirely written in Fortran 90 and has a user-friendly interface. WinNet is able to merge reaction rates from multiple sources and is designed for high-performance computations. It includes two fully implicit schemes, the first order implicit Euler-backward scheme and the higher order Gear scheme. This paper presents the basics of nuclear reaction networks as well as provides insight into the implementation within WinNet. In sect. 2 we present the fundamental physics concepts for nuclear reaction networks. This includes the derivation of the ordinary differential equation that is solved in WinNet (sect. 2.1), the principle of detailed balance (sect. 2.2), the concept of nuclear statistical equilibrium (sect. 2.3), a method to account for nuclear energy generation within the temperature evolution (sect. 2.4), and the treatment of Coulomb corrections (sect. 2.5). The code structure and included numerical solvers are presented in sect. 3. The different supported reaction rate formats are introduced in sect. 4. Applications and test cases are presented in sect. 5. We close with a summary in sect. 6.
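The stiffness argument above can be made concrete with a toy sketch (not WinNet code): on a linear two-species decay chain with vastly different decay constants, the implicit (backward) Euler scheme remains stable at step sizes far beyond the explicit stability limit. For this linear system the Newton-Raphson iteration of a full network reduces to a single linear solve per step. The decay constants are made up.

```python
# Backward Euler on the stiff toy system
#   dY1/dt = -l1*Y1,   dY2/dt = l1*Y1 - l2*Y2,   with l1 >> l2.
# One step solves (I - h*A) y_new = y_old; the 2x2 system is solved directly.
def backward_euler_step(y, h, l1, l2):
    y1 = y[0] / (1.0 + h * l1)                 # first row of (I - h*A)
    y2 = (y[1] + h * l1 * y1) / (1.0 + h * l2) # second row, uses updated y1
    return [y1, y2]

l1, l2 = 1.0e10, 1.0   # timescales differ by 10 orders of magnitude
y = [1.0, 0.0]
for _ in range(100):
    # h = 0.1 is far beyond the explicit Euler stability limit (~2/l1 = 2e-10),
    # yet the implicit step stays stable and the abundances remain bounded.
    y = backward_euler_step(y, 0.1, l1, l2)
```

An explicit Euler step with the same h would blow up immediately, which is exactly why fully implicit solvers are used in large networks.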
NUCLEAR REACTION NETWORK FUNDAMENTALS

Nuclear reaction networks

The fundamental theory behind nuclear reaction networks reaches back far in the past (see e.g., for a few references, Truran et al. 1966a; Clayton 1968; Arnett & Truran 1969; Woosley et al. 1973; Hix & Thielemann 1999; Hix & Meyer 2006; Winteler et al. 2012). Here we repeat briefly how to derive the differential equations, but refer the reader to previous publications for more details. The cross section of a reaction,

\sigma = \frac{\text{number of reactions per target and second}}{\text{flux of incoming particles}},   (1)

is related to the probability of a nucleus i to react with a nucleus j. If (e.g. in laboratory conditions like accelerator experiments) the relative velocity between target and projectile is a constant value v, it is given by

\sigma = \frac{r}{n_i n_j v}.   (2)

Here, r is the number of reactions per volume and time, and n_i and n_j are the number densities of the target and projectile, respectively. In an astrophysical plasma, both target and projectile follow specific velocity distributions depending on the environment conditions like temperature and density (and the reaction cross section is that of a target with thermally populated excited states, e.g. Fowler 1974; Holmes et al. 1976; Rauscher & Thielemann 2000; Rauscher 2022). For an arbitrary velocity distribution, r_{i,j} can be expressed as:

r_{i,j} = \int \sigma(|\vec{v}_i - \vec{v}_j|)\, |\vec{v}_i - \vec{v}_j|\, \mathrm{d}n_i\, \mathrm{d}n_j.   (3)

In thermal equilibrium, the velocity (or momentum or energy) distribution depends on the type of particle, i.e., photons obey a Planck distribution and nuclei obey in most cases a Maxwell-Boltzmann distribution. Therefore, for photons, \mathrm{d}n_\gamma is given by

\mathrm{d}n_\gamma = \frac{1}{\pi^2 (c\hbar)^3}\, \frac{E_\gamma^2}{\exp[E_\gamma/(k_B T)] - 1}\, \mathrm{d}E_\gamma   (4)

and for nuclei \mathrm{d}n_i is expressed by

\mathrm{d}n_i = n_i \left(\frac{m_i}{2\pi k_B T}\right)^{3/2} \exp\left(-\frac{m_i v^2}{2 k_B T}\right) \mathrm{d}^3\vec{v} \equiv n_i\, \phi(\vec{v})\, \mathrm{d}^3\vec{v},   (5)

where m_i is the nuclear mass. For reactions between a nucleus and a photon, r_{i,\gamma} is therefore given by

r_{i,\gamma} = \frac{n_i}{\pi^2 c^2 \hbar^3} \int_0^\infty \frac{\sigma(E_\gamma)\, E_\gamma^2}{\exp[E_\gamma/(k_B T)] - 1}\, \mathrm{d}E_\gamma \equiv \lambda_{i,\gamma}(T)\, n_i.   (6)

Reactions of this type are called photodisintegrations.
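The thermal average over the Planck spectrum in Eq. (6) can be checked numerically. For the toy case of a constant cross section the remaining integral has the analytic value 2 zeta(3) (k_B T)^3; the sketch below works in units with k_B T = 1, so only the structure of the average is illustrated, not any real rate:

```python
# Numerical check of the Planck-spectrum integral appearing in Eq. (6),
#   I = \int_0^inf E^2 / (exp(E/kT) - 1) dE = 2*zeta(3)*(kT)^3,
# evaluated with a simple midpoint rule in units where kT = 1.
import math

def planck_integral(kt=1.0, emax=50.0, n=200000):
    h = emax / n
    total = 0.0
    for i in range(n):
        e = (i + 0.5) * h
        # expm1 keeps the integrand accurate for small E, where it behaves ~ E
        total += e * e / math.expm1(e / kt) * h
    return total

numeric = planck_integral()
analytic = 2.0 * 1.2020569031595943  # 2*zeta(3), with (kT)^3 = 1
```

The truncation at emax = 50 kT is harmless because the integrand is exponentially suppressed there.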
For the case of two nuclei, r_{i,j} is given by

r_{i,j} = \int \sigma(|\vec{v}_i - \vec{v}_j|)\, |\vec{v}_i - \vec{v}_j|\, \phi(\vec{v}_i)\, \phi(\vec{v}_j)\, \mathrm{d}^3\vec{v}_i\, \mathrm{d}^3\vec{v}_j\; n_i n_j = \langle\sigma v\rangle_{i,j}\, n_i n_j,   (7)

where \langle\sigma v\rangle_{i,j} stands for the product \sigma v integrated over the thermal distributions. Furthermore, an additional factor has to be introduced to avoid double counting of identical projectile and target nuclei. Equation (7) becomes

r_{i,j} = \frac{1}{1+\delta_{ij}}\, \langle\sigma v\rangle_{i,j}\, n_i n_j,   (8)

with the delta in the usual sense, i.e., \delta_{ij} = 1 for i = j, otherwise \delta_{ij} = 0. We get

r_{i,j,k} = \frac{1}{1+\delta_{ij}+\delta_{jk}+\delta_{ik}+2\delta_{ijk}}\, \langle\sigma v\rangle_{i,j,k}\, n_i n_j n_k \equiv \frac{1}{1+\Delta_{ijk}}\, \langle\sigma v\rangle_{i,j,k}\, n_i n_j n_k   (9)

in case of three participating nuclei, where \langle\sigma v\rangle_{i,j,k} stands for three body reactions (in most cases a sequence of two two-body reactions and an intermediate reaction product with an extremely short half-life, see e.g. Nomoto et al. 1985; Görres et al. 1995). For example, the triple-\alpha reaction, which describes the probability of three helium nuclei to form 12C, has a pre-factor of 1/(1+\Delta_{\alpha\alpha\alpha}) = 1/6. Within a fluid that moves with velocity \vec{v}, n_i does not only change by nuclear reactions but also by the net-flow into the volume. We have

\frac{\partial n_i}{\partial t} = -\vec\nabla\cdot(n_i \vec{v}) + \sum_j N^i_j\, \lambda_j n_j + \sum_{j,k} N^i_{j,k}\, r_{j,k} + \sum_{j,k,l} N^i_{j,k,l}\, r_{j,k,l} \equiv -\vec\nabla\cdot(n_i \vec{v}) + \dot n_{i,\mathrm{nuc}},   (10)

where we introduced the factors N^i_j, N^i_{j,k}, N^i_{j,k,l} that account for the number of particles i that get destroyed (negative) or created (positive) in the reaction. The first term in the equation, -\vec\nabla\cdot(n_i \vec{v}), accounts for changes due to the fluid flow with velocity \vec{v} and the last term \dot n_{i,\mathrm{nuc}} accounts for changes due to nuclear reactions. We can reformulate the previous equation by using the Lagrangian time derivative that is related to the Eulerian time derivative via

\frac{D}{Dt} = \frac{\partial}{\partial t} + \vec{v}\cdot\vec\nabla.   (11)

We can therefore obtain

\frac{\partial n_i}{\partial t} = \frac{D n_i}{D t} - \vec{v}\cdot\vec\nabla n_i,   (12)

and, as a consequence, Eq. (10) becomes

\frac{D n_i}{D t} = -n_i\, \vec\nabla\cdot\vec{v} + \dot n_{i,\mathrm{nuc}}.   (13)

Using the continuity equation and the Lagrangian time derivative

\frac{\partial\rho}{\partial t} = -\vec\nabla\cdot(\rho\vec{v}) \;\Rightarrow\; \frac{D\rho}{Dt} = \frac{\partial\rho}{\partial t} + \vec{v}\cdot\vec\nabla\rho = -\rho\, \vec\nabla\cdot\vec{v},   (14)

and therefore

\vec\nabla\cdot\vec{v} = -\frac{1}{\rho}\frac{D\rho}{Dt}.   (15)

Thus, we can insert this into Eq.
(13) and get

\frac{D n_i}{Dt} - \frac{n_i}{\rho}\frac{D\rho}{Dt} = \rho\, \frac{D(n_i/\rho)}{Dt} = \dot n_{i,\mathrm{nuc}}.   (16)

This derivation has been done previously (see e.g., Mihalas 1999) in the context of atomic processes related to radiation transport, but as shown here it is also valid in the context of nuclear reactions. In order to obtain a density-independent expression instead of utilizing number densities n_i, we introduce the density (or mass) fraction of nucleus i, X_i, which can be expressed via

X_i = \frac{\rho_i}{\rho} = \frac{m_i n_i}{\rho} = \frac{\mathcal{A}_i m_u n_i}{\rho} \approx \frac{A_i m_u n_i}{\rho}.   (17)

This includes the mass of nuclei m_i = \mathcal{A}_i m_u (where \mathcal{A}_i is the relative atomic mass, which can be with a permille error approximated by A_i, the mass number of nucleus i, and m_u = m(12C)/12 is the atomic mass unit). Alternatively, one can introduce an abundance, without the inclusion of the weight or mass of a nucleus, as the fraction of the number density of nucleus i in comparison to the total number density of nucleons, approximated by n = \rho/m_u, that is conserved by nuclear reactions:

Y_i = \frac{X_i}{A_i} = \frac{n_i}{\rho/m_u} = \frac{m_u n_i}{\rho}.   (18)

This definition seems to differ from the traditionally utilized one in nucleosynthesis literature, introduced by Fowler et al. (1967),

Y_i = \frac{n_i}{\rho N_A},   (19)

as it includes the Avogadro constant N_A rather than the nuclear mass unit m_u. Eqs. (19) and (18) differ by the product M_u = m_u N_A, the molar mass constant, which had until 2019 the value 10^{-3} kg/mole, or 1 g/mole in cgs units, leading in Eq. (19) to an abundance measured in mole/g and in Eq. (18) to a dimensionless number. When utilizing the present values of the natural constants (see Table XXXI in Tiesinga et al. 2021) with N_A = 6.02214076 × 10^{23} mole^{-1} (exact) and m_u = m(12C)/12 = 1.6605390660(50) × 10^{-24} g with a relative uncertainty of 3 × 10^{-10}, one obtains for the molar mass constant M_u = m_u N_A = 0.99999999965(30) g mol^{-1}, i.e. equal to 1 with an uncertainty of 3 × 10^{-10}. Thus, both expressions are numerically identical with an extremely high accuracy in cgs units. However, the different dimensions of Eqs. (18) and (19) can introduce some confusion (see also Rauscher 2020).
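The bookkeeping of Eqs. (17)-(19) is simple enough to sketch directly: abundances follow from mass fractions as Y_i = X_i/A_i, and the sums over A_i Y_i and Z_i Y_i give the conserved mass and charge (the latter two sums reappear later as the NSE constraints, Eqs. 24-25). The composition below is made up:

```python
# Sketch of mass fraction / abundance bookkeeping: Y_i = X_i / A_i,
# sum(A_i * Y_i) = sum(X_i) = 1, and Y_e = sum(Z_i * Y_i).
def mass_fractions_to_abundances(X, A):
    return {iso: X[iso] / A[iso] for iso in X}

A = {"p": 1, "he4": 4, "ni56": 56}   # mass numbers
Z = {"p": 1, "he4": 2, "ni56": 28}   # charge numbers
X = {"p": 0.1, "he4": 0.4, "ni56": 0.5}  # made-up mass fractions, sum to 1

Y = mass_fractions_to_abundances(X, A)
mass_check = sum(A[i] * Y[i] for i in Y)  # must recover sum(X_i) = 1
ye = sum(Z[i] * Y[i] for i in Y)          # electron fraction of this mixture
```

Note that while the X_i sum to one, the dimensionless Y_i themselves do not; only the weighted sums are conserved.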
In this paper, we continue to utilize the traditional definition for abundances (Eq. 19), but in agreement with Eq. (17) and Y_i = X_i/A_i we will treat mass fractions as well as abundances as dimensionless numbers. When expressing the number densities in terms of abundances Y_i, Eq. (16) leads to the form

\frac{\mathrm{d}Y_i}{\mathrm{d}t} = \sum_j N^i_j\, \lambda_j Y_j \quad\text{(1-body)}
 \;+\; \sum_{j,k} \frac{N^i_{j,k}}{1+\delta_{jk}}\, \rho N_A \langle\sigma v\rangle_{j,k}\, Y_j Y_k \quad\text{(2-body)}
 \;+\; \sum_{j,k,l} \frac{N^i_{j,k,l}}{1+\Delta_{jkl}}\, \rho^2 N_A^2 \langle\sigma v\rangle_{j,k,l}\, Y_j Y_k Y_l, \quad\text{(3-body)}   (20)

where the individual terms can be identified with specific reactions, neglecting reactions involving four or more participants. The first term, standing for 1-body reactions, usually includes decays, photodisintegrations, electron- or positron-captures, and neutrino absorption. The equation is often called the nuclear reaction network equation. It is the fundamental differential equation that is solved within WinNet. Note that all \rho N_A terms would be replaced by \rho/m_u when utilizing the alternative definition of abundances (Eq. 18), which would replace N_A by m_u^{-1}.

Detailed balance

Reverse or backward reactions have a direct relation to the forward reaction by the so called detailed balance theorem. We denote as forward reaction those with a positive Q-value, defined as the difference between initial and final ground state masses. The relation between both can be expressed as (e.g., Fowler et al. 1967)

\langle\sigma v\rangle_\mathrm{backward} = \frac{\Delta_\mathrm{reactants}}{\Delta_\mathrm{products}}\; \frac{\prod_{i=1}^{n_\mathrm{reactants}} g_i G_i(T)}{\prod_{i=1}^{n_\mathrm{products}} g_i G_i(T)}\; \left(\frac{\prod_{i=1}^{n_\mathrm{reactants}} m_i}{\prod_{i=1}^{n_\mathrm{products}} m_i}\right)^{3/2} \left(\frac{m_u k_B T}{2\pi\hbar^2}\right)^{\frac{3}{2} n_d} \exp\left[-Q/(k_B T)\right]\, \langle\sigma v\rangle_\mathrm{forward},   (21)

where \Delta is the double counting factor for reactants/products as in Eq. (20), G_i are the partition functions, g_i the spin factor defined as g_i = 2J_i + 1 with J_i the spin of the ground state, m_i the mass of nucleus i, Q the Q-value of the reaction, \langle\sigma v\rangle_\mathrm{forward} the cross section of the forward reaction, and n_d is the difference between the number of reactants and the number of reaction products. Eq. (21) needs to be modified for photodisintegration reactions. There \langle\sigma v\rangle_\mathrm{backward} should be replaced by \lambda_\mathrm{backward}.
In this case, n_d ≠ 0 and we therefore get the additional factors that are introduced with n_d in the exponent in Eq. (21). This is consistent with the literature (e.g., Fowler et al. 1967) and the Reaclib reverse rates. Therefore, in practice, we can use the above equation for both cases, capture reactions and photodisintegrations. Eq. (21) is also valid for 3-body reactions, replacing \langle\sigma v\rangle by the corresponding three-body quantity \langle\sigma v\rangle_{i,j,k}. It should be mentioned here that the relations in this section include that nuclei in a thermal environment exist with thermally populated excited states. Within the Jina Reaclib framework, the Q-values are given for each reaction. Additionally, the mass excesses of all nuclei can be found in a separate file (called winvn). Ideally the mass excess is consistent with the Q-value in the Reaclib, however, as pointed out already in the literature, currently there are inconsistencies between these values. Because the reverse rates in Reaclib use the detailed balance principle with the Q-value from the Reaclib, there can be an inconsistency at the transition of NSE to the network equations caused by the inconsistent Q-values (see Fig. 2). Therefore, WinNet is able to calculate detailed balance with the Q-values obtained from the mass excess. We note that there is no optimal solution for this inconsistency. Using the Q-value from the mass excess will make the calculation consistent with NSE, but introduces an inconsistency with the forward rate as this was calculated on the basis of a different Q-value. Often, it is however more important to be consistent with the equilibrium values. As already mentioned, this inconsistency in the Reaclib database may be resolved in the future. However, one philosophy of the JINA Reaclib database is to have up to date nuclear masses. Recalculating all reaction rates whenever a new mass is available may not be feasible. To a certain degree this inconsistency may therefore always be present (Schatz 2022).
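For the special case of a reaction with equally many reactants and products (n_d = 0 in Eq. 21) and no identical particles, the on-the-fly reverse rate reduces to statistical-weight, mass-ratio, and exp(-Q/(k_B T)) factors. A sketch with made-up inputs (the g values here are assumed to already contain the partition functions G_i(T)):

```python
# Sketch of Eq. (21) for a two-to-two reaction (n_d = 0, no identical nuclei):
# reverse = forward * (prod g_reac / prod g_prod)
#                   * (prod m_reac / prod m_prod)^(3/2) * exp(-Q/(kB*T)).
import math

KB_MEV_PER_GK = 0.08617343  # Boltzmann constant in MeV per GK

def reverse_rate(forward, g_reac, g_prod, m_reac, m_prod, q_mev, t9):
    """g_*: statistical weights times partition functions; m_*: masses in amu;
    q_mev: Q-value of the forward reaction in MeV; t9: temperature in GK."""
    stat_factor = math.prod(g_reac) / math.prod(g_prod)
    mass_factor = (math.prod(m_reac) / math.prod(m_prod)) ** 1.5
    return forward * stat_factor * mass_factor * \
        math.exp(-q_mev / (KB_MEV_PER_GK * t9))

# A positive Q-value of 1 MeV strongly suppresses the reverse channel at 1 GK:
rv = reverse_rate(1.0, [2.0, 1.0], [2.0, 1.0], [1.0, 56.0], [1.0, 56.0],
                  q_mev=1.0, t9=1.0)
```

At Q = 0 and identical statistical factors the reverse rate equals the forward rate, as it must.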
In any case, the advantage of an on-the-fly calculation of reverse rates is also given when using tabulated rates. For these rates, a tabulation of forward and reverse rates may break the detailed balance principle and it can be more consistent to calculate the reverse rate based on the tabulation of the forward rate.

Nuclear statistical equilibrium

For high temperatures in explosive environments, typically in excess of about 6 GK, reactions mediated by the strong and electromagnetic interaction are in equilibrium. For these conditions one can simplify the treatment, replacing the reaction network equations by utilizing an equilibrium approach, which can be expressed in terms of the chemical potentials of the nuclei

\mu(Z,N) = Z\mu_p + N\mu_n,   (22)

where \mu(Z,N) is the chemical potential for a nucleus with mass number A = Z + N, \mu_n the chemical potential of neutrons and \mu_p the chemical potential of protons. For low enough densities, nucleons (fermions) are non-degenerate and therefore described well by the Maxwell-Boltzmann statistics. Introducing this for the related chemical potentials in Eq. (22) leads to the so called Saha equations (for a detailed derivation of nuclear statistical equilibrium see e.g. Hix & Thielemann 1999; Iliadis 2015; for an approach using detailed balance, e.g., Clayton 1968):

Y(Z,N) = g_{Z,N}\, G_{Z,N}(T)\, (\rho N_A)^{A-1}\, \frac{A^{3/2}}{2^A} \left(\frac{2\pi\hbar^2}{m_u k_B T}\right)^{\frac{3}{2}(A-1)} \exp\left[B_{Z,N}/(k_B T)\right]\, Y_n^N\, Y_p^Z,   (23)

with the spin factor g_{Z,N} = 2J_{Z,N} + 1, where J_{Z,N} is the spin of the ground state, the partition function G_{Z,N}, and binding energy of a nucleus B_{Z,N}. Furthermore, additional constraints of mass conservation and charge neutrality hold:

\sum_i A_i Y_i = 1 \quad\text{(mass conservation)}   (24)
\sum_i Z_i Y_i = Y_e \quad\text{(charge neutrality)}.   (25)

This set of equations has two unknowns, namely the abundances of neutrons and protons Y_n and Y_p, because temperature, density and electron fraction are assumed to be known quantities (e.g., from a hydrodynamical simulation). The composition is a function of (\rho, T, Y_e) only. Especially, no information of the past behavior is necessary to determine the composition.
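A toy illustration (not WinNet's solver) of solving the constraints (24)-(25) for Y_n and Y_p with a Newton-Raphson iteration: a hypothetical composition of only neutrons, protons, and alpha particles, with the Saha relation (Eq. 23) condensed into a single made-up constant C that stands in for all (rho, T)-dependent prefactors:

```python
# Toy NSE: Y_alpha = C * Yn^2 * Yp^2 (condensed Saha relation), solved with
# Newton-Raphson on  F1 = Yn + Yp + 4*Ya - 1  (mass conservation, Eq. 24)
#                    F2 = Yp + 2*Ya - Ye      (charge neutrality, Eq. 25).
def solve_toy_nse(ye, c, tol=1e-12, max_iter=100):
    yn, yp = 1.0 - ye, ye          # start from free nucleons, as in the text
    for _ in range(max_iter):
        ya = c * yn**2 * yp**2
        f1 = yn + yp + 4.0 * ya - 1.0
        f2 = yp + 2.0 * ya - ye
        if abs(f1) < tol and abs(f2) < tol:
            break
        j11 = 1.0 + 8.0 * c * yn * yp**2   # dF1/dYn
        j12 = 1.0 + 8.0 * c * yn**2 * yp   # dF1/dYp
        j21 = 4.0 * c * yn * yp**2         # dF2/dYn
        j22 = 1.0 + 4.0 * c * yn**2 * yp   # dF2/dYp
        det = j11 * j22 - j12 * j21        # 2x2 solve via Cramer's rule
        yn -= (f1 * j22 - f2 * j12) / det
        yp -= (f2 * j11 - f1 * j21) / det
    return yn, yp, c * yn**2 * yp**2

yn, yp, ya = solve_toy_nse(ye=0.5, c=1.0e4)
```

As in the text, the quality of the initial guess controls convergence; here the free-nucleon guess suffices for this smooth toy problem.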
The system of equations (24) and (25) is solved with a Newton-Raphson scheme. The convergence depends hereby dominantly on the initial guess. In WinNet this guess is obtained by starting to calculate NSE at a high temperature and descending to lower temperatures, taking the results of the higher temperatures as initial values for the lower ones. The initial composition at the starting temperature is assumed to consist of nucleons only, with Y_n = 1 - Y_e and Y_p = Y_e. Weak reactions are evolved with a simplified reaction network that includes only these reactions in Eq. (20). After a timestep a new electron fraction is determined using Eq. (25) and the composition is recomputed assuming NSE. This assumes that strong and electromagnetic reactions occur instantaneously following a weak reaction, consistently with the NSE assumption. The implementation of screening corrections in NSE is discussed later on in Sect. 2.5.

Nuclear heating

A proper consideration of the impact of the energy produced by nuclear processes in the hydrodynamical evolution requires the use of an in-situ network as discussed in the introduction. However, in post-processing network calculations it is commonly assumed that the nuclear energy generation mainly affects the evolution of the temperature (see e.g. Freiburghaus et al. 1999; Mueller 1986). In the following, we describe the general description of energy generation and its treatment in WinNet. The evolution of a fluid element under exchange of heat with the surroundings in a local inertial frame comoving with the fluid is given by the first law of thermodynamics

\mathrm{d}\epsilon + P\, \mathrm{d}\frac{1}{\rho} = \mathrm{d}q,   (26)

where \epsilon is the total energy (including rest-mass energy) per nucleon and \mathrm{d}q is the net heat gained per nucleon. This includes heat produced by shocks or viscous heating or loss by neutrinos when weak processes are considered.
Alternatively, if the fluid element is in equilibrium at all times we have

\mathrm{d}q = k_B T\, \mathrm{d}s + \sum_i \mu_i\, \mathrm{d}Y_i + \mu_e\, \mathrm{d}Y_e = k_B T\, \mathrm{d}s + \sum_i (\mu_i + Z_i\mu_e)\, \mathrm{d}Y_i,   (27)

with s the entropy per nucleon in units of k_B, and the sum runs over all nuclear species. The term \mu_e \mathrm{d}Y_e accounts for the contribution of electrons and positrons. Typically, the densities we are interested in are such that matter is transparent to neutrinos. To ensure this, within WinNet we include a user defined parameter to specify the density below which nuclear heating will be taken into account. The energy carried away by neutrinos per unit of time can be expressed as:

\dot q_{\nu,\mathrm{loss}} = -\sum_i \langle E_{\nu,i}\rangle\, \lambda_i\, Y_i,   (28)

where \langle E_{\nu,i}\rangle is the average energy of the neutrinos produced by electron capture or beta-decay of the nucleus i with rate \lambda_i and abundance Y_i. These quantities are provided in tabulations of weak interaction rates at finite temperature and density (see e.g., Langanke & Martínez-Pinedo 2001) and in global calculations of beta-decays for r-process nuclei (Marketin et al. 2016). For measured decays, the average neutrino energies are given by the ENSDF database (Brown et al. 2018). If we consider only beta-decays we can express the average energy of the neutrinos as a fraction f_i of the beta-decay Q-value Q_{\beta,i}:

\dot q_{\nu,\mathrm{loss}} = -\sum_i f_i\, Q_{\beta,i}\, \lambda_i\, Y_i.   (29)

Assuming that a constant fraction f of the energy is carried by neutrinos we have

\dot q_{\nu,\mathrm{loss}} = -f \sum_i Q_{\beta,i}\, \lambda_i\, Y_i.   (30)

A typical value of f for neutron-rich r-process nuclei is f = 0.4 (Marketin et al. 2016), as beta-decays populate mainly excited states in the daughter nuclei that later decay by either \gamma or neutron emission. In practice, within WinNet the average energy of neutrinos produced in the reaction can be taken from all aforementioned sources, and in case of an unknown average neutrino energy, a user defined f is assumed. Optionally, we also account for escaping thermally produced neutrinos, by e.g., bremsstrahlung or electron recombination, with the analytic fitting formulas of Itoh et al. (1996). Energy can not only leave the system by neutrinos, but also enter it.
When assuming that only neutrino reactions add additional energy to the system we obtain:

\dot q_{\nu,\mathrm{gain}} = \sum_i \langle E_{\nu,i}\rangle\, \langle\sigma_{\nu,i}\rangle\, \phi_\nu\, Y_i,   (31)

where \langle E_{\nu,i}\rangle is the average energy of the absorbed neutrino, \langle\sigma_{\nu,i}\rangle the neutrino average cross section, and \phi_\nu the neutrino number flux (see Sect. 4.2.4 for more details about the implementation of neutrino reactions). At the moment we include \dot q_{\nu,\mathrm{gain}} for charged-current reactions on nucleons only. When combining Eq. (27), (30), and (31) we obtain:

\frac{\mathrm{d}s}{\mathrm{d}t} = -\frac{1}{k_B T}\sum_i (\mu_i + Z_i\mu_e)\frac{\mathrm{d}Y_i}{\mathrm{d}t} + \frac{\dot q_\nu}{k_B T} = -\frac{1}{k_B T}\left[\sum_i (\mu_i + Z_i\mu_e)\frac{\mathrm{d}Y_i}{\mathrm{d}t} - (\dot q_{\nu,\mathrm{loss}} + \dot q_{\nu,\mathrm{gain}})\right],   (32)

where we obtain the electron chemical potential \mu_e from the EOS (Timmes & Arnett 1999) and where we have rewritten the chemical potentials of nuclei as

\mu_i = m_i c^2 - k_B T \ln\left[\frac{g_i G_i(T)}{\rho N_A Y_i}\left(\frac{m_i k_B T}{2\pi\hbar^2}\right)^{3/2}\right].   (33)

(The ENSDF data are accessed via the API of https://www-nds.iaea.org/relnsd/vcharthtml/api_v0_guide.html; see also https://cococubed.com/code_pages/nuloss.shtml.) Here m_i is the nuclear mass that we get from the atomic mass excess \Delta M(Z,N) by

m(Z,N)\, c^2 = \Delta M(Z,N) + A\, m_u c^2 - Z\, m_e c^2.   (34)

The mass excess from the latest atomic mass evaluation is tabulated in the Jina Reaclib database (Cyburt et al. 2010). Under NSE conditions Eq. (32) can be expressed as

\frac{\mathrm{d}s}{\mathrm{d}t} = -\frac{1}{k_B T}(\mu_p + \mu_e - \mu_n)\frac{\mathrm{d}Y_e}{\mathrm{d}t} - \frac{\dot q_\nu}{k_B T},   (35)

showing that only reactions that are not in equilibrium, i.e. weak processes that change Y_e as well as external heating, are responsible for the change in entropy. This result can be generalized also to r-process conditions for which (n,\gamma)(\gamma,n) equilibrium is valid. Hence, reactions in equilibrium do not introduce a change in entropy. At high densities neutrinos are characterized by a chemical potential \mu_\nu. In this case one obtains \mathrm{d}s/\mathrm{d}t = -(\mu_p + \mu_e - \mu_n - \mu_\nu)(\mathrm{d}Y_e/\mathrm{d}t)/(k_B T) - \dot q/(k_B T), which shows that chemical weak equilibrium, \mu_p + \mu_e = \mu_n + \mu_\nu, corresponds to a maximum of the entropy (Arcones et al. 2010). Within WinNet we solve Eq. (32) explicitly in a so called operator splitting method within the same Newton-Raphson scheme as the nuclear network equations (Eq. 20). The initial value of the entropy is determined using the Timmes EOS (Timmes & Arnett 1999).
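The constant-fraction loss term of Eq. (30) above is a simple weighted sum and can be sketched directly; all inputs below are made up:

```python
# Sketch of Eq. (30): neutrino energy-loss rate when a constant fraction f of
# each beta-decay Q-value is carried away by neutrinos (f = 0.4 is the typical
# value quoted for neutron-rich r-process nuclei).
def neutrino_loss_rate(q_beta, rates, abundances, f=0.4):
    """q_beta [MeV], rates [1/s], abundances: parallel lists over nuclei.
    Returns dq/dt in MeV per nucleon per second (negative = loss)."""
    return -f * sum(q * lam * y for q, lam, y in zip(q_beta, rates, abundances))

loss = neutrino_loss_rate(q_beta=[5.0, 10.0],
                          rates=[0.1, 0.01],
                          abundances=[1e-3, 2e-3])
```

For these two made-up nuclei the result is -0.4 * (5*0.1*1e-3 + 10*0.01*2e-3) = -2.8e-4 MeV per nucleon per second.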
After a timestep, once the new value of the entropy is determined, the temperature is determined using the Timmes EOS assuming that the density and composition remain constant. For conditions at which s ≈ 1-5 k_B per nucleon, the entropy is dominated by the contribution of nuclei and is very sensitive to the composition. Under these conditions, it is necessary to account for changes in the composition when searching for a new value of the temperature. This is currently not implemented in WinNet.

Coulomb corrections

Coulomb effects can significantly influence fusion processes in a hot stellar plasma. Electrons can be attracted by the positive charge of a nucleus and therefore shield and modify the Coulomb interactions between two nuclei. This modifies the nuclear reactions and makes charged particle reactions more likely. The effect can be approximated by correction factors, the so called screening corrections, which are an important ingredient in nuclear reaction network calculations (e.g., Salpeter 1954). The calculation of the correction factors depends on the temperature and density of the environment (e.g., Salpeter & van Horn 1969; Yakovlev & Shalybkov 1989; Ichimaru 1993; Yakovlev et al. 2006). Usually, three different screening regimes are distinguished: the weak screening, the intermediate screening, and the strong screening regime. The regimes are commonly separated in terms of the ion coupling parameter (e.g., Kravchuk & Yakovlev 2014)

\Gamma_{12} = \frac{2 Z_1 Z_2}{Z_1^{1/3} + Z_2^{1/3}}\, \frac{e^2}{k_B T}\left(\frac{4\pi n_e}{3}\right)^{1/3} \approx 4.5494\times 10^{-4}\, \frac{Z_1 Z_2}{Z_1^{1/3}+Z_2^{1/3}}\, \frac{(\rho Y_e)^{1/3}}{T_9},   (36)

with T_9 being the temperature in GK, n_e the electron number density defined as n_e = \rho Y_e N_A, Z_i the charge number of nucleus i, and e the elementary charge. For lower values of \Gamma_{12} the effect of screening becomes smaller. The weak screening regime applies for \Gamma_{12} \ll 1, the intermediate regime around \Gamma_{12} \approx 1, and the strong regime for larger values.
We do not solve the screening corrections numerically, which would be necessary to obtain the corrections for the strong screening regime. Instead, we have implemented a fitted function that was derived within Kravchuk & Yakovlev (2014). They express the so called screening enhancement factor as (Eq. 62 of Kravchuk & Yakovlev 2014) scr = exp Γ 12 0 + 5 8 2 2 + 63 128 4 4 ,(37) with defined as = 3 Γ 12 ,(38)where = 27 2 ( 1 2 ) 2 4 2 B ℏ 2 1/3 ≈ 4.2487 × 1 2 1 + 2 ( 1 2 ) 2 1 1/3 ,(39) with the nucleon number and the reduced mass . The fitting parameter 0 is expressed by the difference in Coulomb free energies which are defined by another fitted function that Kravchuk & Yakovlev (2014) take from Potekhin & Chabrier (2000): (Γ) = 1 √︁ Γ( 2 + Γ) − 2 ln √︂ Γ 2 + √︂ 1 + Γ 2 + 2 3 √ Γ − arctan √ Γ + 1 Γ − 2 ln 1 + Γ 2 + 3 2 ln 1 + Γ 2 4 .(40) Here, 1 = −0.907, 2 = 0.62954, 3 = 0.2771, 1 = 0.00456, 2 = 211.6, 3 = −0.0001, and 4 = 0.00462 and Γ is the ion coupling parameter for a one component plasma Γ = 5/3 2 (4 ) 1/3 3 1/3 B .(41) From this they obtain 0 = (Γ 1 ) + (Γ 2 ) − (Γ ) Γ 12 ,(42) where Γ 1 and Γ 2 are the ion coupling parameter of the reacting nuclei and Γ the ion coupling parameter of the compound nucleus. Furthermore, 2 and 4 in eq. (37) are defined as 2 = − 1 16 1 + 5/3 3 1 + (43) 4 = − 64 1 + 5/3 5 (1 + ) 11/3 .(44) The differences between the screening correction scheme of Kravchuk & Yakovlev (2014) that is implemented in W N and the one of S N (Lippuner & Roberts 2017) which uses a parametrization of Dewitt et al. (1973) is shown in Fig. 1. In the most relevant regime for nucleosynthesis calculations (i.e., 1 ≤ Γ 12 ≤ 200) all schemes show a good agreement ( Fig. 1). For higher values of Γ 12 > 200, the temperature is usually close to or even below the validity of the reaction rate databases (c.f., min = 10 −2 GK of the Reaclib reaction rate database, Cyburt et al. 2010). 
Screening corrections modify the reaction rates according to

\lambda_\mathrm{scr} = f_\mathrm{scr} \, \lambda . (45)

The implementation of screening with more than two reactants is realized in several steps. For three reactants, the screening correction of only two reactants is calculated and, in a next step, the correction of the third reactant with the summed mass and charge number of the first two reactants is calculated. This corresponds to forming a short-lived intermediate nucleus. The total correction is then given by the product of both correction factors f_\mathrm{scr}. In the case of NSE (see Sect. 2.3), screening corrections enter in the form of a change of the binding energy of a charged nucleus, i.e., the difference in the Helmholtz free energy due to the screening. Since all reactions are in equilibrium, we can assume that every nucleus is built by a series of proton captures and neutron captures, where the latter reactions are independent of screening corrections. To obtain a correction for the binding energy of a given nucleus with charge number Z, we therefore multiply the screening corrections f_\mathrm{scr} of the necessary (Z - 1) proton captures (note that XNet uses the same approach in NSE, see https://github.com/starkiller-astro/XNet/blob/master/doc/screening/Screening_for_NSE.pdf). The impact of screening and the consistency of the network at the NSE transition is shown in Fig. 2. Note that there exist other approaches that derive the screening corrections from the detailed balance principle (e.g., Kushnir et al. 2019) or from a global Coulomb correction (Bravo & García-Senz 1999). All these approaches are consistent with each other. When taking screening corrections into account, heavier nuclei are synthesized compared to the case without screening.

METHODS AND NUMERICAL TECHNIQUES

Code structure and flow diagram

In the following, we describe the control flow of WinNet (see Fig. 3). The code starts by reading a user-defined file in the initialization step.
This file contains runtime parameters such as paths to nuclear physics input data and other options. A full list of possible parameters is given in the documentation of the code. After the initialization, the evolution mode is chosen. This mode is set to either "Network" or "NSE" and depends on the temperature. The implementation of several modes is necessary as the most efficient approach to determine the composition changes with temperature. Whereas solving the full network equations in a temperature regime where an equilibrium holds can lead to arbitrarily small time steps, solving for NSE conditions at too low temperatures can lead to incorrect results. For both evolution modes, the temperature, density, and neutrino quantities (i.e., neutrino temperatures or energies and luminosities) are updated using either an interpolation (i.e., linear, cubic, Akima, modified Akima, Pchip) within the thermodynamic data of the Lagrangian tracer particle, analytic equations, or a user-defined extrapolation (i.e., adiabatic, exponential, free). In the network regime, updating the temperature depends on the input settings and includes some special cases. If the user allows a feedback of the nuclear energy release on the temperature, a differential equation for the entropy is solved explicitly together with the nuclear reaction network equations (see Sect. 2.4). After updating the temperature, density, and neutrino properties, the reaction network equations are solved numerically. For the network regime, the full set of coupled differential equations (including all reactions) is solved. In NSE, Eqs. (23)-(25) are solved for a given temperature, density, and electron fraction. The latter is evolved taking only weak reactions into account. If no convergence is achieved (the criteria are introduced in the following Sect. 3.2), the step size is halved and the iteration is repeated. Otherwise, an output is generated and the time is evolved (indicated by "rotate timelevels" in Fig. 3).
The main loop ends when a user-defined termination criterion is fulfilled. Before the code terminates, final output such as the final abundances and mass fractions is written. Due to the stiff behavior of the nuclear reaction network equations (Eq. 20), implicit/backward methods are necessary to integrate the ODE system. The general structure of the network is, however, independent of the chosen integration method.

Integration schemes

Regardless of the chosen integration method, WinNet uses the sparse matrix solver PARDISO (Schenk & Gärtner 2004), which is OpenMP parallelized. For a detailed description of the sparse format see, e.g., Hix & Thielemann (1999) or Winteler (2012). This sparse format brings a computational advantage for calculations with more than approximately 400 nuclei. In WinNet, the indices of possibly non-vanishing entries are calculated once at the beginning, and their values are updated in each step when solving the linear system. WinNet provides two methods to integrate the system, which are outlined in the following.

Implicit Euler

The implicit Euler method (see also, e.g., Hix & Thielemann 1999; Winteler 2012) is one of the simplest implicit integration methods. Nevertheless, it is sufficient for most calculations, especially when a large number of nuclei is involved. For a coupled ODE system we can formulate the problem of integrating the equations in the general form

\frac{\mathrm{d}Y_i}{\mathrm{d}t} = \dot{Y}_i = f_i(t, Y_1, ..., Y_N) , (46)

where N is the number of involved species and Y_i the abundance of species i. There are two possibilities to discretize this derivative. The simplest approach would be

\frac{Y_i(t + h) - Y_i(t)}{h} = f_i(t, Y_1, ..., Y_N) . (47)

When choosing a time step h, everything except Y_i(t + h) is known and one can integrate the ODE given an initial value of Y_i. However, this approach corresponds to an explicit Euler method, an integration scheme that can be numerically unstable for so-called stiff problems such as those present in reaction networks.
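The stability difference can be seen on the linear test problem dY/dt = -\lambda Y (a standard textbook illustration, not WinNet code): for \lambda h \gg 1 the explicit update oscillates and blows up, while the implicit update decays monotonically for any step size.

```python
def explicit_euler(y0, lam, h, steps):
    """Explicit Euler for dY/dt = -lam * Y: each step multiplies by
    (1 - lam*h), which diverges in magnitude once lam*h > 2."""
    y = y0
    for _ in range(steps):
        y += h * (-lam * y)
    return y

def implicit_euler(y0, lam, h, steps):
    """Implicit Euler, Y(t+h) = Y(t) + h * f(t+h), solved exactly for this
    linear problem: each step divides by (1 + lam*h), stable for any h > 0."""
    y = y0
    for _ in range(steps):
        y /= 1.0 + lam * h
    return y
```

With lam = 1e3 and h = 0.1 the true solution is essentially zero after one step; the explicit scheme instead grows without bound.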
Therefore, we discretize the derivative instead as

\frac{Y_i(t + h) - Y_i(t)}{h} = f_i(t + h, Y_1, ..., Y_N) . (48)

Note that here Y_i(t + h) as well as f_i(t + h, Y_1, ..., Y_N) is unknown. We can derive an iterative formula for the solution of Y_i(t + h). This is given by:

Y_i(t + h) = Y_i(t) + h \, f_i(t + h, Y_1, ..., Y_N) . (49)

To get a solution, we have to apply a root-finding algorithm; in WinNet we use the Newton-Raphson method. To apply it, we reformulate the problem:

0 = Y_i(t) + h \, f_i(t + h, Y_1, ..., Y_N) - Y_i(t + h) . (50)

Mathematically, a multidimensional Newton-Raphson method solves

\vec{F}(\vec{x}) = 0 , (51)

where we will later set \vec{x} to \vec{Y}. The Taylor series of \vec{F} can be expressed to first order as

F_i(\vec{x} + \Delta \vec{x}) = F_i(\vec{x}) + \sum_{j=1}^{N} J_{ij} \, \Delta x_j + \mathcal{O}(\Delta \vec{x}^{\,2}) = 0 , (52)

where J_{ij} is one entry of the Jacobian matrix containing the partial derivatives of \vec{F}, defined as J_{ij} = \partial F_i / \partial x_j. To find the root of \vec{F} we iterate

\vec{x}_{n+1} = \vec{x}_n + \Delta \vec{x} = \vec{x}_n - J(\vec{x}_n)^{-1} \cdot \vec{F}(\vec{x}_n) (53)

until convergence is reached. In a classical Newton-Raphson scheme the convergence criterion is given by |\vec{x}_{n+1} - \vec{x}_n| < \epsilon_\mathrm{NR}. In WinNet we implemented a different criterion that is based on mass conservation, using the mass fractions X_i = A_i Y_i (see Eq. 24):

\left| \sum_{i=1}^{N} X_i - 1 \right| = \left| \sum_{i=1}^{N} A_i Y_i - 1 \right| < \epsilon_\mathrm{NR} , (54)

where \epsilon_\mathrm{NR} < 10^{-5} is used per default in WinNet. As investigated by , this convergence criterion is sufficient for most nucleosynthesis calculations. Other convergence criteria such as |\vec{x}_{n+1} - \vec{x}_n| < \epsilon_\mathrm{NR} are often too strict and slow down the calculation significantly. We tested the convergence in more detail in Appendix A. By combining Eq. (53) and Eq. (49) we obtain

\vec{Y}^{(m+1)}_{n+1} = \vec{Y}^{(m)}_{n+1} - \left[ \frac{1}{h} \mathbb{1} - \frac{\partial \vec{f}}{\partial \vec{Y}} \left( \vec{Y}^{(m)}_{n+1} \right) \right]^{-1} \cdot \left[ \frac{\vec{Y}^{(m)}_{n+1} - \vec{Y}_n}{h} - \vec{f} \left( \vec{Y}^{(m)}_{n+1} \right) \right] . (55)

Compared to other numerical integration methods, no error estimate is available within the implicit Euler method. Therefore, the calculation of the time step is independent of an integration error. However, we can estimate a time step by limiting the maximum relative change of the abundances, \eta_\mathrm{Euler}, based on the current derivative.
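The convergence test of Eq. (54) is cheap to evaluate; a minimal sketch (function names are ours, not WinNet's):

```python
def mass_conservation_converged(abundances, mass_numbers, eps_nr=1e-5):
    """Accept the Newton-Raphson iterate once the mass fractions
    X_i = A_i * Y_i sum to unity within eps_nr (cf. Eq. 54)."""
    x_sum = sum(a * y for a, y in zip(mass_numbers, abundances))
    return abs(x_sum - 1.0) < eps_nr

# Pure 4He with Y(4He) = 0.25 conserves mass exactly
ok = mass_conservation_converged([0.25], [4])
```

Unlike a criterion on |x_{n+1} - x_n|, this test does not force tiny trace abundances to full convergence, which is why it is so much cheaper in practice.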
The approximate change within one time step is estimated from

\Delta Y_i(t) = Y_i(t + h) - Y_i(t) \approx \dot{Y}_i(t) \, h , (56)

\eta_\mathrm{Euler} = \max_i \left| 1 - \frac{Y_i(t + h)}{Y_i(t)} \right| , (57)

and therefore we obtain

h = \frac{\eta_\mathrm{Euler}}{\max_i \left| \dot{Y}_i(t) / Y_i(t) \right|} . (58)

The default value in WinNet for \eta_\mathrm{Euler} is set to a maximum change of 10%. In order to avoid rapid changes of the time step, it is additionally limited by the previous step size,

h = \min \left( c \, h_\mathrm{old}, \; \frac{\eta_\mathrm{Euler}}{\max_i \left| \dot{Y}_i(t) / Y_i(t) \right|} \right) , (59)

with a constant c > 1. Furthermore, only species with abundances higher than a threshold abundance are taken into account in the time step calculation. If the actual change is larger than expected, the calculation is repeated with a halved time step. This is schematically shown by the loop in Fig. 3. In order to get an adequate resolution for large temperature and density gradients, the step size is additionally restricted to a maximum change of temperature and density within one time step. Furthermore, the temperature change relative to the last Newton-Raphson iteration can be limited, in order to ensure the convergence of the entropy update from nuclear heating (Sect. 2.4).

Gear's Method

In contrast to Euler's method, Gear's method (Gear 1971, see also, e.g., Byrne & Hindmarsh 1975; Longland et al. 2014; Martin 2017) includes terms of higher order (see also Timmes 1999, for a discussion of the advantages of higher order solvers for nuclear reaction networks). In the following we denote the highest included order by q. It is a so-called predictor-corrector method, where in a first step a rough solution is guessed and in a second step this solution is corrected until a given precision is reached. The first prediction is based on information about the past behavior of the system. Therefore, the so-called Nordsieck vector

\vec{z}_n = \left[ \vec{Y}_n, \; h \dot{\vec{Y}}_n, \; \frac{h^2 \ddot{\vec{Y}}_n}{2!}, \; ..., \; \frac{h^q \vec{Y}^{(q)}_n}{q!} \right] (60)

is stored, where \vec{Y}_n are the abundances at the current time, \dot{\vec{Y}}_n, \ddot{\vec{Y}}_n, ..., \vec{Y}^{(q)}_n are the time derivatives of the abundances, and h = t_{n+1} - t_n is the current step size.
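The Euler step-size control of Eqs. (58)-(59) can be sketched as follows (a minimal sketch; the growth constant c and the abundance threshold are assumed values here, only \eta_\mathrm{Euler} = 0.1 is quoted in the text):

```python
def next_timestep(y, ydot, h_old, eta=0.1, c=2.0, y_thresh=1e-10):
    """Largest step allowing at most a relative change eta of any abundance
    above y_thresh, capped at c * h_old to avoid rapid step-size growth."""
    rates = [abs(f / yi) for yi, f in zip(y, ydot)
             if abs(yi) > y_thresh and f != 0.0]
    h = eta / max(rates) if rates else c * h_old
    return min(h, c * h_old)

# The fast-changing species (relative rate 0.5 / s) limits the step to 0.2 s;
# the trace species below the threshold is ignored.
h_new = next_timestep([1.0, 1e-12], [-0.5, 1.0], h_old=1.0)
```

The subsequent halve-and-retry loop (Fig. 3) then catches any step for which this a-priori estimate turns out to be too optimistic.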
In order to obtain the predictor step \vec{z}^{(0)}_{n+1}, the Nordsieck vector is multiplied by a (q+1) \times (q+1) Pascal triangle matrix defined by

P_{ij} = \begin{cases} 0 & \text{if } i < j \\ \frac{i!}{j!(i-j)!} & \text{if } i \geq j \end{cases} \quad \text{with } i, j \in [0, 1, ..., q] . (61)

Therefore, the predictor step is given by

\vec{z}^{(0)}_{n+1} = \vec{z}_n \cdot P , (62)

which is the Taylor series of \vec{Y} truncated at order q, written in matrix notation. To obtain an accurate solution for \vec{z}_{n+1}, the predictor step is iteratively corrected via

\vec{z}_{n+1} = \vec{z}^{(0)}_{n+1} + \vec{e}_{n+1} \cdot \vec{\ell} , (63)

with the correction vector \vec{e}_{n+1}. \vec{\ell} is a 1 \times (q+1) vector implicitly given by

\sum_{j=0}^{q} \ell_j x^j = \prod_{i=1}^{q-1} \left( 1 + \frac{x}{\xi_i} \right) , \quad x = \frac{t - t_{n+1}}{h} , \quad \xi_i = \frac{t_{n+1} - t_{n+1-i}}{h} . (64)

Here, we defined the vector \vec{\xi} storing the information of previous step sizes. The components of \vec{\ell} = [\ell_0(q), \ell_1(q), ..., \ell_j(q), ..., \ell_q(q)] are calculated as

\ell_0(q) = 1, \quad \ell_1(q) = \sum_{i=1}^{q-1} \xi_i^{-1}, \quad \ell_j(q) = \ell_j(q-1) + \ell_{j-1}(q-1)/\xi_{q-1}, \quad \ell_q(q) = \prod_{i=1}^{q-1} \xi_i^{-1} .

The correction vector \vec{e}_{n+1} is calculated using the same Newton-Raphson scheme as for the solution \vec{Y}_{n+1}. To obtain the composition of the next step,

\left( \mathbb{1} - \frac{h}{\ell_1} J \right) \vec{\Delta}^{(m)} = - \left[ \vec{Y}^{(m)}_{n+1} - \vec{Y}^{(0)}_{n+1} \right] + \frac{h}{\ell_1} \left[ \dot{\vec{Y}}^{(m)}_{n+1} - \dot{\vec{Y}}^{(0)}_{n+1} \right] , (65)

\vec{Y}^{(m+1)}_{n+1} = \vec{Y}^{(m)}_{n+1} + \vec{\Delta}^{(m)} (66)

is solved. Here, \vec{Y}^{(0)}_{n+1} and h \dot{\vec{Y}}^{(0)}_{n+1} are extracted from the first and second entries of \vec{z}^{(0)}_{n+1}. The index m counts the iterations, \vec{\Delta}^{(m)} is an iterative correction, and J is the Jacobian matrix

J_{ij} = \frac{\partial \dot{Y}^{(m)}_{i, n+1}}{\partial Y^{(m)}_{j, n+1}} . (67)

Calculating the Jacobian is one of the most expensive steps when solving the ODE system. We therefore tested an implementation of Broyden's method (Broyden 1965) to approximate the Jacobian instead of recalculating it in every iteration. This, however, did not lead to a performance improvement: due to rapid changes of the reaction rates and the feedback of the nuclear reactions on the temperature, more Newton-Raphson iterations were needed to obtain convergence, leading to an overall performance loss. After the Newton-Raphson iteration has converged, the correction vector

\vec{e}_{n+1} = \vec{Y}_{n+1} - \vec{Y}^{(0)}_{n+1} (68)

can be determined.
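The predictor step of Eqs. (61)-(62) is just a truncated Taylor expansion in matrix form; a small sketch (not WinNet source) for a scalar quantity:

```python
import math

def pascal_matrix(q):
    """(q+1)x(q+1) matrix P_ij = i!/(j!(i-j)!) for i >= j, else 0 (Eq. 61)."""
    return [[math.comb(i, j) if i >= j else 0 for j in range(q + 1)]
            for i in range(q + 1)]

def predict(nordsieck):
    """z(0)_{n+1} = z_n . P: advance the Nordsieck vector by one step h."""
    q = len(nordsieck) - 1
    p = pascal_matrix(q)
    return [sum(nordsieck[i] * p[i][j] for i in range(q + 1))
            for j in range(q + 1)]

# For Y(t) = t^2 the order-2 Nordsieck vector [Y, h*Y', h^2*Y''/2] is exact,
# so the prediction reproduces Y(t + h) and its derivatives exactly.
t, h = 1.0, 0.5
z_pred = predict([t ** 2, 2 * t * h, h ** 2])
```

For solutions that are not low-order polynomials, the prediction is only a starting guess, which the Newton-Raphson corrector of Eqs. (65)-(66) then refines.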
To obtain a sophisticated guess of the time step within a given tolerance, the error can be estimated by the truncation error +1 ( ) = − 1 ℓ 1 1 + =2 +1 − +1− − +1− −1 ì +1 . (69) The next time step is computed within a certain allowed tolerance Gear by ℎ = ℎ Gear max¯+ 1 ( ) 1/ +1 ,(70) where is a conservative factor usually chosen in the interval ∈ [0.1, 0.4]. As for the calculation of the step size in Eq. (59), only abundances above a certain threshold should contribute to the calculation of the new time step. Therefore, the truncation error is rescaled in order to prevent an overweighting of the change of very small abundances, smaller than a threshold limit (default in W N : 10 −10 ), , +1 =        , +1 / if > limit , +1 / limit if ≤ limit .(71) Besides the automatic control of the step size, the order can be selected automatically as well. For this we allow only order changes of ± 1. The error estimates for increasing and decreasing order are calculated by: +1 ( − 1) = − −1 =1 ℓ 1 ( − 1) ℎ ì ( ) +1 ! (72) +1 ( + 1) = − +1 ( +1 − +1 ) ( + 2)ℓ 1 ( + 1) 1 + =2 +1 − +1− − +1− ,(73) where and are defined as +1 = +1 ℎ +1 ℎ +1 (74) +1 = =1 ( + 1)! 1 + =2 +1 − +1− − +1− .(75) To obtain the most efficient way of calculating the solution of the ODE, the step size in Eq. (70) is calculated for order − 1, , and + 1, respectively. The order is chosen as the one providing the largest time step, ℎ = max(ℎ ( − 1), ℎ ( ), ℎ ( + 1)). Since the Nordsieck vector depends on the step size (see Eq. 60), it must be rescaled whenever the step size is changed: ì +1 = diag(1, , 2 , ..., ) · ì +1 ,(76) where = ℎ /ℎ. Also when the order decreases to − 1, the Nordsieck vector has to be rescaled. Therefore, we define a correction ì Δ = ì , +1 ,(77) where similar to Eq. (64), ì is implicitely defined as ∑︁ =0 = 2 −2 =1 ( + )(78) and its components are given by: 0 ( ) = 1 ( ) = 0 2 ( ) = −2 =1 , ( ) = −2 ( − 1) + −1 ( − 1), −1 ( ) = −2 ∑︁ =1 , ( ) = 1. 
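The step-size update of Eq. (70) then takes a simple form (a sketch; the exact placement of the conservative factor and its value of 0.25, chosen within the quoted interval [0.1, 0.4], are assumptions):

```python
def gear_timestep(h, trunc_err, eta_gear, q, conservative=0.25):
    """h' = conservative * h * (eta_gear / max|err|)^(1/(q+1)):
    shrink the step when the order-q truncation error estimate exceeds the
    tolerance, grow it when the estimate is comfortably below."""
    err_max = max(abs(e) for e in trunc_err)
    return conservative * h * (eta_gear / err_max) ** (1.0 / (q + 1))

# Error exactly at the tolerance: the step only picks up the safety factor
h_same = gear_timestep(1.0, [1e-6], eta_gear=1e-6, q=4)
```

Evaluating this for orders q-1, q, and q+1 and keeping the order with the largest admissible step implements the order control described above.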
Due to the implementation of higher orders, within Gear's method one is able to apply larger step sizes compared to the implicit Euler scheme, reducing the number of iterations drastically without losing accuracy. However, for most calculations, more Newton-Raphson iterations are necessary, resulting in similar or even higher computational costs. The difference between the implicit Euler and Gear's method is discussed in more detail in Appendix A.

REACTION NETWORK INPUTS

Lagrangian tracer particles

The nuclear reaction network equations (Eq. 20) contain a dependency on the temperature and density of the environment. To get an initial composition from NSE, additionally the electron fraction is necessary. These quantities therefore have to be recorded from a simulation of an astrophysical scenario. This is often done in terms of Lagrangian tracer particles within the hydrodynamic simulation. These particles (also called trajectories or tracers) are passively advected within the (M)HD simulation, tracing all relevant quantities such as the time, temperature, density, electron fraction, and neutrino properties. WinNet is a so-called single-zone code, i.e., tracer particles cannot interact with each other. This assumption is valid if the nuclear burning timescales are much faster than other timescales changing the abundances (e.g., diffusion). Therefore, for the majority of explosive environments we can use a single-zone reaction network; for some cases, however, such as hydrostatic oxygen burning, it has to be taken with care (Hix & Thielemann 1999). There have been several studies on the uncertainties of the tracer particle method. The necessary number of tracer particles to achieve convergence has been studied, e.g., in Seitenzahl et al. (2010) and Nishimura et al. (2015). Also the initial placement of the tracer particles can have an impact on the convergence of the result (Bovard & Rezzolla 2017).
A detailed comparison between setting tracers in contrast to calculating the nucleosynthesis inside the hydrodynamical simulation has been presented in the context of CC-SNe by Harris et al. (2017) and by Navó et al. (2022). Additionally, Sieverding et al. (2022) studied the impact of obtaining tracers in a post-processing step after the calculation of a hydrodynamic model from simulation snapshots.

Temperature and density regimes

During its evolution, a tracer particle can undergo different temperature regimes, and therefore different approaches are required to obtain the composition within a given time step. In WinNet, three temperature regimes are distinguished: the regime of NSE, the intermediate temperature regime, and the cold temperature regime, schematically shown in Fig. 4. In the regime of nuclear statistical equilibrium, the network equations are only solved for weak reactions. Instead of calculating also the strong reactions, an equilibrium is assumed (Sect. 2.3). When the conditions are below a certain temperature threshold T_NSE, the nuclear reaction network is solved for all nuclear reactions. The transition temperature between these regimes can be chosen individually, depending on whether the transition occurs from hot to intermediate temperatures (T_NSE,c) or from intermediate to hot temperatures (T_NSE,h, see Fig. 4). The exact temperatures of the transitions depend on the environment (e.g., Khokhlov 1991). The reason for having two transition temperatures is mainly motivated by the use of a feedback of the nuclear energy on the temperature (Sect. 2.4). In this case, a slight inconsistency at the interface between the hot and intermediate regime (see Sect. 2.2) may cause fluctuations in the temperature that can lead to an infinitesimally small time step when using only one transition temperature. When the temperature drops below T = 10^{-2} GK, all reaction rates are frozen at the lower validity limit of the JINA Reaclib reactions (brown region in Fig.
4, Cyburt et al. 2010). Often, the Lagrangian tracer particle finishes before the nucleosynthesis is completed and an extrapolation of the thermodynamic conditions is required (dotted line in Fig. 4). The details of these assumptions can have an impact on the final yields (see also Harris et al. 2017) and should be chosen according to the environment, e.g., a homologous expansion for CC-SNe or a free expansion for the dynamical ejecta of NSMs.

Reaction rates

Although all nuclei are connected to each other by nuclear reactions, in practice most of the reactions are negligible. The most important reactions for astrophysical environments are those that involve nucleons or \alpha-particles, decays, neutrino reactions, electron- and positron captures, and fission reactions (Fig. 6). There exist many formats for the reaction rates. WinNet is built around the Reaclib reaction rate library, and this library usually contains the majority of reactions (Cyburt et al. 2010). However, other formats are supported as well, e.g., tabulated reaction rates from the TALYS code (Koning et al. 2019). Rates given in different formats are either added to or merged into the list of all rates within WinNet. In this case, the different formats have different priorities, starting with the Reaclib reactions at the lowest priority. If, in addition, a rate is also included in the theoretical \beta^+, \beta^-, electron-capture, and positron-capture rates, it is replaced once again. The priority of the individual rates is shown in Fig. 5. We note that WinNet does not perform any evaluation of the reliability of a rate. If a rate is contained multiple times in different formats, it is the user's responsibility to choose the desired rate, either fully automatically using the one with the highest priority as in Fig. 5 or by deleting unwanted rates from high-priority formats. The modular structure of WinNet allows an easy implementation of other popular reaction rate formats.
In the following we give a short overview of the currently supported file formats.

Reaclib file format

Most of the nuclear reaction rates are given in the form of seven fit parameters a_0, ..., a_6, the so-called Reaclib format (Cyburt et al. 2010). The reaction rate is calculated according to

\lambda = \exp \left[ a_0 + \sum_{i=1}^{5} a_i \, T_9^{(2i-5)/3} + a_6 \ln T_9 \right] , (79)

to be multiplied by the partition functions. The fits of the reaction rates are valid between 10^{-2} GK \leq T \leq 10^{2} GK. For lower temperatures, within WinNet, the rates are kept constant. At higher temperatures, usually NSE is assumed, which only depends on the binding energies and partition functions. Each reaction belongs to a specific chapter in the Reaclib tables as given in Table 1. The Reaclib chapters correspond to different one-, two-, and three-body terms in Eq. (20), where each of these terms in the summation can include different numbers of reaction products. Another Reaclib format variant includes chapters 8 and 9 together and does not include chapters 10 and 11. WinNet supports and automatically detects both options. The Reaclib reaction rate database contains only experimental \alpha-decays. To make the \alpha-decays more complete, WinNet is able to calculate additional \alpha-decays with the Viola-Seaborg formula (e.g., Viola & Seaborg 1966; Sobiczewski et al. 1989; Brown 1992; Sahu & Bhoi 2016). We provide rate tables of \alpha-decays using the parametrization of Dong & Ren (2005).
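Evaluating the seven-parameter fit of Eq. (79) is straightforward (a minimal sketch, not WinNet source; the partition-function factors are omitted):

```python
import math

def reaclib_rate(a, t9):
    """Reaclib fit: lambda = exp(a0 + sum_{i=1..5} a_i T9^((2i-5)/3) + a6 ln T9),
    i.e. powers T9^-1, T9^-1/3, T9^1/3, T9, T9^5/3 for a1..a5."""
    exponent = a[0] + a[6] * math.log(t9)
    for i in range(1, 6):
        exponent += a[i] * t9 ** ((2 * i - 5) / 3.0)
    return math.exp(exponent)

# A temperature-independent rate: only a0 is non-zero
lam_const = reaclib_rate([math.log(2.0), 0, 0, 0, 0, 0, 0], 3.7)
```

Setting all parameters except a_0 to zero gives a constant rate (e.g. a decay), while a_6 = 1 alone gives a rate proportional to T_9, which is a convenient consistency check for any implementation.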
(Fig. 6 shows the reactions connecting a nucleus (Z, N) with its neighbors: (n, \gamma)/(\gamma, n), (p, \gamma)/(\gamma, p), (\alpha, \gamma)/(\gamma, \alpha), (n, p)/(p, n), (\alpha, n)/(n, \alpha), (\alpha, p)/(p, \alpha), and \beta^\pm-decays.)

For Z > 84 and N > 126, they fitted experimentally determined \alpha-decays with

\log_{10} T_{1/2} = (a Z + b) \, Q_\alpha^{-0.5} + (c Z + d) + h_\mathrm{log} , (80)

where Z is the proton number of the decaying nucleus, Q_\alpha the Q-value of the decay, and the parameters a = 1.64062, b = -8.54399, c = -0.19430, and d = -33.9054 were derived through least-squares fitting. Additionally, the so-called hindrance factor h_\mathrm{log} was fitted (Eq. 81); it vanishes for nuclei with even Z and even N and takes fitted constant values for nuclei with odd Z and/or odd N. An obvious consequence of this parametrization is that \alpha-decays happen on shorter timescales if Q_\alpha is large or, in other words, they are more relevant for regions with high Q_\alpha (c.f., upper and middle panels of Fig. 8). It has been pointed out that this fit is only valid within the fitted region; other regions need a separate fit. To obtain a valid fit also in the other regions, we use the masses and experimental \alpha-decay half-lives of the Reaclib. We therefore use the above parameters only for nuclei with Z \geq 82 and N \geq 126, while we use the parameters of Tab. 2 for the other regions. This fit over four individual regions of the nuclear chart, corresponding to the regions between magic numbers, is in much better agreement with the experimental half-lives (Fig. 7). Still, some deviations of around one to two orders of magnitude are present around the magic numbers. When comparing all available experimental \alpha-decays with the calculated ones we obtain standard deviations of \sigma_\mathrm{Z even, N even} = 0.38, \sigma_\mathrm{Z odd, N even} = 1.61, \sigma_\mathrm{Z even, N odd} = 0.93, and \sigma_\mathrm{Z odd, N odd} = 0.82.
The large standard deviation of \sigma_\mathrm{Z odd, N even} is driven by the decay of 153Lu, whose half-life differs by more than 16 orders of magnitude (3.9 \times 10^{16} s versus an experimental value of \sim 1.3 s). Note that 153Lu has a magic neutron number of 82; nevertheless, the difference between the Viola-Seaborg formula and the experimental value is quite remarkable and indeed possibly a result of an outdated rate in the Jina Reaclib, which uses experimental data last evaluated in 2017. The latest experimental data from 2019 indicates that this nucleus decays entirely by ec/\beta^+-decay, which would agree with the large half-life obtained with our fitted formula. When removing this nucleus from the calculation of the standard deviation, it reduces to \sigma_\mathrm{Z odd, N even} = 0.64. We therefore have excluded it from our least-squares fit. The obtained half-lives are illustrated in Fig. 8. The additional \alpha-decays are mostly located on the proton-rich side of the valley of stability or among very heavy nuclei. They therefore impact the nucleosynthesis in very neutron-rich conditions that synthesize elements in the very heavy region, or possibly during the \gamma- or \nu p-process where heavier nuclei get photodisintegrated and the nucleosynthetic flow moves along the proton-rich side. We note that one could add proton-emitters (nuclei that decay by emitting a proton without previously undergoing \beta-decay) in a similar fashion. This decay would be most relevant for nucleosynthetic paths along the proton dripline at low mass numbers. There are some works on predicting the half-life of this decay (e.g., Basu et al. 2005; Qi et al. 2012; Saxena et al. 2023). However, proton emission is somewhat more complex to describe and depends more strongly on the Q-value as well as on often unknown properties such as the angular momentum transfer of the decay. Furthermore, the conditions under which these reactions are relevant are quite exotic and we therefore did not attempt to include them.
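The Viola-Seaborg parametrization above can be sketched as follows (the a, b, c, d values are those quoted for the Z > 84, N > 126 region; the hindrance factor is passed in, defaulting to the even-even value of zero):

```python
import math

A_VS, B_VS, C_VS, D_VS = 1.64062, -8.54399, -0.19430, -33.9054

def log10_alpha_half_life(z, q_alpha_mev, h_log=0.0):
    """Viola-Seaborg systematics:
    log10 T_1/2 [s] = (a*Z + b) * Q_alpha^-0.5 + (c*Z + d) + h_log."""
    return ((A_VS * z + B_VS) / math.sqrt(q_alpha_mev)
            + (C_VS * z + D_VS) + h_log)

# Larger Q-values give much shorter alpha-decay half-lives
t_low_q = log10_alpha_half_life(84, 5.0)
t_high_q = log10_alpha_half_life(84, 9.0)
```

The steep Q^{-1/2} dependence in the exponent is what makes the predicted half-lives so sensitive to the underlying mass model, and hence to the choice of mass table used for the fit.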
On a technical level, within WinNet one can decide whether the \alpha-decay rates should only supplement the Reaclib rates or also replace them. The latter may become interesting in the future in case other theoretical \alpha-decays are added to the Reaclib. In addition, one can adjust between which proton numbers \alpha-decays are added. Within WinNet we provide a file with the \alpha-decay rates using the parametrization presented here. For the fit as well as the rates, we used the masses of the Jina Reaclib as input.

Tabulated rates

Another possible format is given in the form of a tabulation. This format is common for nuclear reaction codes such as TALYS (Koning et al. 2019). Every rate is tabulated on 30 temperature grid points from 10^{-4} to 10 GK and, identical to the Reaclib format, assigned to a certain chapter as given in Table 1. Reaction rates that are given in tabulated form replace the respective reaction rates in Reaclib format. Reverse reactions can be given in tabulated form or calculated with the theory of detailed balance within WinNet. These calculations will replace all reverse rates that are given in the reaction rate library.

Neutrino reactions

Neutrino reactions are tabulated versus the neutrino temperature from 2.8 to 10 MeV on 7 grid points. These reaction rates enter the nuclear reaction network as an additional term of the form

\frac{\mathrm{d}Y_i(t)}{\mathrm{d}t} = Y_i(t) \, \langle \sigma_\nu \rangle(t) \, \phi_\nu(t) , (82)

with \langle \sigma_\nu \rangle the neutrino cross section averaged over the normalized neutrino spectrum, which depends on the neutrino temperature T_\nu(t). Furthermore, \phi_\nu = L_\nu / (\langle E_\nu \rangle \, 4 \pi r^2) is the neutrino number flux. WinNet includes a tabulation where the neutrino reactions on nucleons have been calculated as described in, e.g., Burrows et al. (2006), with the weak magnetism and recoil corrections as in Horowitz (2002). Within WinNet we provide the rate table as well as a python script to calculate it.
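Rates tabulated on a temperature grid, such as the TALYS-format tables above, have to be interpolated at runtime. A plausible sketch (the scheme here, linear in log T-log \lambda with constant extrapolation outside the grid, is our assumption; WinNet's interpolation options are configurable):

```python
import bisect
import math

def interp_rate(t9, grid_t9, grid_rate):
    """Interpolate log10(rate) linearly in log10(T9) on a tabulated grid,
    holding the rate constant outside the tabulated range."""
    if t9 <= grid_t9[0]:
        return grid_rate[0]
    if t9 >= grid_t9[-1]:
        return grid_rate[-1]
    i = bisect.bisect_right(grid_t9, t9) - 1
    f = ((math.log10(t9) - math.log10(grid_t9[i]))
         / (math.log10(grid_t9[i + 1]) - math.log10(grid_t9[i])))
    log_rate = ((1.0 - f) * math.log10(grid_rate[i])
                + f * math.log10(grid_rate[i + 1]))
    return 10.0 ** log_rate

# Halfway (in log T) between two grid points, the rate is the geometric mean
rate_mid = interp_rate(math.sqrt(0.1), [0.1, 1.0, 10.0], [1.0, 10.0, 100.0])
```

Interpolating in log-log space is the natural choice for reaction rates, which can vary by many orders of magnitude between neighboring grid points.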
In principle the full neutrino energy distribution could be taken from the hydrodynamic simulation and an appropriate neutrino temperature could be calculated based on it. In WinNet, the average neutrino energy is used interchangeably with the neutrino temperature by assuming a Fermi-Dirac distribution of the neutrino energies and a zero chemical potential of the neutrinos. In this case,

\frac{\langle E_\nu \rangle}{k_\mathrm{B} T_\nu} = \frac{F_3(0)}{F_2(0)} = \frac{7 \pi^4}{180 \, \zeta(3)} \approx 3.1513 (83)

holds. Here \zeta is Riemann's zeta function and the F_k are the Fermi integrals, defined for zero degeneracy as

F_k(0) = \int_0^\infty \frac{x^k}{\exp(x) + 1} \, \mathrm{d}x . (84)

We note that current CC-SNe simulations hint towards slight deviations from the Fermi-Dirac distribution. Such a deviation can have an impact on the energy-integrated neutrino cross sections, which we do not take into account with the provided tabulation (e.g., Tamborra et al. 2012; Mirizzi et al. 2016; Sieverding et al. 2019). For neutrino reactions with heavier nuclei, WinNet is able to include neutrino interactions that are provided in a separate file. This file is taken from Sieverding et al. (2018) and includes charged-current as well as neutral-current reactions. All of these reactions contain different reaction channels allowing for the ejection of light particles such as neutrons, protons, and \alpha-particles. An overview of these cross sections is illustrated in Fig. 9, where we show the cross sections summed over all reaction channels and the average number of ejected neutrons for neutral-current reactions at T_\nu = 5 MeV. Including neutrinos in the calculation requires additional information in the form of either a tabulation or a parametrization of the neutrino properties. In the case of charged-current reactions, only the properties of electron neutrinos and anti-neutrinos have to be provided. Neutral-current reactions need additional properties of muon and tau (anti-)neutrinos.
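The ratio in Eq. (83) can be verified numerically (a sketch using the zero-degeneracy Fermi integrals F_k(0) = \int x^k/(e^x + 1) dx and a simple midpoint rule):

```python
import math

def fermi_integral(k, n=200000, x_max=60.0):
    """F_k(0) = integral_0^inf x^k / (exp(x) + 1) dx via the midpoint rule;
    the integrand is negligible beyond x_max = 60."""
    dx = x_max / n
    return sum(((j + 0.5) * dx) ** k / (math.exp((j + 0.5) * dx) + 1.0)
               for j in range(n)) * dx

# <E_nu> / (k_B T_nu) for a zero-chemical-potential Fermi-Dirac spectrum
mean_energy_ratio = fermi_integral(3) / fermi_integral(2)
```

The result reproduces 7\pi^4/(180 \zeta(3)) \approx 3.1513, i.e. the familiar rule of thumb that the mean neutrino energy is about 3.15 times the spectral temperature.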
Within WinNet it is assumed that the (anti-)neutrino energies (or temperatures) of muon and tau neutrinos are identical (T_{\nu_\mu} = T_{\nu_\tau}), and they are thus included as a single species \nu_x, whose temperature T_{\nu_x} has to be provided. Furthermore, the summed luminosities have to be provided (L_{\nu_x} = L_{\nu_\mu} + L_{\nu_\tau}) for neutrinos and antineutrinos. Treating muon and tau (anti-)neutrinos effectively together as described above may be sufficient, as current CC-SNe simulations do not really distinguish between these neutrino flavors and little has been done in this direction so far (however, see Bollig et al. 2017).

Theoretical weak rates

Theoretical models, e.g., shell-model calculations, are used to obtain weak rates for stellar conditions. These rates are listed on a temperature and electron density grid (e.g., Fuller et al. 1985; Oda et al. 1994; Langanke & Martínez-Pinedo 2001; Pruet & Fuller 2003; Suzuki et al. 2016). A direct tabulation of the rates, however, can lead to large interpolation errors (Fuller et al. 1985). Therefore, the rates are not tabulated directly; instead, an effective log \langle ft \rangle^\mathrm{eff} is stored. This can be converted to the actual rate via (see e.g., Langanke & Martínez-Pinedo 2001):

\lambda = \frac{\ln 2}{\langle ft \rangle^\mathrm{eff}} \, \Phi . (85)

Here, \Phi is the ground-state to ground-state phase space integral,

\Phi = \int_{w_l}^{\infty} w^2 \, (q + w)^2 \, S_e(w) \, \mathrm{d}w , \quad w_l = \max(q, 1) , (86)

with q = (m_i - m_f)/m_e the Q-value in units of the electron mass, and

S_e(w) = \frac{1}{\exp\!\left( \frac{w \, m_e c^2 - \mu_e}{k_\mathrm{B} T} \right) + 1} , (87)

with the electron chemical potential \mu_e. We note that these theoretical reaction rates usually neglect atomic electron capture, which becomes increasingly important at lower temperatures, e.g., for 56Ni. Therefore, WinNet contains the possibility to replace all theoretical decays and electron- and positron captures at low temperatures with the experimental decays provided in the Reaclib. WinNet supports an individual grid for each reaction for the tabulation of theoretical \beta^-- and \beta^+-decays and positron- and electron-capture rates. This is necessary as the different available tabulations were calculated on different temperature and log(\rho Y_e) grids.
We provide a table that was compiled out of various sources covering different regions of the nuclear chart (Fig. 10): FFN (Fuller et al. 1985), ODA (Oda et al. 1994), LMP (Langanke & Martínez-Pinedo 2001), PF (Pruet & Fuller 2003), and STN (Suzuki et al. 2016). Note that WinNet also uses electron capture rates on protons as well as positron captures on neutrons from this table (c.f. https://groups.nscl.msu.edu/charge_exchange/weakrates.html).

β-delayed neutron emission

The Reaclib file format only allows β-delayed neutron emissions up to three neutrons (Reaclib chapter 11, see Table 1). In practice, decays that emit only up to two neutrons are included. The probability of all other decay channels in the Reaclib format is added to the decay channel with three products. Especially when matter far from the valley of stability on the neutron-rich side is synthesized, β-delayed neutron emission of more than two neutrons can occur (e.g., Marketin et al. 2016; Möller et al. 2019). Therefore, WinNet supports a file format containing the half-lives of the nuclei and the different channel probabilities up to the β-delayed emission of ten neutrons. Optionally, average emitted neutrino energies can be provided in this file (to account for the energy loss when self-heating is enabled, see Sect. 2.4). Duplicates in Reaclib format will be replaced by the reaction rates in this format. Additionally, there exist user-defined parameters to allow for a controlled replacement of rates. With them, one can specify if, e.g., also experimentally measured decays should be replaced.

Fission reactions and fragments

There are various fission modes, of which WinNet includes three: spontaneous fission, neutron-induced fission, and β-delayed fission. In all of these cases, in addition to the probability to undergo fission, the resulting fission fragment distribution is of importance as well.
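The bookkeeping behind such a β-delayed neutron emission file can be illustrated in a few lines; the half-life and channel probabilities below are made-up numbers, not values from any actual data file:

```python
import math

# Hypothetical example: half-life and beta-delayed neutron emission
# probabilities P(beta xn) for x = 0..10 emitted neutrons (values invented).
half_life = 0.05                                 # seconds
p_xn = [0.30, 0.40, 0.20, 0.10] + [0.0] * 7      # must sum to 1

lam = math.log(2.0) / half_life                  # total decay constant

# Effective partial decay constants per channel and the average number
# of emitted neutrons per decay.
partial = [lam * p for p in p_xn]
avg_neutrons = sum(x * p for x, p in enumerate(p_xn))

print(avg_neutrons)  # 0.40*1 + 0.20*2 + 0.10*3 = 1.1
```

Splitting the total decay constant by the channel probabilities conserves the total decay rate while letting the network track how many free neutrons each decay releases.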
Investigations of fission barrier heights utilized in astrophysics have been performed from 1980 until today (Howard & Möller 1980; Myers & Świaţecki 1999; Mamdouh et al. 2001; Goriely et al. 2009; Giuliani et al. 2018a,b; Vassh et al. 2019; Giuliani et al. 2020). Neutron-induced cross section predictions (or also β-delayed fission) for astrophysical applications were treated, e.g., by Panov et al. (2005); Martinez-Pinedo et al. (2007); Panov et al. (2010); Erler et al. (2012); Giuliani et al. (2018a). Extended compilations have been provided and can be found in several databases (https://nucastro.org, https://www.jinaweb.org/science-research/scientificresources/data, and https://www-nds.iaea.org, including TALYS results). In the present paper, we provide only a limited set of fission inputs which are available within the WinNet package and are stored in a separate file. The format is similar to the Reaclib file format, but only the name of the parent nucleus is stored. WinNet includes the rates of Panov et al. (2005) for β-delayed fission and Panov et al. (2010) for neutron-induced fission. Reaction rates for spontaneous fission have been calculated with the semi-empirical formula of Khuyagbaatar (2020), using the fission barriers provided in Möller et al. (2015). These half-lives, together with experimentally measured ones, are shown in Fig. 11. While in Khuyagbaatar (2020) spontaneous fission half-lives were fitted to nuclei with even neutron and proton numbers only, we use the same equation for all nuclei. The products (or fission fragments) are described by a fission fragment distribution in a probabilistic way. They can either be described by an analytic formula (Kodama & Takahashi 1975; Panov et al. 2001) or by more complicated models (e.g., Kelic et al. 2009; Goriely et al. 2009; Mumpower et al. 2020). Within WinNet we include the fragment distributions of Kodama & Takahashi (1975), Panov et al. (2001), and Mumpower et al. (2020). As pointed out in Mumpower et al. (2020), the latter distribution should only be used for β-delayed and neutron-induced fission.
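To illustrate the probabilistic treatment of fission fragments, the following toy Python sketch distributes yields over two Gaussian peaks in fragment mass number. The peak position and width are illustrative placeholders and do not reproduce any of the published parametrizations named above:

```python
import math

def fragment_distribution(a_fiss, a_heavy=130.0, sigma=6.0):
    """Toy two-Gaussian fission fragment distribution: yields Y(A)
    normalized so that two fragments are emitted per fission and mass
    number is conserved on average.  Peak position and width are
    made-up placeholders, not a published parametrization."""
    a_light = a_fiss - a_heavy  # complementary light peak
    yields = {}
    for a in range(1, a_fiss):
        w = (math.exp(-0.5 * ((a - a_heavy) / sigma) ** 2)
             + math.exp(-0.5 * ((a - a_light) / sigma) ** 2))
        if w > 1e-12:
            yields[a] = w
    norm = 2.0 / sum(yields.values())  # two fragments per fission
    return {a: y * norm for a, y in yields.items()}

y = fragment_distribution(278)  # e.g. a fissioning nucleus with A = 278
```

Because the two peaks are placed symmetrically around A_fiss/2, the yields sum to two fragments per fission and the average fragment mass adds up to the mass of the fissioning nucleus.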
Therefore, WinNet combines these fragment distributions with the one of Kodama & Takahashi (1975) for spontaneous fission.

REACTION NETWORK APPLICATIONS

Example cases

In the following, we discuss several example cases calculated with WinNet that are available together with the code. These examples involve conditions of a variety of scenarios, namely the Big Bang (as described in Winteler 2012), the dynamic ejecta of a NSM (from Korobkin et al. 2012; Rosswog et al. 2013; Piran et al. 2013), the neutrino-driven wind of a NSM (Perego et al. 2014; Martin et al. 2015), the viscous disc ejecta of a NSM (Wu et al. 2016b), and the dynamic ejecta of a black hole-neutron star merger (Korobkin et al. 2012; Rosswog et al. 2013; Piran et al. 2013). Additionally, we provide various conditions within MR-SNe (Obergaulinger & Aloy 2017; Reichert et al. 2021, 2023), classical novae (José & Hernanz 1998; José 2022), the X-ray burst of an accreting neutron star (Schatz et al. 2002), complete Si burning within a CCSN (with a simple parametric model as described in Nadyozhin & Deputovich 2002; Woosley et al. 2002), the neutrino-driven wind within a CCSN (Bliss et al. 2018), the detonation phase of a type Ia supernova (with a parametric model as in Meakin et al. 2009), a main s-process (Cescutti et al. 2018; Cescutti 2022), a weak s-process (Hirschi et al. 2004; Nishimura et al. 2017a; Pignatari & Hirschi 2022), hydrostatic hydrogen burning, carbon-oxygen burning, and a simple i-process model (as described in Dardelet et al. 2015). All these conditions are examples in WinNet and should guide the user on how to use the code. It is noteworthy that the trajectories represent typical conditions in the scenarios and may be used for sensitivity studies, but they do not necessarily reflect the total yields that can be obtained when calculating the often thousands of trajectories of the individual scenarios.
Furthermore, a different nuclear physics input is used within the example cases, and we do not aim to exactly reproduce the abundances that have been obtained within the original publications. All example cases are very diverse in their involved conditions, and together they cover a large range of the nuclear chart. In the following sections we present only a subset of the aforementioned examples.

Big Bang nucleosynthesis

The synthesis of elements during the first minutes after the origin of our universe can be calculated with a relatively small network. Following Winteler (2012), we create a trajectory for a flat, isotropic, and homogeneous universe to describe the conditions during the Big Bang (see also Vonlanthen et al. 2009). Furthermore, we assume a freeze-out of weak reactions at T = 0.8 MeV. An important quantity is the initial baryon-to-photon ratio η, which was measured by the Planck satellite (5.96 × 10⁻¹⁰ ≤ η ≤ 6.22 × 10⁻¹⁰, Planck Collaboration et al. 2016). By creating one trajectory for each baryon-to-photon ratio we are able to connect Big Bang nucleosynthesis with measurements of abundances in stars and therefore probe the conditions of the Big Bang. For deuterium, the primordial abundance was determined to be Y(D)/Y(H) = (2.527 ± 0.03) × 10⁻⁵ (Cooke et al. 2018, orange band in Fig. 12). For deuterium there is a slight discrepancy between the photon-to-baryon ratio determined by the Planck satellite and the observed deuterium abundances. As the deuterium abundance is very sensitive to the d(p,γ)³He reaction rate, this discrepancy may vanish in the future with new experimentally determined reaction rates (Mossa et al. 2020; Moscoso et al. 2021). Here, we used the rate of Descouvemont et al. (2004) that is included in the JINA Reaclib. Furthermore, also observations of Y(D)/Y(H) differ among each other (e.g., Romano et al. 2003). The observed value of Y(³He)/Y(H) = (1.1 ± 0.2) × 10⁻⁵ (Bania et al. 2002) is in perfect agreement with the estimated value.
Also the value of Y(⁴He) = 1/4 × (0.2561 ± 0.0108)/Y(H) (Aver et al. 2010) is in agreement with our calculation (blue band in Fig. 12). The observed ⁷Li abundance (Y(⁷Li)/Y(H) = 1.23 (+0.68/−0.32) × 10⁻¹⁰, Ryan et al. 2000) is in vast discrepancy with the calculated value. This well-known problem is referred to in the literature as the lithium problem (see e.g., Fields 2011; Fields & Olive 2022, for reviews).

Main s-process

We added a trajectory of a main s-process to the example cases. This trajectory was used for a Monte Carlo sensitivity study in Cescutti et al. (2018) and can be accessed via Cescutti (2022). The trajectory was extracted from the ¹³C pocket after the 6th thermal pulse of a solar-metallicity, 3 M⊙ AGB star (for more details see the original publication). The final mass fractions agree well with the ones of Cescutti et al. (2018, see Fig. 13), given the fact that we do not attempt to use the exact same nuclear input.

Complete Silicon burning

Complete Si burning can be described by analytical models. For this, we assume that the time scale behaves according to the free-fall time scale (e.g., Arnett 1996):

τ ≈ 446 / √ρ s.    (88)

The temperature T and the density ρ are assumed to follow

T(t) = T_s e^{−t/(3τ)},    (89)
ρ(t) = ρ_s e^{−t/τ},    (90)

where the shock temperature can be defined as in, e.g., Nadyozhin & Deputovich (2002); Woosley et al. (2002):

T_s = 2.4 E_51^{1/4} R_0^{−3/4} GK,    (91)

with the explosion energy E_51 in units of 10⁵¹ erg, and an initial radius R_0 in units of 10⁸ cm. The shock density is given by the jump condition (ρ_s = 7 ρ_0). For an initial (pre-shock) density of ρ_0 = 10⁶ g cm⁻³, an initial radius of R_0 = 2 × 10⁸ cm, and an explosion energy of 10⁵² erg, we obtain ρ(t) = 7 × 10⁶ e^{−t/τ} g cm⁻³. When we further assume an electron fraction of Y_e = 0.498, as is typical in the Si shell, we obtain final abundances that are located around ⁵⁶Fe (Fig. 14).

νp-process

Neutrinos can be crucial to synthesize proton-rich isotopes. If the neutrino flux is strong enough, this can lead to a νp-process.
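The parametric Si-burning model of Eqs. (88)-(91) above is easily scripted. In the sketch below, the free-fall time scale is evaluated at the pre-shock density, which is an assumption on our part; all other numbers follow the text:

```python
import math

def shock_conditions(e51, r0_8, rho0):
    """Parametric explosion model of Eqs. (88)-(91):
    e51  : explosion energy in units of 10^51 erg
    r0_8 : initial radius in units of 10^8 cm
    rho0 : pre-shock density in g/cm^3
    Returns (T_s [GK], rho_s [g/cm^3], tau [s])."""
    t_s = 2.4 * e51**0.25 * r0_8**(-0.75)  # Eq. (91)
    rho_s = 7.0 * rho0                      # jump condition
    tau = 446.0 / math.sqrt(rho0)           # Eq. (88), pre-shock density assumed
    return t_s, rho_s, tau

def profile(time, t_s, rho_s, tau):
    # Eqs. (89)-(90): adiabatic expansion with T proportional to rho^(1/3)
    return t_s * math.exp(-time / (3.0 * tau)), rho_s * math.exp(-time / tau)

# Numbers from the text: rho_0 = 1e6 g/cm^3, R_0 = 2e8 cm, E = 1e52 erg
t_s, rho_s, tau = shock_conditions(10.0, 2.0, 1.0e6)
```

With these inputs the shock density reproduces the quoted ρ(t) = 7 × 10⁶ e^{−t/τ} g cm⁻³.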
The conditions for this are, for example, fulfilled in the MR-SNe model 35OC-RO of Obergaulinger & Aloy (2017) and Reichert et al. (2021). The nucleosynthetic flow with and without neutrinos is shown in Fig. 15.

The weak r-process

The weak r-process, i.e., a synthesis of elements up to the second r-process peak (A ∼ 130), can occur in moderately neutron-enriched environments. These conditions can be found in a variety of astrophysical host scenarios. Here, we show an exemplary trajectory from a MR-SNe (Obergaulinger & Aloy 2017; Reichert et al. 2021), the neutrino-driven wind of a NSM (Perego et al. 2014; Martin et al. 2015), and the neutrino-driven wind of a CC-SNe (Bliss et al. 2018). The final mass fractions are shown in Fig. 16.

Strong r-process

Calculating a full r-process is one of the most challenging nuclear reaction network calculations. Here we include ∼ 6500 nuclei up to ³³⁷Rg. The astrophysical host event of the r-process is not fully understood yet. Very promising candidates are NSMs, NSBH mergers, or MR-SNe. For these scenarios, we show the results of individual trajectories in Fig. 17. These trajectories come from a variety of simulations and were presented in Winteler et al. (2012); Korobkin et al. (2012); Rosswog (2013); Piran et al. (2013); Wu et al. (2016a); Bovard et al. (2017); Obergaulinger & Aloy (2017); Reichert et al. (2021); Obergaulinger & Aloy (2021); Reichert et al. (2023).

Test scenarios

We have implemented a series of tests in order to monitor the performance and consistency of WinNet. The tests cover a range of numerical and physical scenarios, which we present in this section. Many of the tests are designed in a way that an analytic calculation of the result is also possible. Furthermore, we implemented technical test cases, such as reading the initial composition, the correct reproduction of the input thermodynamic conditions, and the correct implementation of the different reaction rate formats, which we will not elaborate on in the following.

β-decays

A simple nucleosynthesis calculation is given by a β-decay. We tested the decay of neutrons to protons, as well as the decay chain of ⁵⁶Ni. The results give an interesting insight into the accuracy of the integration using an implicit Euler integration scheme.
We recall that this scheme does not have any error estimate for the time step other than the convergence criterion of mass conservation (i.e., Σᵢ Xᵢ = 1, Eq. (54)). While this is common practice for calculations involving a large number of nuclei and reactions (e.g., Hix & Thielemann 1999), within the tested decays it leads to uncertainties. As an example, we show the time evolution of ⁵⁶Ni, ⁵⁶Co, and ⁵⁶Fe in Fig. 18. The discrepancies between the implicit Euler solution and the analytic solution can be reduced by choosing adapted smaller time steps, resulting from smaller values of ϵ_Euler (in the example ϵ_Euler = 10⁻¹ was used, see Eq. (57)). The example also shows the strength of the adaptive time step control within the Gear solver, which is able to stay close to the analytic solution. Another test is based on the β-delayed fission of ²⁹⁵Am. Identical to a normal β-decay, we can calculate the decay of this nucleus via

N(t) = N_0 e^{−λt},    (94)

with the decay constant λ. The products of this decay are determined by the fission fragment distribution, which can be calculated analytically as in Kodama & Takahashi (1975) or Panov et al. (2001). Additionally, we include the fission fragment distribution of Mumpower et al. (2020) for β-delayed and neutron-induced fission. This distribution spans a wide range of mass numbers. The different fragments for a simulation time of t = 10⁻² s are shown in Fig. 19. The calculated abundance pattern deviates less than 1% from the input fission fragment distributions.

Equilibrium cases

Useful test scenarios are cases where an equilibrium value is obtained. An equilibrium situation can be challenging for numerical solvers, as constant abundances appear like a reaction time scale that is approaching infinity (e.g., Hix & Thielemann 1999; Feger 2011). In the following, we present the case of an (n,γ)-(γ,n) equilibrium as well as equilibria obtained by electron- and positron-captures and neutrino absorption.
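The β-decay accuracy test above can be reproduced schematically. For dN/dt = −λN the implicit Euler update has the closed form N_{k+1} = N_k/(1 + λΔt), which the following sketch (an illustration, not WinNet's implementation) compares against the analytic solution of Eq. (94) for the first member of the ⁵⁶Ni chain:

```python
import math

def implicit_euler_decay(n0, lam, t_end, dt):
    """Integrate dN/dt = -lam*N with the implicit Euler scheme:
    N_{k+1} = N_k / (1 + lam*dt)."""
    n, t = n0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        n /= (1.0 + lam * h)
        t += h
    return n

lam = math.log(2.0) / (6.075 * 86400.0)  # 56Ni decay constant (T_1/2 = 6.075 d)
t_end = 10 * 86400.0                      # ten days
exact = math.exp(-lam * t_end)            # Eq. (94) with N_0 = 1
coarse = implicit_euler_decay(1.0, lam, t_end, 86400.0)  # 1-day steps
fine = implicit_euler_decay(1.0, lam, t_end, 864.0)      # 100x smaller steps
```

The coarse solution overestimates the surviving ⁵⁶Ni by a few percent, while the error of the refined time step is two orders of magnitude smaller, mirroring the behavior described for Fig. 18.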
In the case of an (n,γ)-(γ,n) equilibrium between ⁶⁴Ni and ⁶⁵Ni, the analytic solution for the equilibrium composition can be derived as:

Y(n) = [λ_{γ,n} − √(λ_{γ,n}² + 4 ρN_A⟨σv⟩_{n,γ} λ_{γ,n}/65)] / (−2 ρN_A⟨σv⟩_{n,γ}),    (95)
Y(⁶⁴Ni) = Y(n),    (96)
Y(⁶⁵Ni) = 1/65 − Y(n).    (97)

For T = 8 GK and ρ = 10⁹ g cm⁻³ and matter initially consisting of pure ⁶⁵Ni (which introduces the factor 1/65), we obtain Y(n) = Y(⁶⁴Ni) = 7.35175 × 10⁻³ and Y(⁶⁵Ni) = 8.03286 × 10⁻³. While the integration with the Gear scheme results in an excellent agreement within 0.0015%, the time step within the implicit Euler becomes very small. This leads to numerical instabilities and a large deviation from the analytic solution after 10³ s (see Fig. 20). This instability is unlikely to be resolved by more restrictive time steps in the implicit Euler scheme, as the time step is based on changes in the thermodynamic conditions or abundances. Since both are static, the scheme will always attempt very large (possibly too large) time steps. This continues until large errors have accumulated and the solution diverges. On the other hand, the Gear solver estimates an integration error independent of changes in conditions or abundances. As a consequence, the solution is more stable. Another equilibrium test scenario is given by the equilibrium of electron, positron, and neutrino captures. These equilibria are important to understand the initial electron fraction in r-process calculations. In the following, we investigate the equilibrium electron fraction when only considering electron-/positron-captures on nucleons, neutrino absorption on nucleons, and a combination of both. Similar to Just et al. (2022b), we can calculate the equilibrium electron fractions for all three scenarios. Assuming only electron and positron captures, the equilibrium electron fraction for hydrostatic conditions can be obtained by solving

λ_{e⁺} Y(n) − λ_{e⁻} Y(p) = 0,    (98)

with the positron and electron capture rates λ_{e⁺} and λ_{e⁻}, respectively.
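The quoted equilibrium values of the (n,γ)-(γ,n) test, Eqs. (95)-(97), can be checked directly for internal consistency (Eq. (97) and baryon conservation):

```python
# Quoted equilibrium values from the (n,gamma)-(gamma,n) test, Eqs. (95)-(97):
y_n = 7.35175e-3
y_ni64 = y_n              # Eq. (96): Y(64Ni) = Y(n)
y_ni65 = 8.03286e-3

# Eq. (97): Y(65Ni) = 1/65 - Y(n)
print(1.0 / 65.0 - y_n)   # reproduces y_ni65

# Baryon conservation: 64 Y(64Ni) + 65 Y(65Ni) + Y(n) = 1
print(64 * y_ni64 + 65 * y_ni65 + y_n)
```

Both relations hold to the precision of the quoted digits, confirming that the tabulated equilibrium abundances conserve baryon number.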
For a constant temperature of 30 GK and a density of 10¹⁰ g cm⁻³ we obtain Y_{e,em} = 0.155. Both integration schemes reach a high precision, with the Gear solver agreeing within 0.004% and the implicit Euler within 0.076%. The implicit Euler integration scheme again shows some numerical noise (upper panel in Fig. 21), and the result is therefore slightly worse compared to the Gear integration scheme. For the scenario with only neutrinos irradiating the matter, we can similarly calculate the equilibrium electron fraction:

λ_ν Y(n) − λ_ν̄ Y(p) = 0,    (99)

with the neutrino and anti-neutrino absorption rates λ_ν and λ_ν̄, respectively. Assuming matter located at a radius of 50 km, irradiated by neutrino luminosities of L_ν = 10⁵² erg s⁻¹ and L_ν̄ = 5 × 10⁵² erg s⁻¹, and neutrino energies of ⟨E_ν⟩ = 25.2 MeV as well as ⟨E_ν̄⟩ = 31.5 MeV, we obtain Y_{e,abs} = 0.214. The final values of both integration schemes agree within 5 × 10⁻⁸% (middle panel of Fig. 21). Combining electron, positron, and neutrino captures, the equilibrium electron fraction can be obtained by solving

(λ_{e⁺} + λ_ν) Y(n) − (λ_ν̄ + λ_{e⁻}) Y(p) = 0.    (100)

For the conditions assumed here, we obtain Y_{e,eq} = 0.1835, which deviates by only 0.014% from the equilibrium value (lower panel of Fig. 21). Another equilibrium case is given by NSE (Sect. 2.3). We tested that the transition from the NSE region to the network region is consistent. Therefore, we calculated the NSE composition for T = 7 GK, ρ = 10⁷ g cm⁻³, and Y_e = 0.5 with and without screening. In addition, we calculated the mass fractions after 10² s when starting with neutrons and protons only, using the same hydrostatic conditions and strong reactions only. This system should also approach NSE. Again, we calculate the abundances with and without electron screening corrections (see Fig. 2). As a consequence of the previously outlined tests, we only ran this test with the Gear integration method.
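For free nucleons with Y(p) + Y(n) = 1 and Y_e = Y(p), Eqs. (98)-(100) all reduce to a simple ratio of rates. The sketch below uses made-up rate values, chosen only so that the quoted equilibrium electron fractions are recovered; they are not the physical rates of the test:

```python
def equilibrium_ye(lam_ec, lam_pc, lam_nu=0.0, lam_nubar=0.0):
    """Equilibrium electron fraction for free nucleons from
    (lam_pc + lam_nu) Y(n) = (lam_nubar + lam_ec) Y(p),
    with Y(p) + Y(n) = 1 and Y_e = Y(p), cf. Eqs. (98)-(100)."""
    creating = lam_pc + lam_nu        # channels turning n into p
    destroying = lam_nubar + lam_ec   # channels turning p into n
    return creating / (creating + destroying)

# Illustrative (invented) rates in s^-1, tuned to the quoted Y_e values:
print(equilibrium_ye(lam_ec=5.45, lam_pc=1.0))               # captures only -> 0.155
print(equilibrium_ye(0.0, 0.0, lam_nu=1.0, lam_nubar=3.67))  # neutrinos only -> 0.214
```

The closed form makes explicit that only the ratio of proton-creating to proton-destroying rates sets the equilibrium electron fraction.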
Other tests

If the nuclear reaction network is sufficiently large, deriving an analytic expression for the solution is often not possible anymore. In these cases, it is beneficial to compare the result with other nuclear reaction networks. We calculate a case of hydrostatic carbon-oxygen burning with ρ = 10⁹ g cm⁻³ and a temperature of 3 GK for 10¹² s. The initial composition consisted of X(¹²C) = X(¹⁶O) = 0.5. In total we involve 13 nuclei in the calculation. We compare the final abundances of WinNet (using Gear's integration method) with the results of the nuclear reaction networks SkyNet (Lippuner & Roberts 2017), ReNet (Navó et al. 2022), and XNet (Hix & Thielemann 1999). The final abundances of WinNet deviate by less than 1% from all other reaction networks (Tab. 3). The abundant nucleus ⁵⁶Ni even agrees with a maximum deviation of only 10⁻⁴%. We note that we did not tune the specific numerical parameters used in the different codes. More restrictive time steps can therefore lead to an even better agreement. To test the implementation of detailed balance (Sect. 2.2), we repeated a calculation performed with SkyNet. We calculate the nucleosynthesis of a trajectory of an X-ray burst from Schatz et al. (2001). The result of four calculations is shown in Fig. 22. There, we use the Reaclib v2.2 and calculate the nucleosynthesis with WinNet using the reverse rates as given by Reaclib (solid orange line). Moreover, we use SkyNet with reverse rates from Reaclib (solid blue line). Additionally, we calculate the same trajectory, but using reverse rates calculated via detailed balance using the Q-value from the mass excess provided within Reaclib (within the winvn file, dashed lines). Both networks agree very well in both cases. The impact of using detailed balance rates with masses from the winvn, in contrast to Reaclib reverse rates, seems to be larger in SkyNet, especially for the smaller mass numbers A ∼ 50. However, for both networks there is also a distinct feature visible at A ∼ 85.
SUMMARY

We have summarized the fundamentals of nuclear reaction networks. The implementation was demonstrated with the single-zone nuclear reaction network code WinNet. We outlined the differential equations that underlie every nuclear reaction network code. Additionally, we presented two implicit numerical techniques to solve these equations, the implicit Euler and Gear's integration scheme. A mandatory ingredient is also the set of reaction rates. The reaction rates can originate from different databases with varying parametrizations. Hereby, one should ensure that the same underlying nuclear physics inputs, such as mass models, are used. We described the reaction rate formats that are supported by WinNet, namely the Reaclib reaction rate database, a format for β-delayed neutron emission, tabulated rates, theoretical β⁺, β⁻, electron-capture and positron-capture rates, neutrino reactions, and fission reactions. All these different reaction sources get a different priority assigned, and rates that appear in more than one source are replaced by the rate with the highest priority. This priority is chosen arbitrarily, without any estimate of the quality of the rate, and some action may still be required if a user wants to use specific reaction rates. WinNet is further able to calculate detailed balance reactions on the fly, which can be useful especially for tabulated rates, where a tabulation of the reverse reactions could break the detailed balance principle. If included, the detailed balance reactions will replace all reverse reactions in the other reaction rate sources. All charged-particle reaction rates can further be altered by electron screening. This correction is implemented as a multiplicative factor on the reaction rates. We also presented the energy feedback from nuclear reactions onto the temperature, which is implemented in the form of an operator splitting method.
Finally, we presented simple examples and test cases to demonstrate the reliability of the reaction network code WinNet. Using these test cases, we analyzed the advantages and disadvantages of the different implemented numerical integration methods. We conclude that hydrostatic and equilibrium conditions are often handled more efficiently and precisely with the Gear integration method. More complex, rapidly varying thermodynamic conditions are more efficient with the implicit Euler integration method. Besides reliability, a large focus during the development of WinNet was usability. The code provides an easy interface to the user via a simple parameter file. Additionally, comments are written entirely in a doxygen-conform format (https://www.doxygen.nl/index.html), and the documentation can be accessed along with the code. Due to the modular structure, it is also easy to change the included reactions. Additionally, a large effort has been undertaken to supply understandable error messages. To give an example, if an input parameter is misspelled, the error message not only states that this parameter does not exist, but also points to the most similar existing parameter. With this work, WinNet will be fully public and available for download. Once the manuscript is accepted for publication, we will provide the download links for a code version on GitHub as well as on Zenodo. This includes not only the code, but also all example and test cases. If you use them, please cite the corresponding publications that can be found in the documentation. We want to thank M. A. Aloy, F. Montes, M. Obergaulinger, T. Psaltis, H. Schatz, A. Sieverding, and M. Ugliano for many beneficial discussions. We further thank all people that made example trajectories publicly available, and L. Bovard, R. Fernández, M. Obergaulinger, and H. Schatz for giving their consent to include their trajectories along with WinNet.
MR acknowledges support from the grants FJC2021-046688-I and PID2021-127495NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by the European Union "NextGenerationEU", as well as "ESF Investing in your future". Additionally, he acknowledges support from the Astrophysics and High Energy

This publication benefited highly from collaborations and exchange within the European Cost Action CA16117 "Chemical Evolution as Tracers of the Evolution of the Cosmos" (ChETEC) and the "International Research Network for Nuclear Astrophysics" (IReNA).

Software: Matplotlib (Hunter 2007), Numpy (Harris et al. 2020), Scipy (Virtanen et al. 2020), Quadpack (Piessens et al. 1983), Timmes EOS (Timmes & Arnett 1999), ReNet (Navó et al. 2022), XNet (Hix & Thielemann 1999), SkyNet (Lippuner & Roberts 2017), PARDISO (Schenk & Gärtner 2004)

Figure 23. Left plot: Final mass fractions using different values of ϵ_NR in the convergence criterion of the Newton-Raphson scheme (Eq. (54)). Right plot: The same, but using an alternative convergence criterion of the Newton-Raphson, |max(Y^{n+1}, Y^n)/min(Y^{n+1}, Y^n) − 1| < ϵ_NR. The lower panels show the deviation, defined as Δ = 1 − X_1/X_2, using the most restrictive parameters as reference (X_1).

APPENDIX

A. CODE CONVERGENCE

The accuracy of the nucleosynthesis results depends not only on the nuclear input, but also on numerical parameters. In the following, we investigate the latter error in more detail. For this we use a neutron-rich trajectory from a MR-SNe of the simulations of Winteler et al. (2012). We use reactions from the Jina Reaclib (Cyburt et al. 2010) with additional β-decays from the Viola-Seaborg formula and theoretical weak rates from Langanke & Martínez-Pinedo (2001) that we exchange with experimental reaction rates at 10⁻¹ GK. Fission rates have been used as described in Sect. 4.2.7, with the fragment distribution of Panov et al. (2001). First we investigate different values of ϵ_NR for the convergence criterion of the root finding algorithm within the implicit Euler method (Eq. (54)).
As a default in WinNet, we perform at least two root-finding iterations. To avoid a re-adjustment to smaller and smaller time steps due to a non-converged root finding (see Fig. 3), we set the maximum number of allowed root-finding iterations to a large value of 1000. The final mass fractions of all runs are shown in the upper left panel of Fig. 23; the difference is defined by Δ = 1 − X_1/X_2, where we took X_1 as the mass fractions from the run with ϵ_NR = 10⁻⁸. This is shown in the lower left panel of Fig. 23. The maximum deviation is of the order of ∼ 0.1%. Interestingly, it is the iron region that is prone to errors. This region of enhanced errors vanishes completely when not using theoretical weak rates. These rates depend on temperature as well as density and can therefore be more challenging to integrate. For values ϵ_NR > 10⁻⁷, there is no difference visible. This is due to the fact that the mass for these precisions is already conserved within the minimum of two Newton-Raphson iterations. An alternative convergence criterion of the Newton-Raphson that is not based on baryon conservation is |max(Y^{n+1}, Y^n)/min(Y^{n+1}, Y^n) − 1| < ϵ_NR for Y^{n+1} > 10⁻¹⁰. In other words, every abundance should be converged within a given percentage. The result for this convergence criterion is shown in the right panels of Fig. 23. Again the difference between the most restrictive case and the least restrictive one is of the order of ∼ 0.1%, but the parameter has a much more direct impact on the accuracy. The most restrictive scenarios of both convergence criteria agree even within ∼ 0.01%, which demonstrates that both criteria can be used interchangeably; we therefore only include the criterion that is based on baryon conservation, as it has a better performance. All previous calculations were done with the same time step factor of ϵ_Euler = 0.1 (Eq. (57)). We reduced this factor and tested values of 5 × 10⁻², 1 × 10⁻², and 5 × 10⁻³. As shown in the left panels of Fig.
24, the abundances are converged within ∼ 10%.

Figure 24. Left plot: Final mass fractions using the implicit Euler method with different time step factors ϵ_Euler (Eq. (57)). Right plot: The same, but using the Gear solver with different time steps, obtained by varying ϵ_Gear. The lower panels show the deviation, defined as Δ = 1 − X_1/X_2, using the most restrictive parameters as reference (X_1).

Figure 25. Comparison of a calculation using the implicit Euler method with ϵ_Euler = 5 × 10⁻³ and a calculation using Gear's method with ϵ_Gear = 10⁻⁹. We note that the Gear solver in the lower panel is a horizontal line by definition.

In practice, it is not feasible to use a factor of ϵ_Euler = 5 × 10⁻³ when calculating many trajectories (c.f., ∼ 3000 versus ∼ 60000 time steps for ϵ_Euler = 10⁻¹ and ϵ_Euler = 5 × 10⁻³, respectively). The Gear integration method, on the other hand, estimates the time step in a more sophisticated way, based on integration errors. This error is controlled by ϵ_Gear (Eq. (70)). When reducing ϵ_Gear, one directly controls the numerical error (right panels of Fig. 24). The error can be reduced to an almost arbitrary precision, and for all calculated runs it lies within an astonishing precision of ∼ 0.1%. Finally, it is interesting to compare the most precise calculation using the Gear solver (ϵ_Gear = 10⁻⁹) with the most precise calculation using the implicit Euler method (ϵ_Euler = 5 × 10⁻³). This comparison is shown in Fig. 25. The difference between the calculation using the Gear solver and the implicit Euler is for the most part within 10%; however, some regions, i.e., around A ∼ 70 and A ∼ 140, differ by a factor of ∼ 2. The largest deviation is visible in the abundance of protons, with a factor of 30, showing that the numerical method can also have a strong impact, especially on light nuclei such as neutrons, protons, and alphas.

Figure 1. Upper panel: Screening correction for the heavy-ion reaction ¹²C+¹²C for a constant density of 10⁸ g cm⁻³ and Y_e = 0.5.
The screening correction of Kravchuk & Yakovlev (2014) that is used in WinNet is shown as a green line. The screening correction of Kravchuk & Yakovlev (2014) when only using the h_0 term, which is similar to the original description of Salpeter (1954), is shown as a dotted orange line. The screening correction of SkyNet for a pure carbon composition is shown as a dashed red line. Bottom panel: Relative differences of the screening corrections relative to the one implemented in WinNet. The vertical dashed line indicates the intermediate screening regime for Γ_12 = 1.

Figure 2. NSE composition with (orange line) and without (blue line) screening corrections in NSE for T = 7 GK, ρ = 10⁷ g cm⁻³, and Y_e = 0.5 (solid lines). Dashed lines show the result of two hydrostatic network runs with and without screening corrections, using strong reaction rates from the Jina Reaclib but replacing its reverse reactions with reverse rates that are calculated with detailed balance using the mass excess of the Jina Reaclib. The hydrostatic calculations start with half neutrons and half protons and are calculated for 10³ s. This illustrates the consistency of the network at the NSE transition, with and without screening corrections, when using the same nuclear masses for the reactions and NSE.

Figure 3. Flow diagram of WinNet. Figure taken from Reichert (2021).

Figure 4. Sketch of the different temperature regimes included in WinNet.

Figure 5. Sketch of the rate replacement procedure. Reaction rates in different formats have different priorities when creating a list with all reactions within WinNet. The priority of the rates increases from the top to the bottom of the plot. At a certain threshold temperature T_exp, theoretical β⁺, β⁻, ec, and pc rates get replaced again, as they are only valid above certain temperatures (see text). With the exception of Reaclib rates, all other rates are only optionally used.

Figure 6. Sketch of the most important nuclear reactions (Reichert 2021).

Figure 7. Ratio of calculated and experimental β-decay half-lives.
The upper panel shows the ratio versus neutron number, the lower panel versus proton number. The different colors indicate the different types of nuclei, as indicated in the legend.

Figure 8. Upper panel: Q-value for β-decay using the masses provided with the Jina Reaclib. Second panel: β-decay half-lives in seconds. Whenever experimental β-decay half-lives are available, we plot these instead of the parametrized ones. Nuclei that have half-lives of T_1/2 > 10¹² yrs are assumed to not β-decay. Bottom panel: Distinction between parametrized and experimentally known β-decay half-lives included in the Jina Reaclib. Stable nuclei are shown as black squares; experimentally available β-decays within the Jina Reaclib are indicated as dark grey rectangles. Magic numbers of 50, 82, and 126 are shown as dashed lines. All shown rates are publicly available along with WinNet.

Figure 9. Energy-averaged neutrino cross sections from the table of Sieverding et al. (2018) at T = 5 MeV. Shown are the summed cross sections of all reaction channels. The individual panels show charged-current reactions of electron neutrinos ν_e, charged-current reactions of electron antineutrinos ν̄_e, neutral-current reactions of any neutrino flavor ν, and the average amount of neutrons for neutral-current reactions of any neutrino flavor ν. Note that the properties of neutral-current reactions of any antineutrino flavor ν̄ are nearly identical to the lower two panels.

Figure 10. Compiled file of theoretical β⁻-, β⁺-decays, positron- and electron-capture rates originating from different sources. The sources are FFN (Fuller et al. 1985), ODA (Oda et al. 1994), LMP (Langanke & Martínez-Pinedo 2001), PF (Pruet & Fuller 2003), and STN (Suzuki et al. 2016). Stable nuclei are indicated as black boxes.

Figure 11. Half-lives of spontaneous fission in the nuclear chart (c.f., Khuyagbaatar 2020). Colored dots indicate experimentally measured half-lives taken from the ENSDF database.
12Final abundances relative to hydrogen as a function of the photon to baryon ratio . Horizontal bands show measurements of the respective isotope.lanthen et al. 2009). Furthermore, we assume a freeze-out of weak reactions at = 0.8 MeV. Figure 13 . 13Initial and final mass fractions of a main s-process. The trajectory as well as the final mass fractions are taken from Cescutti et al. (2018) accessed via Cescutti (2022). Figure 14 .( 14Final mass fractions of complete Si burning obtained with a simple parametric model. explosion energy of 10 52 erg we obtain: ) = 7 × 10 6 − / . Fig. 17 .Figure 15 . 1715These trajectories come from a variety of (MMass fractions at = 1.8×10 3 s for one trajectory within the MR-SNe model 35OC-RO of Obergaulinger & Aloy (2017), and Reichert et al. (2021). Upper panel: Calculation without involving neutrino reactions. Lower panel: Calculation using neutrino reactions on nucleons as well as on heavier nuclei (Sieverding et al. 2018). simulations and were presented in Winteler et al. (2012); Korobkin et al. (2012); Rosswog (2013); Piran et al. (2013); Wu et al. (2016a); Bovard et al. (2017); Obergaulinger & Aloy (2017); Reichert et al. (2021); Obergaulinger & Aloy (2021); Reichert et al. (2023). Figure 16 .Figure 17 . 1617Final mass fractions after 1 Gyr for a trajectory of a MR-SNe(Obergaulinger & Aloy 2017, of the neutrino driven wind of a NSM(Perego et al. 2014, Martin et al. 2015, and of the neutrino driven wind of a CC-SNe(Bliss et al. Final mass fractions of various example trajectories. Within MR-SNe, the models used in Winteler et al. (2012), Obergaulinger & Aloy (2017); Reichert et al. (2021), and Obergaulinger & Aloy (2021); Reichert et al. (2023) are shown (red lines). For the dynamic ejecta of a NSM (orange lines) we show the simulations of Korobkin et al. (2012); Rosswog (2013); Piran et al. (2013, model ns10ns10) and Bovard et al. (2017). Furthermore we illustrate the viscous ejecta of a NSM from the calculation of Wu et al. 
(2016a). The dynamic ejecta of a NSBH merger from Korobkin et al. (2012); Rosswog (2013); Piran et al. (2013, model BH10) is shown as cyan line. Figure 18 . 18Decay of 56 Ni calculated with the Gear (solid line) and implicit Euler solver (dashed line). The analytic solution is shown as dotted lines. Figure 19 . 19The -delayed fission of 295 Am. Shown are the abundances after 10 −2 s for three different fission fragment distributions. Figure 20 .Figure 21 . 2021Upper panel: Mass fractions of neutrons (blue), 64 Ni (orange), and 65 Ni (green) for the implicit Euler (dashed line) and Gear integration scheme (solid line). The analytic equilibrium solution is shown as black dotted lines. Lower panel: The time step of the implicit Euler and Gear integration schemes. Electron fraction under hydrostatic conditions with = 30 GK and = 10 10 g cm −3 . Upper panel: Equilibrium case when involving only electron and positron captures. Middle panel: Equilibrium case when involving only neutrinos with luminosities of = 10 52 erg s −1 ,¯= 5 × 10 52 erg s −1 , and neutrino energies of = 25.2 MeV as well as¯= 31.5 MeV. Figure 22 . 22Composition of an X-ray burst(Schatz et al. 2001) after 10 3 s. The result is shown for W N (orange lines) and S N (blue lines) with (dashed lines) and without (solid lines) calculating reverse reactions via detailed balance. Physics programme of the Generalitat Valenciana ASFAE/2022/026 funded by MCIN and the European Union NextGenerationEU (PRTR-C17.I1). AA, GMP, JK, and MJ acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project-ID 279384907 -SFB 1245 and the State of Hessen within the Research Cluster ELEMENTS (Project ID 500/10.006). AA, JK, and MJ additionally acknowledge support from the European Research Council under grant EUROPIUM-667912. GMP acknowledges support by the ERC under the European Union's Horizon 2020 research and innovation program (ERC Advanced Grant KILONOVA nr 885281). 
OK was supported by the US Department of Energy through the Los Alamos National Laboratory (LANL). LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S.DOE (Contract No. 89233218CNA000001). CF acknowledges support from the by United States Department of Energy, Office of Science, Office of Nuclear Physics (award number DE-FG02-02ER41216). RH acknowledges support from the World Premier International Research Centre Initiative (WPI Initiative), MEXT, Japan; the IReNA AccelNet Network of Networks, supported by the National Science Foundation under Grant No. OISE-1927130 and ChETEC-INFRA (grant No 101008324) supported by the European Union's Horizon 2020 research and innovation programme. Figure 23 . 23Left plot: Calculation using the implicit Euler integration scheme using the Newton-Raphson convergence criteria that is based on baryon conservation (Eq. ( Figure 24 . 24Left plot: Calculation using the implicit Euler integration scheme using different time steps by varying Euler (Eq. ( Figure 25 . 25Figure 25. Comparison of a calculation using the implicit Euler method with Euler = 5 × 10 −3 and a calculation using Gear's method with InitializationRead User input (e.g. path to reaction rates)Read Data (e.g. reaction rates) NSE Update dens. and neutrino quantities No Yes Network Estimate timestep Write nal output (e.g., nal abundances) Terminate Ful lled Not ful lled Analyse result (e.g., reaction timescales) Write iterative output (e.g., heating rate) Termination criterion Nuclear heating? Calc. weak reactions Calc. NSE composition Update temp., dens. and neutrino quantities Update temperature Solve reaction network equation Solve reaction network equation together with temperature Converged? Converged? Half the timestep No No Yes Yes Set evolution mode Rotate timelevels Table 1. 
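The solver-comparison figures (Figure 18 and Figure 20) contrast the implicit Euler and Gear integration schemes against analytic solutions. As a minimal, self-contained sketch of the underlying idea (not WinNet's actual implementation), the code below integrates the single decay equation dY/dt = -λY for 56Ni (half-life about 6.075 d) with the unconditionally stable implicit Euler update Y_{n+1} = Y_n / (1 + λ Δt) and compares the result with the analytic exponential:

```python
import math

# Illustrative sketch, not WinNet's implementation: single-species decay
# dY/dt = -lam * Y, integrated with the implicit (backward) Euler scheme.
T_HALF = 6.075 * 86400.0            # 56Ni half-life in seconds (~6.075 d)
LAM = math.log(2.0) / T_HALF        # decay constant in 1/s
T_END = 20 * 86400.0                # integrate for 20 days

def implicit_euler_decay(y0, dt, n_steps):
    """Return Y after n_steps implicit Euler steps of size dt.

    Each step solves y_new = y_old - lam*dt*y_new exactly,
    i.e. y_new = y_old / (1 + lam*dt), which is unconditionally stable.
    """
    y = y0
    for _ in range(n_steps):
        y = y / (1.0 + LAM * dt)
    return y

y_ana = math.exp(-LAM * T_END)      # analytic solution for Y(0) = 1
for n_steps in (10, 100, 1000):
    y_num = implicit_euler_decay(1.0, T_END / n_steps, n_steps)
    rel_err = abs(y_num - y_ana) / y_ana
    print(f"{n_steps:5d} steps: Y = {y_num:.5f}, relative error = {rel_err:.2e}")
```

The error shrinks in proportion to the step size, as expected for a first-order scheme; Gear's method reaches higher order with adaptive step-size and order control, which is why the two solvers are compared in the figures.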
Table 1. Amount of reactants and products for the different Reaclib chapters.

Chapter   #Reactants   #Products
   1          1            1
   2          1            2
   3          1            3
   4          2            1
   5          2            2
   6          2            3
   7          2            4
   8          3            1
   9          3            2
  10          4            2
  11          1            4

Table 3. Final abundances for the hydrostatic carbon-oxygen test case. Columns 4-6 show the deviation compared to the results of SkyNet, ReNet, and XNet, respectively.

 A   Z   Y (WinNet)      ΔSkyNet [10^-2 %]   ΔReNet [10^-3 %]   ΔXNet [10^-1 %]
 4   2   4.01 × 10^-09        0.82                0.13               0.25
12   6   4.04 × 10^-18        0.53                0.08               0.16
16   8   1.55 × 10^-16        8.06                1.27               2.44
20  10   1.89 × 10^-19        7.23                1.14               2.19
24  12   1.14 × 10^-14        6.61                1.04               2.00
28  14   8.14 × 10^-09        5.79                0.91               1.75
32  16   4.52 × 10^-08        4.96                0.78               1.50
36  18   7.54 × 10^-09        4.13                0.65               1.25
40  20   5.78 × 10^-07        3.31                0.52               1.00
44  22   2.89 × 10^-09        2.48                0.39               0.75
48  24   3.23 × 10^-07        1.65                0.26               0.50
52  26   7.13 × 10^-05        0.82                0.13               0.25
56  28   1.78 × 10^-02        0.003               0.001              0.001

Aikawa, M., Arnould, M., Goriely, S., Jorissen, A., & Takahashi, K. 2005, A&A, 441, 1195, doi: 10.1051/0004-6361:20052944
Aloy, M. Á., & Obergaulinger, M. 2021, MNRAS, 500, 4365, doi: 10.1093/mnras/staa3273
Alpher, R. A., Bethe, H., & Gamow, G. 1948, Physical Review, 73, 803, doi: 10.1103/PhysRev.73.803
Arcones, A., Martínez-Pinedo, G., Roberts, L. F., & Woosley, S. E. 2010, A&A, 522, A25, doi: 10.1051/0004-6361/201014276
Arnett, D. 1996, Supernovae and Nucleosynthesis: An Investigation of the History of Matter from the Big Bang to the Present
Arnett, W. D. 1969, Astrophys. Space. Sci., 5, 180, doi: 10.1007/BF00650291
—. 1977, ApJS, 35, 145, doi: 10.1086/190472
Arnett, W. D., & Truran, J. W. 1969, ApJ, 157, 339, doi: 10.1086/150072
Arnett, W. D., Truran, J. W., & Woosley, S. E. 1971, ApJ, 165, 87, doi: 10.1086/150878
Arnett, W. D., Meakin, C., Hirschi, R., et al. 2019, ApJ, 882, 18, doi: 10.3847/1538-4357/ab21d9
Arnould, M. 1976, A&A, 46, 117
Arnould, M., Norgaard, H., Thielemann, F. K., & Hillebrandt, W. 1980, ApJ, 237, 931, doi: 10.1086/157940
Aver, E., Olive, K. A., & Skillman, E. D. 2010, JCAP, 2010, 003, doi: 10.1088/1475-7516/2010/05/003
Bania, T. M., Rood, R. T., & Balser, D. S. 2002, Nature, 415, 54, doi: 10.1038/415054a
Barnes, J., & Metzger, B. D. 2022, ApJL, 939, L29, doi: 10.3847/2041-8213/ac9b41
Basu, D. N., Chowdhury, P. R., & Samanta, C. 2005, PhRvC, 72, 051601, doi: 10.1103/PhysRevC.72.051601
Benz, W., Hills, J. G., & Thielemann, F. K. 1989, ApJ, 342, 986, doi: 10.1086/167656
Bhattacharya, M., & Gangopadhyay, G. 2007, Physics Letters B, 651, 263, doi: 10.1016/j.physletb.2007.06.012
Bisterzo, S., Travaglio, C., Wiescher, M., Käppeler, F., & Gallino, R. 2017, ApJ, 835, 97, doi: 10.3847/1538-4357/835/1/97
Bliss, J., Arcones, A., Montes, F., & Pereira, J. 2020, PhRvC, 101, 055807, doi: 10.1103/PhysRevC.101.055807
Bliss, J., Arcones, A., & Qian, Y.-Z. 2018, ApJ, 866, 105, doi: 10.3847/1538-4357/aade8d
Boesgaard, A. M., & Steigman, G. 1985, ARA&A, 23, 319, doi: 10.1146/annurev.aa.23.090185.001535
Bollig, R., Janka, H. T., Lohs, A., et al. 2017, PhRvL, 119, 242702, doi: 10.1103/PhysRevLett.119.242702
Bovard, L., Martin, D., Guercilena, F., et al. 2017, PhRvD, 96, 124005, doi: 10.1103/PhysRevD.96.124005
Bovard, L., & Rezzolla, L. 2017, Classical and Quantum Gravity, 34, 215005, doi: 10.1088/1361-6382/aa8d98
Bravo, E. 2020, MNRAS, 494, 3037, doi: 10.1093/mnras/staa910
Bravo, E., & García-Senz, D. 1999, MNRAS, 307, 984, doi: 10.1046/j.1365-8711.1999.02694.x
Brown, B. A. 1992, PhRvC, 46, 811, doi: 10.1103/PhysRevC.46.811
Brown, D. A., Chadwick, M. B., Capote, R., et al. 2018, Nuclear Data Sheets, 148, 1, doi: 10.1016/j.nds.2018.02.001
Broyden, C. G. 1965, Mathematics of Computation, 19, 577
Bruenn, S. W. 1986, ApJS, 62, 331, doi: 10.1086/191143
Burrows, A. 2013, Rev. Mod. Phys., 85, 245, doi: 10.1103/RevModPhys.85.245
Burrows, A., Reddy, S., & Thompson, T. A. 2006, NuPhA, 777, 356, doi: 10.1016/j.nuclphysa.2004.06.012
Busso, M., Vescovi, D., Palmerini, S., Cristallo, S., & Antonuccio-Delogu, V. 2021, ApJ, 908, 55, doi: 10.3847/1538-4357/abca8e
Byrne, G., & Hindmarsh, A. 1975, ACM Transactions on Mathematical Software (TOMS), 1, 71, doi: 10.1145/355626.355636
Cardall, C. Y., & Fuller, G. M. 1997, ApJL, 486, L111, doi: 10.1086/310838
Cescutti, G. 2022, Main s-process, 1.2.1, Zenodo, doi: 10.5281/zenodo.6474686
Cescutti, G., Hirschi, R., Nishimura, N., et al. 2018, MNRAS, 478, 4101, doi: 10.1093/mnras/sty1185
Clayton, D. 1968, Principles of stellar evolution and nucleosynthesis: with a new preface (University of Chicago Press)
Coc, A., & Vangioni, E. 2017, International Journal of Modern Physics E, 26, 1741002, doi: 10.1142/S0218301317410026
Cooke, R. J., Pettini, M., & Steidel, C. C. 2018, ApJ, 855, 102, doi: 10.3847/1538-4357/aaab53
Curtis, S., Ebinger, K., Fröhlich, C., et al. 2019, ApJ, 870, 2, doi: 10.3847/1538-4357/aae7d2
Cyburt, R. H., Fields, B. D., Olive, K. A., & Yeh, T.-H. 2016, Rev. Mod. Phys., 88, 015004, doi: 10.1103/RevModPhys.88.015004
Cyburt, R. H., Amthor, A. M., Ferguson, R., et al. 2010, The Astrophysical Journal Supplement Series, 189, 240, doi: 10.1088/0067-0049/189/1/240
Dan, M., Guillochon, J., Brüggen, M., Ramirez-Ruiz, E., & Rosswog, S. 2015, MNRAS, 454, 4411, doi: 10.1093/mnras/stv2289
Dardelet, L., Ritter, C., Prado, P., et al. 2015, arXiv e-prints, arXiv:1505.05500. https://arxiv.org/abs/1505.05500
Denissenkov, P. A., Herwig, F., Woodward, P., et al. 2019, MNRAS, 488, 4258, doi: 10.1093/mnras/stz1921
Descouvemont, P., Adahchour, A., Angulo, C., Coc, A., & Vangioni-Flam, E. 2004, Atomic Data and Nuclear Data Tables, 88, 203, doi: 10.1016/j.adt.2004.08.001
Dewitt, H. E., Graboske, H. C., & Cooper, M. S. 1973, ApJ, 181, 439, doi: 10.1086/152061
Dillmann, I., Heil, M., Käppeler, F., et al. 2006, in American Institute of Physics Conference Series, Vol. 819, Capture Gamma-Ray Spectroscopy and Related Topics, ed. A. Woehr & A. Aprahamian, 123-127, doi: 10.1063/1.2187846
Doherty, C. L., Gil-Pons, P., Siess, L., & Lattanzio, J. C. 2017, PASA, 34, e056, doi: 10.1017/pasa.2017.52
Dong, J. M., Zhang, H. F., & Royer, G. 2009, PhRvC, 79, 054330, doi: 10.1103/PhysRevC.79.054330
Dong, T., & Ren, Z. 2005, European Physical Journal A, 26, 69, doi: 10.1140/epja/i2005-10142-y
Eggenberger, P., Ekström, S., Georgy, C., et al. 2021, A&A, 652, A137, doi: 10.1051/0004-6361/202141222
Eichler, M., Arcones, A., Kelic, A., et al. 2015, ApJ, 808, 30, doi: 10.1088/0004-637X/808/1/30
Erler, J., Langanke, K., Loens, H., Martinez-Pinedo, G., & Reinhard, P.-G. 2012, PhRvC, 85, 025802, doi: 10.1103/PhysRevC.85.025802
Feger, E. D. 2011, PhD thesis, University of Tennessee, Tennessee. https://trace.tennessee.edu/utk_graddiss/1048
Fields, B. D. 2011, Annual Review of Nuclear and Particle Science, 61, 47, doi: 10.1146/annurev-nucl-102010-130445
Fields, B. D., & Olive, K. A. 2022, arXiv e-prints, arXiv:2204.03167. https://arxiv.org/abs/2204.03167
Fowler, W. A. 1974, QJRAS, 15, 82
Fowler, W. A., Caughlan, G. R., & Zimmerman, B. A. 1967, ARA&A, 5, 525, doi: 10.1146/annurev.aa.05.090167.002521
Freiburghaus, C., Rosswog, S., & Thielemann, F.-K. 1999, ApJL, 525, L121, doi: 10.1086/312343
Frischknecht, U., Hirschi, R., Pignatari, M., et al. 2016, MNRAS, 456, 1803, doi: 10.1093/mnras/stv2723
Fröhlich, C., Martínez-Pinedo, G., Liebendörfer, M., et al. 2006a, Physical Review Letters, 96, 142502, doi: 10.1103/PhysRevLett.96.142502
Fröhlich, C., Hauser, P., Liebendörfer, M., et al. 2006b, ApJ, 637, 415, doi: 10.1086/498224
Fryxell, B., Mueller, E., & Arnett, D. 1991, ApJ, 367, 619, doi: 10.1086/169657
Fujimoto, S.-i., Nishimura, N., & Hashimoto, M.-a. 2008, ApJ, 680, 1350, doi: 10.1086/529416
Fuller, G. M., Fowler, W. A., & Newman, M. J. 1982, ApJS, 48, 279, doi: 10.1086/190779
—. 1985, ApJ, 293, 1, doi: 10.1086/163208
Garcia-Senz, D., Cabezon, R. M., Arcones, A., Relano, A., & Thielemann, F. K. 2013, MNRAS, 436, 3413, doi: 10.1093/mnras/stt1821
García-Senz, D., Cabezón, R. M., Domínguez, I., & Thielemann, F. K. 2016, ApJ, 819, 132, doi: 10.3847/0004-637X/819/2/132
Gear, C. W. 1971, Commun. ACM, 14, 176, doi: 10.1145/362566.362571
Ghosh, S., Wolfe, N., & Fröhlich, C. 2022, ApJ, 929, 43, doi: 10.3847/1538-4357/ac4d20
Gil-Pons, P., Doherty, C. L., Gutiérrez, J. L., et al. 2018, PASA, 35, e038, doi: 10.1017/pasa.2018.42
Giuliani, S. A., Martínez-Pinedo, G., & Robledo, L. M. 2018a, PhRvC, 97, 034323, doi: 10.1103/PhysRevC.97.034323
Giuliani, S. A., Martínez-Pinedo, G., & Robledo, L. M. 2018b, in Journal of Physics Conference Series, Vol. 940, Journal of Physics Conference Series, 012013, doi: 10.1088/1742-6596/940/1/012013
Giuliani, S. A., Martínez-Pinedo, G., Wu, M.-R., & Robledo, L. M. 2020, PhRvC, 102, 045804, doi: 10.1103/PhysRevC.102.045804
Goriely, S., Bauswein, A., & Janka, H.-T. 2011, ApJL, 738, L32, doi: 10.1088/2041-8205/738/2/L32
Goriely, S., Hilaire, S., Koning, A. J., Sin, M., & Capote, R. 2009, PhRvC, 79, 024612, doi: 10.1103/PhysRevC.79.024612
Görres, J., Wiescher, M., & Thielemann, F.-K. 1995, PhRvC, 51, 392, doi: 10.1103/PhysRevC.51.392
Gronow, S., Côté, B., Lach, F., et al. 2021, A&A, 656, A94, doi: 10.1051/0004-6361/202140881
Halevi, G., & Mösta, P. 2018, MNRAS, 477, 2366, doi: 10.1093/mnras/sty797
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
Harris, J. A., Hix, W. R., Chertkow, M. A., et al. 2017, ApJ, 843, 2, doi: 10.3847/1538-4357/aa76de
Heger, A., Fryer, C. L., Woosley, S. E., Langer, N., & Hartmann, D. H. 2003, ApJ, 591, 288, doi: 10.1086/375341
Heger, A., & Woosley, S. E. 2010, ApJ, 724, 341, doi: 10.1088/0004-637X/724/1/341
Hillebrandt, W., Kromer, M., Röpke, F. K., & Ruiter, A. J. 2013, Frontiers of Physics, 8, 116, doi: 10.1007/s11467-013-0303-2
Hirschi, R., Meynet, G., & Maeder, A. 2004, A&A, 425, 649, doi: 10.1051/0004-6361:20041095
Hix, W. R., Khokhlov, A. M., Wheeler, J. C., & Thielemann, F.-K. 1998, ApJ, 503, 332, doi: 10.1086/305968
Hix, W. R., & Meyer, B. S. 2006, Nuclear Physics A, 777, 188, doi: 10.1016/j.nuclphysa.2004.10.009
Hix, W. R., Parete-Koon, S. T., Freiburghaus, C., & Thielemann, F.-K. 2007, ApJ, 667, 476, doi: 10.1086/520672
Hix, W. R., & Thielemann, F.-K. 1999, Journal of Computational and Applied Mathematics, 109, 321
Hoffman, R. D., Woosley, S. E., & Qian, Y.-Z. 1997, ApJ, 482, 951, doi: 10.1086/304181
Hoffman, R. D., Woosley, S. E., Weaver, T. A., Rauscher, T., & Thielemann, F. K. 1999, ApJ, 521, 735, doi: 10.1086/307568
Höflich, P., Wheeler, J. C., & Thielemann, F. K. 1998, ApJ, 495, 617, doi: 10.1086/305327
Holmbeck, E. M., Sprouse, T. M., Mumpower, M. R., et al. 2019, ApJ, 870, 23, doi: 10.3847/1538-4357/aaefef
Holmes, J. A., Woosley, S. E., Fowler, W. A., & Zimmerman, B. A. 1976, At. Data Nucl. Data Tables, 18, 305, doi: 10.1016/0092-640X(76)90011-5
Horowitz, C. J. 2002, PhRvD, 65, 043001, doi: 10.1103/PhysRevD.65.043001
Howard, W. M., & Möller, P. 1980, At. Data Nucl. Data Tables, 25, 219, doi: 10.1016/0092-640X(80)90005-4
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
Iben, I., Jr., & Tutukov, A. V. 1984, ApJS, 54, 335, doi: 10.1086/190932
Ichimaru, S. 1993, Rev. Mod. Phys., 65, 255, doi: 10.1103/RevModPhys.65.255
Iliadis, C. 2015, Nuclear Physics of Stars (Wiley-VCH Verlag GmbH & Co. KGaA), doi: 10.1002/9783527692668
Iliadis, C., Champagne, A., José, J., Starrfield, S., & Tupper, P. 2002, ApJS, 142, 105, doi: 10.1086/341400
Itoh, N., Hayashi, H., Nishikawa, A., & Kohyama, Y. 1996, ApJS, 102, 411, doi: 10.1086/192264
Janka, H.-T., Melson, T., & Summa, A. 2016, Annual Rev. of Nuclear and Particle Science, 66, 341, doi: 10.1146/annurev-nucl-102115-044747
Jiang, J.-A., Doi, M., Maeda, K., et al. 2017, Nature, 550, 80, doi: 10.1038/nature23908
Jose, J. 2016, Stellar Explosions: Hydrodynamics and Nucleosynthesis (Boca Raton: CRC Press), doi: 10.1201/b19165
José, J. 2022, Nova outburst, 1.1.1, Zenodo, doi: 10.5281/zenodo.6474694
José, J., & Hernanz, M. 1998, ApJ, 494, 680, doi: 10.1086/305244
José, J., Hernanz, M., Amari, S., Lodders, K., & Zinner, E. 2004, ApJ, 612, 414, doi: 10.1086/422569
Just, O., Aloy, M. A., Obergaulinger, M., & Nagataki, S. 2022a, ApJL, 934, L30, doi: 10.3847/2041-8213/ac83a1
Just, O., Goriely, S., Janka, H. T., Nagataki, S., & Bauswein, A. 2022b, MNRAS, 509, 1377, doi: 10.1093/mnras/stab2861
Kaiser, E. A., Hirschi, R., Arnett, W. D., et al. 2020, MNRAS, 496, 1967, doi: 10.1093/mnras/staa1595
Karakas, A. I., & Lattanzio, J. C. 2014, PASA, 31, e030, doi: 10.1017/pasa.2014.21
Karakas, A. I., & Lugaro, M. 2016, ApJ, 825, 26, doi: 10.3847/0004-637X/825/1/26
Kawano, L., Schramm, D., & Steigman, G. 1988, ApJ, 327, 750, doi: 10.1086/166232
Kelic, A., Valentina Ricciardi, M., & Schmidt, K.-H. 2009, arXiv e-prints, arXiv:0906.4193. https://arxiv.org/abs/0906.4193
Khokhlov, A., Mueller, E., & Hoeflich, P. 1993, A&A, 270, 223
Khokhlov, A. M. 1991, A&A, 245, 114
Khuyagbaatar, J. 2020, NuPhA, 1002, 121958, doi: 10.1016/j.nuclphysa.2020.121958
Kippenhahn, R., Weigert, A., & Weiss, A. 2013, Stellar Structure and Evolution, 2nd edn. (Berlin Heidelberg: Springer), doi: 10.1007/978-3-642-30304-3
Kobayashi, C., Karakas, A. I., & Lugaro, M. 2020, ApJ, 900, 179, doi: 10.3847/1538-4357/abae65
Kodama, T., & Takahashi, K. 1975, NuPhA, 239, 489, doi: 10.1016/0375-9474(75)90381-4
Koike, O., Hashimoto, M.-a., Kuromizu, R., & Fujimoto, S.-i. 2004, ApJ, 603, 242, doi: 10.1086/381354
Koning, A. J., Rochman, D., Sublet, J. C., et al. 2019, Nuclear Data Sheets, 155, 1, doi: 10.1016/j.nds.2019.01.002
Korobkin, O., Rosswog, S., Arcones, A., & Winteler, C. 2012, MNRAS, 426, 1940, doi: 10.1111/j.1365-2966.2012.21859.x
Kostka, M., Koning, N., Shand, Z., Ouyed, R., & Jaikumar, P. 2014, A&A, 568, A97, doi: 10.1051/0004-6361/201322887
Kotake, K., Takiwaki, T., Suwa, Y., et al. 2012, Advances in Astronomy, 2012, 428757, doi: 10.1155/2012/428757
Kratz, K. L., Farouqi, K., Mashonkina, L. I., & Pfeiffer, B. 2008, NewAR, 52, 390, doi: 10.1016/j.newar.2008.06.015
Kravchuk, P. A., & Yakovlev, D. G. 2014, PhRvC, 89, 015802, doi: 10.1103/PhysRevC.89.015802
Kullmann, I., Goriely, S., Just, O., et al. 2022a, MNRAS, 510, 2804, doi: 10.1093/mnras/stab3393
Kullmann, I., Goriely, S., Just, O., Bauswein, A., & Janka, H. T. 2022b, arXiv e-prints, arXiv:2207.07421, doi: 10.48550/arXiv.2207.07421
Kushnir, D., Waxman, E., & Chugunov, A. I. 2019, MNRAS, 486, 449, doi: 10.1093/mnras/stz904
Lach, F., Callan, F. P., Bubeck, D., et al. 2022, A&A, 658, A179, doi: 10.1051/0004-6361/202141453
Langanke, K., & Kolbe, E. 2002, Atomic Data and Nuclear Data Tables, 82, 191, doi: 10.1006/adnd.2002.0883
Langanke, K., & Martínez-Pinedo, G. 2001, Atomic Data and Nuclear Data Tables, 79, 1, doi: 10.1006/adnd.2001.0865
Leung, S.-C., & Nomoto, K. 2018, ApJ, 861, 143, doi: 10.3847/1538-4357/aac2df
Leung, S.-C., Nomoto, K., & Suzuki, T. 2020, ApJ, 889, 34, doi: 10.3847/1538-4357/ab5d2f
Limongi, M., & Chieffi, A. 2018, ApJS, 237, 13, doi: 10.3847/1538-4365/aacb24
Lippuner, J., Fernández, R., Roberts, L. F., et al. 2017, MNRAS, 472, 904, doi: 10.1093/mnras/stx1987
Lippuner, J., & Roberts, L. F. 2017, ApJS, 233, 18, doi: 10.3847/1538-4365/aa94cb
Livne, E., & Arnett, D. 1995, ApJ, 452, 62, doi: 10.1086/176279
Longland, R., Martin, D., & José, J. 2014, A&A, 563, A67, doi: 10.1051/0004-6361/201321958
MacFadyen, A. I., & Woosley, S. E. 1999, ApJ, 524, 262, doi: 10.1086/307790
Maeda, K., & Terada, Y. 2016, International Journal of Modern Physics D, 25, 1630024, doi: 10.1142/S021827181630024X
Maeder, A., & Meynet, G. 2012, Rev. Mod. Phys., 84, 25, doi: 10.1103/RevModPhys.84.25
Mamdouh, A., Pearson, J., Rayet, M., & Tondeur, F. 2001, NuPhA, 679, 337
Marketin, T., Huther, L., & Martínez-Pinedo, G. 2016, PhRvC, 93, 025805, doi: 10.1103/PhysRevC.93.025805
Martin, D. 2017, PhD thesis, Technical University of Darmstadt, Darmstadt, Germany
Martin, D., Perego, A., Arcones, A., et al. 2015, ApJ, 813, 2, doi: 10.1088/0004-637X/813/1/2
Martinez-Pinedo, G., Mocelj, D., Zinner, N., et al. 2007, Prog. Part. Nucl. Phys., 59, 199, doi: 10.1016/j.ppnp.2007.01.018
McLaughlin, G. C., & Surman, R. 2005, NuPhA, 758, 189, doi: 10.1016/j.nuclphysa.2005.05.036
Meakin, C. A., Seitenzahl, I., Townsley, D., et al. 2009, ApJ, 693, 1188, doi: 10.1088/0004-637X/693/2/1188
Meisel, Z., George, S., Ahn, S., et al. 2020, PhRvC, 101, 052801, doi: 10.1103/PhysRevC.101.052801
Mendoza-Temis, J. d. J., Wu, M.-R., Langanke, K., et al. 2015, PhRvC, 92, 055805, doi: 10.1103/PhysRevC.92.055805
Meyer, B. S., & Adams, D. C. 2007, Meteoritics and Planetary Science Supplement, 42, 5215
Mihalas, D. 1999, Foundations of radiation hydrodynamics, Dover Books on Physics (Mineola, NY: Dover Publications)
Miller, J. M., Sprouse, T. M., Fryer, C. L., et al. 2020, ApJ, 902, 66, doi: 10.3847/1538-4357/abb4e3
Mirizzi, A., Tamborra, I., Janka, H. T., et al. 2016, Nuovo Cimento Rivista Serie, 39, 1, doi: 10.1393/ncr/i2016-10120-8
Möller, P., Mumpower, M. R., Kawano, T., & Myers, W. D. 2019, Atomic Data and Nuclear Data Tables, 125, 1, doi: 10.1016/j.adt.2018.03.003
Möller, P., Sierk, A. J., Ichikawa, T., Iwamoto, A., & Mumpower, M. 2015, PhRvC, 91, 024310, doi: 10.1103/PhysRevC.91.024310
Moscoso, J., de Souza, R. S., Coc, A., & Iliadis, C. 2021, ApJ, 923, 49, doi: 10.3847/1538-4357/ac1db0
Mossa, V., Stöckel, K., Cavanna, F., et al. 2020, Nature, 587, 210, doi: 10.1038/s41586-020-2878-4
P Mösta, L F Roberts, G Halevi, 10.3847/1538-4357/aad6ecApJ. 864171Mösta, P., Roberts, L. F., Halevi, G., et al. 2018, ApJ, 864, 171, doi: 10.3847/1538-4357/aad6ec . E Mueller, A&A. 162103Mueller, E. 1986, A&A, 162, 103 . E Mueller, W D Arnett, 10.1086/164448ApJ. 307619Mueller, E., & Arnett, W. D. 1986, ApJ, 307, 619, doi: 10.1086/164448 . B Müller, 10.1007/s41115-020-0008-5doi: 10.1007/s41115-020-0008-5Living Rev. Comput. Astrophys. 333PASAMüller, B. 2016, PASA, 33, e048, doi: 10.1017/pasa.2016.40 -. 2020, Living Rev. Comput. Astrophys., 6, 3, doi: 10.1007/s41115-020-0008-5 . M R Mumpower, P Jaffke, M Verriere, J Randrup, 10.1103/PhysRevC.101.054607PhRvC. 10154607Mumpower, M. R., Jaffke, P., Verriere, M., & Randrup, J. 2020, PhRvC, 101, 054607, doi: 10.1103/PhysRevC.101.054607 . M R Mumpower, T Kawano, T M Sprouse, 10.3847/1538-4357/aaeacaApJ. 86914Mumpower, M. R., Kawano, T., Sprouse, T. M., et al. 2018, ApJ, 869, 14, doi: 10.3847/1538-4357/aaeaca . W D Myers, W J Świaţecki, 10.1103/PhysRevC.60.0146066014606Myers, W. D., & Świaţecki, W. J. 1999, 60, 014606, doi: 10.1103/PhysRevC.60.014606 . D K Nadyozhin, A Y Deputovich, 10.1051/0004-6361:20011844A&A. 386711Nadyozhin, D. K., & Deputovich, A. Y. 2002, A&A, 386, 711, doi: 10.1051/0004-6361:20011844 . S Nagataki, M Hashimoto, K Sato, S Yamada, 10.1086/304565ApJ. 4861026Nagataki, S., Hashimoto, M.-a., Sato, K., & Yamada, S. 1997, ApJ, 486, 1026, doi: 10.1086/304565 . K Nakamura, T Takiwaki, K Kotake, N Nishimura, 10.1088/0004-637X/782/2/91ApJ. 78291Nakamura, K., Takiwaki, T., Kotake, K., & Nishimura, N. 2014, ApJ, 782, 91, doi: 10.1088/0004-637X/782/2/91 . G Navó, M Reichert, M Obergaulinger, A Arcones, 10.48550/arXiv.2210.11848arXiv:2210.11848arXiv e-printsNavó, G., Reichert, M., Obergaulinger, M., & Arcones, A. 2022, arXiv e-prints, arXiv:2210.11848, doi: 10.48550/arXiv.2210.11848 . N Nishimura, R Hirschi, T Rauscher, . J St, A Murphy, G Cescutti, 10.1093/mnras/stx696MNRAS. 
4691752Nishimura, N., Hirschi, R., Rauscher, T., St. J. Murphy, A., & Cescutti, G. 2017a, MNRAS, 469, 1752, doi: 10.1093/mnras/stx696 . N Nishimura, H Sawai, T Takiwaki, S Yamada, F.-K Thielemann, 10.3847/2041-8213/aa5deeApJL. 83621Nishimura, N., Sawai, H., Takiwaki, T., Yamada, S., & Thielemann, F.-K. 2017b, ApJL, 836, L21, doi: 10.3847/2041-8213/aa5dee . N Nishimura, T Takiwaki, F.-K Thielemann, 10.1088/0004-637X/810/2/109ApJ. 810109Nishimura, N., Takiwaki, T., & Thielemann, F.-K. 2015, ApJ, 810, 109, doi: 10.1088/0004-637X/810/2/109 . S Nishimura, K Kotake, M Hashimoto, 10.1086/500786ApJ. 642410Nishimura, S., Kotake, K., Hashimoto, M.-a., et al. 2006, ApJ, 642, 410, doi: 10.1086/500786 . K Nomoto, F.-K Thielemann, S Miyaji, A&A. 149239Nomoto, K., Thielemann, F.-K., & Miyaji, S. 1985, A&A, 149, 239 . K Nomoto, F.-K Thielemann, K Yokoi, 10.1086/162639ApJ. 286644Nomoto, K., Thielemann, F.-K., & Yokoi, K. 1984, ApJ, 286, 644, doi: 10.1086/162639 . M Obergaulinger, M Á Aloy, 10.1093/mnras/stab295doi: 10.1093/mnras/stab295MNRAS. 4694942MNRASObergaulinger, M., & Aloy, M. Á. 2017, MNRAS, 469, L43, doi: 10.1093/mnrasl/slx046 -. 2021, MNRAS, 503, 4942, doi: 10.1093/mnras/stab295 Atomic Data and Nuclear Data Tables. T Oda, M Hino, K Muto, M Takahara, K Sato, 10.1006/adnd.1994.100756231Oda, T., Hino, M., Muto, K., Takahara, M., & Sato, K. 1994, Atomic Data and Nuclear Data Tables, 56, 231, doi: 10.1006/adnd.1994.1007 . K A Olive, D N Schramm, G Steigman, T P Walker, K ; -G Otsuki, G J Mathews, T Kajino, 10.1016/S1384-1076(03)00065-4doi: 10.1016/S1384-1076(03)00065-4PhRvB. 236767Olive, K. A., Schramm, D. N., Steigman, G., & Walker, T. P. 1990, PhRvB, 236, 454, doi: 10.1016/0370-2693(90)90382-G Otsuki, K., Mathews, G. J., & Kajino, T. 2003, NewA, 8, 767, doi: 10.1016/S1384-1076(03)00065-4 . K Otsuki, H Tagoshi, T Kajino, S Wanajo, 10.1086/308632ApJ. 533424Otsuki, K., Tagoshi, H., Kajino, T., & Wanajo, S.-y. 2000, ApJ, 533, 424, doi: 10.1086/308632 . 
R Pakmor, M Kromer, S Taubenberger, V Springel, 10.1088/2041-8205/770/1/L8ApJL. 7708Pakmor, R., Kromer, M., Taubenberger, S., & Springel, V. 2013, ApJL, 770, L8, doi: 10.1088/2041-8205/770/1/L8 . I V Panov, C Freiburghaus, F K Thielemann, 10.1016/S0375-9474(01)00797-7NuPhA. 688587Panov, I. V., Freiburghaus, C., & Thielemann, F. K. 2001, NuPhA, 688, 587, doi: 10.1016/S0375-9474(01)00797-7 . I V Panov, E Kolbe, B Pfeiffer, 10.1016/j.nuclphysa.2004.09.115Nuclear Physics A. 747633Panov, I. V., Kolbe, E., Pfeiffer, B., et al. 2005, Nuclear Physics A, 747, 633, doi: 10.1016/j.nuclphysa.2004.09.115 . I V Panov, I Y Korneev, T Rauscher, 10.1051/0004-6361/200911967A&A. 51361Panov, I. V., Korneev, I. Y., Rauscher, T., et al. 2010, A&A, 513, A61, doi: 10.1051/0004-6361/200911967 . B Paxton, P Marchant, J Schwab, 10.1088/0067-0049/220/1/15ApJS. 22015Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15, doi: 10.1088/0067-0049/220/1/15 . P J E Peebles, 10.1086/148918ApJ. 146542Peebles, P. J. E. 1966, ApJ, 146, 542, doi: 10.1086/148918 . A Perego, M Hempel, C Fröhlich, 10.1088/0004-637X/806/2/275ApJ. 806275Perego, A., Hempel, M., Fröhlich, C., et al. 2015, ApJ, 806, 275, doi: 10.1088/0004-637X/806/2/275 . A Perego, S Rosswog, R M Cabezón, 10.1093/mnras/stu1352MNRAS. 4433134Perego, A., Rosswog, S., Cabezón, R. M., et al. 2014, MNRAS, 443, 3134, doi: 10.1093/mnras/stu1352 . I Petermann, K Langanke, G Martínez-Pinedo, 10.1140/epja/i2012-12122-6European Physical Journal A. 48Petermann, I., Langanke, K., Martínez-Pinedo, G., et al. 2012, European Physical Journal A, 48, 122, doi: 10.1140/epja/i2012-12122-6 R Piessens, E De Doncker-Kapenga, C W Ueberhuber, Quadpack. A subroutine package for automatic integration. Piessens, R., de Doncker-Kapenga, E., & Ueberhuber, C. W. 1983, Quadpack. A subroutine package for automatic integration . M Pignatari, R Hirschi, 10.5281/zenodo.6474728Weak s-process, 1.1.1, ZenodoPignatari, M., & Hirschi, R. 
2022, Weak s-process, 1.1.1, Zenodo, doi: 10.5281/zenodo.6474728 . T Piran, E Nakar, S Rosswog, 10.1093/mnras/stt037MNRAS. 4302121Piran, T., Nakar, E., & Rosswog, S. 2013, MNRAS, 430, 2121, doi: 10.1093/mnras/stt037 . C Pitrou, A Coc, J.-P Uzan, E Vangioni, 10.1016/j.physrep.2018.04.005PhR. 7541Pitrou, C., Coc, A., Uzan, J.-P., & Vangioni, E. 2018, PhR, 754, 1, doi: 10.1016/j.physrep.2018.04.005 . 10.1093/mnras/stab135MNRAS. 5022474-. 2021, MNRAS, 502, 2474, doi: 10.1093/mnras/stab135 . P A R Ade, Planck CollaborationN Aghanim, Planck Collaboration10.1051/0004-6361/201525830A&A. 59413Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13, doi: 10.1051/0004-6361/201525830 . T Plewa, E Müller, A&A. 342179Plewa, T., & Müller, E. 1999, A&A, 342, 179. https://arxiv.org/abs/astro-ph/9807241 . A Y Potekhin, G Chabrier, 10.1103/PhysRevE.62.8554Phys. Rev. E. 62Potekhin, A. Y., & Chabrier, G. 2000, Phys. Rev. E, 62, 8554, doi: 10.1103/PhysRevE.62.8554 . J Powell, B Mueller, D R Aguilera-Dena, N Langer, 10.48550/arXiv.2212.00200arXiv:2212.00200arXiv e-printsPowell, J., Mueller, B., Aguilera-Dena, D. R., & Langer, N. 2022, arXiv e-prints, arXiv:2212.00200, doi: 10.48550/arXiv.2212.00200 . J Pruet, G M Fuller, 10.1086/376753ApJS. 149189Pruet, J., & Fuller, G. M. 2003, ApJS, 149, 189, doi: 10.1086/376753 . J Pruet, S E Woosley, R D Hoffman, 10.1086/367957ApJ. 5861254Pruet, J., Woosley, S. E., & Hoffman, R. D. 2003, ApJ, 586, 1254, doi: 10.1086/367957 . C Qi, D S Delion, R J Liotta, R Wyss, 10.1103/PhysRevC.85.011303PhRvC. 8511303Qi, C., Delion, D. S., Liotta, R. J., & Wyss, R. 2012, PhRvC, 85, 011303, doi: 10.1103/PhysRevC.85.011303 . Y.-Z Qian, S E Woosley, 10.1086/177973ApJ. 471331Qian, Y.-Z., & Woosley, S. E. 1996, ApJ, 471, 331, doi: 10.1086/177973 . D Radice, E Abdikamalov, C D Ott, 10.1088/1361-6471/aab872J. Phys. G. 4553003Radice, D., Abdikamalov, E., Ott, C. D., et al. 2018, J. Phys. 
G, 45, 053003, doi: 10.1088/1361-6471/aab872 T Rauscher, 10.1140/epja/s10050-022-00866-9doi: 10.1140/epja/s10050-022-00866-9Essentials of Nucleosynthesis and Theoretical Nuclear Astrophysics. 58214Rauscher, T. 2020, Essentials of Nucleosynthesis and Theoretical Nuclear Astrophysics, doi: 10.1088/2514-3433/ab8737 -. 2022, European Physical Journal A, 58, 214, doi: 10.1140/epja/s10050-022-00866-9 At Data Nucl. Data Tables. T Rauscher, F.-K Thielemann, 10.1006/adnd.2000.083475Rauscher, T., & Thielemann, F.-K. 2000, At Data Nucl. Data Tables, 75, 1, doi: 10.1006/adnd.2000.0834 . M Reichert, 10.12921/tuprints-00014198DarmstadtTechnische UniversitätPhD thesisReichert, M. 2021, PhD thesis, Technische Universität, Darmstadt, doi: https://doi.org/10.12921/tuprints-00014198 . M Reichert, M Obergaulinger, M Á Aloy, 10.1093/mnras/stac3185MNRAS. 5181557Reichert, M., Obergaulinger, M., Aloy, M. Á., et al. 2023, MNRAS, 518, 1557, doi: 10.1093/mnras/stac3185 . M Reichert, M Obergaulinger, M Eichler, M Á Aloy, A Arcones, 10.1093/mnras/stab029MNRAS. 5015733Reichert, M., Obergaulinger, M., Eichler, M., Aloy, M. Á., & Arcones, A. 2021, MNRAS, 501, 5733, doi: 10.1093/mnras/stab029 . F Rembges, C Freiburghaus, T Rauscher, 10.1086/304300ApJ. 484412Rembges, F., Freiburghaus, C., Rauscher, T., et al. 1997, ApJ, 484, 412, doi: 10.1086/304300 . D Romano, M Tosi, F Matteucci, C Chiappini, 10.1046/j.1365-2966.2003.07083.xMNRAS. 346295Romano, D., Tosi, M., Matteucci, F., & Chiappini, C. 2003, MNRAS, 346, 295, doi: 10.1046/j.1365-2966.2003.07083.x . F K Röpke, S A Sim, 10.1007/s11214-018-0503-8Space Sci. Rev. 21472Röpke, F. K., & Sim, S. A. 2018, Space Sci. Rev., 214, 72, doi: 10.1007/s11214-018-0503-8 . F K Röpke, M Kromer, I R Seitenzahl, 10.1088/2041-8205/750/1/L19ApJL. 75019Röpke, F. K., Kromer, M., Seitenzahl, I. R., et al. 2012, ApJL, 750, L19, doi: 10.1088/2041-8205/750/1/L19 . S Rosswog, 10.1098/rsta.2012.0272Philosophical Transactions of the Royal Society of London Series A. 
37120272Rosswog, S. 2013, Philosophical Transactions of the Royal Society of London Series A, 371, 20272, doi: 10.1098/rsta.2012.0272 . S Rosswog, O Korobkin, arXiv:2208.14026arXiv e-printsRosswog, S., & Korobkin, O. 2022, arXiv e-prints, arXiv:2208.14026. https://arxiv.org/abs/2208.14026 . S Rosswog, T Piran, E Nakar, 10.1093/mnras/sts708MNRAS. 4302585Rosswog, S., Piran, T., & Nakar, E. 2013, MNRAS, 430, 2585, doi: 10.1093/mnras/sts708 . S G Ryan, T C Beers, K A Olive, B D Fields, J E Norris, 10.1086/312492ApJL. 53057Ryan, S. G., Beers, T. C., Olive, K. A., Fields, B. D., & Norris, J. E. 2000, ApJL, 530, L57, doi: 10.1086/312492 . B Sahu, S Bhoi, 10.1103/PhysRevC.93.044301PhRvC. 9344301Sahu, B., & Bhoi, S. 2016, PhRvC, 93, 044301, doi: 10.1103/PhysRevC.93.044301 . A L Sallaska, C Iliadis, A E Champange, 10.1088/0067-0049/207/1/18ApJS. 207Sallaska, A. L., Iliadis, C., Champange, A. E., et al. 2013, ApJS, 207, 18, doi: 10.1088/0067-0049/207/1/18 . E E Salpeter, 10.1071/PH540373Australian Journal of Physics. 7373Salpeter, E. E. 1954, Australian Journal of Physics, 7, 373, doi: 10.1071/PH540373 . E E Salpeter, H M Van Horn, 10.1086/149858ApJ. 155183Salpeter, E. E., & van Horn, H. M. 1969, ApJ, 155, 183, doi: 10.1086/149858 . M A Sandoval, W R Hix, O E B Messer, E J Lentz, J A Harris, 10.3847/1538-4357/ac1d49ApJ. 921113Sandoval, M. A., Hix, W. R., Messer, O. E. B., Lentz, E. J., & Harris, J. A. 2021, ApJ, 921, 113, doi: 10.3847/1538-4357/ac1d49 . G Saxena, M Aggarwal, D Singh, 10.1088/1361-6471/ac991dJournal of Physics G Nuclear Physics. 5015102Saxena, G., Aggarwal, M., Singh, D., et al. 2023, Journal of Physics G Nuclear Physics, 50, 015102, doi: 10.1088/1361-6471/ac991d . H Schatz, priv. communicationSchatz, H. 2022, priv. communication . H Schatz, R Toenjes, B Pfeiffer, 10.1086/342939ApJ. 579626Schatz, H., Toenjes, R., Pfeiffer, B., et al. 2002, ApJ, 579, 626, doi: 10.1086/342939 . H Schatz, A Aprahamian, J Goerres, 10.1016/S0370-1573(97)00048-3PhR. 
294167Schatz, H., Aprahamian, A., Goerres, J., et al. 1998, PhR, 294, 167, doi: 10.1016/S0370-1573(97)00048-3 . H Schatz, A Aprahamian, V Barnard, 10.1016/S0375-9474(01)00688-1NuPhA. 688150Schatz, H., Aprahamian, A., Barnard, V., et al. 2001, NuPhA, 688, 150, doi: 10.1016/S0375-9474(01)00688-1 . O Schenk, K Gärtner, 10.1016/j.future.2003.07.011Future Gener. Comput. Syst. 20475Schenk, O., & Gärtner, K. 2004, Future Gener. Comput. Syst., 20, 475, doi: 10.1016/j.future.2003.07.011 . I R Seitenzahl, F K Röpke, M Fink, R Pakmor, 10.1111/j.1365-2966.2010.17106.xMNRAS. 4072297Seitenzahl, I. R., Röpke, F. K., Fink, M., & Pakmor, R. 2010, MNRAS, 407, 2297, doi: 10.1111/j.1365-2966.2010.17106.x . K J Shen, D Boubert, B T Gänsicke, 10.3847/1538-4357/aad55bApJ. 86515Shen, K. J., Boubert, D., Gänsicke, B. T., et al. 2018, ApJ, 865, 15, doi: 10.3847/1538-4357/aad55b . D M Siegel, J Barnes, B D Metzger, 10.1038/s41586-019-1136-0Nature. 569241Siegel, D. M., Barnes, J., & Metzger, B. D. 2019, Nature, 569, 241, doi: 10.1038/s41586-019-1136-0 . A Sieverding, K Langanke, G Martínez-Pinedo, 10.3847/1538-4357/ab17e2ApJ. 876151Sieverding, A., Langanke, K., Martínez-Pinedo, G., et al. 2019, ApJ, 876, 151, doi: 10.3847/1538-4357/ab17e2 . A Sieverding, G Martínez-Pinedo, L Huther, K Langanke, A Heger, 10.3847/1538-4357/aadd48ApJ. 865143Sieverding, A., Martínez-Pinedo, G., Huther, L., Langanke, K., & Heger, A. 2018, ApJ, 865, 143, doi: 10.3847/1538-4357/aadd48 . A Sieverding, P G Waldrop, J A Harris, 10.48550/arXiv.2212.06507arXiv:2212.06507arXiv e-printsSieverding, A., Waldrop, P. G., Harris, J. A., et al. 2022, arXiv e-prints, arXiv:2212.06507, doi: 10.48550/arXiv.2212.06507 . M S Smith, L H Kawano, R A Malaney, 10.1086/191763ApJS. 85219Smith, M. S., Kawano, L. H., & Malaney, R. A. 1993, ApJS, 85, 219, doi: 10.1086/191763 . A Sobiczewski, Z Patyk, S Ćwiok, 10.1016/0370-2693(89)91038-1Physics Letters B. 2241Sobiczewski, A., Patyk, Z., & Ćwiok, S. 
1989, Physics Letters B, 224, 1, doi: 10.1016/0370-2693(89)91038-1 . T M Sprouse, M R Mumpower, R Surman, 10.1103/PhysRevC.104.015803PhRvC. 10415803Sprouse, T. M., Mumpower, M. R., & Surman, R. 2021, PhRvC, 104, 015803, doi: 10.1103/PhysRevC.104.015803 . T Sukhbold, T Ertl, S E Woosley, J M Brown, H.-T Janka, 10.3847/0004-637X/821/1/38ApJ. 82138Sukhbold, T., Ertl, T., Woosley, S. E., Brown, J. M., & Janka, H.-T. 2016, ApJ, 821, 38, doi: 10.3847/0004-637X/821/1/38 . R Surman, G C Mclaughlin, 10.1086/381672ApJ. 603611Surman, R., & McLaughlin, G. C. 2004, ApJ, 603, 611, doi: 10.1086/381672 . T Suzuki, H Toki, K Nomoto, 10.3847/0004-637X/817/2/163ApJ. 817163Suzuki, T., Toki, H., & Nomoto, K. 2016, ApJ, 817, 163, doi: 10.3847/0004-637X/817/2/163 . I Tamborra, B Müller, L Hüdepohl, H.-T Janka, G Raffelt, 10.1103/PhysRevD.86.125031PhRvD. 86125031Tamborra, I., Müller, B., Hüdepohl, L., Janka, H.-T., & Raffelt, G. 2012, PhRvD, 86, 125031, doi: 10.1103/PhysRevD.86.125031 . F K Thielemann, PhD thesisThielemann, F. K. 1980, PhD thesis, - . F K Thielemann, M Arnould, W Hillebrandt, A&A. 74175Thielemann, F. K., Arnould, M., & Hillebrandt, W. 1979, A&A, 74, 175 F.-K Thielemann, R Diehl, A Heger, R Hirschi, M Liebendoerfer, 10.1007/978-3-319-91929-4_4Astrophysics with Radioactive Isotopes. R. Diehl, D. H. Hartmann, & N. PrantzosChamSpringer453Thielemann, F.-K., Diehl, R., Heger, A., Hirschi, R., & Liebendoerfer, M. 2018a, in Astrophysics and Space Science Library, Vol. 453, Astrophysics with Radioactive Isotopes, ed. R. Diehl, D. H. Hartmann, & N. Prantzos (Cham: Springer), 173-286, doi: 10.1007/978-3-319-91929-4_4 . F.-K Thielemann, J Isern, A Perego, P Ballmoos, 10.1007/s11214-018-0494-5Space Sci. Rev. 21462Thielemann, F.-K., Isern, J., Perego, A., & von Ballmoos, P. 2018b, Space Sci. Rev., 214, 62, doi: 10.1007/s11214-018-0494-5 . F.-K Thielemann, K Nomoto, M.-A Hashimoto, 10.1086/176980ApJ. 460408Thielemann, F.-K., Nomoto, K., & Hashimoto, M.-A. 
1996, ApJ, 460, 408, doi: 10.1086/176980 . F.-K Thielemann, K Nomoto, K Yokoi, A&A. 15817Thielemann, F.-K., Nomoto, K., & Yokoi, K. 1986, A&A, 158, 17 . T A Thompson, A Burrows, B S Meyer, 10.1086/323861ApJ. 562887Thompson, T. A., Burrows, A., & Meyer, B. S. 2001, ApJ, 562, 887, doi: 10.1086/323861 . E Tiesinga, P J Mohr, D B Newell, B N Taylor, 10.1103/RevModPhys.93.025010Reviews of Modern Physics. 9325010Tiesinga, E., Mohr, P. J., Newell, D. B., & Taylor, B. N. 2021, Reviews of Modern Physics, 93, 025010, doi: 10.1103/RevModPhys.93.025010 . F X Timmes, 10.1086/313257ApJS. 124241Timmes, F. X. 1999, ApJS, 124, 241, doi: 10.1086/313257 . F X Timmes, D Arnett, 10.1086/313271ApJS. 125277Timmes, F. X., & Arnett, D. 1999, ApJS, 125, 277, doi: 10.1086/313271 . F X Timmes, R D Hoffman, S E Woosley, 10.1086/313407ApJS. 129377Timmes, F. X., Hoffman, R. D., & Woosley, S. E. 2000, ApJS, 129, 377, doi: 10.1086/313407 . J W Truran, W D Arnett, A G W Cameron, 10.1139/p67-184Canadian Journal of Physics. 452315Truran, J. W., Arnett, W. D., & Cameron, A. G. W. 1967, Canadian Journal of Physics, 45, 2315, doi: 10.1139/p67-184 . J W Truran, A G W Cameron, A Gilbert, 10.1139/p66-049Canadian Journal of Physics. 44Truran, J. W., Cameron, A. G. W., & Gilbert, A. 1966a, Canadian Journal of Physics, 44, 563, doi: 10.1139/p66-049 . J W Truran, C J Hansen, A G W Cameron, A Gilbert, 10.1139/p66-011Canadian Journal of Physics. 44151Truran, J. W., Hansen, C. J., Cameron, A. G. W., & Gilbert, A. 1966b, Canadian Journal of Physics, 44, 151, doi: 10.1139/p66-011 . D Vartanyan, M S B Coleman, A Burrows, 10.1093/mnras/stab3702MNRAS. 5104689Vartanyan, D., Coleman, M. S. B., & Burrows, A. 2022, MNRAS, 510, 4689, doi: 10.1093/mnras/stab3702 . A Vasini, F Matteucci, E Spitoni, arXiv:2204.00510arXiv e-printsVasini, A., Matteucci, F., & Spitoni, E. 2022, arXiv e-prints, arXiv:2204.00510. https://arxiv.org/abs/2204.00510 . 
N Vassh, R Vogt, R Surman, 10.1088/1361-6471/ab0beaJournal of Physics G Nuclear Physics. 4665202Vassh, N., Vogt, R., Surman, R., et al. 2019, Journal of Physics G Nuclear Physics, 46, 065202, doi: 10.1088/1361-6471/ab0bea . V Viola, G Seaborg, 10.1016/0022-1902(66)80412-8Journal of Inorganic and Nuclear Chemistry. 28741Viola, V., & Seaborg, G. 1966, Journal of Inorganic and Nuclear Chemistry, 28, 741, doi: https://doi.org/10.1016/0022-1902(66)80412-8 . P Virtanen, R Gommers, T E Oliphant, 10.1038/s41592-019-0686-2Nature Methods. 17261Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2 . P Vonlanthen, T Rauscher, C Winteler, 10.1051/0004-6361/200811297A&A. 50347Vonlanthen, P., Rauscher, T., Winteler, C., et al. 2009, A&A, 503, 47, doi: 10.1051/0004-6361/200811297 . R V Wagoner, W A Fowler, F Hoyle, 10.1086/149126ApJ. 1483Wagoner, R. V., Fowler, W. A., & Hoyle, F. 1967, ApJ, 148, 3, doi: 10.1086/149126 . T P Walker, G Steigman, D N Schramm, K A Olive, H.-S Kang, 10.1086/170255ApJ. 37651Walker, T. P., Steigman, G., Schramm, D. N., Olive, K. A., & Kang, H.-S. 1991, ApJ, 376, 51, doi: 10.1086/170255 . S Wanajo, Y Hirai, N Prantzos, 10.1093/mnras/stab1655MNRAS. 5055862Wanajo, S., Hirai, Y., & Prantzos, N. 2021, MNRAS, 505, 5862, doi: 10.1093/mnras/stab1655 . S Wanajo, T Kajino, G J Mathews, K Otsuki, 10.1086/321339ApJ. 554578Wanajo, S., Kajino, T., Mathews, G. J., & Otsuki, K. 2001, ApJ, 554, 578, doi: 10.1086/321339 . S Wanajo, B Müller, H.-T Janka, A Heger, 10.3847/1538-4357/aa9d97ApJ. 85240Wanajo, S., Müller, B., Janka, H.-T., & Heger, A. 2018, ApJ, 852, 40, doi: 10.3847/1538-4357/aa9d97 . M Wiescher, J Gorres, F.-K Thielemann, H Ritter, A&A. 16056Wiescher, M., Gorres, J., Thielemann, F.-K., & Ritter, H. 1986, A&A, 160, 56 . C Winteler, Basel, SwitzerlandThe University of BaselPhD thesisWinteler, C. 2012, PhD thesis, The University of Basel, Basel, Switzerland . 
C Winteler, R Käppeli, A Perego, 10.1088/2041-8205/750/1/L22ApJL. 75022Winteler, C., Käppeli, R., Perego, A., et al. 2012, ApJL, 750, L22, doi: 10.1088/2041-8205/750/1/L22 . S E Woosley, W D Arnett, D D Clayton, 10.1086/190282ApJS. 26231Woosley, S. E., Arnett, W. D., & Clayton, D. D. 1973, ApJS, 26, 231, doi: 10.1086/190282 . S E Woosley, A Heger, 10.1086/498500ApJ. 637914Woosley, S. E., & Heger, A. 2006, ApJ, 637, 914, doi: 10.1086/498500 . S E Woosley, A Heger, T A Weaver, 10.1103/RevModPhys.74.1015Reviews of Modern Physics. 741015Woosley, S. E., Heger, A., & Weaver, T. A. 2002, Reviews of Modern Physics, 74, 1015, doi: 10.1103/RevModPhys.74.1015 . S E Woosley, R E Taam, T A Weaver, 10.1086/163926ApJ. 301601Woosley, S. E., Taam, R. E., & Weaver, T. A. 1986, ApJ, 301, 601, doi: 10.1086/163926 . S E Woosley, T A Weaver, 10.1086/192237ApJS. 101181Woosley, S. E., & Weaver, T. A. 1995, ApJS, 101, 181, doi: 10.1086/192237 . M.-R Wu, J Barnes, G Martínez-Pinedo, B D Metzger, 10.1103/PhysRevLett.122.062701PhRvL. 12262701Wu, M.-R., Barnes, J., Martínez-Pinedo, G., & Metzger, B. D. 2019, PhRvL, 122, 062701, doi: 10.1103/PhysRevLett.122.062701 . M.-R Wu, R Fernández, G Martínez-Pinedo, B D Metzger, 10.1093/mnras/stw2156MNRAS. 4632323Wu, M.-R., Fernández, R., Martínez-Pinedo, G., & Metzger, B. D. 2016a, MNRAS, 463, 2323, doi: 10.1093/mnras/stw2156 . 10.1093/mnras/stw2156MNRAS. 4632323-. 2016b, MNRAS, 463, 2323, doi: 10.1093/mnras/stw2156 . Y Xu, K Takahashi, S Goriely, 10.1016/j.nuclphysa.2013.09.007NuPhA. 91861Xu, Y., Takahashi, K., Goriely, S., et al. 2013, NuPhA, 918, 61, doi: 10.1016/j.nuclphysa.2013.09.007 . D G Yakovlev, L R Gasques, A V Afanasjev, M Beard, M Wiescher, 10.1103/PhysRevC.74.035803Phys. Rev. C. 7435803Yakovlev, D. G., Gasques, L. R., Afanasjev, A. V., Beard, M., & Wiescher, M. 2006, Phys. Rev. C, 74, 035803, doi: 10.1103/PhysRevC.74.035803 . D G Yakovlev, D A Shalybkov, Astrophys. Space Phys. Res. 7311Yakovlev, D. G., & Shalybkov, D. A. 1989, Astrophys. 
Space Phys. Res., 7, 311 . J Yang, M S Turner, G Steigman, D N Schramm, K A Olive, 10.1086/162123ApJ. 281493Yang, J., Turner, M. S., Steigman, G., Schramm, D. N., & Olive, K. A. 1984, ApJ, 281, 493, doi: 10.1086/162123 . Y Zenati, D M Siegel, B D Metzger, H B Perets, 10.1093/mnras/staa3002MNRAS. 4994097Zenati, Y., Siegel, D. M., Metzger, B. D., & Perets, H. B. 2020, MNRAS, 499, 4097, doi: 10.1093/mnras/staa3002
[]
[ "STABILITY OF KERNEL SHEAVES ASSOCIATED TO RANK ONE TORSION-FREE SHEAVES", "STABILITY OF KERNEL SHEAVES ASSOCIATED TO RANK ONE TORSION-FREE SHEAVES" ]
[ "Nick Rekuski " ]
[]
[]
We show the kernel sheaf associated to a sufficiently positive torsion-free sheaf of rank 1 is slope stable. Furthermore, we are able to give an explicit bound for "sufficiently positive." This settles a conjecture of Ein-Lazarsfeld-Mustopa. The main technical lemma is a bound on the number of global sections of a torsion-free, globally generated sheaf in terms of its rank, degree, and invariants of the variety.
null
[ "https://export.arxiv.org/pdf/2303.13459v2.pdf" ]
258,676,650
2303.13459
144c4ddeee516502aea020478b386045b87faa48
STABILITY OF KERNEL SHEAVES ASSOCIATED TO RANK ONE TORSION-FREE SHEAVES

Nick Rekuski

May 2023

We show the kernel sheaf associated to a sufficiently positive torsion-free sheaf of rank 1 is slope stable. Furthermore, we are able to give an explicit bound for "sufficiently positive." This settles a conjecture of Ein-Lazarsfeld-Mustopa. The main technical lemma is a bound on the number of global sections of a torsion-free, globally generated sheaf in terms of its rank, degree, and invariants of the variety.

Introduction

It is difficult to construct slope stable bundles with given topological invariants on higher dimensional varieties. For example, it is completely open whether there exists a slope stable, rank 2 bundle on P^7. This difficulty, in part, is because categorical constructions involving slope stable sheaves do not produce slope stable sheaves. In this article, we consider the explicit case of the kernel of the natural surjection

0 → M_L → H^0(L) ⊗ O_X → L → 0

when L is a globally generated, torsion-free sheaf of rank 1. Kernels arising via this construction are called kernel sheaves.

Over curves, Ein and Lazarsfeld showed M_L is slope stable as soon as deg(L) > 2g [EL92, Proposition 3.2]. The expectation is that a similar result holds in higher dimensions. The most general higher dimensional result is that on a smooth projective surface, or on a smooth projective higher dimensional variety of Picard rank 1, the kernel bundle associated to a sufficiently positive line bundle is slope stable [ELM13, Theorem A and Proposition C]. However, this method does not give an explicit bound on "sufficiently positive," nor does it extend to higher dimensional varieties of higher Picard rank [ELM13, Problem 2.4, Conjecture 2.6]. We are able to settle both of these problems:

Theorem A (4.3).
Suppose L is a globally generated, torsion-free sheaf of rank 1 on a smooth, projective variety X with fixed very ample divisor H. For ease of notation, let g be the sectional genus of X (with respect to H). If h^0(L) satisfies explicit inequalities, stated in Theorem 4.3, involving the binomial coefficient $\binom{H^n + n - 1}{n - 1}$, then M_L is µ_H-stable.

A priori, the necessary inequalities of Theorem A may never apply. To this end, Corollary 4.4 shows h^0(L(k)) satisfies the desired inequalities for k ≫ 0. Furthermore, in Remark 4.5 we show how to obtain an effective bound on k in terms of topological invariants and the Castelnuovo-Mumford regularity of L.

The main technical lemma to prove Theorem A is a bound on the number of global sections of a torsion-free, globally generated sheaf solely in terms of topological invariants of that sheaf. We believe this bound is of independent interest.

Date: May 12, 2023. The author was partially supported by the NSF grant DMS-2101761 during preparation of this article. The author is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award Number DE-SC0022134. This work is also partially supported by an OVPR Postdoctoral Award at Wayne State University.

Proposition B (3.5). Along with the assumptions of Theorem A, also assume E is a torsion-free, globally generated sheaf. If deg_H(E) ≤ 2g − 2 then

$h^0(\mathscr{E}) \le \left(1 + \frac{\deg_H(\mathscr{E})}{n}\right)\binom{\deg_H(\mathscr{E})/H^n + n - 1}{n - 1} + \operatorname{rank}(\mathscr{E}) - 1.$

If deg_H(E) ≥ 2g − 1 then

$h^0(\mathscr{E}) \le H^n\binom{(\deg_H(\mathscr{E}) - (g - 1))/H^n + n - 1}{n} + \left(1 + \frac{g}{2}\right)\binom{(2g - 2)/H^n + n - 1}{n - 1}\binom{(\deg_H(\mathscr{E}) - (2g - 2))/H^n + n - 2}{n - 2} + \operatorname{rank}(\mathscr{E}) - 1.$

In fact, this result holds for any torsion-free sheaf globally generated outside codimension 2 (Definition 3.3 and Remark 3.6). We discuss optimality of Proposition B in Remark 3.10.

Outline. In section 2 we recall relevant background regarding slope stable sheaves and kernel sheaves. In section 3 we prove Proposition B. In section 4 we use this bound on global sections to prove Theorem A.
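For orientation, the two regimes in Proposition B mirror the classical picture on a smooth curve (standard facts recalled here for context, not results of this paper):

```latex
% Clifford's theorem: for a special line bundle L (i.e. h^1(L) > 0) with
% 0 <= deg(L) <= 2g - 2,
h^0(\mathscr{L}) \le \frac{\deg(\mathscr{L})}{2} + 1.
% Riemann--Roch with Serre duality: if deg(L) >= 2g - 1 then h^1(L) = 0, so
h^0(\mathscr{L}) = \deg(\mathscr{L}) - g + 1.
```

The split at deg_H(E) ≤ 2g − 2 versus deg_H(E) ≥ 2g − 1 in Proposition B is the direct analogue of this dichotomy.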
Our proof of Theorem A broadly follows the same argument as [But94, Theorem 1.2], where Butler shows that on a curve the kernel bundle associated to a sufficiently positive slope stable bundle is also slope stable. However, there are new difficulties in higher dimensions that must be addressed.

Let N be a maximal destabilizing subsheaf of M_L. Since M_L is a kernel sheaf, there is an induced short exact sequence

0 → N → O_X^{⊕h^0(L)} → C → 0.

Using this short exact sequence, we find rank(N) ≤ h^0(C) − rank(C). On curves, Butler bounds h^0(C) − rank(C) in terms of µ(C) using [But94, Lemma 1.10] and the Riemann-Roch theorem. Proposition B is our higher dimensional analogue. Butler then uses his bound on rank(N) to bound µ(N) in terms of invariants of L [But94, Proposition 1.4]. For higher dimensions, our analogue is Lemma 4.2. Theorem A almost immediately follows.

Notation and Assumptions. Suppose X is a smooth, projective variety (i.e. a smooth, integral, projective scheme of finite type over an algebraically closed field) of dimension dim(X) = n. Note we allow arbitrary characteristic of the base field. Fix a very ample divisor H on X. We denote the sectional genus of X with respect to H by g; in other words, g is the genus of the smooth, integral curve H^{n−1}. By the adjunction formula, we can rewrite the sectional genus as

(1)  g = 1 + ((n − 1)/2)·H^n − (c_1(X)·H^{n−1})/2.

We use script letters (e.g. F, E, G) to denote coherent sheaves on X. We reserve script L for torsion-free sheaves of rank 1. The degree of E (with respect to H) is deg_H(E) = H^{n−1} · c_1(E), where c_1(E) is the first Chern class of E viewed as a divisor on X. If n = dim(X) = 1 then deg_H(E) is independent of H, so we drop H from the notation: deg(E) = deg_H(E). We also write codim(E) = dim(X) − dim Supp(E), and Sing(E) for the closed subscheme of X consisting of stalks where E is not free. Recall that if E is torsion-free then codim Sing(E) ≥ 2.
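As a quick sanity check of Equation (1) (our own verification, not from the source): take X = P^n with H the hyperplane class, so the curve H^{n−1} is a line and the sectional genus should vanish.

```latex
% X = P^n, H the hyperplane class: H^n = 1 and c_1(X) = (n+1)H,
% so c_1(X) \cdot H^{n-1} = n + 1. Equation (1) then gives
g = 1 + \frac{n-1}{2}\cdot 1 - \frac{n+1}{2} = 0,
% the genus of a line, as expected.
```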
If x is a real number and n is a nonnegative integer we set

(2)  $\binom{x+n}{n} = \begin{cases} \frac{(x+n)(x+n-1)\cdots(x+1)}{n!} & x \ge 0,\ n \ge 1, \\ 0 & x < 0,\ n \ge 1, \\ 1 & n = 0. \end{cases}$

If x is a nonnegative integer and n ≥ 1 then $\binom{x+n}{n}$ agrees with the usual definition of the binomial coefficient.

Generalities on Slope Stability and Kernel Sheaves

In this section we recall slope stability and kernel sheaves. We then discuss a brief history of the stability of kernel sheaves. The definitions and results of this section are well known.

Definition 2.1. Let X be a smooth, projective variety equipped with ample divisor H. For any nonzero coherent sheaf E we define its slope (with respect to H) to be

$\mu_H(\mathscr{E}) = \begin{cases} \deg_H(\mathscr{E})/\operatorname{rank}(\mathscr{E}) & \operatorname{rank}(\mathscr{E}) \neq 0, \\ +\infty & \operatorname{rank}(\mathscr{E}) = 0. \end{cases}$

If dim(X) = 1 then we drop H from the notation: µ_H = µ. We say a nonzero torsion-free sheaf E is µ_H-(semi)stable if every subsheaf 0 → F → E satisfying 0 < rank(F) < rank(E) also satisfies µ_H(F) (≤) < µ_H(E). The quantity µ_H(E) is called the slope of E (with respect to H), and so µ_H-stability is also often called slope stability.

As is well known, it suffices to only consider saturated subsheaves.

Lemma 2.2. With the assumptions of Definition 2.1, the following are equivalent.
(1) E is µ_H-(semi)stable.
(2) If 0 → F → E is a proper, nonzero subsheaf such that E/F is torsion-free then µ_H(F) (≤) < µ_H(E).

We also note the slope is well-behaved in short exact sequences of sheaves supported everywhere.

Lemma 2.3 (Seesaw Inequality, [Rud97, Lemma 3.2]). Suppose 0 → F → E → G → 0 is a short exact sequence of coherent sheaves. If rank(F), rank(E), rank(G) ≠ 0 then one of the following inequalities must hold:
• µ_H(F) < µ_H(E) < µ_H(G),
• µ_H(F) = µ_H(E) = µ_H(G), or
• µ_H(F) > µ_H(E) > µ_H(G).

We now recall kernel sheaves.

Definition 2.4. Suppose E is a globally generated, torsion-free sheaf on X, so the evaluation map H^0(E) ⊗ O_X → E is surjective. Therefore, there is a short exact sequence

0 → M_E → H^0(E) ⊗ O_X → E → 0
whose kernel M_E is called the kernel sheaf associated to E. Kernel sheaves are also called syzygy, Lazarsfeld-Mukai, or Lazarsfeld sheaves. If E is clear from context, we will often drop the subscript: M_E = M. Moreover, if M_E is locally free, then we say M_E is a kernel bundle (rather than a kernel sheaf).

Over curves, µ-stability of M_E is generally well understood. For the following, suppose C is a smooth curve of genus g.
• If L is a line bundle on C satisfying deg(L) > 2g then M_L is µ-stable [EL92, Proposition 3.2].
• If E is a µ-stable bundle on C satisfying µ(E) > 2g then M_E is µ-stable [But94, Theorem 1.2].
There are also results improving the necessary bounds in the above results [But97, Cam08], and more recent results have considered the case where E is generated by an incomplete linear system [BBPN15, BPMGNO19].

In higher dimensions, µ_H-stability of kernel sheaves is less understood. Results tend to be for specific classes of varieties or do not give effective bounds on positivity.
• Assume the base field is of characteristic 0. If d ≥ 0 then the kernel bundle associated to O_{P^n}(d) is µ_H-semistable [Fle84, Corollary 2.2].
• Suppose L is the image of V ⊗ O_X → O_{P^n}(d) for some subspace V ⊆ H^0(O_X(d)). If dim(V) > d − 1 + n n + 1 d d − 1 + n n − 1 d then M_L is µ_H-stable [Coa11, Theorem 1].
• If X ⊆ P^n is a complete intersection of multidegree (d, d, . . . , d) then the kernel bundle associated to O_X(d) is µ_H-stable [Coa11, Proposition 2].
• Assume X is an abelian (resp. K3) surface over C. If L is a globally generated, ample line bundle on X (resp. satisfying L^2 ≥ 14) then M_L is µ_L-stable [Cam12, Theorem 1, Theorem 2].
• Suppose X is a surface (resp. dim(X) ≥ 3 and Pic(X) = Z). Assume L = O_X(dH + δ) where H · δ = 0 and δ^2 ≤ 0 (resp. L = O_X(dH)). If d ≫ 0 then M_L is µ_H-stable [ELM13, Theorem A, Theorem B].
• Assume X is an abelian variety. If L is an ample line bundle on X then M_{L^{⊗d}} is µ_L-semistable for all d ≥ 2.
Furthermore, if X is simple (i.e. X contains no non-trivial abelian subvarieties) then M_{L^{⊗d}} is µ_L-stable [CL21, Theorem 1].
• Suppose X is an Enriques (resp. bielliptic) surface over a field of characteristic ≠ 2 (resp. ≠ 2, 3). If L is a globally generated, ample line bundle on X then M_L is µ_L-stable [MR22, Theorem 3.5].
• Assume X is a Del Pezzo or Hirzebruch surface. If L is globally generated and ample then M_L is µ_L-stable [TLZ22, Corollary 3.3, Corollary 3.4].

The proofs of the dim(X) ≥ 2 results broadly fall into two techniques. The first technique, due to Coandă [Coa11], is to use Green's vanishing theorem [Gre84, 3.a.1] to show kernel sheaves are cohomologically stable, which implies µ_H-stable. The second technique, due to Camere [Cam12], is to restrict the problem to curves and analyze the short exact sequence

0 → O^{⊕k} → M_L|_H → M_{L|_H} → 0,

noting that M_{L|_H} is µ-stable by [EL92, Proposition 3.2]. As described in the outline, our method for proving µ_H-stability of kernel sheaves is closer to [But94, Theorem 1.2] than to either of the techniques discussed above.

3. Bounding Global Sections

In this section we bound the number of global sections of a globally generated sheaf in terms of its rank and degree. The following binomial identity is well known and can be proven via induction on m − a. The corresponding weaker inequality is used extensively in the proof of Lemma 3.2. See Equation 2 for our convention for the binomial coefficient.

Lemma 3.1. If x is a real number and a, k, m are positive integers satisfying x − m − k ≥ 0, then

\sum_{i=a}^{m} \binom{x-i}{k} = \binom{x-a+1}{k+1} - \binom{x-m}{k+1}.

In particular, we have the inequality

\sum_{i=a}^{m} \binom{x-i}{k} \le \binom{x-a+1}{k+1}.

We first bound the global sections of a torsion-free sheaf of rank 1 (not necessarily globally generated). This result can be thought of as a higher dimensional generalization of Clifford's Theorem. The bound when dim(X) = n = 1 is classical. The bound for n ≥ 2 is seemingly new, but the argument is similar to [Lan04, Theorem 3.3].
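Both the convention of Equation (2) and the binomial identity of Lemma 3.1 above are easy to check numerically. The sketch below is ours (the function names `binom` and `lemma_3_1` are not from the paper); it checks the convention against the usual binomial coefficient on integers and verifies the identity on a grid of integer values satisfying x − m − k ≥ 0.

```python
from fractions import Fraction
from math import comb, factorial

def binom(x, n):
    """Equation (2): the generalized coefficient 'binom(x+n, n)' for rational x, integer n >= 0."""
    if n == 0:
        return Fraction(1)
    if x < 0:
        return Fraction(0)
    p = Fraction(1)
    for j in range(1, n + 1):        # product (x+1)(x+2)...(x+n)
        p *= Fraction(x) + j
    return p / factorial(n)

# For nonnegative integers x this is the usual binomial coefficient:
assert all(binom(x, n) == comb(x + n, n) for x in range(7) for n in range(7))

# Lemma 3.1, here for integer x with x - m - k >= 0:
#   sum_{i=a}^{m} C(x-i, k) = C(x-a+1, k+1) - C(x-m, k+1).
def lemma_3_1(x, a, k, m):
    lhs = sum(comb(x - i, k) for i in range(a, m + 1))
    rhs = comb(x - a + 1, k + 1) - comb(x - m, k + 1)
    return lhs == rhs

assert all(lemma_3_1(x, a, k, m)
           for x in range(1, 14)
           for k in range(1, 5)
           for m in range(1, 8)
           for a in range(1, m + 1)
           if x - m - k >= 0)
print("Equation (2) convention and Lemma 3.1 verified on integer grids")
```

Since, under the hypothesis x − m − k ≥ 0, both sides of the identity are polynomials in x of degree k + 1, agreement at infinitely many integers already implies the identity for real x.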
The bound in the case deg_H(L) ≥ 2g − 1 seems especially complicated, but it is written in this way to lend itself to induction on n; in Proposition 3.5 we simplify this bound.

Lemma 3.2. Suppose X is a smooth, projective variety equipped with very ample divisor H. For ease of notation, let g be the sectional genus of X (with respect to H). Furthermore, assume L is a torsion-free sheaf of rank 1 on X. If deg_H(L) ≤ 2g − 2 then

h^0(L) \le \frac{H^n}{2}\binom{\frac{\deg_H(L)}{H^n}+n-1}{n} + \binom{\frac{\deg_H(L)}{H^n}+n-1}{n-1}.

If deg_H(L) ≥ 2g − 1 then

h^0(L) \le H^n\binom{\frac{\deg_H(L)-(g-1)}{H^n}+n-1}{n} + \sum_{i=0}^{n-2}\frac{n-i+g-1}{n-i}\binom{\frac{\deg_H(L)-(2g-2)}{H^n}+i-1}{i}\binom{\frac{2g-2}{H^n}+n-1-i}{n-1-i}.

Proof. We proceed by induction on n = dim(X). If dim(X) = 1 then the deg(L) ≤ 2g − 2 bound follows from Clifford's theorem. If deg(L) ≥ 2g − 1 then, by Serre duality, h^1(L) = 0 and so the result follows by the Riemann-Roch theorem.

We proceed with the inductive step. Since L is torsion-free, for a general hyperplane H we have the short exact sequence

0 → L(−1) → L → L|_H → 0,

so h^0(L) ≤ h^0(L|_H) + h^0(L(−1)). Continuing this process on h^0(L(−1)) gives

h^0(L) \le \sum_{i=0}^{\lfloor \deg_H(L)/H^n \rfloor} h^0(L(-i)|_H),

where ⌊x⌋ = max{n ∈ Z | n ≤ x}. If deg_H(L) ≤ 2g − 2, by the inductive hypothesis and Lemma 3.1, we find

\sum_{i=0}^{\lfloor \deg_H(L)/H^n \rfloor} h^0(L(-i)|_H) \le \sum_{i=0}^{\lfloor \deg_H(L)/H^n \rfloor} \left[ \frac{H^n}{2}\binom{\frac{\deg_H(L)}{H^n}-i+n-2}{n-1} + \binom{\frac{\deg_H(L)}{H^n}-i+n-2}{n-2} \right] \le \frac{H^n}{2}\binom{\frac{\deg_H(L)}{H^n}+n-1}{n} + \binom{\frac{\deg_H(L)}{H^n}+n-1}{n-1},

as desired. If deg_H(L) ≥ 2g − 1 then, by the same argument as above,

h^0(L) \le \sum_{i=0}^{\lfloor \deg_H(L)/H^n \rfloor} h^0(L(-i)|_H) = \sum_{i=0}^{\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil - 1} h^0(L(-i)|_H) + \sum_{i=\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil}^{\lfloor \deg_H(L)/H^n \rfloor} h^0(L(-i)|_H),

where ⌈x⌉ = min{n ∈ Z | n ≥ x}. We consider each summand separately. In the first summand, i ≤ ⌈(deg_H(L)−(2g−2))/H^n⌉ − 1, so deg_H(L(−i)|_H) > 2g − 2.
Therefore, by the inductive hypothesis and Lemma 3.1,

\sum_{i=0}^{\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil - 1} h^0(L(-i)|_H) \le \sum_{i=0}^{\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil - 1} H^n\binom{\frac{\deg_H(L)-(g-1)}{H^n}-i+n-2}{n-1} + \sum_{i=0}^{\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil - 1} \sum_{j=0}^{n-3} \frac{n-1-j+g-1}{n-1-j}\binom{\frac{\deg_H(L)-(2g-2)}{H^n}-i+j-1}{j}\binom{\frac{2g-2}{H^n}+n-2-j}{n-2-j}

\le H^n\binom{\frac{\deg_H(L)-(g-1)}{H^n}+n-1}{n} + \sum_{j=0}^{n-3} \frac{n-1-j+g-1}{n-1-j}\binom{\frac{\deg_H(L)-(2g-2)}{H^n}+j}{j+1}\binom{\frac{2g-2}{H^n}+n-2-j}{n-2-j}

= H^n\binom{\frac{\deg_H(L)-(g-1)}{H^n}+n-1}{n} + \sum_{j=1}^{n-2} \frac{n-j+g-1}{n-j}\binom{\frac{\deg_H(L)-(2g-2)}{H^n}+j-1}{j}\binom{\frac{2g-2}{H^n}+n-1-j}{n-1-j}.

We now consider the second summand. We will see that this summand only contributes to the "j = 0" term in the above formula. In the second summand, i ≥ ⌈(deg_H(L)−(2g−2))/H^n⌉, so deg_H(L(−i)|_H) ≤ 2g − 2. Thus, by the inductive hypothesis,

\sum_{i=\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil}^{\lfloor \deg_H(L)/H^n \rfloor} h^0(L(-i)|_H) \le \sum_{i=\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil}^{\lfloor \deg_H(L)/H^n \rfloor} \left[ \frac{H^n}{2}\binom{\frac{\deg_H(L)}{H^n}-i+n-2}{n-1} + \binom{\frac{\deg_H(L)}{H^n}-i+n-2}{n-2} \right] \le \frac{H^n}{2}\binom{\frac{\deg_H(L)}{H^n}-\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil+n-1}{n} + \binom{\frac{\deg_H(L)}{H^n}-\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil+n-1}{n-1},

where the second inequality follows from Lemma 3.1. Since \binom{x+n-1}{n} is increasing for x > 0, we find

\sum_{i=\lceil \frac{\deg_H(L)-(2g-2)}{H^n} \rceil}^{\lfloor \deg_H(L)/H^n \rfloor} h^0(L(-i)|_H) \le \frac{H^n}{2}\binom{\frac{2g-2}{H^n}+n-1}{n} + \binom{\frac{2g-2}{H^n}+n-1}{n-1} = \frac{n+g-1}{n}\binom{\frac{2g-2}{H^n}+n-1}{n-1}.

Combining our inequalities for each summand gives the claimed bound for deg_H(L) ≥ 2g − 1. □

We recall the well known result that the existence of an injection 0 → O_X^{⊕rank(E)} → E is equivalent to E being globally generated outside codimension 1. An immediate corollary is that if E is globally generated then there exists an injection O_X^{⊕rank(E)} → E. This injection will allow us to extend Lemma 3.2 to torsion-free, globally generated sheaves of higher rank.

Definition 3.3. Suppose E is a coherent sheaf on X. Let C be the cokernel of the natural morphism H^0(E) ⊗ O_X → E.
We say E is globally generated outside codimension d if codim(C) ≥ d.

Lemma 3.4. Suppose E is a coherent sheaf on X. The sheaf E is globally generated outside codimension 1 if and only if rank(E) linearly independent global sections of E induce an injection O_X^{⊕rank(E)} → E.

Proof. First suppose E is globally generated outside codimension 1. Then there is an exact sequence

0 → M → H^0(E) ⊗ O_X → E → C → 0

where codim(C) ≥ 1. By additivity and positivity of the rank, h^0(E) ≥ rank(E). In particular, we can choose rank(E) linearly independent global sections of E. With this in mind, for ease of notation, let f denote the composition O_X^{⊕rank(E)} → H^0(E) ⊗ O_X → E, where the first morphism is given by choosing rank(E) linearly independent sections of E. Therefore, for all x ∈ X we have the following exact sequence of coherent O_{X,x}-modules:

0 → Ker(f)_x → O_{X,x}^{⊕rank(E)} →^{f_x} E_x → Coker(f)_x → 0.

By definition, f_x is an isomorphism for all x ∈ X \ (Sing(E) ∪ Supp(Coker(f))). Since Coker(f) is supported in codimension ≥ 1 and codim(Sing(E)) ≥ 1 (for E is a coherent sheaf), f is an isomorphism outside a closed subset of codimension ≥ 1. In other words, Ker(f) is supported in codimension ≥ 1. Since O_X^{⊕rank(E)} is torsion-free, Ker(f) is either torsion-free or 0. Thus, we must have Ker(f) = 0. Hence, O_X^{⊕rank(E)} → E is injective, as claimed.

For the converse, assume there is an injection f : O_X^{⊕rank(E)} → E. The morphism f must factor through the natural morphism H^0(E) ⊗ O_X → E. Therefore, by the universal property of the cokernel, Coker(H^0(E) ⊗ O_X → E) ⊆ Coker(f). By additivity of rank, rank(Coker(f)) = 0, so codim(Coker(f)) ≥ 1, which implies codim(Coker(H^0(E) ⊗ O_X → E)) ≥ 1 as well. □

We now bound the global sections of a globally generated, torsion-free sheaf. In fact, our bound holds for sheaves globally generated outside codimension 2.
At the same time we simplify the bounds from Lemma 3.2 so that they lend themselves to Lemma 4.2. Later we will see Proposition 3.5 is optimal (Remark 3.10).

Proposition 3.5. Assume X is a smooth, projective variety equipped with very ample divisor H. Let g be the genus of the smooth integral curve H^{n−1}. Assume E is a torsion-free sheaf globally generated outside codimension 2 on X with rank(E) ≥ 2. If deg_H(E) ≤ 2g − 2 then

h^0(E) \le \left(\frac{\deg_H(E)}{2n}+1\right)\binom{\frac{\deg_H(E)}{H^n}+n-1}{n-1} + \operatorname{rank}(E) - 1.

If deg_H(E) ≥ 2g − 1 then

h^0(E) \le H^n\binom{\frac{\deg_H(E)-(g-1)}{H^n}+n-1}{n} + \operatorname{rank}(E) - 1 + \frac{(n-1)(n+g-1)}{n}\binom{\frac{\deg_H(E)-(2g-2)}{H^n}+n-3}{n-2}\binom{\frac{2g-2}{H^n}+n-1}{n-1}.

Proof. Since E is torsion-free and globally generated outside codimension 2, by Lemma 3.4, we can choose rank(E) − 1 general global sections to obtain the following short exact sequence:

(3)  0 → O_X^{⊕rank(E)−1} → E → L → 0.

We claim L is torsion-free. As an aside, if E is globally generated and reflexive this claim is well known. Applying Hom(−, O_X) to the short exact sequence (3) shows

codim(Ext^q(L, O_X)) = codim(Ext^q(E, O_X)) ≥ q + 1 for all q ≥ 2,

where the inequality holds because E is torsion-free. It remains to show codim(Ext^1(L, O_X)) ≥ 2. For ease of notation, write C for the cokernel of the natural morphism H^0(E) ⊗ O_X → E. Since E is globally generated outside codimension 2, by definition, codim(C) ≥ 2. Set

D_2 = { x ∈ X \ (Sing(E) ∪ Supp(C)) | dim_k span{ s_1(x), s_2(x), . . . , s_{rank(E)−1}(x) } ≤ rank(E) − 2 },

where s_1, . . . , s_{rank(E)−1} are the linearly independent sections. By construction, L is free of rank 1 on each stalk of X \ D_2. By [Kle74, Remark 6], D_2 has codimension codim(D_2) ≥ 2 in X \ (Sing(E) ∪ Supp(C)). Moreover, since E is torsion-free and globally generated outside codimension 2, codim(Sing(E) ∪ Supp(C)) ≥ 2. Therefore, we find codim(Ext^1(L, O_X)) ≥ 2. Since codim(Ext^q(L, O_X)) ≥ q + 1 for all q ≥ 1, by [HL10, Proposition 1.1.10], L is torsion-free.
Continuing with the short exact sequence (3), we find h^0(E) ≤ rank(E) − 1 + h^0(L). Moreover, by additivity, deg_H(L) = deg_H(E). Thus, since L is torsion-free of rank 1, h^0(L) is bounded by the quantity in Lemma 3.2. The remainder of the argument involves simplifying these quantities. If deg_H(E) ≤ 2g − 2, by Lemma 3.2,

h^0(E) \le \frac{H^n}{2}\binom{\frac{\deg_H(L)}{H^n}+n-1}{n} + \binom{\frac{\deg_H(L)}{H^n}+n-1}{n-1} + \operatorname{rank}(E) - 1 = \left(\frac{\deg_H(E)}{2n}+1\right)\binom{\frac{\deg_H(E)}{H^n}+n-1}{n-1} + \operatorname{rank}(E) - 1,

as claimed. If deg_H(E) ≥ 2g − 1, by Lemma 3.2,

h^0(E) \le H^n\binom{\frac{\deg_H(E)-(g-1)}{H^n}+n-1}{n} + \operatorname{rank}(E) - 1 + \sum_{i=0}^{n-2}\frac{n-i+g-1}{n-i}\binom{\frac{\deg_H(E)-(2g-2)}{H^n}+i-1}{i}\binom{\frac{2g-2}{H^n}+n-1-i}{n-1-i} \le H^n\binom{\frac{\deg_H(E)-(g-1)}{H^n}+n-1}{n} + \operatorname{rank}(E) - 1 + \frac{(n-1)(n+g-1)}{n}\binom{\frac{\deg_H(E)-(2g-2)}{H^n}+n-3}{n-2}\binom{\frac{2g-2}{H^n}+n-1}{n-1},

where the second inequality is because \binom{x+n}{n} is increasing in n, as desired. □

Remark 3.6. A natural class of examples of torsion-free sheaves that are globally generated outside codimension 2 but usually not globally generated are reflexive hulls of globally generated, torsion-free sheaves. Explicitly, if E is a torsion-free sheaf globally generated outside codimension 2 then E^{∨∨} is also globally generated outside codimension 2. To see this, look at the commutative diagram induced from the composition H^0(E) ⊗ O_X → E → E^{∨∨}.

We frequently reference the bounds above, so we introduce the following notation.

Definition 3.7. Fix n, d ∈ Z_{>0}. For ease of notation, we define

A_H(d) = \left(\frac{d}{2n}+1\right)\binom{\frac{d}{H^n}+n-1}{n-1} - 1

and

B_H(d) = H^n\binom{\frac{d-(g-1)}{H^n}+n-1}{n} - 1 + \frac{(n-1)(n+g-1)}{n}\binom{\frac{d-(2g-2)}{H^n}+n-3}{n-2}\binom{\frac{2g-2}{H^n}+n-1}{n-1}.

Note that A_H(deg_H(E)) + rank(E) and B_H(deg_H(E)) + rank(E) are exactly the bounds appearing in Proposition 3.5.

We note an asymptotic formula for B_H(d + kH^n) as k ≫ 0:

Remark 3.8. As a polynomial in the variable k,

B_H(d + kH^n) = \frac{H^n k^n}{n!} + \left(1 + d - g + \frac{n-1}{2}H^n\right)\frac{k^{n-1}}{(n-1)!} + \cdots.
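The functions of Definition 3.7, and the leading term of the expansion just stated, can be checked with exact rational arithmetic. This is a sketch under our reconstruction of the formulas; the names `binom`, `A`, and `B` are ours, and the two checks are (i) that A_H(d) + 1 equals the Lemma 3.2 shape used in the proof of Proposition 3.5, and (ii) that k ↦ B_H(d + kH^n) has leading coefficient H^n/n!, via an exact n-th finite difference.

```python
from fractions import Fraction
from math import comb, factorial

def binom(x, n):
    """Equation (2): the generalized coefficient 'binom(x+n, n)' (x rational, n >= 0)."""
    if n == 0:
        return Fraction(1)
    if x < 0:
        return Fraction(0)
    p = Fraction(1)
    for j in range(1, n + 1):
        p *= Fraction(x) + j
    return p / factorial(n)

def A(d, n, Hn):
    """A_H(d) of Definition 3.7 (depends on n = dim X and H^n)."""
    return (Fraction(d, 2 * n) + 1) * binom(Fraction(d, Hn), n - 1) - 1

def B(d, n, Hn, g):
    """B_H(d) of Definition 3.7 (depends also on the sectional genus g)."""
    return (Hn * binom(Fraction(d - (g - 1), Hn) - 1, n) - 1
            + Fraction((n - 1) * (n + g - 1), n)
            * binom(Fraction(d - (2 * g - 2), Hn) - 1, n - 2)
            * binom(Fraction(2 * g - 2, Hn), n - 1))

# (i) A_H(d) + 1 equals (H^n/2) C(d/H^n + n-1, n) + C(d/H^n + n-1, n-1):
for n in range(2, 5):
    for Hn in (1, 2, 3):
        for d in range(Hn, 6 * Hn):
            assert A(d, n, Hn) + 1 == (Fraction(Hn, 2) * binom(Fraction(d, Hn) - 1, n)
                                       + binom(Fraction(d, Hn), n - 1))

# (ii) k -> B_H(d + k H^n) is a degree-n polynomial in k with leading
# coefficient H^n/n!, so its n-th finite difference equals H^n exactly:
for (n, Hn, g, d) in [(2, 1, 2, 20), (3, 2, 3, 40), (4, 1, 5, 60)]:
    nth_diff = sum((-1) ** (n - j) * comb(n, j) * B(d + j * Hn, n, Hn, g)
                   for j in range(n + 1))
    assert nth_diff == Hn
print("Definition 3.7 checks passed")
```

Check (i) is exactly the identity \binom{y+n-1}{n} = (y/n)\binom{y+n-1}{n-1} with y = d/H^n, which is how the two shapes of the bound in Proposition 3.5 are reconciled.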
By Equation 1, it follows that

B_H(d + kH^n) = \frac{H^n k^n}{n!} + \left(d + \frac{c_1(X)\cdot H^{n-1}}{2}\right)\frac{k^{n-1}}{(n-1)!} + \cdots.

An easy corollary of Proposition 3.5 is that a globally generated, torsion-free sheaf of degree 0 must be trivial. This result is standard when E is reflexive. The author expects the result is known for torsion-free sheaves but could only find a reference over P^n [OSS11, Chapter 2, Lemma 1.3.3]. Either way, the argument is an easy corollary of Proposition 3.5 and we will use this result later.

Lemma 3.9. Suppose E is a globally generated, torsion-free sheaf. If deg_H(E) = 0 then E ≅ O_X^{⊕rank(E)}.

Proof. By Proposition 3.5, h^0(E) ≤ rank(E). Therefore, since E is globally generated, there is a surjection O_X^{⊕rank(E)} → E → 0. By Lemma 3.4 this morphism is also injective. In other words, E = O_X^{⊕rank(E)}, as desired. □

Remark 3.10. We end this section by noting when Proposition 3.5 is close to optimal or not. We also note the assumptions of torsion-free and globally generated are both necessary.

(1) Suppose L is a torsion-free sheaf of rank 1. By the Hirzebruch-Riemann-Roch theorem and Remark 3.8, for k ≫ 0,

B_H(\deg_H(L(k))) - h^0(L(k)) \le F(k),

where F(k) is a polynomial in the variable k of degree n − 2. More generally, the same bound holds for coherent sheaves of the form E = O_X^{⊕rank(E)−1} ⊕ L(k).

(2) As noted in Lemma 3.9, if deg_H(E) = 0 then the bound is optimal. Similarly, for X = P^n our bound is optimal for all ranks and degrees. Specifically, the vector bundle E ≅ O_X^{⊕r−1} ⊕ O_X(d) gives equality in Proposition 3.5. Last, if X is a smooth Del Pezzo surface then E = O_X^{⊕r−1} ⊕ O_X(−dK_X) also gives equality in Proposition 3.5.

(3) If E is torsion-free (but not necessarily globally generated) then

h^0(E) \le \operatorname{rank}(E)\,H^n\binom{\frac{\mu^+_H(E)}{H^n}+n-1+\sum_{i=1}^{\operatorname{rank}(E)}\frac{1}{i}}{n},

as shown by Langer [Lan04, Theorem 3.3]. If µ^+_H(E) − µ_H(E) is small then Langer's bound is stronger than Proposition 3.5.
That is to say, Langer's bound is better when E is "close" to being µ_H-semistable, while Proposition 3.5 is better when E is "far" from being µ_H-semistable. This is illustrated in the first two items of this remark. In either case, Langer's bound has a complicated dependence on rank(E) and µ^+_H(E), while the dependence on rank(E) in Proposition 3.5 is much simpler.

(4) If E is torsion-free (but not necessarily globally generated) on a smooth, integral curve of genus g with 0 ≤ µ^−_H(E) ≤ µ^+_H(E) ≤ 2g − 2, then

h^0(E) \le \frac{\deg(E)}{2} + \operatorname{rank}(E).

To see this, induct on the length of a Harder-Narasimhan filtration of E. The base case is due to Xiao [BPGN97, Theorem 2.1]. This bound is almost the same as Lemma 3.2 except the requirement µ^+_H(E) ≤ 2g − 2 is weaker than the requirement deg_H(E) ≤ 2g − 2. Either way, the dependence on µ^+_H(E) does not lend itself to the induction in Lemma 3.2.

(5) Proposition 3.5 is false for globally generated, torsion sheaves. For example, consider O_Y(d) as a coherent sheaf on X where codim(Y) = 1. Then rank(O_Y(d)) = 0 and deg_H(O_Y(d)) = 1 but h^0(O_Y(d)) = d.

(6) Proposition 3.5 is false for torsion-free sheaves that are not globally generated. In fact, the number of global sections of an arbitrary torsion-free sheaf cannot be bounded solely in terms of topological invariants. For example, if X = P^1 then the locally-free sheaves O_X(a) ⊕ O_X(−a) all have the same topological type [OSS11, Introduction of I.6.1], so h^0(E) cannot be bounded solely in terms of topological invariants of E.

4. Slope Stability of Kernel Sheaves

The following lemma shows that if 0 → N → M is a saturated subsheaf of a kernel sheaf M then 0 > deg(N) > deg(M).

Lemma 4.1. Suppose M is a kernel sheaf associated to a globally generated, torsion-free sheaf E. Suppose 0 → N → M is a nonzero, proper, saturated subsheaf satisfying µ_H(N) ≥ µ_H(M). Then:
(1) The quotient O_X^{⊕h^0(E)}/N induced from the composition N → M → O_X^{⊕h^0(E)} is torsion-free.
(2) deg_H(M) < deg_H(N) < 0.

Proof. (1) We have a commutative diagram with exact rows and columns, whose rows are

0 → N → O_X^{⊕h^0(E)} → O_X^{⊕h^0(E)}/N → 0  and  0 → M → O_X^{⊕h^0(E)} → E → 0,

whose left column is N → M with cokernel M/N, and in which the arrow O_X^{⊕h^0(E)}/N → E exists by the universal property of the cokernel. Therefore, by the Snake Lemma, we obtain the short exact sequence

0 → M/N → O_X^{⊕h^0(E)}/N → E → 0.

Since N is a saturated subsheaf of M, by definition, M/N is torsion-free. Since M/N and E are torsion-free, O_X^{⊕h^0(E)}/N is also torsion-free, as claimed.

(2) There is an injection 0 → N → O_X^{⊕h^0(E)}, so deg_H(N) ≤ 0. If deg_H(N) = 0 then, by part (1), O_X^{⊕h^0(E)}/N is a globally generated, torsion-free sheaf of degree 0. Therefore, by Lemma 3.9, O_X^{⊕h^0(E)}/N ≅ O_X^{⊕h^0(E)−rank(N)}, and taking global sections of the first row gives

(4)  h^0(E) ≤ h^0(N) + (h^0(E) − rank(N)).

However, since M is a kernel sheaf, H^0(M) = 0 and so H^0(N) = 0 as well. Thus, Equation 4 implies rank(N) = 0. However, since N is nonzero and M is torsion-free, this is not possible. Therefore, we must have deg_H(N) < 0, as claimed. Last, since µ_H(N) ≥ µ_H(M) and M/N is torsion-free, by the seesaw inequality (Lemma 2.3), µ_H(M/N) ≤ µ_H(M). In particular, deg_H(M/N) < 0, so deg_H(M) < deg_H(N), as desired. □

To prove Theorem A, we first use Proposition 3.5 to show that if N → M is a maximal destabilizing subsheaf then µ_H(N) is bounded in terms of invariants of N, H, and X. We then aim to use Lemma 4.1 to bound µ_H(N) in terms of invariants of M and X. The following lemma is a technical step needed to achieve this bound. See Definition 3.7 for a reminder of the functions A_H(d) and B_H(d).

Lemma 4.2. The function −d/A_H(d) is increasing for d ∈ (0, ∞). Similarly, the function −d/B_H(d) is increasing for d ∈ (2g − 2, ∞).

Proof. By definition, A_H(d) is a polynomial in d whose coefficients are all nonnegative and whose constant coefficient is 0, so A_H(d)/d is a polynomial in d with nonnegative coefficients. Therefore, A_H(d)/d is increasing for d ∈ (0, ∞). It follows that −d/A_H(d) is also increasing for d ∈ (0, ∞). We now consider B_H(d). For n ≥ 3,

B_H(d) = F(d − (g − 1)) + G(d − (2g − 2)) − 1,

where F (resp. G) is a single variable polynomial of degree n (resp. n − 2) whose coefficients are all nonnegative. Therefore, for d > 2g − 2, B_H(d)/d is increasing. For n = 1, 2 the result can be checked by directly calculating the derivative of B_H(d)/d. It follows that −d/B_H(d) is also increasing for d > 2g − 2, as desired. □

Theorem 4.3. Assume X is a smooth, projective variety with very ample divisor H. Let g be the sectional genus of X with respect to H. Suppose L is a globally generated, torsion-free sheaf of rank 1 on X with associated kernel sheaf M. If

h^0(L) - 1 > \max\left\{ \frac{\deg_H(L)}{2g-2}A_H(2g-2),\ \frac{\deg_H(L)}{\deg_H(L)-1}B_H(\deg_H(L)-1) \right\}

then M is µ_H-stable.

Proof. We proceed by contradiction, so suppose 0 → N → M is a nonzero, proper, saturated subsheaf satisfying µ_H(N) ≥ µ_H(M). The composition N → M → O_X^{⊕h^0(L)} induces a short exact sequence

0 → N → O_X^{⊕h^0(L)} → C → 0.

By Lemma 4.1.(1), C is torsion-free. Furthermore, since M is a kernel sheaf, H^0(M) = 0 and so H^0(N) = 0. Therefore, taking cohomology of the above short exact sequence shows h^0(L) ≤ h^0(C). In other words,

rank(N) = h^0(L) − rank(C) ≤ h^0(C) − rank(C).

Since C is a globally generated, torsion-free sheaf on X, by Proposition 3.5, if deg_H(C) ≤ 2g − 2 then rank(N) ≤ A_H(deg_H(C)), and if deg_H(C) > 2g − 2 then rank(N) ≤ B_H(deg_H(C)). Furthermore, by Lemma 4.1.(2),

deg_H(C) = − deg_H(N) < − deg_H(M) = deg_H(L).

Hence, by Lemma 4.2, if deg_H(C) ≤ 2g − 2 then

\mu_H(N) = \frac{-\deg_H(C)}{\operatorname{rank}(N)} \le \frac{-\deg_H(C)}{A_H(\deg_H(C))} \le \frac{-(2g-2)}{A_H(2g-2)},

and if deg_H(C) > 2g − 2 then

\mu_H(N) = \frac{-\deg_H(C)}{\operatorname{rank}(N)} \le \frac{-\deg_H(C)}{B_H(\deg_H(C))} \le \frac{-(\deg_H(L)-1)}{B_H(\deg_H(L)-1)}.

By the assumed bound on h^0(L) − 1, if deg_H(C) ≤ 2g − 2 then

\mu_H(N) \le \frac{-(2g-2)}{A_H(2g-2)} < \frac{-\deg_H(L)}{h^0(L)-1} = \mu_H(M),

and if deg_H(C) > 2g − 2 then

\mu_H(N) \le \frac{-(\deg_H(L)-1)}{B_H(\deg_H(L)-1)} < \frac{-\deg_H(L)}{h^0(L)-1} = \mu_H(M).

Since µ_H(N) ≥ µ_H(M), we have reached a contradiction! Hence, M is µ_H-stable, as desired. □

It is clear from the argument that if we have equality rather than inequality in either of the bounds in Theorem 4.3 then M is only µ_H-semistable.

We show the bounds of Theorem 4.3 apply for any sufficiently positive twist. This gives a new proof of [ELM13, Theorem A] and generalizes [ELM13, Proposition C] to the case of arbitrary Picard group (which proves [ELM13, Conjecture 2.6]).

Corollary 4.4. With the assumptions of Theorem 4.3, let M_k be the kernel sheaf associated to L ⊗ O_X(kH). If dim(X) ≥ 2 and k ≫ 0 then M_k is µ_H-stable.

Proof. By Serre's theorem, if k ≫ 0 then H^i(L(k)) = 0 for all i > 0. Therefore, by the Hirzebruch-Riemann-Roch theorem,

h^0(L(k)) - 1 = \frac{H^n k^n}{n!} + \left(\deg_H(L) + \frac{c_1(X)\cdot H^{n-1}}{2}\right)\frac{k^{n-1}}{(n-1)!} + \cdots

for all k ≫ 0, where the unwritten term is a polynomial in k of degree n − 2. Since the quantity \frac{\deg_H(L(k))}{2g-2}A_H(2g-2) grows linearly in k, the first bound of Theorem 4.3 is satisfied for k ≫ 0. On the other hand, by Remark 3.8,

\frac{\deg_H(L(k))}{\deg_H(L(k))-1}B_H(\deg_H(L(k))-1) = \frac{H^n k^n}{n!} + \alpha k^{n-1} + \cdots,

where α is a constant independent of k and the unwritten terms are polynomials in k of smaller degree. Therefore, since n ≥ 2, we find that the second bound of Theorem 4.3 is satisfied for all k ≫ 0. Therefore, M_k is µ_H-stable for all k ≫ 0, as desired. □

We remark on how to find explicit bounds for Corollary 4.4. In practice, the arithmetic is too finicky to do by hand, but it is easy using a computer algebra program. This solves [ELM13, Problem 2.4]. Since h^0(L(k))(deg_H(L(k)) − 1) is a polynomial in k, finding explicit k such that Corollary 4.4 holds is equivalent to finding the largest real zero of the degree n + 1 polynomial associated with the above inequalities. For small n this can be found exactly. For any n, such a bound on k can be found using Cauchy's bound on real zeros.

Theorem 4.3 does not actually use the assumption rank(L) = 1. However, the author is unable to give an example of a higher rank sheaf satisfying the bound of Theorem 4.3. In fact, by Remark 3.8, B_H(\deg_H(E(k))) grows like \frac{H^n \operatorname{rank}(E)^n k^n}{n!}, while

h^0(E(k)) = \frac{H^n \operatorname{rank}(E) k^n}{n!} + \cdots

for all k ≫ 0. In other words, the argument of Corollary 4.4 fantastically fails when rank(E) ≥ 2 and dim(X) ≥ 2. With this in mind, it is natural to ask whether Theorem 4.3 extends to higher ranks. In other words, is the kernel sheaf associated to a sufficiently positive, globally generated, torsion-free, µ_H-stable sheaf also µ_H-stable? As noted in the introduction, [But94, Theorem 1.2] gives such a result for curves. As far as the author knows, there are no such results in higher dimensions. Furthermore, as stated in Remark 3.10, the bound on h^0(E) from Proposition 3.5 is close to optimal. For these reasons, if we were to try to generalize the method of Theorem 4.3 to higher ranks, we would need more control over the quotient sheaf O_X^{⊕h^0(E)}/N. Morally, an improved analysis of this quotient is how Butler is able to obtain a higher rank result for curves [But94, Lemma 1.9]. A similar analysis is done in [Tri10] in the case of line bundles on P^n. Either way, in an upcoming article, the author gives partial results for stability of kernel sheaves associated to µ_H-stable, globally generated, torsion-free sheaves (of arbitrary rank) on Del Pezzo surfaces. That article uses a completely different method based on Bridgeland stability.

Acknowledgments. The author is thankful to Rajesh Kulkarni and Yusuf Mustopa for many useful discussions. The author is also thankful to Federico Caucci, Peter Newstead, and Shitan Xu for comments on an earlier draft of this paper.

References

[BBPN15] U. N. Bhosle, L. Brambila-Paz, and P. E. Newstead, On linear series and a conjecture of D. C. Butler, Internat. J. Math. 26 (2015), no. 2, 1550007, DOI 10.1142/S0129167X1550007X. MR3319666
[BPGN97] L. Brambila-Paz, I. Grzegorczyk, and P. E. Newstead, Geography of Brill-Noether loci for small slopes, J. Algebraic Geom. 6 (1997), no. 4, 645-669. MR1487229
[BPMGNO19] L. Brambila-Paz, O. Mata-Gutiérrez, P. E. Newstead, and Angela Ortega, Generated coherent systems and a conjecture of D. C. Butler, Internat. J. Math. 30 (2019), no. 5, 1950024, DOI 10.1142/S0129167X19500241. MR3961440
[But94] David C. Butler, Normal generation of vector bundles over a curve, J. Differential Geom. 39 (1994), no. 1, 1-34. MR1258911
[But97] David C. Butler, Birational maps of moduli of Brill-Noether pairs (1997), preprint, alg-geom/9705009.
[Cam08] Chiara Camere, About the stability of the tangent bundle restricted to a curve, C. R. Math. Acad. Sci. Paris 346 (2008), no.
7-8, 421-426, DOI 10.1016/j.crma.2008.02.006 (English, with English and French summaries). MR2417562
[Cam12] Chiara Camere, About the stability of the tangent bundle of P^n restricted to a surface, Math. Z. 271 (2012), no. 1-2, 499-507, DOI 10.1007/s00209-011-0874-y. MR2917155
[CL21] Federico Caucci and Martí Lahoz, Stability of syzygy bundles on abelian varieties, Bull. Lond. Math. Soc. 53 (2021), no. 4, 1030-1036, DOI 10.1112/blms.12481. MR4311817
[Coa11] Iustin Coandă, On the stability of syzygy bundles, Internat. J. Math. 22 (2011), no. 4, 515-534, DOI 10.1142/S0129167X1100688X. MR2794459
[EL92] Lawrence Ein and Robert Lazarsfeld, Stability and restrictions of Picard bundles, with an application to the normal bundles of elliptic curves, Complex projective geometry (Trieste, 1989/Bergen, 1989), London Math. Soc. Lecture Note Ser., vol. 179, Cambridge Univ. Press, Cambridge, 1992, pp. 149-156, DOI 10.1017/CBO9780511662652.011. MR1201380
[ELM13] Lawrence Ein, Robert Lazarsfeld, and Yusuf Mustopa, Stability of syzygy bundles on an algebraic surface, Math. Res. Lett. 20 (2013), no. 1, 73-80, DOI 10.4310/MRL.2013.v20.n1.a7. MR3126723
[Fle84] Hubert Flenner, Restrictions of semistable bundles on projective varieties, Comment. Math. Helv. 59 (1984), no. 4, 635-650, DOI 10.1007/BF02566370. MR780080
[Gre84] Mark L. Green, Koszul cohomology and the geometry of projective varieties, J. Differential Geom. 19 (1984), no. 1, 125-171. MR739785
[HL10] Daniel Huybrechts and Manfred Lehn, The geometry of moduli spaces of sheaves, 2nd ed., Cambridge Mathematical Library, Cambridge University Press, Cambridge, 2010. MR2665168
[Kle74] Steven L. Kleiman, The transversality of a general translate, Compositio Math. 28 (1974), 287-297. MR360616
[Lan04] Adrian Langer, Moduli spaces of sheaves in mixed characteristic, Duke Math. J. 124 (2004), no. 3, 571-586, DOI 10.1215/S0012-7094-04-12434-0. MR2085175
[Laz04] Robert Lazarsfeld, Positivity in algebraic geometry. I. Classical setting: line bundles and linear series, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge, vol. 48, Springer-Verlag, Berlin, 2004. MR2095471
[MR22] Jayan Mukherjee and Debaditya Raychaudhury, A note on stability of syzygy bundles on Enriques and bielliptic surfaces, Proc. Amer. Math. Soc. 150 (2022), no. 9, 3715-3724, DOI 10.1090/proc/15934. MR4446224
[OSS11] Christian Okonek, Michael Schneider, and Heinz Spindler, Vector bundles on complex projective spaces, Modern Birkhäuser Classics, Birkhäuser/Springer Basel AG, Basel, 2011. Corrected reprint of the 1988 edition; with an appendix by S. I. Gelfand. MR2815674
[Oko82] Christian Okonek, Reflexive Garben auf P^4, Math. Ann. 260 (1982), no. 2, 211-237, DOI 10.1007/BF01457237 (German). MR664377
[Rud97] Alexei Rudakov, Stability for an abelian category, J. Algebra 197 (1997), no. 1, 231-245, DOI 10.1006/jabr.1997.7093. MR1480783
[TLZ22] H. Torres-López and A. G. Zamora, H-stability of syzygy bundles on some regular algebraic surfaces, Beitr. Algebra Geom. 63 (2022), no. 3, 589-598, DOI 10.1007/s13366-021-00594-z. MR4473919
[Tri10] V. Trivedi, Semistability of syzygy bundles on projective spaces in positive characteristics, Internat. J. Math. 21 (2010), no. 11, 1475-1504, DOI 10.1142/S0129167X10006598. MR2747739
LAMOST Spectrograph Response Curves: Stability and Application to flux calibration

Bing Du, A-Li Luo, Zhong-Rui Bai, Xiao Kong, Jian-Nan Zhang, Yan-Xin Guo, Neil James Cook, Wen Hou, Hai-Feng Yang, Yin-Bi Li, Yi-Han Song, Jian-Jun Chen, Ke-Fei Wu, Meng-Xin Wang, You-Fen Wang, Yong-Heng Zhao

Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing, China; University of Chinese Academy of Sciences, 100049 Beijing, China; Centre for Astrophysics Research, School of Physics, Astronomy and Mathematics, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK

DOI: 10.3847/1538-4365/227/2/27
arXiv: 1611.08216

Abstract

The task of flux calibration for LAMOST (Large sky Area Multi-Object Spectroscopic Telescope) spectra is difficult due to many factors: the lack of standard stars, flat fielding over a large field of view, and the variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars might not only introduce errors into the calculated spectral response curve (SRC), but also lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrographs. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high quality exposures of 29,000 stars from LAMOST DR2, with more than 7 exposures for each fiber. We calculated the SRCs for each fiber in each exposure, and computed the statistics of the SRCs for each spectrograph over both fiber and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤ 10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra which were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing the same targets with SDSS, the relative flux differences between SDSS spectra and LAMOST spectra calibrated with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.
24 Nov 2016

Received ; accepted

Subject headings: techniques: spectroscopic - methods: data analysis - methods: statistical

Introduction

The LAMOST is a quasi-meridian reflecting Schmidt telescope with an effective aperture of ~4 m and a field of view (FoV) of 5 degrees in diameter. At the focal plane, 4,000 robotic optical fibers, each with an aperture of 3.3 arcsec projected on the sky, relay the target light to 16 spectrographs of 250 fibers each (Cui et al. 2012; Deng et al. 2012). Preceded by a one-year Pilot Survey, the LAMOST Regular Surveys started in September 2012. The wavelength range of LAMOST covers 3,700 to 9,000 Å and is recorded in two arms, a blue arm (3,700-5,900 Å) and a red arm (5,700-9,000 Å), with a resolving power of R ~ 1,800. A final spectrum is obtained by merging several exposures and connecting the two wavelength bands.

Raw data from the LAMOST surveys are reduced with the LAMOST 2D pipeline (Luo et al. 2015). The procedures used by the 2D pipeline, similar to those of SDSS (Stoughton et al. 2002), aim to extract spectra from the CCD images and then calibrate them. The main tasks of the 2D pipeline include fiber tracing, flux extraction, wavelength calibration, flat fielding, sky subtraction, flux calibration, multi-exposure co-addition and the connection of the two wavelength bands. Since the data reduction steps reverse the data acquisition process, we should first understand the data acquisition process of LAMOST, which can be simplified as follows.
F_o(j, λ) = [F_i(j, λ) × d_s(λ) + sky_r(λ)] × d_f(λ) × d_p(λ) + scatter(j, λ) + C_k(j, λ) + B    (1)

In this equation, F_o(j, λ) is the observed signal, where j denotes the j-th fiber and λ denotes the wavelength; F_i(j, λ) is the target signal before passing through the atmosphere; d_s(λ) is the extinction function, including atmospheric and interstellar reddening; sky_r(λ) is the sky background; d_f(λ) is the fiber transmission function, a random number drawn from a Gaussian distribution with a mean of 0.9 and a variance of 1.0; d_p(λ) is the spectral response function due to the dispersion of the spectrograph; scatter(j, λ) is the scattered light, including symmetrical scattering and the cross-contamination of fibers; C_k(j, λ) is the parameter that compensates for cosmic rays; B is the CCD background.

The purpose of the LAMOST flux calibration is to remove the spectral response curve (SRC) from the observations. Considering that d_f(λ) is divided out during flat fielding, the SRC of a spectrograph can be simplified as shown in equation (2), which includes only d_s(λ) and d_p(λ):

SRC(j, λ) = d_s(λ) × d_p(λ)    (2)

In the real flux calibration process, d_s(λ) and d_p(λ) are treated as a single combined SRC, by which each single exposure is divided.

For the LAMOST 2D pipeline, selection of standard stars is the first step of flux calibration (Song et al. 2012). The pipeline selects standard stars automatically by comparing all observed spectra with the KURUCZ library, produced from the ATLAS9 stellar atmosphere model data (Castelli et al. 2004). For each of the 16 spectrographs, several high quality spectra with SNR ≥ 10, 5,750 K ≤ Teff ≤ 7,250 K, log g ≥ 3.5 dex and -1.0 dex ≤ [Fe/H] ≤ 0 dex are selected as standard stars. In practice, the LAMOST 2D pipeline first picks out standards with temperatures in the range 6,000-7,000 K; if there are not enough stars in this range, the pipeline extends it to 5,750-7,250 K.
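The acquisition model of Eq. (1) and the combined response of Eq. (2) can be sketched numerically. In this minimal toy simulation all curves are invented smooth stand-ins, not real LAMOST throughputs; it only illustrates that dividing a flat-fielded, sky-subtracted exposure by the SRC recovers the intrinsic signal:

```python
import numpy as np

# Toy wavelength grid (Angstrom) covering the LAMOST range.
wav = np.linspace(3700.0, 9000.0, 500)

# Invented smooth stand-ins for the quantities in Eq. (1) / Eq. (2).
F_i = 1.0 + 0.5 * np.sin(wav / 800.0)            # intrinsic target signal
d_s = np.exp(-0.2 * (4000.0 / wav))              # extinction function
d_p = np.exp(-((wav - 6300.0) / 2500.0) ** 2)    # spectrograph response
sky_r = 0.05 * np.ones_like(wav)                 # sky background
d_f = 0.9                                        # fiber transmission

# Forward model of Eq. (1), with scattering, cosmic rays and the CCD
# background set to zero for clarity.
F_o = (F_i * d_s + sky_r) * d_f * d_p

# Eq. (2): the combined response that flux calibration must remove.
SRC = d_s * d_p

# Reduction: undo the flat field (d_f), subtract sky, divide by the SRC.
recovered = (F_o / d_f - sky_r * d_p) / SRC
assert np.allclose(recovered, F_i)
```

The exact inversion works here only because the toy model contains no noise; the real pipeline must estimate the SRC from standard stars, as described next.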
If more than 3 standard stars are found for a spectrograph, the SRCs of the spectrograph can be derived by comparing the observed spectra with synthetic spectra (using the corresponding parameters from the KURUCZ spectral library). Because the 2D pipeline estimates the parameters by a simple fit to the KURUCZ models, the parameters have large uncertainties for stars with [Fe/H] < -1.0 dex; considering also that the number of metal-poor stars in each spectrograph is small, the 2D pipeline applies the metallicity cut -1.0 dex ≤ [Fe/H] ≤ 0 dex when selecting standards. Unfortunately, for the current LAMOST 2D pipeline, when there are not enough suitable standard stars, the data reduction of that spectrograph has to be suspended.

In this paper, to rescue the unsuccessful spectra, we propose a novel flux calibration method based on a stability analysis of the SRCs. Thanks to the more than 2 million spectra with reliable stellar parameters in DR2, we are able to measure the instrument stability statistically. Through the stellar parameters, the SRC of each fiber can be obtained. By averaging the SRCs in each spectrograph, we can obtain an average spectrograph SRC (ASPSRC) and use it to calibrate the spectra of that spectrograph without pre-selecting flux standard stars, assuming the ASPSRC is sufficiently stable. This flux calibration method can rescue spectra from LAMOST which were abandoned by the 2D pipeline.

The paper is organized as follows. Section 2 details the procedures used to create the ASPSRC for each spectrograph. The accuracy analysis of the ASPSRC and its application to flux calibration are presented in Section 3. We conclude with Section 4, which summarizes and discusses the results.

Statistical Spectrograph Response Curves

Selection of the Sample

Work by Xiang et al. (2015) shows that variations of the SRCs exist; this was done using stars in high-density fields, which however suffer from high interstellar extinction.
However, to study the variations of the SRCs one should use stars with less extinction. Therefore, we selected stars at high Galactic latitude to analyze the instrument response (Fitzpatrick et al. 1999, 2007). To obtain a good approximation of the ASPSRCs, we require as many flux standard stars as possible. To ensure the quality of the sample, the stellar parameters from LASP (Wu et al. 2011a,b) were used to select the F-stars with the highest signal-to-noise ratios (SNRs). We selected stars with 6,000 K ≤ Teff ≤ 7,000 K, log g ≥ 3.5 dex and Galactic latitude |b| ≥ 60° (Xiang et al. 2015). The ASPSRCs are derived from a great number of standard stars instead of a group of several standards as in the 2D pipeline, and 90% of the metallicities of the stars from which the averaged SRCs are generated lie in the range [Fe/H] ≥ -1.0 dex. The accuracy of the parameters measured by LASP is good enough even for metal-poor stars and will not affect the averaged result; thus we did not apply a metallicity cut in this sample selection. With the benefit of the large sample of targets satisfying the above parameter space, we find that there are sufficient and appropriate exposures across all fibers and spectrographs to use them as standards. Fig 1 shows the histogram of the number of standards per spectrograph from DR2, which indicates at least 7 standards per fiber (with 250 fibers in each spectrograph, this is equivalent to at least 1,750 individual exposures). Fig 2 shows the histogram of their effective temperatures, mostly located in the vicinity of 6,100 K (i.e. F8-type stars).

Spectral Response Curves

Let F_o(λ) and F_i(λ) denote the measured and intrinsic spectral flux density; thus

F_o(λ) = d_s(λ) d_p(λ) F_i(λ)    (3)

where d_s(λ) is the combined atmospheric and interstellar extinction, and d_p(λ) the telescope and instrumental response.
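Eq. (3) implies that, for a standard star whose intrinsic spectrum is known, the per-star SRC estimate is simply the ratio of observed to synthetic flux. A toy sketch (all curves invented for illustration):

```python
import numpy as np

# Eq. (3) in practice: an SRC estimate for one standard star is the ratio
# of its observed spectrum to the synthetic model spectrum.  Toy data only.
rng = np.random.default_rng(0)
wav = np.linspace(3700.0, 9000.0, 400)

true_src = np.exp(-((wav - 6300.0) / 2400.0) ** 2)   # pretend d_s * d_p
F_i = 1.0 + 0.3 * np.cos(wav / 500.0)                # synthetic model flux
F_o = true_src * F_i * (1.0 + 0.02 * rng.standard_normal(wav.size))

src_estimate = F_o / F_i
# The noisy ratio scatters around the true response at the percent level.
assert np.max(np.abs(src_estimate / true_src - 1.0)) < 0.1
```

Because the raw ratio inherits the observational noise, the paper fits it with a smooth function before averaging, as described in the next subsection.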
In this work, we adopted a synthetic flux as F_i(λ), calculated using SPECTRUM based on the ATLAS9 stellar atmosphere model data released by Fiorella Castelli. Synthetic spectra of 1 Å dispersion from the KURUCZ library were used, degraded to the LAMOST spectral resolution by convolution with a Gaussian function. Only models with a constant micro-turbulent velocity of 2.0 km/s and zero rotation velocity were adopted, since these two parameters have little effect on the spectral energy distribution (SED) at a given temperature (Grupp 2004). The interstellar extinction can be neglected owing to our selection of high-latitude standards; however, the atmospheric extinction cannot be separated from the instrumental response. The SRCs in this paper therefore include atmospheric extinction, and its variations are included in the uncertainty of the SRCs.

It is generally assumed that the SRCs are smooth functions of wavelength. In order to derive the SRCs, we applied a low-order piecewise polynomial fit to the ratios of the observed and synthetic spectra of the standard stars.

Derivation of the ASPSRCs

For the 250 fibers in each spectrograph, at least 1,800 good SRCs (through multi-exposures) were derived (excluding the bad fibers). We chose the fitted SRCs rather than the direct ratios of observed to synthetic spectra to estimate the ASPSRC, because the direct ratios are susceptible to noise. We concentrate on relative rather than absolute flux calibration, such that, for a given spectrograph, the SRCs yielded by the spectra of the individual standard stars were divided by the average of their SRCs (i.e. the SRCs were scaled to a mean value of unity). It is generally assumed that the differences in the
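The ASPSRC construction described above can be sketched end to end: smooth each noisy ratio with a low-order polynomial (a single global polynomial here stands in for the paper's piecewise fit), normalize each fitted curve to a mean of unity, and average. All inputs are toy data:

```python
import numpy as np

# Sketch of the ASPSRC construction: fit every noisy SRC ratio with a
# low-order polynomial, scale each fitted curve to a mean of unity
# (relative calibration), then average over fibers/exposures.
rng = np.random.default_rng(1)
wav = np.linspace(3700.0, 9000.0, 300)
x = (wav - 6350.0) / 2650.0                    # normalized abscissa for the fit
true_src = np.exp(-((wav - 6300.0) / 2400.0) ** 2)

fitted = []
for _ in range(20):                            # 20 toy standard-star exposures
    scale = rng.uniform(0.5, 2.0)              # per-fiber throughput difference
    ratio = scale * true_src * (1.0 + 0.03 * rng.standard_normal(wav.size))
    curve = np.polyval(np.polyfit(x, ratio, deg=6), x)   # smooth low-order fit
    fitted.append(curve / curve.mean())        # scale to a mean of unity
aspsrc = np.mean(fitted, axis=0)

assert abs(aspsrc.mean() - 1.0) < 1e-6
# The average tracks the (unit-mean) true response closely.
assert np.max(np.abs(aspsrc - true_src / true_src.mean())) < 0.1
```

Normalizing each curve before averaging removes the fiber-to-fiber throughput differences, which is exactly why the method yields a relative rather than an absolute calibration.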
The red curves in Fig 5, Fig 6, Fig 7 and We calculated the mean values of absolute and relative uncertainties for g, r and i-bands, which are presented in Table 1. Table 1 shows that for all 16 spectrographs, the uncertainties are smaller than 8% for both g and i-bands. The r-band is located at the edge of both arms and thus due to the low sensitivities, the uncertainties for r-band are much larger ( for example Spectrograph No.5 can differ by up to 11.13%. This means the fluxes and centroids of the lines located at the junction of the blue and red arms (such as Na D at λ5,892Å), are sometimes not credible. Time Variations Generally, the LAMOST observational season spans nine months from September to Still, we can conclude that spectrograph No.4,No.11,No.15 and No.16 are more stable than others during the DR2 period. Flux Calibration Based on ASPSRCs The spectral flux calibration of target objects is generally achieved through obtaining separate measurements of spectrophotometric standard stars (Oke et al. 1990;Hamuy et al. 1992Hamuy et al. , 1994 on the same observing night with the same instrumental setup. However large spectroscopic survey, obtaining separate measurements of sufficient standard stars for each night and each spectrograph becomes impossible, and an alternative strategy has to be adopted. In the case of the Sloan Digital Sky Survey (York et al. 2000), F turn-off stars within the FoV are used to calibrate the spectra. These standards are preselected based on the photometric colors and are observed simultaneously with the targets (Stoughton et al. 2002;Yanny et al. 2009). The intrinsic SEDs of F turn-off stars are well determined by theoretical models of stellar atmospheres and the effects of interstellar extinction can be characterized and removed using the all-sky extinction map of Schlegel et al. (1998) (Schlegel et al. 1998;Schlafly et al. 2010). 
Without a photometric system for LAMOST, and lacking extinction values especially at low Galactic latitudes, the standard stars are not pre-assigned. Usually, the flux standard stars are selected from the spectra in each spectrograph after the observation. Sometimes the selection of standard stars fails, and the spectrograph of the plate has to be abandoned by the LAMOST 2D pipeline. This is indeed why the ASPSRC method is important, as using fixed instrumental response curves can recover some of these abandoned plates.

Co-add the multi-exposures

To improve the SNRs and overcome the effect of cosmic rays, each field is designed to be exposed multiple times. The spectra of the single exposures may be on different scales due to variations of the observational conditions. Spectra on different scales cannot be co-added directly, since they are all divided by the same ASPSRC.

The monochromatic AB magnitude is defined as the logarithm of a spectral flux density with a zero point of 3631 Jansky (Oke et al. 1983), where 1 Jansky = 1 Jy = 10^-26 W Hz^-1 m^-2 = 10^-23 erg s^-1 Hz^-1 cm^-2. If the spectral flux density is denoted f_ν, the monochromatic AB magnitude is:

m_AB(ν) = -2.5 log10 f_ν - 48.60    (4)

Actual measurements are always made across some continuous range of wavelengths. The bandpass AB magnitude is defined similarly, with the zero point corresponding to a bandpass-averaged spectral flux density of 3631 Jansky:

m_AB = -2.5 log10 ( ∫ f_ν (hν)^-1 e(ν) dν / ∫ 3631 Jy (hν)^-1 e(ν) dν )    (5)

where e(ν) is the equal-energy filter response function. The (hν)^-1 term assumes that the detector is a photon-counting device such as a CCD or photomultiplier.

The synthetic magnitude can be obtained by convolving the flux spectra with the SDSS g and i band transmission curves (Hamuy et al. 1992, 1994). We adopted the g and i filter zero points from Pickles (2010). The spectra are then scaled by comparing the synthetic magnitude with the photometric magnitude.
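Eqs. (4) and (5) can be checked numerically: a source with constant f_ν = 3631 Jy must have m_AB = 0 in any band. The top-hat response curve below is an illustrative stand-in, not a real SDSS transmission curve:

```python
import numpy as np

# Eq. (4)/(5) in practice: a source with constant f_nu = 3631 Jy has
# m_AB = 0 in any band.  Top-hat response curve used for illustration.
JY = 1.0e-23          # erg s^-1 Hz^-1 cm^-2
h = 6.626e-27         # erg s (cancels in the ratio; kept for clarity)

nu = np.linspace(4.0e14, 8.0e14, 1000)                   # frequency grid, Hz
e = ((nu > 5.0e14) & (nu < 6.0e14)).astype(float)        # toy filter response

def mag_ab(f_nu):
    """Bandpass AB magnitude, Eq. (5), with photon-counting weighting."""
    dnu = nu[1] - nu[0]
    num = np.sum(f_nu / (h * nu) * e) * dnu
    den = np.sum((3631.0 * JY) / (h * nu) * e) * dnu
    return -2.5 * np.log10(num / den)

flat = np.full_like(nu, 3631.0 * JY)
assert abs(mag_ab(flat)) < 1e-10
# A source ten times brighter is 2.5 mag brighter.
assert abs(mag_ab(10.0 * flat) + 2.5) < 1e-10
```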
The scale coefficients SC(g) and SC(i) are obtained as follows and are multiplied with the observed spectra:

SC(g) = 10^(-0.4×[mag(g) - mag_synthetic(g)]);  SC(i) = 10^(-0.4×[mag(i) - mag_synthetic(i)])    (6)

The spectra of Fig 13 were scaled using the method described above. The rescaled spectra can then be co-added and the final spectrum derived, which is shown in Fig 13 (bottom panel). It should be noted that this method is subject to the SNR of the spectra, since the synthetic magnitudes depend on the quality of the spectra. This method needs the photometric magnitudes of the g and i bands for each target; thus we cross-matched the LAMOST targets with Pan-STARRS1 (Tonry et al. 2012) within 3 mas. The LAMOST sources are selected from multiple catalogs with multi-band photometry; consequently, not all the LAMOST targets overlap Pan-STARRS1. By cross-matching, we found that about 80% of the LAMOST targets are included in Pan-STARRS1. For those targets not in the Pan-STARRS1 catalog, the SDSS PetroMag ugriz magnitudes were adopted. Otherwise, we have to use only the overlap between the blue and red arms, a very small wavelength range, to connect them, which might lead to a piecing discontinuity if the signal-to-noise ratio in the overlap is too low.

For the final spectra, spline fitting with strict flux conservation is adopted to re-bin the spectra to a common wavelength grid. Once the flux is co-added by this method, the blue and red arms are pieced together directly, and the SEDs are consistent with the target colors. For targets which do not have photometry in the optical band but have multiple exposures, we scaled the flux of the multi-exposures to the flux of the exposure with the highest SNR. After the multi-exposures are co-added, the blue and red arms are pieced together by adjusting one of the scales (using the overlaps) to yield the final spectra.
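The rescale-then-co-add step can be sketched as follows. The magnitudes and spectra are toy inputs, and the per-exposure mean normalization below is only a stand-in for the Eq. (6) photometric scaling:

```python
import numpy as np

# Eq. (6): scale each single exposure so its synthetic magnitude matches
# the catalog photometry, then co-add.  Toy numbers throughout.
def scale_coefficient(mag_catalog, mag_synthetic):
    return 10.0 ** (-0.4 * (mag_catalog - mag_synthetic))

# A synthetic magnitude 0.5 mag fainter than the catalog value means the
# exposure must be scaled up by 10^0.2 ~ 1.585.
sc = scale_coefficient(15.0, 15.5)
assert abs(sc - 10.0 ** 0.2) < 1e-12

# Bring three toy exposures of one target onto a common scale and co-add.
rng = np.random.default_rng(3)
truth = np.ones(100)
exposures = [truth * g * (1.0 + 0.01 * rng.standard_normal(100))
             for g in (0.8, 1.0, 1.3)]         # grey scale differences
scaled = [exp_ / exp_.mean() for exp_ in exposures]   # stand-in for Eq. (6)
coadd = np.mean(scaled, axis=0)
assert np.allclose(coadd, truth, atol=0.05)
```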
Accuracy analysis for flux calibration through ASPSRC

Before discussing the accuracy of the ASPSRCs, we studied the SRCs of the DR2 plates derived by the LAMOST 2D pipeline to further confirm the stability of the LAMOST spectrograph response curves. Each ASPSRC is consistent with the average of the SRCs from the LAMOST 2D pipeline. Table 1 shows that the mean uncertainties of the ASPSRCs are smaller than 10%, consistent with the 1σ uncertainties of the SRCs at high Galactic latitude from the 2D pipeline.

To verify the feasibility of applying the ASPSRCs to the flux calibration, we selected stars observed by both LAMOST and SDSS. We cross-matched the abandoned targets of LAMOST DR2 with SDSS DR12 and obtained 1,746 spectra of 1,702 stars with SNRs higher than 6. We calibrated the LAMOST spectra abandoned by the 2D pipeline and divided them by the spectra of the same sources from SDSS. The ratios of the two sets of spectra were calculated and then scaled to median values of unity, and the results are shown in Fig 14. The ratios yield an average that is almost constant around 1.0 over the whole spectral wavelength coverage, except for the sky emission line regions; the oxygen and water vapor bands of the Earth's atmosphere are attributed to the uncertainties of flat-fielding and sky subtraction. The standard deviation is less than 10% at wavelengths from 4,500 Å to 8,000 Å, but at both edges it increases to 15% due to the rapid decline of the instrumental throughput. The results show that flux calibration using the ASPSRCs achieves a precision of ~10% between 4,100 Å and 8,900 Å.

For the bright and very bright plates, most can be calibrated successfully by the 2D pipeline. However, for the LAMOST faint plates (F-plates) of DR2, the flux-calibration failure rate of the 2D pipeline is around 9%, and for the medium plates (M-plates) it is around 8%. Fig 15 to Fig 17 show the spectra of galaxies, QSOs and stars rescued from the abandoned plates.
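The Fig 14 style check reduces to simple per-wavelength statistics on the LAMOST/SDSS flux ratios; a sketch with synthetic ratios standing in for the 1,746 real spectrum pairs:

```python
import numpy as np

# Accuracy-check sketch: divide each recalibrated spectrum by the SDSS
# spectrum of the same source, scale every ratio to a median of unity,
# then inspect the mean and scatter per wavelength.  Toy ratios below.
rng = np.random.default_rng(4)
wav = np.linspace(4100.0, 8900.0, 200)
ratios = 1.0 + 0.05 * rng.standard_normal((300, wav.size))
ratios /= np.median(ratios, axis=1, keepdims=True)   # scale medians to 1

mean_ratio = ratios.mean(axis=0)
scatter = ratios.std(axis=0)

assert np.all(np.abs(mean_ratio - 1.0) < 0.02)   # average stays near 1.0
assert np.all(scatter < 0.10)                    # sub-10% scatter
```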
We compared the rescued spectra with those of SDSS DR12 (the former are plotted with black curves and the latter with red curves). Most match their corresponding SDSS spectra quite well, with differences of only a few percent in their continua. For LAMOST 20130208-GAC062N26B1-sp13-112, the red-arm spectrum has turbulent components for spectrograph No. 13; this is explained by problems with the cooling system of the CCD of that spectrograph. For LAMOST 20140306-HD134348N172427B01-sp10-014, the SED from the ASPSRC method is bluer than that of SDSS. We believe this is due to the fact that we do not separate the Earth's atmospheric extinction from the response of the spectrograph. Generally, the variations of the optical atmospheric extinction curve can be described by low-order polynomials (Patat et al. 2011). The atmospheric extinction curve included in the ASPSRC is an average one, and multiplication by a low-order polynomial is required to obtain the real atmospheric extinction curve at the time the target was observed. Therefore, some spectra calibrated using the ASPSRCs need low-order polynomial corrections to match the SDSS spectra. The atmospheric extinction of LAMOST will be studied in depth and integrated into this work. Overall, the ASPSRC flux calibration achieves a precision of ~10% over the LAMOST wavelength range. The potential uncertainties and temporal variations of the atmospheric extinction generally do not affect the final accuracy of the spectral lines, though they do affect the shapes of the deduced SEDs (at the level of low-order polynomials).

Rescue the Abandoned Targets

For LAMOST DR2, there are 1,095 spectrographs on 385 plates which were abandoned by the 2D pipeline due to the failure to find standard stars. We started with the 2D pipeline for fiber tracing, flux extraction, wavelength calibration, flat fielding and sky subtraction. The ASPSRCs were then adopted to calibrate the 195,694 spectra in these 1,095 spectrographs.
After the flux calibration and the co-addition, the LAMOST 1D pipeline was employed to classify the spectra and measure the radial velocities for stars and the redshifts for galaxies and QSOs. Based on a cross-correlation method, the 1D pipeline recognizes the spectral classes and simultaneously determines the radial velocities or redshifts from the best-fit correlation function. The 1D pipeline produces four primary classifications, namely STAR, GALAXY, QSO and UNKNOWN. It is difficult to recognize galaxy and QSO spectra and determine their redshifts, and the LAMOST 1D pipeline does not work as well for them as for stellar classification, because the SNRs of galaxy and QSO spectra are relatively lower. An additional independent pipeline, the Galaxy Recognition Module (GM for short), has been designed for galaxy recognition and redshift measurement. After the 1D pipeline has run, it automatically identifies galaxies and measures their redshifts by recognizing lines.

The redshifts of galaxies are determined through line centers. Before the line centers are measured, a Gaussian function with a sigma of 1.5 times the wavelength step is applied to the spectra to suppress noise. The continua, smoothed by a median filter, are divided out to complete the normalization. Data points that exceed 2σ of the normalized spectrum are selected as candidate emission lines, and a set of Gaussian functions is then used to fit the lines. All the line centers are compared with line lists spaced in steps of 0.0005 in redshift (z). If most of the lines are matched successfully with heavily weighted lines, such as Na D, Mg b, Ca II H or Ca II K for absorption galaxies, or Hα, OII, Hβ, OIII or NII for emission galaxies, the spectrum is classified as a galaxy, and the corresponding z is the raw redshift of the spectrum. For QSOs, however, the classifications and measurements depend heavily on visual inspection.
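The line-matching idea can be sketched for a single emission line. This toy simplifies the GM considerably: one Hα line, a median in place of the median-filter continuum, and an argmax in place of the Gaussian line fit; only the 2σ candidate selection and the 0.0005 redshift grid follow the text:

```python
import numpy as np

# Minimal sketch of the Galaxy Recognition Module idea: normalize the
# spectrum, flag >2-sigma points as emission-line candidates, locate the
# line center, and scan a redshift grid in steps of 0.0005.
rng = np.random.default_rng(5)
wav = np.linspace(6000.0, 7500.0, 1500)
z_true, halpha = 0.0425, 6563.0

flux = 1.0 + 0.02 * rng.standard_normal(wav.size)
flux += 0.8 * np.exp(-0.5 * ((wav - halpha * (1 + z_true)) / 3.0) ** 2)

norm = flux / np.median(flux)                  # crude continuum removal
peaks = wav[norm > 1.0 + 2.0 * norm.std()]     # 2-sigma candidate points
center = wav[np.argmax(norm)]                  # stand-in for a Gaussian fit

z_grid = np.arange(0.0, 0.2, 0.0005)
best_z = z_grid[np.argmin(np.abs(halpha * (1 + z_grid) - center))]
assert peaks.size > 0
assert abs(best_z - z_true) < 0.001
```

The real module repeats this matching over a whole line list and accepts the redshift only when most heavily weighted lines agree.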
We combined the classifications from the GM, the 1D pipeline and expert inspection; the final classifications of the spectra from the 1,095 spectrographs are presented in Table 2. In total, 52,181 additional spectra have been recognized in DR2 and will be officially released in the Third Data Release (DR3) of the LAMOST Regular Survey. The fraction of objects rescued is about 52,000/2,000,000 (∼2.5%). For the 52,181 rescued targets, we evaluated the quality by plotting magnitude against SNR, for galaxies and QSOs and for stars. For galaxies and QSOs, most magnitudes lie between 17.0 and 19.0, as shown in Fig 18. This is close to the limit of LAMOST observations; consequently, the majority of their SNRs do not reach 10. To reduce the differences in SNR due to differences in exposure time, all SNRs in this paper were scaled to a 5400 s exposure. For stars, there are two peaks in the distribution of magnitudes, as shown in Fig 19: the magnitudes of A, F, G, K-type stars range from 13.0 to 17.0, and those of M-type stars from 15.0 to 18.0. The SNRs of the stars are higher than those of the galaxies and QSOs; however, most are below 30, which is comparatively low for stars. With the exception of M-type stars, we selected stars with SNR in the r band larger than 2.0 for the release; an obvious cut is therefore seen in the bi-modal point distributions of early- and late-type stars in Fig 19. For F, G, K-type stars, we ran LASP to derive parameters for those with SNR in the g band larger than 6.0 on dark-moon nights and larger than 15.0 on bright-moon nights. The final stellar parameter coverage is presented in Fig 20.

Revision of the 2D calibration

To minimize potential errors introduced by poor sky subtraction, the current LAMOST 2D pipeline (v2.7) scales the sky spectrum so that its sky emission lines have the same flux intensities as those of the target spectrum from which it will be subtracted.
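The sky-subtraction strategy just described — scaling the sky spectrum so its emission-line fluxes match those measured in the target before subtracting — can be sketched as below. This is a simplified, hypothetical illustration of the idea (a single scale factor from summed line-pixel flux), not the v2.7 implementation.

```python
def subtract_scaled_sky(target, sky, line_mask):
    """Scale the sky spectrum so its summed flux in the sky-emission-line
    pixels matches that of the target, then subtract it.

    target, sky : flux arrays on a common wavelength grid
    line_mask   : booleans flagging the pixels covered by sky emission lines
    """
    target_lines = sum(t for t, m in zip(target, line_mask) if m)
    sky_lines = sum(s for s, m in zip(sky, line_mask) if m)
    scale = target_lines / sky_lines
    return [t - scale * s for t, s in zip(target, sky)]
```

The next paragraph explains why tying the whole sky spectrum to the emission-line fluxes in this way can subtract the wrong continuum level.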
It is assumed that the emission lines are homogeneous across the FoV of an individual spectrograph (about 1 deg). However, the continuum sky background and the sky emission lines originate from very different sources and are excited by different mechanisms, so their emission levels are unlikely to scale linearly. In fact, even among the sky emission lines, lines from different species may behave quite differently in their temporal and spatial variations (Oliva et al. 2015). Consequently, scaling the sky spectra by the measured fluxes of sky emission lines risks subtracting an incorrect level of sky background. For a minority of spectra, the telluric bands of the standard stars are severely under-subtracted, and the SRCs of these standards turn out to be over-fitted (see Fig 21). Because the oxygen band is under-subtracted in the spectra of the standards, the over-fitted SRC also contains the oxygen band, which introduces artificial spectral lines into all the spectra of the spectrograph and makes classification by the 1D pipeline difficult. One example of an artificial spectrum calibrated with such an over-fitted SRC from the 2D pipeline is plotted with black curves in Fig 22. We recalibrated the spectra using the ASPSRCs, shown with red curves in Fig 22. After recalibration, the spectrum was classified as F0 by the 1D pipeline (an improvement over its previous 'Unknown' classification). Comparing the ASPSRCs with the SRCs from the LAMOST 2D pipeline, we found 6 spectrographs in DR2 with this problem; all 6 plates were observed on nights with a very bright moon. The ASPSRC method has been used to correct this problem, and the spectra of these 6 spectrographs will be released in LAMOST DR3.

Analysis and Discussions

We have applied the ASPSRCs to the flux calibration of LAMOST; however, there are still some uncertainties in the ASPSRCs caused by the individual SRCs. The variations in the shape of the SRCs can be attributed to several factors.
First of all, although we selected the standard stars from high Galactic latitudes to minimize the effects of variations in interstellar extinction, the effect of the Earth's atmospheric extinction still exists. Typical atmospheric extinction curves are smooth functions of wavelength over the LAMOST wavelength coverage (Bongard et al. 2013; Cullen et al. 2011), and the same usually holds for the variations of atmospheric extinction, which can be well represented by low-order polynomials. Therefore, the mean atmospheric extinction curve included in the ASPSRCs does not affect the spectral lines of the calibrated spectra. At SNRs lower than about 10, the discrepancies increase rapidly, along with some systematic differences (Xiang et al. 2015). To minimize the uncertainties introduced by spectral SNRs, we selected standard stars with SNRs larger than 20 to obtain the ASPSRCs. In addition, errors in the stellar atmospheric parameters of the standard stars also cause variations in the SRCs. For flux standard stars with 5,750 K ≤ Teff ≤ 6,750 K, an error of 150 K in Teff can lead to a maximum uncertainty of 12% in the shape of the stellar SED, and thus it will change the shape of the SRC derived from it. Uncertainties caused by errors in log g are negligible (for an estimated uncertainty of 0.25 dex in log g, the effect is about 1% over the whole wavelength range). Metallicity mainly affects the blue-arm spectra at wavelengths shorter than 4,500 Å: an error of 0.2 dex in [Fe/H] can change the SED shape between 3,800 Å and 4,500 Å by approximately 3%, while the effects at wavelengths longer than 4,500 Å are only marginal (Xiang et al. 2015). This is why we removed candidates whose standard stars have uncertainties in Teff larger than 150 K.
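Collecting the selection cuts quoted in this section (spectral SNR larger than 20, 5,750 K ≤ Teff ≤ 6,750 K, and Teff uncertainty no larger than 150 K), a standard-star filter might read as below. The dictionary keys are hypothetical; this is a sketch of the criteria, not the survey's actual selection code.

```python
def select_flux_standards(candidates):
    """Keep only candidates passing the cuts quoted in the text.

    Each candidate is a dict with (hypothetical) keys:
    'snr' (spectral SNR), 'teff' (K), 'teff_err' (K).
    """
    selected = []
    for star in candidates:
        if star["snr"] <= 20:          # SNR > 20 required
            continue
        if not (5750.0 <= star["teff"] <= 6750.0):  # F-star temperature window
            continue
        if star["teff_err"] > 150.0:   # reject poorly constrained Teff
            continue
        selected.append(star)
    return selected
```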
The advantage of the ASPSRCs comes from averaging: the ASPSRC is the average of the instrument response curves and of the atmospheric extinction curves, so although the individual SRCs carry the many uncertainties introduced by the influencing factors discussed above, averaging largely eliminates their effects. Our experiments show that the combined effect of all these influencing factors on the accuracy of the flux calibration is less than 10% during the DR2 period. The average SRCs are presented in Table 3 to Table 6; one can use them to calibrate spectra of the LAMOST DR2 catalogue. For spectra observed after DR2, new ASPSRCs will need to be produced to track variations of the instrument. Fig 3 shows examples of the SRC fitting for one fiber in both arms. For each standard star, the blue- and red-arm spectra were divided into five and six wavelength regions respectively, and each region was fitted with a second- or third-order polynomial, represented by the thick colored lines in Fig 3. The piecewise polynomials were derived by minimizing |synthetic × polynomial − observed|. We defined a series of clean spectral regions avoiding the prominent stellar absorption features and the telluric absorption bands; the fitted polynomial values in these clean regions, indicated by asterisks in Fig 3, were used for the final SRC fitting. The join points of adjacent spectral regions were spaced about 200 Å apart, and the overlaps were median-filtered to join the adjacent regions together. The final SRCs are represented by the black curves in the inserts in Fig 3. The sensitivity of the individual fibers is well corrected via flat-fielding, so the 250 fibers of a given spectrograph share a single SRC. Accordingly, the SRCs of the fibers can be regarded as independent measurements of the SRC, and the ASPSRC and its uncertainty can be derived by traditional statistical methods.
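Minimizing |synthetic × polynomial − observed| over a region is, for a multiplicative response, equivalent to fitting the ratio observed/synthetic with a low-order polynomial. A minimal sketch of one second-order region fit follows; it is a hypothetical helper solved with plain normal equations, not the pipeline's actual fitter, and the wavelengths are centered for numerical stability.

```python
def fit_region_response(wave, observed, synthetic):
    """Fit a second-order polynomial to observed/synthetic over one
    wavelength region and return a callable response curve."""
    x0 = sum(wave) / len(wave)               # center for numerical stability
    x = [w - x0 for w in wave]
    y = [o / s for o, s in zip(observed, synthetic)]
    # Normal equations for y ~ a + b*x + c*x^2, solved by Gauss-Jordan elimination.
    s = [sum(xi ** k for xi in x) for k in range(5)]        # sums of x^0..x^4
    t = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    A = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    a, b, c = A[0][3], A[1][3], A[2][3]
    return lambda w: a + b * (w - x0) + c * (w - x0) ** 2
```

In the real procedure this fit would be repeated per region, evaluated only in the clean spectral windows, and the regions joined with median-filtered overlaps as described above.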
Fig 4 gives three examples of the spectral response and uncertainty estimation, at the wavelength points 4,000 Å, 4,500 Å and 5,000 Å of spectrograph No.1; the means and standard deviations of the fitted Gaussian functions provide the response values and their uncertainties. All wavelength points contribute to the final ASPSRC for a spectrograph, and the red curves in Fig 6, Fig 7 and Fig 8 show the blue- and red-arm ASPSRCs of the 16 spectrographs. next June. DR2 collected the data observed from October 2011 to June 2014, spanning nine quarters in total (about 3 consecutive months per quarter). We calculated ASPSRCs for each quarter (hereafter Quarter ASPSRCs, to distinguish them from the DR2 ASPSRC) and compared these nine Quarter ASPSRCs with the DR2 ASPSRC for each spectrograph. The distributions of the residuals between the nine Quarter ASPSRCs and the DR2 ASPSRC are shown in Fig 9, Fig 10, Fig 11 and Fig 12 (blue arm in the left panels and red arm in the right panels). The figures show no obvious gradual or systematic errors with time. Fig 13 shows the spectra of 6 exposures: not only are the scales of the exposures different, but the scales of the two arms are also discrepant. In the LAMOST 2D pipeline, the single-exposure spectra are scaled to the median of the multiple exposures. Here we instead scale the blue and red bands according to the photometry of the g and i bands respectively. About 60% of the LAMOST targets have overlapping observations with SDSS; however, about 10% of the LAMOST targets are included neither in Pan-STARRS1 nor in SDSS, and it is difficult to obtain their photometry in the optical band. For LAMOST, the efficiencies of the blue and red arms cannot be corrected by flat-fielding, since the throughputs of the two arms of each spectrograph are different and vary as the telescope pointing changes; the flat fields of the two arms of each spectrograph are processed independently.
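Since the per-fiber SRCs are treated as independent measurements (as described at the start of this passage), the ASPSRC and its uncertainty at each wavelength point can be estimated from simple sample moments. The sketch below uses the mean and standard deviation directly, as a stand-in for the Gaussian fits shown in Fig 4.

```python
import math

def average_src(src_list):
    """Combine independent per-fiber SRC measurements into an ASPSRC.

    src_list : list of equal-length sequences, one response curve per fiber.
    Returns (mean_curve, sigma_curve), computed wavelength point by point.
    """
    n = len(src_list)
    npix = len(src_list[0])
    mean = [sum(src[i] for src in src_list) / n for i in range(npix)]
    # Unbiased sample standard deviation as the per-point uncertainty.
    sigma = [math.sqrt(sum((src[i] - mean[i]) ** 2 for src in src_list) / (n - 1))
             for i in range(npix)]
    return mean, sigma
```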
Using photometry, we can avoid a large scale jump between the two arms, although there are some photometric errors. Without the reference of photometry, we … Fig 5 to Fig 8 show the distributions of the SRCs of the DR2 plates for high Galactic latitude (left panels) and for low Galactic latitude (right panels); the standard deviations of the SRCs, as functions of wavelength, are shown by the red dashed curves. As described in Section 2.3, we used stars at high Galactic latitude and with high SNR to derive the ASPSRCs. The red solid curves in Fig 5 to Fig 8 represent the ASPSRCs. Its variations are included in the overall variations of the SRCs. Secondly, fiber positioning may introduce variations in the fiber spectral response during the tracing of the targets. This means the SRCs of the individual fibers probably vary from the observation of one plate to another (Chen et al. 2015). The variations of the fiber flat fields affect sky subtraction and flux calibration, introducing uncertainties into the SRCs. To make matters worse, uncertainties introduced by such variations do not depend on the spectral SNRs; that is to say, even spectra of very high SNR may have incorrectly shaped SEDs. Attempts to characterize and correct such variations of the fiber flat fields are under way. If conditions allow, it is better to obtain the ASPSRCs once a year, or once a quarter, to follow these instrumental changes. Thirdly, the spectral SNRs of the standard stars have an impact on the SRCs derived from them. To test how the SEDs of LAMOST Galactic targets are affected by limited SNRs, the spectral and photometric (g−r) colors have been compared as a function of the spectral SNR. The results show that at SNRs exceeding 20 the spectral and photometric colors agree well, with a mean difference of 0.01-0.02 mag and no systematic trend, while at SNRs lower than about 10 the discrepancies increase rapidly (Xiang et al. 2015).
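The photometric rescaling mentioned above — tying each arm's flux scale to a broad-band magnitude (g for the blue arm, i for the red arm) — can be sketched as below. For brevity this assumes a flat filter response and an arbitrary zero point; the real procedure would integrate the spectrum over the actual filter transmission curve. The function name and signature are hypothetical.

```python
import math

def rescale_to_photometry(flux, catalog_mag, zero_point=0.0):
    """Scale a one-arm spectrum so its band-averaged synthetic magnitude
    matches the catalog magnitude. Returns (scaled_flux, scale_factor)."""
    mean_flux = sum(flux) / len(flux)
    synthetic_mag = zero_point - 2.5 * math.log10(mean_flux)
    # If the catalog says the source is brighter than the spectrum implies
    # (smaller magnitude), the scale factor exceeds unity.
    scale = 10.0 ** (-0.4 * (catalog_mag - synthetic_mag))
    return [f * scale for f in flux], scale
```

Applying this separately to the blue and red arms with the g and i magnitudes, respectively, ties the two arms to a common photometric scale before co-adding.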
Fig. 1. Histogram of the numbers of exposures of standard stars selected per spectrograph.
Fig. 2. Histogram of the effective temperatures of the selected standard stars.
Fig. 3. Examples of SRC fitting for blue- (left) and red-arm (right) spectra. The grey lines are the ratios of the observed flux density to the synthetic flux density, scaled by their mean value. The blue- and red-arm spectra are divided into five and six wavelength regions respectively, and each region is fitted with a second- or third-order polynomial, represented by the thick lines in RGB colors (the blue arm has 5 bins drawn in the order red, green, blue, red, green; the red arm has six bins drawn in the order red, green, blue, red, green, blue). Asterisks are points selected from the fitted polynomial curves avoiding prominent spectral features, which are used for the final SRC fitting.
Fig. 4. Histograms of SRCs at 4,000 Å (left), 4,500 Å (middle) and 5,000 Å (right) for spectrograph No.1. The red dashed curves are Gaussian fits to the distributions; the mean and dispersion of each Gaussian fit, giving the ASPSRC value and its uncertainty, are also marked.
Fig. 5. The distributions of SRCs of spectrographs No.1 to No.4, derived with the 2D pipeline from the DR2 plates. The grey contours represent the distributions of SRCs from 372 plates with 914 exposures at high Galactic latitude (left panels) and from 1,759 plates with 3,387 exposures at low Galactic latitude (right panels). The standard deviation of the SRCs as a function of wavelength is shown by the dashed curves, and the ASPSRCs described in this paper are shown by the solid curves.
Fig. 6. The distributions of SRCs of spectrographs No.5 to No.8. The convention is the same as in Fig 5.
Fig. 7. The distributions of SRCs of spectrographs No.9 to No.12. The convention is the same as in Fig 5.
Fig. 8.
The distributions of SRCs of spectrographs No.13 to No.16. The convention is the same as in Fig 5.
Fig. 9. The distributions of residuals between the nine Quarter ASPSRCs and the DR2 ASPSRC (blue arm in left panel and red arm in right panel), for spectrographs No.1 to No.4. The box extends from the lower to upper quartile values of the error, with a line at the median. The whiskers extend from the box to show the range of the error. Flier points are those past the end of the whiskers.
Fig. 13. Example of a star with 6 exposures on different scales; for the blue and red arms, the spectra of equal exposure are plotted in the same color (top). The rescaled spectra (middle) are scaled according to the g and i magnitudes. The co-added spectrum (bottom) is adopted as the final spectrum.
Fig. 14. Distribution of the ratios of spectral pairs observed by both LAMOST and SDSS; each point on the panel is the ratio value of one pair, and the contours represent the distribution of the ratio values of 1,746 spectral pairs. The smoothed mean and standard deviation of the ratios, as a function of wavelength, are shown by the solid and dashed curves.
Fig. 15. Comparison of the rescued spectra of galaxies (black) with SDSS DR12 spectra (red). For each panel, the upper part shows the relative flux density as a function of wavelength, whereas the lower part shows the ratio of LAMOST to SDSS.
Fig. 16. Comparison of the rescued spectra of QSOs (black) with SDSS DR12 spectra (red).
Fig. 17. Comparison of the rescued spectra of stars (black) with SDSS DR12 spectra (red).
Fig. 18. Histogram of the g, r, i magnitudes of galaxies and QSOs rescued by the ASPSRCs from the spectrographs abandoned by the 2D pipeline. The diagrams of SNRs and magnitudes are provided for the g, r, i bands.
Fig. 19. Histogram of the g, r, i magnitudes of stars rescued by the ASPSRCs from the spectrographs abandoned by the 2D pipeline.
The diagrams of SNRs and magnitudes are provided for the g, r, i bands.
Fig. 20. The coverage map of stellar parameters of the F, G, K-type stars.
Fig. 21. Comparison of the over-fitted SRCs from the 2D pipeline (black) with the ASPSRCs (red).
Fig. 22. Comparison of the artificial spectra calibrated by adopting the over-fitted SRCs of Fig 21 (black) with the spectra recalibrated with the ASPSRCs (red). The spectrum is classified as F0 instead of "Non" after recalibration.

Table 1: The absolute and relative uncertainties of the g, r, i regions.
No-spectrograph g-absolute g-relative r-absolute r-relative i-absolute i-relative
sp01 0.054 6.35% 0.054 6.75% 0.033 2.83%
sp02 0.046 5.12% 0.045 5.96% 0.028 2.41%
sp03 0.047 4.71% 0.033 6.34% 0.031 2.74%
sp04 0.048 5.49% 0.039 6.74% 0.031 2.72%
sp05 0.053 6.95% 0.065 11.13% 0.040 3.51%
sp06 0.051 5.07% 0.049 7.83% 0.035 3.20%
sp07 0.048 5.22% 0.046 6.21% 0.031 2.71%
sp08 0.052 6.30% 0.052 6.91% 0.038 3.38%
sp09 0.053 6.57% 0.051 6.46% 0.031 2.74%
sp10 0.046 5.25% 0.046 7.32% 0.039 3.60%
sp11 0.045 5.57% 0.036 6.70% 0.033 2.93%
sp12 0.054 6.71% 0.047 7.88% 0.047 4.11%
sp13 0.053 6.44% 0.041 6.94% 0.034 2.99%
sp14 0.048 5.97% 0.039 6.74% 0.033 2.83%
sp15 0.041 5.12% 0.035 5.82% 0.029 2.60%
sp16 0.041 5.02% 0.034 5.66% 0.032 2.82%

Table 2: The final released spectral classifications of the abandoned spectrographs.
Type Number
total 52181
Galaxy 1163
QSO 201
BA 1477
F 8454
G 13875
K 14440
M 12571

Table 3: The blue-arm ASPSRCs of LAMOST spectrographs No.1 to No.8 at 100 Å steps.
All the ASPSRCs are scaled to a mean value of unity.No.1 No.2 No.3 No.4 No.5 No.6 No.7 No.8 Wavelength SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) 3650 0.142(0.036) 0.132(0.027) 0.106(0.016) 0.070(0.026) 0.054(0.054) 0.150(0.027) 0.098(0.016) 0.048(0.053) 3750 0.245(0.051) 0.244(0.039) 0.261(0.040) 0.187(0.030) 0.122(0.046) 0.268(0.044) 0.202(0.034) 0.156(0.029) 3850 0.351(0.063) 0.356(0.048) 0.409(0.051) 0.307(0.041) 0.207(0.035) 0.397(0.056) 0.325(0.049) 0.267(0.039) 3950 0.460(0.070) 0.467(0.054) 0.550(0.058) 0.429(0.050) 0.305(0.043) 0.531(0.065) 0.455(0.058) 0.379(0.048) 4050 0.570(0.076) 0.578(0.060) 0.682(0.063) 0.551(0.059) 0.415(0.069) 0.670(0.072) 0.586(0.068) 0.491(0.059) 4150 0.680(0.077) 0.687(0.061) 0.805(0.065) 0.671(0.062) 0.534(0.053) 0.808(0.073) 0.716(0.072) 0.602(0.063) 4250 0.789(0.075) 0.795(0.059) 0.917(0.066) 0.789(0.063) 0.659(0.071) 0.943(0.072) 0.844(0.072) 0.710(0.064) 4350 0.896(0.069) 0.900(0.054) 1.022(0.061) 0.903(0.060) 0.788(0.084) 1.073(0.067) 0.973(0.064) 0.820(0.064) 4450 1.005(0.064) 1.007(0.050) 1.124(0.057) 1.010(0.059) 0.919(0.054) 1.197(0.060) 1.110(0.057) 0.942(0.063) 4550 1.120(0.056) 1.130(0.043) 1.224(0.049) 1.110(0.054) 1.050(0.051) 1.302(0.048) 1.251(0.048) 1.072(0.058) 4650 1.190(0.046) 1.207(0.037) 1.294(0.041) 1.201(0.047) 1.174(0.045) 1.374(0.038) 1.359(0.038) 1.157(0.053) 4750 1.270(0.037) 1.288(0.033) 1.339(0.031) 1.281(0.036) 1.281(0.037) 1.446(0.031) 1.476(0.028) 1.277(0.044) 4850 1.353(0.026) 1.400(0.033) 1.363(0.022) 1.344(0.022) 1.363(0.032) 1.493(0.028) 1.565(0.016) 1.396(0.032) 4950 1.423(0.016) 1.511(0.034) 1.394(0.015) 1.372(0.013) 1.434(0.036) 1.514(0.027) 1.634(0.014) 1.492(0.019) 5050 1.484(0.022) 1.604(0.031) 1.432(0.020) 1.417(0.018) 1.507(0.030) 1.533(0.029) 1.702(0.026) 1.572(0.014) 5150 1.538(0.046) 1.667(0.032) 1.429(0.030) 1.456(0.028) 1.577(0.038) 1.554(0.039) 1.782(0.040) 1.683(0.031) 5250 1.584(0.065) 1.705(0.042) 1.383(0.039) 1.477(0.038) 1.608(0.050) 
1.543(0.051) 1.788(0.056) 1.775(0.050) 5350 1.618(0.082) 1.721(0.064) 1.336(0.050) 1.472(0.047) 1.613(0.062) 1.515(0.064) 1.773(0.073) 1.814(0.071) 5450 1.626(0.091) 1.685(0.085) 1.280(0.059) 1.442(0.057) 1.605(0.077) 1.455(0.073) 1.681(0.087) 1.797(0.088) 5550 1.564(0.090) 1.536(0.096) 1.185(0.067) 1.394(0.067) 1.571(0.099) 1.303(0.079) 1.331(0.088) 1.683(0.093) 5650 1.340(0.084) 1.205(0.092) 1.068(0.070) 1.329(0.074) 1.498(0.111) 0.979(0.085) 0.769(0.064) 1.409(0.080) 5750 0.845(0.082) 0.604(0.076) 0.928(0.068) 1.229(0.077) 1.351(0.097) 0.430(0.074) 0.257(0.028) 0.853(0.054) 5850 0.114(0.017) 0.095(0.017) 0.749(0.063) 1.060(0.075) 0.977(0.272) 0.134(0.034) 0.007(0.000) 0.127(0.009) Table 4 : 4: The red-arm ASPSRCs of LAMOST spectrographs from No.1 to No.8 at 100Asteps. No.1 No.2 No.3 No.4 No.5 No.6 No.7 No.8 Wavelength SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) 5680 0.070(0.047) 0.116(0.030) -0.005(0.003) 0.006(0.009) -0.018(0.008) 0.185(0.057) 0.329(0.065) 0.081(0.032) 5780 0.333(0.047) 0.330(0.045) 0.031(0.007) 0.044(0.007) 0.074(0.070) 0.269(0.049) 0.433(0.049) 0.273(0.035) 5880 0.509(0.062) 0.478(0.049) 0.140(0.014) 0.187(0.020) 0.273(0.088) 0.356(0.053) 0.520(0.053) 0.423(0.056) 5980 0.620(0.065) 0.569(0.046) 0.283(0.028) 0.371(0.044) 0.458(0.091) 0.436(0.054) 0.601(0.053) 0.537(0.065) 6080 0.696(0.064) 0.639(0.046) 0.427(0.039) 0.545(0.061) 0.585(0.088) 0.513(0.057) 0.674(0.054) 0.626(0.069) 6180 0.760(0.062) 0.711(0.047) 0.543(0.044) 0.675(0.066) 0.672(0.079) 0.587(0.059) 0.748(0.054) 0.703(0.069) 6280 0.820(0.056) 0.779(0.046) 0.634(0.046) 0.763(0.065) 0.745(0.077) 0.659(0.059) 0.817(0.053) 0.770(0.068) 6380 0.882(0.053) 0.855(0.046) 0.715(0.049) 0.834(0.062) 0.815(0.078) 0.735(0.061) 0.892(0.054) 0.842(0.067) 6480 0.938(0.048) 0.929(0.044) 0.791(0.050) 0.895(0.058) 0.881(0.070) 0.810(0.059) 0.960(0.052) 0.905(0.063) 6580 0.995(0.044) 1.001(0.042) 0.867(0.048) 0.956(0.053) 0.949(0.064) 0.884(0.057) 1.024(0.049) 0.970(0.059) 
6680 1.052(0.041) 1.071(0.039) 0.943(0.046) 1.019(0.047) 1.020(0.062) 0.956(0.054) 1.086(0.047) 1.035(0.055) 6780 1.104(0.038) 1.137(0.035) 1.019(0.044) 1.080(0.041) 1.086(0.051) 1.026(0.049) 1.141(0.043) 1.100(0.050) 6880 1.154(0.034) 1.198(0.033) 1.091(0.041) 1.138(0.034) 1.151(0.042) 1.091(0.044) 1.192(0.037) 1.161(0.046) 6980 1.196(0.033) 1.250(0.028) 1.157(0.038) 1.190(0.032) 1.207(0.034) 1.152(0.036) 1.235(0.032) 1.212(0.038) 7080 1.234(0.032) 1.297(0.024) 1.219(0.034) 1.239(0.026) 1.258(0.031) 1.207(0.029) 1.273(0.026) 1.261(0.033) 7180 1.265(0.029) 1.334(0.021) 1.273(0.030) 1.279(0.022) 1.300(0.031) 1.254(0.022) 1.302(0.021) 1.298(0.028) 7280 1.288(0.029) 1.362(0.019) 1.320(0.027) 1.312(0.021) 1.332(0.030) 1.291(0.019) 1.323(0.018) 1.328(0.025) 7380 1.304(0.025) 1.379(0.014) 1.357(0.018) 1.336(0.013) 1.353(0.024) 1.319(0.012) 1.333(0.010) 1.349(0.021) 7480 1.312(0.025) 1.384(0.013) 1.384(0.014) 1.350(0.015) 1.366(0.022) 1.339(0.012) 1.336(0.010) 1.361(0.019) 7580 1.313(0.023) 1.378(0.015) 1.401(0.011) 1.355(0.020) 1.366(0.024) 1.349(0.015) 1.329(0.012) 1.361(0.019) 7680 1.305(0.024) 1.363(0.017) 1.407(0.014) 1.350(0.022) 1.356(0.028) 1.351(0.020) 1.313(0.018) 1.348(0.022) 7780 1.291(0.023) 1.339(0.021) 1.407(0.018) 1.338(0.026) 1.337(0.028) 1.345(0.024) 1.287(0.021) 1.327(0.023) 7880 1.271(0.024) 1.308(0.021) 1.399(0.018) 1.318(0.026) 1.313(0.030) 1.331(0.029) 1.255(0.025) 1.298(0.026) 7980 1.246(0.028) 1.269(0.025) 1.385(0.022) 1.291(0.027) 1.286(0.032) 1.313(0.034) 1.218(0.029) 1.261(0.031) 8080 1.218(0.033) 1.230(0.030) 1.366(0.025) 1.264(0.029) 1.252(0.041) 1.292(0.041) 1.178(0.034) 1.216(0.038) 8180 1.180(0.035) 1.186(0.036) 1.342(0.031) 1.232(0.030) 1.220(0.046) 1.267(0.045) 1.130(0.040) 1.171(0.050) 8280 1.139(0.041) 1.139(0.043) 1.313(0.037) 1.198(0.031) 1.187(0.052) 1.235(0.053) 1.082(0.044) 1.120(0.061) 8380 1.101(0.041) 1.092(0.042) 1.286(0.043) 1.170(0.038) 1.158(0.047) 1.203(0.052) 1.032(0.046) 1.081(0.056) 8480 1.062(0.045) 1.042(0.048) 
1.256(0.047) 1.142(0.044) 1.128(0.046) 1.163(0.053) 0.980(0.049) 1.034(0.061) 8580 1.014(0.045) 0.986(0.050) 1.222(0.051) 1.117(0.045) 1.091(0.048) 1.116(0.055) 0.925(0.051) 0.977(0.065) 8680 0.962(0.046) 0.925(0.051) 1.182(0.054) 1.091(0.047) 1.048(0.053) 1.060(0.053) 0.868(0.052) 0.921(0.068) 8780 0.907(0.046) 0.853(0.049) 1.125(0.055) 1.047(0.050) 0.990(0.053) 0.991(0.051) 0.806(0.051) 0.855(0.067) 8880 0.841(0.050) 0.773(0.049) 1.037(0.059) 0.972(0.054) 0.916(0.056) 0.911(0.052) 0.740(0.054) 0.778(0.064) 8980 0.757(0.044) 0.677(0.044) 0.898(0.053) 0.840(0.043) 0.794(0.060) 0.820(0.052) 0.670(0.051) 0.673(0.069) Table 5 : 5: The blue-arm ASPSRCs of LAMOST spectrographs from No.9 to No.16 at 100Asteps.No.9 No.10 No.11 No.12 No.13 No.14 No.15 No.16 Wavelength SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) 3650 0.080(0.012) 0.096(0.019) 0.082(0.014) 0.027(0.006) 0.058(0.027) 0.069(0.023) 0.013(0.037) 0.049(0.018) 3750 0.160(0.027) 0.201(0.036) 0.179(0.025) 0.122(0.019) 0.106(0.017) 0.132(0.028) 0.154(0.031) 0.144(0.020) 3850 0.254(0.038) 0.309(0.046) 0.275(0.033) 0.228(0.032) 0.171(0.023) 0.199(0.033) 0.272(0.040) 0.241(0.029) 3950 0.359(0.046) 0.428(0.054) 0.375(0.039) 0.338(0.042) 0.249(0.031) 0.277(0.041) 0.390(0.045) 0.347(0.036) 4050 0.470(0.056) 0.557(0.061) 0.480(0.046) 0.449(0.051) 0.338(0.039) 0.372(0.048) 0.509(0.051) 0.463(0.044) 4150 0.581(0.063) 0.686(0.064) 0.587(0.051) 0.559(0.056) 0.435(0.042) 0.479(0.052) 0.622(0.053) 0.583(0.049) 4250 0.690(0.068) 0.802(0.066) 0.694(0.054) 0.666(0.063) 0.536(0.045) 0.589(0.055) 0.716(0.053) 0.699(0.051) 4350 0.805(0.067) 0.923(0.062) 0.806(0.056) 0.779(0.064) 0.649(0.048) 0.709(0.055) 0.801(0.049) 0.822(0.051) 4450 0.936(0.067) 1.066(0.058) 0.930(0.057) 0.908(0.065) 0.782(0.054) 0.844(0.055) 0.908(0.046) 0.961(0.051) 4550 1.061(0.062) 1.185(0.050) 1.045(0.054) 1.029(0.065) 0.912(0.059) 0.973(0.053) 1.013(0.041) 1.084(0.046) 4650 1.161(0.051) 1.272(0.040) 1.136(0.048) 1.138(0.060) 
1.033(0.061) 1.085(0.049) 1.104(0.036) 1.181(0.040) 4750 1.247(0.040) 1.344(0.029) 1.209(0.039) 1.235(0.052) 1.148(0.061) 1.186(0.046) 1.184(0.031) 1.255(0.032) 4850 1.346(0.029) 1.417(0.018) 1.284(0.030) 1.319(0.040) 1.265(0.056) 1.293(0.042) 1.265(0.027) 1.314(0.026) 4950 1.444(0.026) 1.497(0.015) 1.369(0.022) 1.383(0.027) 1.400(0.049) 1.399(0.040) 1.354(0.023) 1.395(0.020) 5050 1.522(0.033) 1.558(0.016) 1.439(0.013) 1.443(0.012) 1.517(0.038) 1.492(0.036) 1.443(0.022) 1.473(0.017) 5150 1.574(0.048) 1.597(0.027) 1.472(0.017) 1.508(0.016) 1.586(0.025) 1.561(0.033) 1.512(0.024) 1.528(0.018) 5250 1.615(0.054) 1.592(0.039) 1.475(0.026) 1.536(0.034) 1.636(0.025) 1.590(0.034) 1.539(0.032) 1.558(0.025) 5350 1.660(0.062) 1.606(0.055) 1.478(0.040) 1.536(0.054) 1.697(0.037) 1.605(0.039) 1.570(0.040) 1.597(0.037) 5450 1.682(0.073) 1.595(0.069) 1.474(0.054) 1.528(0.070) 1.736(0.056) 1.614(0.049) 1.576(0.049) 1.612(0.050) 5550 1.619(0.088) 1.497(0.081) 1.441(0.067) 1.508(0.081) 1.705(0.076) 1.591(0.061) 1.526(0.057) 1.586(0.064) 5650 1.431(0.094) 1.289(0.088) 1.385(0.079) 1.472(0.088) 1.634(0.092) 1.545(0.073) 1.442(0.062) 1.501(0.075) 5750 1.069(0.087) 0.873(0.075) 1.295(0.088) 1.403(0.095) 1.514(0.106) 1.456(0.081) 1.305(0.066) 1.308(0.078) 5850 0.524(0.060) 0.168(0.019) 1.147(0.089) 1.283(0.097) 1.302(0.111) 1.290(0.086) 1.078(0.067) 0.941(0.068) Table 6 : 6: The red-arm ASPSRCs of LAMOST spectrographs from No.9 to No.16 at 100Asteps.No.9 No.10 No.11 No.12 No.13 No.14 No.15 No.16 Wavelength SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) SRC(Err) 5680 -0.004(0.088) 0.027(0.015) 0.007(0.033) 0.013(0.005) 0.017(0.049) 0.004(0.000) -0.013(0.041) 0.024(0.008) 5780 0.251(0.031) 0.198(0.028) 0.041(0.007) 0.027(0.005) 0.065(0.012) 0.045(0.007) 0.086(0.010) 0.085(0.012) 5880 0.508(0.063) 0.340(0.048) 0.159(0.021) 0.136(0.020) 0.251(0.034) 0.183(0.022) 0.254(0.031) 0.275(0.031) 5980 0.665(0.067) 0.437(0.055) 0.330(0.039) 0.329(0.045) 0.482(0.055) 0.370(0.041) 
0.416(0.046) 0.447(0.045) 6080 0.749(0.068) 0.517(0.060) 0.501(0.054) 0.524(0.067) 0.630(0.062) 0.551(0.053) 0.556(0.054) 0.552(0.048) 6180 0.814(0.063) 0.590(0.063) 0.633(0.060) 0.656(0.078) 0.731(0.064) 0.691(0.058) 0.666(0.055) 0.643(0.051) 6280 0.873(0.059) 0.659(0.065) 0.726(0.059) 0.732(0.080) 0.808(0.061) 0.790(0.059) 0.751(0.054) 0.722(0.051) 6380 0.942(0.055) 0.726(0.066) 0.807(0.059) 0.811(0.083) 0.873(0.060) 0.872(0.060) 0.826(0.052) 0.801(0.053) 6480 1.010(0.050) 0.794(0.067) 0.876(0.057) 0.877(0.080) 0.935(0.058) 0.945(0.058) 0.895(0.050) 0.876(0.052) 6580 1.074(0.047) 0.865(0.064) 0.945(0.054) 0.939(0.076) 0.995(0.055) 1.017(0.054) 0.965(0.047) 0.946(0.050) 6680 1.138(0.044) 0.938(0.061) 1.016(0.051) 1.013(0.073) 1.056(0.051) 1.087(0.051) 1.037(0.044) 1.019(0.048) 6780 1.193(0.039) 1.012(0.056) 1.087(0.047) 1.084(0.068) 1.112(0.047) 1.153(0.046) 1.110(0.041) 1.088(0.045) 6880 1.244(0.035) 1.085(0.050) 1.155(0.043) 1.156(0.065) 1.164(0.042) 1.217(0.042) 1.177(0.037) 1.153(0.042) 6980 1.281(0.029) 1.154(0.045) 1.213(0.037) 1.214(0.056) 1.207(0.039) 1.272(0.038) 1.239(0.034) 1.210(0.037) 7080 1.315(0.025) 1.217(0.038) 1.267(0.033) 1.267(0.049) 1.242(0.034) 1.319(0.031) 1.294(0.028) 1.261(0.032) 7180 1.337(0.022) 1.271(0.032) 1.312(0.028) 1.317(0.041) 1.270(0.029) 1.357(0.026) 1.338(0.024) 1.307(0.027) 7280 1.347(0.020) 1.314(0.026) 1.344(0.025) 1.354(0.035) 1.293(0.027) 1.385(0.023) 1.371(0.022) 1.342(0.023) 7380 1.350(0.017) 1.345(0.017) 1.369(0.019) 1.383(0.025) 1.315(0.018) 1.405(0.015) 1.392(0.014) 1.368(0.016) 7480 1.345(0.021) 1.363(0.014) 1.382(0.016) 1.402(0.017) 1.333(0.014) 1.416(0.011) 1.399(0.010) 1.383(0.013) 7580 1.331(0.023) 1.369(0.012) 1.386(0.013) 1.409(0.013) 1.337(0.014) 1.412(0.014) 1.394(0.011) 1.386(0.012) 7680 1.307(0.023) 1.365(0.016) 1.381(0.014) 1.407(0.016) 1.333(0.019) 1.397(0.018) 1.379(0.015) 1.377(0.016) 7780 1.271(0.028) 1.353(0.022) 1.366(0.020) 1.393(0.021) 1.320(0.023) 1.368(0.022) 1.353(0.017) 1.360(0.020) 7880 
1.227(0.024) 1.333(0.026) 1.344(0.019) 1.367(0.030) 1.298(0.025) 1.330(0.025) 1.322(0.021) 1.333(0.022)
7980 1.180(0.026) 1.309(0.034) 1.319(0.025) 1.338(0.038) 1.267(0.029) 1.285(0.030) 1.283(0.025) 1.300(0.027)
8080 1.131(0.027) 1.282(0.041) 1.290(0.032) 1.296(0.049) 1.228(0.033) 1.236(0.034) 1.243(0.032) 1.262(0.033)
8180 1.078(0.035) 1.254(0.051) 1.255(0.040) 1.244(0.057) 1.185(0.036) 1.183(0.039) 1.202(0.036) 1.220(0.038)
8280 1.024(0.044) 1.227(0.055) 1.215(0.048) 1.183(0.062) 1.140(0.037) 1.130(0.041) 1.160(0.042) 1.176(0.045)
8380 0.982(0.048) 1.200(0.064) 1.179(0.051) 1.137(0.062) 1.096(0.046) 1.080(0.045) 1.120(0.043) 1.141(0.046)
8480 0.937(0.052) 1.164(0.068) 1.140(0.055) 1.090(0.064) 1.054(0.050) 1.033(0.048) 1.079(0.047) 1.102(0.051)
8580 0.888(0.062) 1.121(0.071) 1.093(0.057) 1.046(0.069) 1.013(0.052) 0.989(0.048) 1.039(0.048) 1.055(0.051)
8680 0.828(0.070) 1.070(0.074) 1.040(0.062) 0.992(0.071) 0.969(0.054) 0.945(0.051) 0.994(0.050) 0.998(0.052)
8780 0.756(0.072) 1.005(0.071) 0.971(0.060) 0.936(0.072) 0.911(0.058) 0.892(0.049) 0.931(0.049) 0.933(0.052)
8880 0.672(0.078) 0.920(0.078) 0.883(0.063) 0.854(0.074) 0.828(0.063) 0.815(0.053) 0.851(0.055) 0.853(0.051)
8980 0.562(0.079) 0.787(0.062) 0.754(0.056) 0.716(0.070) 0.700(0.047) 0.693(0.047) 0.720(0.046) 0.734(0.048)

Acknowledgements

The authors would like to thank M.-S. Xiang.

References

Luo, A.-L., et al. 2015, RAA, 15, 1095
Bongard, S., Canto, A., Cellier-Holzem, F., et al. 2013, A&A, 549, 8
Castelli, F., & Kurucz, R. L. 2004, New Grids of ATLAS9 Model Atmospheres, arXiv:astro-ph/0405087
Chen, J. J., Bai, Z. R., et al. 2015, RAA, 15, 608
Cui, X. Q., Zhao, Y. H., et al. 2012, RAA, 12, 1197
Cullen, H. B., & Margaret, M. S. 2011, PASP, 123, 1302
Deng, L. C., Newberg, H. J., Liu, C., et al. 2012, RAA, 12, 735
Patat, F., Moehler, S., et al. 2011, A&A, 527, 91
Fitzpatrick, E. L. 1999, PASP, 111, 63
Fitzpatrick, E. L., & Massa, D. 2007, ApJ, 663, 320
Hamuy, M., Walker, A. R., Suntzeff, N. B., et al. 1992, PASP, 104, 533
Hamuy, M., Suntzeff, N. B., Heathcote, S. R., et al. 1994, PASP, 106, 566
Grupp, F. 2004, A&A, 426, 309
Oke, J. B., & Gunn, J. E. 1983, AJ, 266, 713
Oke, J. B. 1990, AJ, 99, 1621
Oliva, E., Origlia, L., Scuderi, S., et al. 2015, A&A, 581, 47
Pickles, A. 2010, UBVRI-ZY and ugriz zeropoints from 19 calspec standards, STScI
Schlafly, E. F., Finkbeiner, D. P., & Schlegel, D. J. 2010, ApJ, 725, 1175
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Song, Y. H., Luo, A.-L., Comte, G., et al. 2012, RAA, 12, 453
Stoughton, C., Lupton, R. H., Bernardi, M., et al. 2002, AJ, 123, 485
Tonry, J. L., Stubbs, C. W., Lykke, K. R., et al. 2012, AJ, 750, 99
Xiang, M. X., Liu, X. W., et al. 2015, MNRAS, 448, 90
Wu, Y., et al. 2011, RAA, 11, 924
Wu, Y., et al. 2011, A&A, 525, A71
Yanny, B., Rockosi, C., Newberg, H. J., et al. 2009, AJ, 137, 4377
York, D. G., Adelman, J., Anderson, J. E., et al. 2000, AJ, 120, 1579
A chiral covariant approach to ρρ scattering

D. Gülmez (Helmholtz-Institut für Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn, Germany), U.-G. Meißner (Helmholtz-Institut für Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn, Germany; Institute for Advanced Simulation, Institut für Kernphysik and Jülich Center for Hadron Physics, Forschungszentrum Jülich, D-52425 Jülich, Germany), J. A. Oller (Departamento de Física, Universidad de Murcia, E-30071 Murcia, Spain)

DOI: 10.1140/epjc/s10052-017-5018-z; arXiv:1611.00168

Abstract. We analyze vector meson-vector meson scattering in a unitarized chiral theory based on a chiral covariant framework. We show that a pole assigned to the scalar meson f0(1370) can be dynamically generated from the ρρ interaction, while this is not the case for the tensor meson f2(1270) as found in earlier works. We show that the generation of the tensor state is untenable due to the extreme non-relativistic kinematics used before. We further consider the effects arising from the coupling of channels with different orbital angular momenta, which are also important. We suggest to use the formalism outlined here to obtain more reliable results for the dynamical generation of resonances in the vector-vector interaction.
Introduction

It is now commonly accepted that some hadron resonances are generated by strong non-perturbative hadron-hadron interactions. Arguably the most famous example is the Λ(1405), which arises from the coupled-channel dynamics of the strangeness S = −1 ground-state octet meson-baryon channels in the vicinity of the πΣ and K⁻p thresholds [1]. This resonance also has the outstanding feature of actually being the combination of two nearby poles, the so-called two-pole nature of the Λ(1405). In a field-theoretic sense, one should consider this state as two particles. This fact was predicted theoretically [2,3] and later unveiled experimentally [4] (see also the discussion in Ref. [5]).
Another example is the scalar meson f0(980) close to the K̄K threshold, which is often considered to arise from the strong S-wave interactions in the ππ-K̄K system with isospin zero [6-8]. A new twist was given to this field in Ref. [9], where the S-wave vector-vector (ρρ) interactions were investigated and it was found that, due to the strong binding in certain channels, the f2(1270) and the f0(1370) mesons could be explained as ρρ bound states. This approach also offered an explanation of why the tensor state f2 is lighter than the scalar one f0, as the leading-order attraction in the corresponding ρρ channel is stronger. This work was followed up by extensions to SU(3) [10], to account for radiative decays [11], and by many other works; see e.g. the short review in Ref. [12]. These results are certainly surprising and at odds with well-known features of the strong interactions. In this respect, it is a textbook result that the f2(1270) fits very well within a nearly ideally mixed P-wave qq̄ nonet comprising as well the a2(1320), f2′(1525) and K2*(1430) resonances [13-16]. Values for this mixing angle can be obtained from either the linear or quadratic mass relations, as in Ref. [16]. Non-relativistic quark-model calculations [17], as well as calculations with relativistic corrections [18], predict that the coupling of the tensor mesons to γγ should proceed predominantly through helicity two by an E1 transition. This simple qq̄ picture for the tensor f2(1270) resonance has recently been validated by the analyses performed in Ref. [19] of the high-statistics Belle data [20,21] on γγ → ππ in both the neutral and charged pion channels. Another point of importance in support of the qq̄ nature of the f2(1270) is Regge theory, since this resonance lies on a parallel linear exchange-degenerate Regge trajectory with a "universal" slope parameter of around 1 GeV⁻² [22,23].
Masses and widths of the first resonances with increasing spin lying on this Regge trajectory (ρ, f2, ρ3, f4) are nicely predicted [24] by the dual-hadronic model of Lovelace-Shapiro-Veneziano [25].

One should stress that the results of Ref. [9] were obtained based on extreme non-relativistic kinematics, p_i²/m_ρ² ≈ 0, with p the ρ-meson three-momentum and m_ρ the vector-meson mass. This approximation, however, leads to some severe simplifications:

• Due to the assumed threshold kinematics, the full ρ propagator was reduced to its scalar form, thus enabling the use of techniques already familiar from the pion-pion interaction [8]. This was applied when considering the iteration of the interactions in the Bethe-Salpeter equation.

• Based on the same argument, the algebra involving the spin and isospin projectors of the two vector-meson states could be considerably simplified.

However, as √s_th = 2m_ρ ≈ 1540 MeV, the lighter of the bound states is already quite far away from the 2ρ threshold. It is therefore legitimate to question the assumptions made in Ref. [9]. In this work, we reanalyze the same reactions using a fully covariant approach. This is technically much more involved than the formalism of the earlier works. However, as our aim is to scrutinize the approximations made there, we stay as close as possible to their choice of parameters. Additionally, we also consider coupled-channel scattering including channels with nonzero orbital angular momentum, that is, we go beyond the S-wave scattering approximation of Ref. [9]. The inclusion of coupled channels is also important when moving away from threshold. The authors of this reference only considered scattering in S-wave because of the same type of near-threshold arguments. As will be shown, the near-threshold approximation is only reliable very close to threshold.

Our work is organized as follows: In Sec. 2 we outline the formalism to analyze ρρ scattering in a covariant fashion.
In particular, we retain the full propagator structure of the ρ, which leads to a very different analytic structure of the scattering amplitude compared to the extreme non-relativistic framework. We also perform a partial-wave projection, which allows us to unitarize the tree-level scattering amplitudes using methods well established in the literature. An elaborate presentation of our results is given in Sec. 3, where we also give a detailed comparison to the earlier work based on the non-relativistic framework. Next, we consider the effect of the coupling between channels with different orbital angular momenta. We also improve the unitarization procedure by considering the first-iterated solution of the N/D method in Sec. 4, reinforcing the results obtained with the simpler unitarization method. We conclude with a summary and discussion in Sec. 5. A detailed account of the underlying projection formalism is given in App. A.

Formalism

The inclusion of vector mesons in a chiral effective Lagrangian can be done in a variety of different ways, such as treating them as heavy gauge bosons, using a tensor-field formulation, or generating them as hidden gauge particles of the non-linear σ-model. All these approaches are equivalent, as shown e.g. in the review [26]. While in principle the tensor-field formulation is preferable in the construction of chiral-invariant building blocks, we stick here to the hidden-symmetry approach, as this was also used in Ref. [9]. To be specific, the Lagrangian for the interactions among vector mesons is taken from the pure gauge-boson part of the non-linear chiral Lagrangian with hidden local symmetry [27,28],

L = −(1/4) ⟨F_µν F^µν⟩ .   (1)

Here, the symbol ⟨. . .⟩ denotes the trace in SU(2) flavor space and the field strength tensor F_µν is

F_µν = ∂_µ V_ν − ∂_ν V_µ − ig [V_µ, V_ν] ,   (2)

with the coupling constant g = M_V/2f_π and f_π ≈ 92 MeV [5] the weak pion decay constant.
The vector field V_µ is

V_µ = [ (1/√2) ρ⁰    ρ⁺
        ρ⁻          −(1/√2) ρ⁰ ]_µ .   (3)

From the Lagrangian in Eq. (1) one can straightforwardly derive the interactions between three and four vector mesons and the corresponding vertices. The corresponding Lagrangians are denoted as L_3 and L_4, respectively. The former gives rise to ρρ interactions through the exchange of a ρ meson, and the latter corresponds to purely contact interactions. We did not include the ω resonance in Eq. (3) since it does not contribute to the interaction part (in the isospin limit).

Consider first the contact vertices for the 4ρ interaction. These can be derived from Eq. (2) by keeping the terms proportional to g², leading to

L_4 = (g²/2) ⟨ V_µ V_ν V^µ V^ν − V_µ V^µ V_ν V^ν ⟩ .   (4)

The three different isospin (I) amplitudes for ρρ scattering (I = 0, 1 and 2) can be worked out from the knowledge of the transitions ρ⁺(p₁)ρ⁻(p₂) → ρ⁺(p₃)ρ⁻(p₄) and ρ⁺(p₁)ρ⁻(p₂) → ρ⁰(p₃)ρ⁰(p₄), invoking crossing as well. We have indicated the different four-momenta by p_i, i = 1, . . . , 4. The scattering amplitude for the former transition is denoted by A(p₁, p₂, p₃, p₄) and the latter one by B(p₁, p₂, p₃, p₄); the corresponding diagrams are shown in Figs. 1 and 2, respectively.

[Figure 1: Feynman diagrams for the tree-level amplitude ρ⁺ρ⁻ → ρ⁺ρ⁻.]

The contributions to these amplitudes from L_4, cf. Eq. (4), are indicated by the subscript c and are given by:

A_c(k₁, k₂, k₃, k₄) = −2g² ( 2 ε^{(1)}_µ ε^{(2)}_ν ε^{(3)ν} ε^{(4)µ} − ε^{(1)}_µ ε^{(2)µ} ε^{(3)}_ν ε^{(4)ν} − ε^{(1)}_µ ε^{(2)}_ν ε^{(3)µ} ε^{(4)ν} ) ,
B_c(k₁, k₂, k₃, k₄) = 2g² ( 2 ε^{(1)}_µ ε^{(2)µ} ε^{(3)}_ν ε^{(4)ν} − ε^{(1)}_µ ε^{(2)}_ν ε^{(3)µ} ε^{(4)ν} − ε^{(1)}_µ ε^{(2)}_ν ε^{(3)ν} ε^{(4)µ} ) .   (5)

In this equation, ε^{(i)}_µ corresponds to the polarization vector of the i-th ρ.
Each polarization vector is characterized by its three-momentum p i and third component of the spin σ i in its rest frame, so that ρ + ρ + ρ + ρ − ρ − ρ 0 ρ 0 ρ 0 ρ 0 Figure 2: Feynman diagrams for the tree-level amplitude ρ + ρ − → ρ 0 ρ 0 . (i) µ ≡ (p i , σ i ) µ . Explicit expressions of these polarization vectors are given in Eqs. (A.9) and (A.10) of Appendix A. In the following, so as to simplify the presentation, the tree-level scattering amplitudes are written for real polarization vectors. The same expressions are valid for complex ones by taking the complex conjugate of the polarization vectors attached to the final particles. 1 Considering the one-vector exchange terms, we need the three-vector interaction Lagrangian L 3 . It reads ǫ(p, σ 3 ) ǫ(k, σ 2 ) ǫ(q, σ 1 )L 3 =ig (∂ µ V ν − ∂ ν V µ )V µ V ν .(6) The basic vertex is depicted in Fig. 3 which after a simple calculation can be written as V 3 = − √ 2g (q µ (1) ν − q ν (1) µ ) (3) µ (2) ν − (k µ (2) ν − k ν (2) µ ) (1) µ (3) ν − (p µ (3) ν − p ν (3) µ ) (2) µ (1) ν .(7) In terms of this vertex, one can straightforwardly calculate the vector exchange diagrams in Figs. 1 and 2. The expression for the t-channel ρ-exchange amplitude, the middle diagram in Fig. 1, and denoted by 1 The polarization vectors (p, σ) in the Appendix A are complex, so that the polarization vectors associated with the final-state ρρ should be complex conjugated in this case. A t (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ), is A t (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ) = 2g 2 (p 1 − p 3 ) 2 − m 2 ρ + i0 + (p 1 (p 2 + p 4 ) + p 3 (p 2 + p 4 )) 1 · 3 2 · 4 +4( 1 · k 3 4 · k 2 2 · 3 + 1 · k 3 2 · k 4 3 · 4 + 3 · k 1 4 · k 2 1 · 2 + 2 · k 4 3 · k 1 1 · 4 ) −2( 1 · k 3 ( 3 · k 2 + 3 · k 4 ) 2 · 4 + 3 · k 1 ( 1 · k 2 + 1 · k 4 ) 2 · 4 + 2 · k 4 ( 4 · k 1 + 4 · k 3 ) 1 · 3 + 4 · k 2 ( 2 · k 1 + 2 · k 3 ) 1 · 3 ) ,(8) where for short, we have rewritten (i) → i , and the scalar products involving polarization vectors are indicated with a dot. 
The u-channel ρ-exchange amplitude A u (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ) can be obtained from the expression of A t by exchanging p 3 ↔ p 4 and 3 ↔ 4 . In the exchange for the polarization vectors they always refer to the same arguments of three-momentum and spin, that is, (p 3 , σ 3 ) ↔ (p 4 , σ 4 ). In this way, A u (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ) =A t (p 1 , p 2 , p 4 , p 3 ; 1 , 2 , 4 , 3 ) .(9) Notice that the second diagram in Fig. 2 is a sum of the t-channel and u-channel ρ-exchange diagrams. The s-channel exchange amplitude (the last diagram in Fig. 1) can also be obtained from A t by performing the exchange p 2 ↔ −p 3 and 2 ↔ 3 , with the same remark as above for the exchange of polarization vectors. We then have: A s (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ) =A t (p 1 , −p 3 , −p 2 , p 4 ; 1 , 3 , 2 , 4 ) .(10) The total amplitudes for ρ + ρ − → ρ + ρ − and ρ + ρ − → ρ 0 ρ 0 are A =A c + A t + A s , B =B c + A t + A u ,(11) with the usual arguments (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ). By crossing we also obtain the amplitude for ρ + ρ + → ρ + ρ + [that we denote as C(p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 )] from the one for ρ + ρ − → ρ + ρ − by exchanging p 2 ↔ −p 4 and 2 ↔ 4 , that is, C(p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ) =A(p 1 , −p 4 , p 3 , −p 2 ; 1 , 4 , 3 , 2 ) .(12) The amplitude C is purely I = 2, that we denote as T (2) . The amplitude B is an admixture of the I = 0, T (0) , and I = 2 amplitudes, B = 1 3 (T (0) − T (2) ) ,(13) from which we find that T (0) =3B + C .(14) To isolate the I = 1 amplitude, T (1) , we take the ρ + ρ − elastic amplitude A which obeys the following isospin decomposition A = 1 6 T (2) + 1 2 T (1) + 1 3 T (0) .(15) Taking into account Eqs. (14) we conclude that T (1) = 2A − 2B − C .(16) In terms of these amplitudes with well-defined isospin the expression in Eq. 
(A.48) for calculating the partial-wave amplitudes in the SJI basis (states with well-defined total angular momentum J, total spin S, orbital angular momentum and isospin I), denoted as T S;¯ S (s) = Y 0 (ẑ) 2(2J + 1) σ 1 , σ 2 ,σ 1 σ 2 , m dp Y m (p ) * (σ 1 σ 2 M |s 1 s 2 S)(mMM | SJ)(σ 1σ2M |s 1s2S )(0MM |¯ S J) ×T (I) (p 1 , p 2 , p 3 , p 4 ; 1 , 2 , 3 , 4 ) ,(17) with s the usual Mandelstam variable, p 1 = |p|ẑ, p 2 = −|p|ẑ, p 3 = p and p 4 = −p , M = σ 1 + σ 2 and M =σ 1 +σ 2 The Mandelstam variables t and u for ρρ scattering in the isospin limit are given by t = −2p 2 (1−cos θ) and u = −2p 2 (1 + cos θ), with θ the polar angle of the final momentum. The denominator in A t due to the ρ propagator, cf. Eq. (8), vanishes for t = m 2 ρ and similarly the denominator in A u for u = m 2 ρ . When performing the angular projection in Eq. (17) these poles give rise to a left-hand cut starting at the branch point s = 3m 2 ρ . This can be easily seen by considering the integration on cos θ of the fraction 1/(t − m 2 ρ + iε), which gives the same result both for the t and the u channel exchange, 1 2 +1 −1 d cos θ 1 −2p 2 (1 − cos θ) − m 2 ρ + iε = − 1 4p 2 log 4p 2 + m 2 ρ m 2 ρ + 4p 2 m 4 ρ iε ,(18) with ε → 0 + . The argument of the log becomes negative for 4p 2 < −m 2 ρ , which is equivalent to s < 3m 2 ρ . Because of the factor p 2 ε the imaginary part of the argument of the log below the threshold is negative which implies that the proper value of the partial-wave amplitude on the physical axis below the branch point at s = 3m 2 ρ is reached in the limit of vanishing negative imaginary part of s. The presence of this branch point and left-hand cut was not noticed in Ref. [9], where only the extreme non-relativistic reduction was considered, so that the ρ propagators in the ρ-exchange amplitudes collapsed to just a constant. 
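The onset of this left-hand cut can be made explicit numerically. The sketch below (a hypothetical helper of ours, written in units m_ρ = 1) evaluates the closed form of Eq. (18): the S-wave projection of the ρ-exchange propagator is real between the branch point s = 3m_ρ² and threshold, and acquires a negative imaginary part below the branch point, in line with the prescription discussed above.

```python
import numpy as np

def swave_rho_exchange_projection(s, eps=1e-12):
    """Closed form of Eq. (18): the S-wave angular average of the
    t-channel rho propagator 1/(t - m_rho^2 + i*eps), in units m_rho = 1.
    p2 is the CM three-momentum squared, negative below threshold (s < 4)."""
    p2 = s / 4.0 - 1.0
    # i*eps prescription of Eq. (18): the small imaginary part of the
    # log argument carries the sign of p2.
    arg = complex(1.0 + 4.0 * p2, eps * p2)
    return -np.log(arg) / (4.0 * p2)

# Real between the branch point s = 3 and threshold s = 4 ...
print(swave_rho_exchange_projection(3.5))
# ... but complex (negative imaginary part) below the branch point:
print(swave_rho_exchange_projection(2.0))
```

At s = 2 m_ρ² the result is purely imaginary, −iπ/2, so the partial-wave potential is manifestly complex on this part of the real axis even at tree level.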
Once we have calculated the partial-wave projected tree-level amplitude we proceed to its unitarization making use of standard techniques within unitary chiral perturbation theory [2,8,29]. This is a resummation technique that restores unitarity and also allows to study the resonance region. It has been applied to many systems and resonances by now, e.g. in meson-meson, meson-baryon, nucleon-nucleon and W W systems. Among many others we list some pioneering works for these systems [2,3,8,[30][31][32][33][34][35][36][37][38][39][40]. In the last years this approach has been applied also to systems containing mesons and baryons made from heavy quarks, some references on this topic are [41][42][43][44][45]. The basic equation to obtain the final unitarized T matrix in the subspace of coupled channels SJI, with the same JI, is 2 T (JI) (s) = I − V (JI) (s) · G(s) −1 · V (JI) (s) .(19) Here, G(s) is a diagonal matrix made up by the two-point loop function g(s) with ρρ as intermediate states, g(s) →i d 4 q (2π) 4 1 (q 2 − m 2 ρ )((P − q) 2 − m 2 ρ ) ,(20) where P 2 = s and within our normalization, cf. Eq. (A.53), Im g(s) = −|p|/8π √ s. The loop function g(s) is logarithmically divergent and it can be calculated once its value at a given reference point is subtracted. In this way, one can write down a once-subtracted dispersion relation for g(s) whose result is 3 g(s) = 1 (4π) 2 a(µ) + log m 2 ρ µ 2 + σ [log(σ + 1) − log(σ − 1)] ,(21) with σ = 1 − 4m 2 ρ s ,(22) and µ is a renormalization scale typically taken around m ρ , such the sum a(µ)+ log m 2 ρ /µ 2 is independent of µ. The subtraction constant in Eq. (21) could depend on the quantum numbers , S and J, but not on I due to the isospin symmetry [3]. To compare with the results of Ref. [9], we also evaluate the function g(s) introducing a threemomentum cutoff q max , the resulting g(s) function is denoted by g c (s), g c (s) = 1 2π 2 qmax 0 dq q 2 w(s − 4w 2 + iε) ,(23) with w = q 2 + m 2 ρ . 
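A minimal numerical sketch of this unitarization machinery (helper names and the toy potential are ours, not the paper's; m_ρ = 0.7756 GeV and μ = m_ρ are assumed). It implements the dispersive g(s) of Eq. (21), the cutoff version g_c(s) of Eq. (23) by quadrature, and T = [1 − V G]⁻¹ V of Eq. (19). Above threshold Im g = −|p|/(8π√s), so Im T⁻¹ = −Im g for any real symmetric V (exact two-body unitarity), and matching g_c at threshold reproduces the subtraction constant of Eq. (25).

```python
import numpy as np

M_RHO = 0.7756  # GeV, assumed value of the rho mass

def g_disp(s, a=-2.14, m=M_RHO):
    """Once-subtracted loop function, Eq. (21), with mu = m_rho
    (so the log(m^2/mu^2) term drops out)."""
    sigma = np.sqrt(complex(1.0 - 4.0 * m**2 / s))
    return (a + sigma * (np.log(sigma + 1.0) - np.log(sigma - 1.0))) / (4.0 * np.pi)**2

def trapezoid(y, x):
    """Basic trapezoidal rule (kept explicit for complex integrands)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def g_cutoff(s, q_max, m=M_RHO, eps=1e-12, n=20001):
    """Loop function with a three-momentum cutoff, Eq. (23), by quadrature."""
    q = np.linspace(0.0, q_max, n)
    w = np.sqrt(q**2 + m**2)
    return trapezoid(q**2 / (w * (s - 4.0 * w**2 + 1j * eps)), q) / (2.0 * np.pi**2)

def a_matched(q_max, m=M_RHO):
    """Subtraction constant matched to g_cutoff at threshold, Eq. (25)."""
    return -2.0 * np.log((q_max / m) * (1.0 + np.sqrt(1.0 + m**2 / q_max**2)))

def unitarize(V, g):
    """Coupled-channel unitarization, Eq. (19): T = [1 - V G]^(-1) V."""
    V = np.atleast_2d(np.asarray(V, dtype=complex))
    return np.linalg.solve(np.eye(V.shape[0]) - V * g, V)

s = 3.0                                  # GeV^2, above threshold 4 m_rho^2
g = g_disp(s)
T = unitarize([[-50.0, 10.0], [10.0, -30.0]], g)  # toy potential, illustration only
print(a_matched(1.0))                    # close to -2.14
```

For q_max = 875 MeV and 1 GeV, a_matched gives about −1.94 and −2.14, consistent with the subtraction constants quoted below in connection with Table 2.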
This integral can be done algebraically [46] g c (s) = 1 (4π) 2   σ   log   σ 1 + m 2 ρ q 2 max + 1   − log   σ 1 + m 2 ρ q 2 max − 1     +2 log    m ρ q max   1 + 1 + m 2 ρ q 2 max        .(24) Typical values of the cutoff are around 1 GeV. The unitarity loop function g(s) has a branch point at the ρρ threshold (s = 4m 2 ρ ) and a unitarity cut above it (s > 4m 2 ρ ). The physical values of the T -matrix T (JI) (s), with s > 4m 2 ρ , are reached in the limit of vanishing positive imaginary part of s. Notice that the left-hand cut present in V (JI) (s) for s < 3m 2 ρ does not overlap with the unitarity cut, so that V (JI) (s) is analytic in the complex s-plane around the physical s-axis for physical energies. In this way, the sign of the vanishing imaginary part of s for V (JI) (s) is of no relevance in the prescription stated above for reaching its value on the real axis with s < 3m 2 ρ according to the Feynman rules. We can also get a natural value for the subtraction constant a in Eq. (21) by matching g(s) and g c (s) at threshold where σ = 0. For µ = m ρ , a usual choice, the final expression simplifies to a = −2 log q max m ρ   1 + 1 + m 2 ρ q 2 max   .(25) It is also worth noticing that Eq. (19) gives rise to a T -matrix T (IJ) (s) that is gauge invariance in the hidden local symmetry theory because this equation just stems from the partial-wave projection of a complete on-shell tree-level calculation within that theory, which certainly is gauge invariant. Results One of our aims is to check the stability of the results of Ref. [9] under relativistic corrections, particularly regarding the generation of the poles that could be associated with the f 0 (1370) and f 2 (1270) resonances as obtained in that paper. The main source of difference between our calculated V (JI) (s) and those in Ref. [9] arises from the different treatment of the ρ-meson propagator. The point is that the authors of Ref. 
[9] take the non-relativistic limit of this propagator so that from the expression 1/(t−m 2 ρ ), cf. Eq. (8), or 1/(u − m 2 ρ ), only −1/m 2 ρ is kept. This is the reason that the tree-level amplitudes calculated in Ref. [9] do not have the branch point singularity at s = 3m 2 ρ nor the corresponding left-hand cut for s < 3m 2 ρ . It turns out that for the isoscalar tensor case, the resonance f 2 (1270) is below this branch point, so that its influence cannot be neglected when considering the generation of this pole within this approach. Uncoupled S-wave scattering The issue on the relevance of this branch point singularity in the ρ-exchange amplitudes was not addressed in Ref. [9] and it is indeed very important. This is illustrated in Fig. 4 where we plot the potentials V (JI) (s) in S-wave ( = 0) (only S-wave scattering is considered in Ref. [9]). 4 From top to bottom and left to right we show in the figure the potentials for the quantum numbers (J, I) equal to (0, 0), (2, 0), (0, 2), (2, 2) and (1, 1). The red solid and black dotted lines correspond to the real and imaginary parts of our full covariant calculation of the V (JI) (s), respectively, while the blue dashed ones are the results of Ref. [9]. The imaginary part in our results for V (JI) (s) appears below s < 3m 2 ρ due to the left-hand cut that arises from the t-and u-channel ρ-exchanges. It can be seen that our results and those of Ref. [9] are typically close near threshold (s = 4m 2 ρ ) but for lower values of s they typically depart quickly due to the onset of the branch point singularity at s = 3m 2 ρ . The strength of this singularity depends on the channel, being particularly noticeable in the (J, I) = (2, 0) channel, while for the (0, 0) channel it is comparatively weaker. The strongest attractive potentials in the near threshold region occur for (J, I) = (0, 0) and (2, 0) and in every of these channels Ref. 
[9] found a bound-state pole that the authors associated with the f 0 (1370) and f 2 (1272) resonances, respectively. For the (0, 0) quantum numbers the pole position is relatively close to the ρρ threshold, while for (2, 0) it is much further away. Two typical values of the cutoff q max were used in Ref. [9], q max = 875 MeV and 1000 MeV. We employ these values here, too, together with q max = m ρ (so that we consider three values of q max separated by around 100 MeV), and study the pole positions for our T (JI) (s) amplitudes in S wave. We only find a bound state for the isoscalar scalar case, while for the tensor case no bound state is found. In Table 1 we give the values of the pole positions for our full calculation for q max = m ρ (first), 875 (second) and 1000 MeV (third row). For comparison we also give in round brackets the bound state masses obtained in Ref. [9], when appropriate. As indicated above, the strong differences for V (20) (s) between our full covariant calculation and the one in Ref. [9] in the extreme non-relativistic limit, cf. Table 1: Pole position for the partial wave T (00) (s) (2nd column), residue (3rd column) and compositeness (4th column) as a function of the three-momentum cutoff q max (1st column). In the last three rows we take into account the finite width of the ρ in the evaluation of the g(s) function, as indicated between parenthesis by (conv). For details, see the text. In addition we also show in the third column of Table 1 the residue of T (00) (s) at the pole position s P . For a generic partial wave T (JI) S;¯ S (s), its residue at a pole is denoted by γ (JI) S γ (JI) S and is defined as γ (JI) S γ (JI) S = − lim s→s P (s − s P )T (JI) S;¯ S (s) .(26) In terms of these couplings one can also calculate the compositeness X (JI) S associated with this bound state [47][48][49], X (JI) S = − γ (JI) S; S 2 ∂g(s) ∂s s P ,(27) which in our case determines the ρρ component in such bound state. 
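As a consistency check of Eqs. (26)-(27): for a single channel with an energy-independent potential, the bound state generated by Eq. (19) must be fully ρρ-composite, |X| = 1, up to the overall sign fixed by the residue convention. The sketch below (our own helper names; m_ρ = 0.7756 GeV, a = −2.14 and the pole position are assumed purely for illustration) places a pole by hand and extracts the residue and compositeness by finite differences.

```python
import numpy as np

M_RHO = 0.7756  # GeV, assumed

def g_disp(s, a=-2.14, m=M_RHO):
    """Loop function of Eq. (21) with mu = m_rho."""
    sigma = np.sqrt(complex(1.0 - 4.0 * m**2 / s))
    return (a + sigma * (np.log(sigma + 1.0) - np.log(sigma - 1.0))) / (4.0 * np.pi)**2

# Place a bound state by hand at s_pole below threshold: 1 - V g(s_pole) = 0.
s_pole = 3.9 * M_RHO**2
V = 1.0 / g_disp(s_pole).real          # energy-independent single-channel potential

def T(s):
    """Single-channel version of Eq. (19)."""
    return V / (1.0 - V * g_disp(s))

h = 1e-7
residue = -h * T(s_pole + h)                              # gamma^2 of Eq. (26)
dg_ds = (g_disp(s_pole + h) - g_disp(s_pole - h)) / (2.0 * h)
X = -residue * dg_ds                                      # Eq. (27)
print(abs(X))
```

The modulus of X comes out equal to one, as it must: with a constant V the whole energy dependence near the pole resides in g(s), so the state is purely a ρρ composite.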
Notice that the derivative of g(s) from Eq. (21) (which is negative below threshold) does not depend on the subtraction constant, the dependence on the latter enters implicitly by the actual value of the pole position s P . Of course, if one uses a three-momentum cutoff then g c (s) must be employed in the evaluation of X (JI) S . The compositeness obtained for the pole positions in Table 1 is given in the fourth column of the same table. As expected the ρρ component is dominant, with X (00) 00 > 0.5, and increases as the pole moves closer to threshold, so that it is 73% for q max = m ρ and √ s P = 1516 MeV. We can also determine the pole positions when g(s) is calculated with exact analytical properties, Eq. (21), and taking for a the values from Eq. (25) as a function of q max . The results are given Table 2, where we also give the residue at the pole position and the calculated compositeness, in the same order as in Table. 1. The results obtained are quite close to those in this table so that we refrain of further commenting on them. Nonetheless, we should stress again that we do not find any pole for the isoscalar tensor case. We could try to enforce the generation of an isoscalar tensor pole by varying q max , when using g c (s), or by varying a, if Eq. (21) is used. In the former case a much lower value of q max is required than the chiral expansion scale around 1 GeV (q max 400 MeV), while for the latter a qualitatively similar situation arises when taking into account the relationship between a and q max of Eq. (25). Even more serious are two facts that happen in relation with this isoscalar tensor pole. First one should stress that such pole appears associated to the evolution with q max or a of a pole in the first Riemann sheet, which violates Table 2: Pole positions for the partial wave T (00) (s) (2nd column), residue (3rd column) and compositeness (4th column) as a function of the subtraction constant a (1st column). 
In the last three rows we take into account the finite width of the ρ in the evaluation of the g(s) function, this is indicated between parenthesis by (conv). analyticity. This is shown in Fig. 5 where we exhibit the evolution of this pole as a function of q max . We start the series at a low value of q max = 300 MeV, where we have two poles on the real axis, and increase the cutoff in steps of δq max = 50 MeV. These two poles get closer and merge for q max = 403.1 MeV. For larger values of the cutoff the resulting pole moves deeper into the complex plane of the physical or first Riemann sheet. Second, we obtain that X (20) 02 is larger than 1. For example, for q max = 400 MeV, there are two poles at 1422.4 and 1463.4 MeV with X (20) 02 = 2.7 and 3.8, in order, which of course makes no sense as compositeness factors have to be less or equal to one. Next, we take into account the finite width of the ρ meson in the evaluation of the unitarity two-point loop function g(s). As a result the peak in the modulus squared of the isoscalar scalar amplitude now acquires some width due to the width itself of the ρ meson. To take that into account this effect we convolute the g(s) function with a Lorentzian mass squared distribution for each of the two ρ mesons in the intermediate state [9,33]. The resulting unitarity loop function is denoted by g(s) and is given by g(s) = 1 N 2 (mρ+2Γρ) 2 (mρ−2Γρ) 2 dm 2 1 Γm 1 /π (m 2 1 − m 2 ρ ) 2 + m 2 1 Γ 2 (mρ+2Γρ) 2 (mρ−2Γρ) 2 dm 2 2 Γm 2 /π (m 2 2 − m 2 ρ ) 2 + m 2 2 Γ 2 g(s, m 2 1 , m 2 2 ) .(28) The normalization factor N is N = (mρ+2Γρ) 2 (mρ−2Γρ) 2 dm 2 Γm/π (m 2 − m 2 ρ ) 2 + m 2 Γ 2 ,(29) with Γ(m) the width of the ρ meson with mass m. Due to the P -wave nature of this decay to ππ, we take into account its strong cubic dependence on the decaying pion three-momentum and use the approximation Γ(m) =Γ ρ m 2 − 4m 2 π m 2 ρ − 4m 2 π 3 θ(m − 2m π )(30) with m π the pion mass and Γ ρ ∼ = 148 MeV [5]. 
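Eqs. (28)-(30) can be sketched numerically. The code below is a simplified single-fold version of our own making (not the paper's double convolution with the unequal-mass loop function of Eq. (31)): the equal-mass g(s) of Eq. (21) is folded once with the normalized Lorentzian mass-squared distribution built from the energy-dependent width of Eq. (30). In the narrow-width limit the smeared function collapses back onto g(s), which serves as a check of the normalization of Eq. (29).

```python
import numpy as np

M_RHO, M_PI, GAMMA_RHO = 0.7756, 0.1396, 0.148  # GeV, assumed values

def g_disp(s, m, a=-2.14):
    """Equal-mass loop function, Eq. (21), with mu = m_rho."""
    sigma = np.sqrt(complex(1.0 - 4.0 * m**2 / s))
    return (a + sigma * (np.log(sigma + 1.0) - np.log(sigma - 1.0))) / (4.0 * np.pi)**2

def width(m, gamma0=GAMMA_RHO):
    """Energy-dependent P-wave width, Eq. (30)."""
    if m <= 2.0 * M_PI:
        return 0.0
    return gamma0 * ((m**2 - 4.0 * M_PI**2) / (M_RHO**2 - 4.0 * M_PI**2))**1.5

def trapezoid(y, x):
    """Basic trapezoidal rule for (possibly complex) integrands."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def g_conv(s, gamma0=GAMMA_RHO, n=801):
    """Single-fold analogue of Eq. (28): convolute g(s) with the
    Lorentzian mass-squared distribution, normalized as in Eq. (29),
    over the window (m_rho -/+ 2*Gamma)^2."""
    m2 = np.linspace((M_RHO - 2.0 * gamma0)**2, (M_RHO + 2.0 * gamma0)**2, n)
    weight = np.array([width(np.sqrt(x), gamma0) * np.sqrt(x) / np.pi
                       / ((x - M_RHO**2)**2 + x * width(np.sqrt(x), gamma0)**2)
                       for x in m2])
    vals = np.array([g_disp(s, np.sqrt(x)) for x in m2])
    return trapezoid(weight * vals, m2) / trapezoid(weight, m2)
```

With the physical width the smearing softens the two-body threshold of g(s), which is what turns the isoscalar scalar bound state into a peak of finite width in |T^(00)|².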
The function g(s, m 2 1 , m 2 2 ) is the two-point loop function with different masses, while in Eq. (21) we give its expression for the equal mass case. When evaluated + λ 1/2 (s) 2s log λ 1/2 (s) + s − m 2 2 + m 2 1 − log λ 1/2 (s) − s + m 2 2 − m 2 1 + log λ 1/2 (s) + s + m 2 2 − m 2 1 − log λ 1/2 (s) − s − m 2 2 + m 2 1 ,(31) with λ 1/2 (s) = s 2 + m 4 1 + m 4 2 − 2sm 2 1 − 2sm 2 2 − 2m 2 1 m 2 2 . The algebraic expression of this function when calculated with a three-momentum cutoff for different masses can be found in Ref. [46], to which we refer the interested reader. When using the convoluted g(s) function we find similar masses for the peak of |T (00) | 2 in the (0, 0) channel compared to the case without convolution. The resulting peak positions are given in the last three rows of Tables 1 and 2. The effects of the non-zero ρ width are clearly seen in Fig. 6, where we plot |T (00) (s)| 2 for the different values of q max shown in Table 1. The shape of the peaks follows quite closely a Breit-Wigner form, though it is slightly wider to the right side of the peak. We find that the width decreases with the increasing value of q max , being around 45, 65 and 95 MeV for q max = 1000, 875 and 775 MeV, respectively, of similar size as those found in Ref. [9]. When using a subtraction constant instead of q max , relating them through Eq. (25), the picture is quite similar. The peak positions are given in the last three columns of Table 2 while the widths obtained are around 105, 70 and 50 MeV for a = −1.70, −1.94 and −2.14, in order. These widths are significantly smaller than the PDG values assigned to the f 0 (1370) resonance of 200-500 MeV [5]. Due to the coupling of the ρρ and ππ, this pole could develop a larger width. This is approximated in Ref. [9] by considering the imaginary part of the ππ box diagram, with a ρ → ππ vertex at each of the vertices of the box. 
These vertices are also worked out from the non-linear chiral Lagrangian with hidden gauge symmetry [27,28]. We refer to Ref. [9] for details on the calculation of this contribution. According to this reference one has to add to V (00) 00;00 and to V (20) 02;02 the contribution V (JI) 2π , given by V (00) 2π =20i Im V ππ , V (20) 2π =8i Im V ππ .(32) In the calculation of the function V ππ , Ref. [9] introduces a monopole form factor F (q) for each of the four ρ → ππ vertices in the pion box calculation, F (q) = Λ 2 − m 2 π Λ 2 − (k − q) 2(33) with k 0 = √ s/2, k = 0, q 0 = √ s/2 and q the integration variables. This introduces a sizeable dependence of the results on the value of Λ. Nonetheless, in order to compare with Ref. [9] we follow the very same scheme of calculation and take the same values for Λ, that is, 1200, 1300 and 1400 MeV. 5 The inclusion of the ππ box diagram, on top of the convolution with the ρ mass squared distribution for calculating the g(s) function, does not alter the previous conclusion on the absence of a pole in the isoscalar tensor channel. However, the isoscalar scalar pole develops a larger width around 200-300 MeV, that increases with Λ, as can be inferred from Fig. 7, where we plot |T (00) (s)| 2 . On the other hand, the position of the peak barely changes compared to the one given in the last two rows of Table 1.Tentatively this pole could be associated to the f 0 (1370) resonance, which according to Refs. [50,51] decays mostly to ππ with a width around 200 MeV. In the PDG [5] the total width of the f 0 (1370) is given with a large uncertainty, within the range 200-500 MeV and the ππ decay mode is qualified as dominant. The nearby f 0 (1500) resonance has a much smaller width, around 100 MeV, and its coupling and decay to ππ is suppressed. These properties of the f 0 (1500) are discussed in detail in Ref. [50]. 
Coupled-channel scattering

We now consider the impact on our results when allowing for the coupling between channels with different orbital angular momenta, an issue not considered in Ref. [9]. In Table 3 we show the different channels that couple for given JI quantum numbers and pay special attention to the (J, I) = (0, 0) and (2, 0) channels. Apart from the conservation of J and I, one also has to impose invariance under parity, which avoids the mixing between odd and even ℓ's. When including coupled-channel effects, one finds two poles in the channels with (J, I) = (0, 0), which are reported in Table 4 for various values of q_max (shown in the first column). We give from left to right the pole mass (second column), the residues (third and fourth columns) and the compositeness coefficients (fifth and sixth columns) of the different channels, (ℓ, S) = (0, 0) and (2, 2), respectively. One of the poles is heavier and closer to the ρρ threshold, with similar properties as the pole in the uncoupled case, compare with Table 1, particularly for q_max = 775 MeV. Nonetheless, as q_max increases the difference in the properties of this pole between the coupled and uncoupled cases becomes more pronounced. In particular, let us remark that now X^{(00)}_{00} is always around 0.7 and for q_max = 1 GeV the residue to the channel (ℓ, S) = (0, 0) is much larger than in the uncoupled case. Additionally, we now find a lighter pole which lies above the branch-point singularity at √3 m_ρ ≃ 1343 MeV.

Table 4: Bound-state poles in the partial-wave amplitudes with quantum numbers (J, I) = (0, 0) for varying cutoff q_max, which is indicated in the first column. The masses (2nd column), the residues to (ℓ, S) = (0, 0) and (2, 2) (3rd and 4th columns) and the compositeness coefficients X^{(00)}_{00} and X^{(00)}_{22} (5th and 6th columns) are also given. For the lighter poles the compositeness coefficients are small and negative, so that they cannot be interpreted as physical states, contrary to common wisdom [47][48][49].
to the (ℓ, S) = (2, 2) channel, but as the cutoff increases its residue for the channel (ℓ, S) = (0, 0) also increases in absolute value and is the largest for q_max = 1 GeV. It is then clear that both channels (ℓ, S) = (0, 0) and (2, 2) are relevant for the origin of this pole. Note that the residues for this lighter pole are negative, which is at odds with the standard interpretation of the residue (γ^{(00)}_{ℓS})^2 of a bound state as the coupling squared. This implies that the compositeness coefficients X^{(00)}_{ℓS} are all negative, which is at odds with a probabilistic interpretation, as suggested in Refs. [47][48][49] for bound states. The moduli |X^{(00)}_{ℓS}| are all small because this lighter pole lies quite far from the ρρ threshold. The fact that its mass is not far from the strong branch-point singularity at √3 m_ρ means that this pole is very much affected by the left-hand cut discontinuity. In this respect, it might well be that the presence of this pole with anomalous properties is just an artefact of the unitarization formula of Eq. (19), which treats the left-hand cut discontinuity of the potential perturbatively. One can answer this question by solving the N/D method exactly [52,53], so that the left-hand cut discontinuity of the potential is properly treated and the resulting amplitude has the right analytical properties. Let us recall that Eq. (19) is an approximate algebraic solution of the N/D method obtained by treating perturbatively the left-hand cut discontinuities of the coupled partial waves [29,36,38]. For uncoupled scattering such effects are further studied in detail in the next section.

Table 5: Bound-state poles in the partial-wave amplitudes with quantum numbers (J, I) = (2, 0) for different cutoffs q_max, indicated in the first column. The masses (2nd column) and the residues to (ℓ, S) = (0, 2), (2, 0) and (2, 2) (3rd, 4th and 5th columns) are shown. Here, "−0.0" denotes a small but negative number.
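The channel sets quoted in Tables 3-5 follow from simple selection rules: S ∈ {0, 1, 2} for two spin-1 particles, the triangle inequality |ℓ − S| ≤ J ≤ ℓ + S, Bose symmetry (ℓ + S + I even, derived in the appendix) and parity conservation, which keeps even and odd ℓ from mixing. A sketch that reproduces the sets used here, assuming the ℓ ≤ 2 truncation of the tables and the even-ℓ parity sector:

```python
def coupled_channels(J, I, l_max=2):
    """Enumerate (ℓ, S) channels coupling to given (J, I) for ρρ.

    Rules: total spin S in {0, 1, 2}, triangle condition
    |ℓ − S| <= J <= ℓ + S, Bose symmetry ℓ + S + I even, and even ℓ
    (the parity sector containing the S wave). ℓ is truncated at
    l_max, as in the tables of the text.
    """
    channels = []
    for l in range(0, l_max + 1, 2):          # even ℓ only
        for S in range(3):
            if (l + S + I) % 2 == 0 and abs(l - S) <= J <= l + S:
                channels.append((l, S))
    return channels
```

For (J, I) = (0, 0) this yields (ℓ, S) = (0, 0) and (2, 2), and for (J, I) = (2, 0) the three channels (0, 2), (2, 0) and (2, 2), matching the coupled cases discussed above.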
For the (J, I) = (2, 0) partial waves we have three coupled channels, (ℓ, S) = (0, 2), (2, 0) and (2, 2), and, contrary to the uncoupled case, we now find a pole that lies above the branch-point singularity. We give its mass and residues for different q_max in Table 5, with the same notation as in Table 4. Notice that these pole properties are very stable under the variation of q_max. This pole couples by far much more strongly to the channel with (ℓ, S) = (2, 2) than to any other channel, which indicates that it is mainly due to the dynamics associated with the (ℓ, S) = (2, 2) channel. But the same comments as given above for the lighter isoscalar scalar pole are in order here, because its residues shown in Table 5 are negative and so are the corresponding compositeness coefficients. Hence, the lighter pole for (J, I) = (0, 0) and the one found for (2, 0) cannot be considered as robust results of our analysis. This has to be contrasted with the case of the heavier isoscalar scalar pole, which is stable under relativistic corrections and coupled-channel effects and has quite standard properties regarding its couplings and compositeness coefficients.

First iterated solution of the N/D method

In this section, for definiteness, we only consider uncoupled scattering. We have in mind the (J, I) = (0, 0) and (J, I) = (2, 0) quantum numbers, to which special attention has been paid in the literature concerning the generation of poles that could be associated with the f_0(1370) and f_2(1270) resonances, as discussed above. Further applications of the improved unitarization formalism presented in this section are left for future work. According to the N/D method [54] a partial-wave amplitude can be written as

T = N(s)/D(s) , (34)

Below threshold along the real s axis this equation is purely real because D(s) has a non-vanishing imaginary part only for s > s_th.
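Bound states correspond to real zeros of D(s) below threshold, the secular condition used throughout this section. A generic bisection sketch (the physical D(s) of the first-iterated N/D solution would be passed in as the callable; the names here are ours):

```python
def find_bound_state(D, s_lo, s_hi, tol=1e-10):
    """Locate a zero of D(s) on [s_lo, s_hi] by bisection.

    Assumes D is real on the bracket and changes sign across it, as it
    does for a bound state below the two-body threshold.
    """
    f_lo = D(s_lo)
    assert f_lo * D(s_hi) < 0, "bracket must enclose a sign change"
    while s_hi - s_lo > tol:
        mid = 0.5 * (s_lo + s_hi)
        if f_lo * D(mid) <= 0:
            s_hi = mid
        else:
            s_lo, f_lo = mid, D(mid)
    return 0.5 * (s_lo + s_hi)
```

Bisection is deliberately chosen over faster root finders because D(s) from a numerical dispersion integral can be noisy, and bracketing guarantees convergence to the sign change.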
However, with our unitarization procedure from leading-order unitary chiral perturbation theory (UChPT) we have obtained the approximation

T^{(JI)}(s) = V^{(JI)}(s) / [1 − V^{(JI)}(s) g_c(s)] , (36)

and the resulting equation to look for the bound states is

D_U(s) = 1 − V(s) g_c(s) = 0 . (37)

Notice that Eq. (36), contrary to the general Eq. (35), has an imaginary part below the branch-point singularity at s = 3m_ρ^2. We can go beyond this undesired situation by considering the first-iterated solution of the N/D method. This is indeed similar to Eq. (36) but improves upon it because it allows us to go beyond the on-shell factorization employed in that equation. In the first-iterated N/D solution one identifies the numerator function N(s) with the tree-level calculation V^{(JI)}(s) and employs the exact dispersive expression for D(s). Namely, it reads^6

N(s) = V^{(JI)}(s) ,
D(s) = γ_0 + γ_1 (s − s_th) + (1/2) γ_2 (s − s_th)^2 + ((s − s_th) s^2 / π) ∫_{s_th}^∞ ds′ ρ(s′) V^{(JI)}(s′) / [(s′ − s_th)(s′ − s) s′^2] ,
T_ND(s) = N(s)/D(s) , (38)

γ_0 + γ_1 (s − s_th) + (1/2) γ_2 (s − s_th)^2 = 1 − V(s) g_c(s) − ((s − s_th) s^2 / π) ∫_{s_th}^∞ ds′ ρ(s′) V(s′) / [(s′ − s_th)(s′ − s) s′^2] ≡ ω(s) . (39)

In this way,

γ_0 = 1 − V(s_th) g_c(s_th) ,  γ_1 = ω′(s_th) ,  γ_2 = ω″(s_th) . (40)

The dependence of our present results on the cutoff used in T^{(JI)}(s) stems from the matching conditions of Eq. (40). However, let us stress that the analytical properties of D(s) and N(s) are correct: they have the RHC and LHC with the appropriate extent and branch-point singularities, respectively, and the resulting amplitude is unitarized.

Results J = 0, I = 0

We plot D(s) for (J, I) = (0, 0) in Fig. 8 for q_max = 0.7, 1 and 1.3 GeV by the red solid, green dashed and blue dash-dotted lines, in order. The crossing with the zero line (dotted one) indicates the mass of the bound state. This mass decreases with increasing q_max, being around 1.4 GeV for the largest cutoff and very close to threshold for the smallest. In Fig.
9 we compare the real (left) and imaginary (right panel) parts of D(s) and D_U(s) = 1 − V(s) g_c(s) for a cutoff of 1 GeV. We do not show more values of the cutoff because the same behavior results. The functions D(s) and D_U(s) match up to around the branch-point singularity at s = 3m_ρ^2. Below it D_U(s) becomes imaginary, cf. Eq. (37), while D(s) remains real and has this right property by construction, cf. Eq. (38). Above threshold the imaginary parts of both functions coincide, as demanded by unitarity. We see that for these quantum numbers our new improved unitarization formalism and the one used to derive Eq. (19) agree very well. The bound-state mass remains the same as given in Table 1 because the functions D(s) and D_U(s) match perfectly well in the region where these poles occur, as is clear from Fig. 9. This should be expected because for (J, I) = (0, 0) the branch-point singularity is much weaker than for other cases, e.g. (J, I) = (2, 0), as discussed above. For (J, I) = (2, 0), in contrast, D(s) is large and negative in this region, so that there is by far no pole in the f_2(1270) region. In order to show the curves more clearly we use only q_max = 1 GeV in Fig. 11; for other values of q_max the behavior is the same. In the left panel we compare the real parts of T_ND(s) (black solid) and T^{(20)}(s) (red dashed), while in the right panel we proceed similarly for the real parts of D(s) and D_U(s) = 1 − V^{(20)}(s) g_c(s), with the same type of lines in order. All the functions match near the threshold region and above it, but they strongly depart once we approach the LHC branch-point singularity at s = 3m_ρ^2 and beyond (for smaller values of s). Notice that D(s), which does not have such a branch-point singularity, follows the smooth decreasing trend already originated for 3m_ρ^2 < s < 4m_ρ^2. For (J, I) = (0, 0) the corresponding smooth trend is that of a decreasing function, cf. Fig. 8. The branch-point singularity is clearly seen in T^{(20)}(s) because it is proportional to V^{(20)}(s).
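The three-times-subtracted dispersive integral entering D(s) in Eq. (38) can be evaluated with elementary quadrature for real s below threshold. The following toy sketch makes explicit assumptions: a midpoint rule, a finite cutoff s_cut replacing the infinite upper limit (the integrand falls like 1/s′^{5/2}, so this converges), and a user-supplied real potential V rather than the physical V^{(JI)}:

```python
import math

def phase_space(s, m):
    """Two-body phase space ρ(s) = σ(s)/(16π), σ(s) = sqrt(1 − 4m²/s)."""
    return math.sqrt(1.0 - 4.0 * m * m / s) / (16.0 * math.pi)

def dispersive_integral(s, V, m, s_cut=400.0, n=20000):
    """Integral term of D(s) in Eq. (38), for real s below s_th = 4m².

    Midpoint rule on [s_th, s_cut]; the 1/sqrt(s' − s_th) behavior of
    the integrand at threshold is integrable, so the rule converges.
    """
    s_th = 4.0 * m * m
    ds = (s_cut - s_th) / n
    total = 0.0
    for i in range(n):
        sp = s_th + (i + 0.5) * ds
        total += (phase_space(sp, m) * V(sp)
                  / ((sp - s_th) * (sp - s) * sp * sp) * ds)
    return (s - s_th) * s * s / math.pi * total
```

For an attractive constant potential and s below threshold the result is real and negative, which is exactly the piece that can drive D(s) through zero and produce a bound state.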
In summary, the conclusions obtained in Sec. 3.1 regarding the generation of the pole that could be identified with the f_0(1370) and the absence of that associated with the f_2(1270), as claimed in Ref. [9], fully hold. As a matter of fact, they get reinforced after considering the more elaborate unitarization procedure obtained here by taking the first-iterated N/D solution.

Summary and conclusions

In this paper, we have revisited the issue of resonance generation in unitarized ρρ scattering using a chiral covariant formalism. The main results of our study can be summarized as follows: i) We have developed a partial-wave projection formalism that is applicable to the covariant treatment of ρρ scattering. In particular, we point out that accounting for the full ρ-meson propagator leads to a branch point in the partial-wave projected amplitudes at s = 3m_ρ^2 ≃ 1.8 GeV^2, about 208 MeV below the 2ρ threshold. This branch point does not appear in the extreme non-relativistic treatment of the propagator. ii) Evaluating the T-matrix in the standard form, see Eq. (19), which treats the left-hand cut perturbatively, we find a pole in the scalar isoscalar channel close to the ρρ threshold that can be associated with the f_0(1370) resonance, in agreement with the findings of Ref. [9], though there are minor quantitative differences. iii) In contrast to Ref. [9], we do not find a tensor state below the scalar one. This can be traced back to the influence of the aforementioned branch point. We therefore conclude that the state identified in Ref. [9] with the f_2(1270) is an artifact of the non-relativistic approximation, and its generation does not follow from the arguments given in that reference. iv) We have also worked out the effects of the coupling between channels with different orbital angular momenta, which lead to additional states.
These, however, have negative compositeness coefficients and are thus not amenable to a simple bound-state interpretation. As these states are close to the branch point at s = 3m_ρ^2, the perturbative treatment of the left-hand cut, as employed here, is certainly not sufficient to decide about their relevance. v) We have improved the treatment of the left-hand cut by employing the first-iterated N/D method; in particular, this method avoids the factorization approach of leading-order UChPT. We worked out the solutions that follow for uncoupled scattering in the (J, I) = (0, 0) and (2, 0) channels. The outcome fully agrees with the conclusions already obtained from UChPT and, notably, the absence of a pole that could be associated with the f_2(1270) is firmly reinforced. A lesson from points iii) and v) is clear. A strongly attractive interaction in a given channel is a necessary but by far not sufficient condition to generate a multi-hadron bound state. This argument, as used in Ref. [9], is in general too naive because it does not take into account the possible rise of a singularity in the true potential between the range of validity of the approximation used and the predicted bound-state mass. It could be rephrased as trying to deduce the values of the function 1/(1 + x) for x < −1 from knowing its values for x around 0. We conclude that the approach presented here should be used to investigate the possible generation of meson resonances from the interaction of vector mesons. In the next steps, we will investigate how relativistic effects affect the conclusions of the SU(3) calculation of Ref. [10] and will further sharpen the framework along the lines mentioned, in particular by solving the N/D equations exactly [53].

A Partial-wave projection formalism

In this appendix we detail the projection formalism used in this work to calculate the different ρρ partial waves.
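The boost construction of the polarization vectors developed below, Eqs. (A.1)-(A.10), can be checked numerically: the resulting ε(p, σ) must be four-transverse, p_μ ε^μ = 0, and have Minkowski norm ε·ε* = −1. A sketch of the compact form (A.10), with helper names of our choosing:

```python
import math

# Rest-frame polarization 3-vectors ε_σ of Eq. (A.2).
EPS_REST = {
    0:  [0.0, 0.0, 1.0],
    +1: [-1.0 / math.sqrt(2), -1j / math.sqrt(2), 0.0],
    -1: [ 1.0 / math.sqrt(2), -1j / math.sqrt(2), 0.0],
}

def polarization(p, sigma, m):
    """Covariant polarization vector ε(p, σ) via Eq. (A.10):
    time component γβ p̂·ε_σ, space part ε_σ + p̂ (γ−1)(p̂·ε_σ)."""
    pmod = math.sqrt(sum(x * x for x in p))
    E = math.sqrt(m * m + pmod * pmod)
    gamma, beta = E / m, pmod / E
    phat = [x / pmod for x in p]
    dot = sum(ph * e for ph, e in zip(phat, EPS_REST[sigma]))  # p̂·ε_σ
    time = gamma * beta * dot
    space = [e + ph * (gamma - 1.0) * dot
             for e, ph in zip(EPS_REST[sigma], phat)]
    return [time] + space
```

Both checks hold identically for any p and σ, which makes this a convenient regression test when implementing the partial-wave projection that follows.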
First, we give the expression for the polarization vectors of a massive spin-one particle with three-momentum p and third component of spin σ along the z axis of its rest frame, denoted by ε(p, σ). In the rest frame they are given by

ε(0, σ) = (0, ε_σ) , (A.1)

with

ε_0 = (0, 0, 1)^T ,  ε_{±1} = ∓(1/√2)(1, ±i, 0)^T . (A.2)

Next, we take a Lorentz transformation U(p) along the vector p that takes the particle four-momentum at rest to its final value,

U(p)(m, 0)^T = (E_p, p)^T , (A.3)

with E_p = √(m^2 + p^2). We also introduce the rotation R(p̂) that takes ẑ to p̂,

R(p̂) ẑ = p̂ . (A.4)

In terms of the polar (θ) and azimuthal (φ) angles of p̂ this rotation is defined as

R(p̂) = R_z(φ) R_y(θ) , (A.5)

with the subscripts z and y indicating the axis of rotation. For later convenience we write the Lorentz transformation U(p) as

U(p) = R(p̂) B_z(|p|) R(p̂)^{−1} , (A.6)

where B_z(|p|) is a boost along the ẑ axis with velocity v = −β and β = |p|/E_p. Namely,

B_z(|p|) = ( γ 0 0 γβ ; 0 1 0 0 ; 0 0 1 0 ; γβ 0 0 γ ) , (A.7)

and

γ = 1/√(1 − β^2) . (A.8)

Notice that one could also include an arbitrary rotation around the ẑ axis at the right end of Eq. (A.5). Of course, this does not have any effect on either Eq. (A.4) or Eq. (A.6) (for the latter let us note that B_z(|p|) commutes with a rotation around the ẑ axis). The action of U(p) on ε(0, σ) gives us the polarization vectors with definite three-momentum p, whose expressions are

ε(p, 0) = ( γβ cos θ , (1/2)(γ − 1) sin 2θ cos φ , (1/2)(γ − 1) sin 2θ sin φ , (1/2)(1 + γ + (γ − 1) cos 2θ) )^T ,
ε(p, ±1) = ∓(1/√2) ( γβ e^{±iφ} sin θ , 1 + (γ − 1) e^{±iφ} sin^2 θ cos φ , ±i + (γ − 1) e^{±iφ} sin^2 θ sin φ , (1/2)(γ − 1) e^{±iφ} sin 2θ )^T . (A.9)

The previous equation can be written in the more compact form

ε(p, σ) = ( γβ p̂·ε_σ , ε_σ + p̂ (γ − 1) p̂·ε_σ ) . (A.10)

In terms of the polarization vectors in Eq.
(A.10) we can write the vector field for the neutral ρ 0 particle, ρ 0 µ (x), as ρ 0 µ (x) = σ d 3 p (2π) 3 2E p ε(p, σ) µ e −ip x a(p, σ) + ε(p, σ) * µ e ip x a(p, σ) † , (A.11) with the corresponding similar expressions for the ρ ± (x) fields. Here a(p, σ) and a(p, σ) † refer to the annihilation and creation operators, with the canonical commutation relation [a(p , σ ), a(p, σ) † ] = δ σσ (2π) 3 2E p δ(p − p ) . (A.12) In order to check the time-reversal and parity-invariance properties of the vector-vector scattering amplitudes worked out from the chiral Lagrangians in Eq. (1) we notice that the polarization vectors in Eq. (A.9) satisfy the following transformation properties: ε(p, σ)) . ε(−p, σ) * =(−1) σ (−ε(p, σ) 0 , ε(p, σ)) , ε(−p, σ) =(−ε(p, σ) 0 , (A.13) A one-particle state |p, σ is obtained by the action of the creation operators on the vacuum state, |p, σ =a(p, σ) † |0, σ . (A.14) From Eq. (A.12) it follows the following normalization for such states p , σ |p, σ =δ σ σ (2π) 3 2E p δ(p − p) . (A.15) Next, we consider a two-body state characterized by the CM three-momentum p and the third components of spin σ 1 and σ 2 in their respective rest frames. This state is denoted by |p, σ 1 σ 2 . Associated to this, we can define the two-body state with orbital angular momentum with its third component of orbital angular momentum m, denoted by | m, σ 1 σ 2 as | m, σ 1 σ 2 = 1 √ 4π dp Y m (p)|p, σ 1 σ 2 . (A.16) Let us show first that this definition is meaningful because the state | m, σ 1 σ 2 transforms under the rotation group as the direct product of the irreducible representations associated to the orbital angular momentum and the spins s 1 and s 2 of the two particles. Every single-particle state |p, σ under the action of a rotation R transforms as R|p, σ =RU (p)|0, σ = U (p )U (p ) −1 RU (p)|0, σ , (A.17) and p = Rp. 7 It is straightforward to show that R = U (p ) −1 RU (p) . 
(A.18) For that we explicitly write the Lorentz transformations U (p ) and U (p) as in Eq. (A.6) so that U (p ) −1 RU (p) =R(p )B z (|p|) −1 R(p ) −1 RR(p)B z (|p|)R(p) −1 . (A.19) Next, the product of rotations R(p ) −1 RR(p) is a rotation around the z axis, R z (γ), since it leaves invariantẑ. Thus, 20) or, in other terms, R(p ) −1 RR(p) = R z (γ) ,(A.R(p ) =RR(p)R z (γ) −1 ,(A.R|p, σ =U (p )R|0, σ = σ D (s) (R) σ σ |p , σ , (A.22) with D (s) (R) the rotation matrix in the irreducible representation of the rotation group with spin s. Now, we can use this result to find the action of the rotation R on the state |p, σ 1 σ 2 which is the direct product of the states |p, σ 1 and | − p, σ 2 (once the trivial CM movement is factorized out [56]). In this way, R|p, σ 1 σ 2 = σ 1 ,σ 2 D (s 1 ) (R) σ 1 σ 1 D (s 2 ) (R) σ 2 σ 2 |p , σ 1 σ 2 . (A. 23) We are now ready to derive the action R on | m, σ 1 σ 2 , R| m, σ 1 σ 2 = σ 1 ,σ 2 D (s 1 ) (R) σ 1 σ 1 D (s 2 ) (R) σ 2 σ 2 1 √ 4π dp Y m (R −1p )|p , σ 1 σ 2 = σ 1 ,σ 2 ,m D ( ) (R) m m D (s 1 ) (R) σ 1 σ 1 D (s 2 ) (R) σ 2 σ 2 | m , σ 1 σ 2 . (A.24) In this equation we have made use of the property of the spherical harmonics 24) shows that under rotation the states defined in Eq. (A.16) has the right transformation under the action of a rotation R, and our proposition above is shown to hold. Now, because of the transformation in Eq. (A.24), corresponding to the direct product of spins s 1 , s 2 and , we can combine these angular momentum indices and end with the LSJ basis. In the latter every state is labelled by the total angular momentum J, the third component of the total angular momentum µ, orbital angular momentum and total spin S (resulting from the composition of spins s 1 and s 2 ). Namely, we use the notation |Jµ, S for these states which are then given by Y m (R −1p ) = m D ( ) (R) m m Y m (p ) . 
(A.25) Equation (A.|Jµ, S = σ 1 ,σ 2 ,m,M (σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)| m, σ 1 σ 2 , (A.26) where we have introduced the standard Clebsch-Gordan coefficients for the composition of two angular momenta. 8 Next we introduce the isospin indices α 1 and α 2 corresponding to the third components of the isospins τ 1 and τ 2 . This does not modify any of our previous considerations since isospin does not transform under the action of spatial rotations. Within the isospin formalism the ρρ states obey Bose-Einstein statistics and these symmetric states are defined by |p, σ 1 σ 2 , α 1 α 2 S = 1 √ 2 (|p, σ 1 σ 2 , α 1 α 2 + | − p, σ 2 σ 1 , α 2 α 1 ) , (A.27) with the subscript S indicating the symmetrized nature of the state under the exchange of the two particles. One can invert Eq. (A.16) and give the momentum-defined states in terms of those with well-defined orbital angular momentum, |p, σ 1 σ 2 , α 1 α 2 = √ 4π ,m Y m (p) * | m, σ 1 σ 2 , α 1 α 2 = √ 4π J, µ, , m S, M, I, t 3 Y m (p) * (σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)(α 1 α 2 t 3 |τ 1 τ 2 I)|Jµ, S, It 3 , (A.28) with I the total isospin of the particle pair and t 3 is the third component. Taking into account this result we can write the symmetrized states as |p, σ 1 σ 2 , α 1 α 2 S = √ 4π J, µ, , m S, M, I, t 3 1 + (−1) +S+I √ 2 (σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)(α 1 α 2 t 3 |τ 1 τ 2 I)Y m (p) * |Jµ, S, It 3 . (A. 29) In deducing this expression we have taken into account the following symmetric properties of the Clebsch-Gordan coefficients (σ 2 σ 1 M |s 2 s 1 S) =(−1) S−s 1 −s 2 (σ 1 σ 2 M |s 1 s 2 S) , (α 2 α 1 t 3 |t 2 t 1 I) =(−1) I−t 1 −t 2 (α 1 α 2 t 3 |τ 1 τ 2 I) , Y m (−p) =(−1) Y m (p) . (A.30) Of course, due to the fact that we are dealing with indistinguishable bosons within the isospin formalism it follows that s 1 = s 2 , τ 1 = τ 2 as well as that 2τ 1 and 2s 1 are even numbers. The combination (1 + (−1) +S+I )/ √ 2 in Eq. 
(A.29) is denoted in the following as χ( ST ) and takes into account the Bose-Einstein symmetric character of the two-particles, so that only states with even + S + I are allowed. The inversion of Eq. (A.29) gives (we assume in the following that +S+I =even, so that χ( ST ) = √ 2) |Jµ, S, It 3 = 1 √ 8π σ 1 , σ 2 M, m α 1 , α 2 dpY m (p)(σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)(α 1 α 2 t 3 |τ 1 τ 2 I)|p, σ 1 σ 2 , α 1 α 2 S . (A.31) We can also express the state |Jµ, S, It 3 in terms of the states |p, σ 1 σ 2 , α 1 α 2 without symmetrization by inverting Eq. (A.28). We would obtain the same expression as Eq. (A.31) but with a factor 1/ √ 4π instead of 1/ √ 8π, namely, The extra factor of 1/ √ 2 in Eq. (A.31) is a symmetrization factor because of the Bose-Einstein symmetry properties of the two-particle state in the symmetrized states |p, σ 1 σ 2 , α 1 α 2 S , which disappears when employing the nonsymmetrized states. In order to obtain the normalization of the states |Jµ, S, It 3 it is indeed simpler to use Eq. (A.32) though, of course, the same result is obtained if starting from Eq. (A.31). The two-body particle states with definite three-momentum satisfy the normalization p , σ 1 σ 2 , α 1 α 2 |p, σ 1 σ 2 , α 1 α 2 = 16π 2 √ s |p| δ(p −p) , (A.33) The total energy conservation guarantees that the modulus of the final and initial three-momentum in Eq. (A.34) is the same, that we denote by |p|. In terms of this result and Eq. (A.32) it follows straightforwardly by taking into account the orthogonal properties of Clebsch-Gordan coefficients and spherical harmonics that J µ , S , I t 3 |Jµ, S, It 3 = 4π √ s |p| δ J J δ µ µ δ δ S S δ I I δ t 3 t 3 (A. 34) We are interested in the partial-wave amplitude corresponding to the transition between states with quantum numbers J¯ S I to states J SI, that corresponds to the matrix element withT the T -matrix scattering operator. Here we take the convention that the quantum numbers referring to the initial state are barred. 
Of course, the matrix element in Eq. (A.35) is independent of µ and t 3 because of invariance under rotations in ordinary and isospin spaces, respectively. We can calculate this scattering matrix element in terms of those in the basis with definite three-momentum by replacing in Eq. (A.35) the states in the J S basis as given in Eq. (A.31). We then obtain in a first step T (JI) S;¯ S = 1 8π dp dp Y m (p ) * Ym (p)(σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)(α 1 α 2 t 3 |τ 1 τ 2 I) ×(σ 1σ2M |s 1s2 S)(mM µ|¯ S J)(ᾱ 1ᾱ2 t 3 |τ 1τ2 I) S p , σ 1 σ 2 , α 1 α 2 |T |p,σ 1σ2 ,ᾱ 1ᾱ2 S , (A.36) Here we have not shown the explicit indices over which the sum is done in order not to overload the notation. 9 We use next the rotation invariance of the T -matrix operatorT to simplify the previous integral so that, at the end, we have just the integration over the final three-momentum angular solid. There are several steps involved that we give in quite detail. The referred rotational invariance ofT implies that it remains invariant under the transformationT → R(p)T R(p) † , which implies at the level of the matrix elements that S p , σ 1 σ 2 , α 1 α 2 |T |p,σ 1σ2 ,ᾱ 1ᾱ2 S = S p , σ 1 σ 2 , α 1 α 2 |R(p)T R(p) † |p,σ 1σ2 ,ᾱ 1ᾱ2 S . S;¯ S = 1 8π dp dp (σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)(α 1 α 2 t 3 |τ 1 τ 2 I)D (s 1 ) Let us recall that from the composition of two rotation matrices one has that [56,57] where we have removed the primes on top of the spin and orbital angular momentum third-component symbols and in the previous sum M = σ 1 + σ 2 andM =σ 1 +σ 2 . Next, we derive the unitarity relation corresponding to our normalization for the partial-wave projected amplitudes T such that now the diagonal matrix elements of the identity operator I and S are just 1 and η i e 2iδ i , in order, where η i is the inelasticity for channel i and δ i its phase shift. σ 1 σ 1 (R † ) * D (s 2 ) Figure 1 : 1Feynman diagrams for the tree-level amplitude Figure 3 : 3Three-ρ vertex from L 3 . 
_{ℓS;ℓ̄S̄}(s) for the transition (ℓ̄S̄JI) → (ℓSJI), simplifies to T^{(JI)}

Fig. 4b, imply the final disappearance of the deep bound state for the isoscalar tensor case. The nominal three-momentum of a ρ around the mass of the f_2(1270) has a modulus of about 0.6 m_ρ ≃ 460 MeV, and for such high values of the three-momentum relativistic corrections are of importance, as explicitly calculated here. On the contrary, the (0, 0) pole is located closer to the ρρ threshold and the results are more stable against relativistic corrections, though one still finds differences of around 20 MeV in the bound-state mass.

Figure 4: S-wave potentials V^{(JI)}(s) (in MeV) for our calculation (real part: red solid line, imaginary part: black dotted line) and for the calculation of Ref. [9] (blue dashed lines).

Figure 5: Evolution of the poles in the physical Riemann sheet for the isoscalar tensor channel as a function of q_max. Two poles are present on the real axis for our starting value of q_max = 300 MeV, and along the trajectory we increase q_max in steps of δq_max = 50 MeV. The two poles merge at q_max = 403.1 MeV and for larger values of the cutoff there is one pole that moves deeper into the complex plane.

in terms of a dispersion relation it reads,

Figure 6: The amplitude squared |T^{(00)}|^2 when using the convoluted g(s) function with a cutoff q_max. The blue dashed line corresponds to q_max = 775 MeV, the red solid one to 875 MeV and the black dotted one to 1000 MeV.

Figure 7: |T^{(00)}(s)|^2 with the ππ box diagram contribution included for different values of Λ in F(q), cf. Eq. (33). Specifically, Λ = 1200 (black solid line), 1300 (red dashed line) and 1400 MeV (blue dotted line), for q_max = 875 (left panel) and q_max = 1000 MeV (right panel).
For lower values of the cutoff q max this pole couples more strongly q max (MeV) where the function D(s) has only the unitarity or right-hand cut (RHC) while N (s) only has the left-hand cut (LHC). The secular equation for obtaining resonances and bound states corresponds to look for the zeros of D(s), D(s i ) = 0 . with the phase space factor ρ(s) given by ρ(s) = σ(s)/16π. We have taken three subtractions in the dispersion relation for D(s) because V (JI) (s) diverges as s 2 for s → ∞.From our present study we have concluded that T (JI) (s) is stable in the threshold region under relativistic corrections as well as under the addition of coupled channels. Because of the stability of the results in this region under relativistic corrections and by visual inspection of the potentials inFig. 4one concludes that the near-threshold region is quite safe of the problem related to the branchpoint singularity of the LHC associated with one-ρ crossed-channel exchanges. We then determine the subtractions constants, γ 0 , γ 1 and γ 2 in D(s) by matching T N D (s), Eq.(38), and T (JI) (s), Eq.(36), around threshold (s th ). At the practical level it is more convenient to match 1/T (s), so that in the threshold region up to O(s 3 ) one has: Figure 8 : 8D(s) function, Eq. (38), for (J, I) = (0, 0). Above threshold only the real part is shown. Figure 9 :Figure 10 : 910(J, I) = (0, 0). Real (left) and imaginary (right panel) parts of D(s) compared also with D U (s) = 1 − V (00) (s)g c (s) from leading-order UChPT. 4.2 Results J = 2, I = 0 We plot D(s) for (J, I) = (2, 0) in Fig. 10 for q max = 0.7, 1 and 1.3 GeV by the red solid, green dashed and blue dash-dotted lines, respectively. One can see that in this region the function D(s) is large and D(s) function, Eq. (38), for (J, I) = (2, 0). Above threshold only the real part is shown. Figure 11 : 11(J, I) = (2, 0). Left panel: real part of T N D (s) (black solid) compared with that of T (20) (s) (red dashed). 
Right panel: comparison between the real parts of D(s) (black solid) and D U (s) = 1 − V (20) (s)g c (s) (red dashed). σ 1 , σ 2 M, m α 1 121, α 2 dpY m (p)(σ 1 σ 2 M |s 1 s 2 S)(mM µ| SJ)(α 1 α 2 t 3 |τ 1 τ 2 I)|p, σ 1 σ 2 , α 1 α 2 .(A.32) T( JI) S;¯ S = Jµ, S, It 3 |T |Jµ,¯ S , It 3 , (A.35) σ action of the rotation R(p) † (R(p) †p =ẑ and R(p) †p =p ) the final and initial states transform as, cf. Eq. (A.23),R(p) † |p,σ 1σ2 ,ᾱ 1ᾱ2 S = 2σ 2 (R † )|ẑ,σ 1σ 2 ,ᾱ 1ᾱ2 S , R(p) † |p , σ 1 σ 2 , α 1 α 2 S = σ 1 ,σ 2 D (s 1 ) σ 1 σ 1 (R † )D (s 2 ) σ 2 σ 2 (R † )|p , σ 1 σ 2 , α 1 α 2 S , (A.38)with the convention that R inside the argument of the rotation matrices refers to R(p).We insert Eqs. (A.37) and (A.38) into Eq. (A.36), and next transformp →p as integrations variables, take into account the invariance of the solid angle measure under such rotation and use Eq. (A.25) for Ym (p) =Ym (R(p)ẑ) = m D (¯ ) m m (R † )Ym ¯ (ẑ) , Y m (p ) =Y m (R(p)p ) = m D ( ) m m (R † )Y m (p ) . (A.39) Then, Eq. (A.36) for T (JI) S;¯ S can be rewritten as T (JI) (R † ) * Y m (p ) * (σ 1σ2M |s 1s2S )(mM µ|¯ S J)(ᾱ 1ᾱ2 t 3 |τ 1τ2 I) m (R † )Ym ¯ (ẑ) S p , σ 1 σ 2 , α 1 α 2 |T | |p|ẑ,σ 1σ 2 ,ᾱ 1ᾱ2 S . (A.40) m 1 ,m 2 (m 1 m 2 M 122| 1 2 L)D ( 1 ) m 1 m 1 (R)D ( 2 )m 2 m 2 (R) = M (m 1 m 2 M | 1 2 L)D (L) M M (R) . (A.41) , ¯ S . We write theŜ matrix asŜ =I − iT (A.49) which satisfies the standard unitarity relationŜ ·Ŝ † =I , (A.50) with I the identity matrix. In terms of the T -matrix, cf. (A.49), this implies that T −T † = − iTT † . (A.51) Expressed with the matrix elements in the basis SJ this relation becomes 2ImT (JI) S;¯ S = − Jµ, S, T t 3 |TT † |Jµ,¯ S , T t 3 . (A.52) In deriving the left-hand side of this equation we have taken into account that because of time-reversal symmetry T (JI) S;¯ S = T (JI) S ; S . 
On the right-hand side we now introduce a two-body resolution of the identity in terms of the states |Jμ, ℓS, It_3⟩ (we have restricted our vector space to the one generated by these states) such that, taking into account their normalization in Eq. (A.34),

2 Im T^{(JI)}_{ℓS;ℓ̄S̄} = − Σ_{ℓ″,S″} T^{(JI)}_{ℓS;ℓ″S″} ρ_{ℓ″S″} T^{(JI)*}_{ℓ″S″;ℓ̄S̄} . (A.53)

The phase space factor is included in the diagonal matrix

ρ_ij = (|p|_i / (8π√s)) δ_ij . (A.54)

A more standard definition of the S-matrix implies redefining it as Ŝ → S

Table 3: Coupled channels with different orbital angular momenta.

Taking into account Eqs. (A.20) and (A.21) in Eq. (A.19), the result in Eq. (A.18) follows because B_z(|p|) and R_z(γ) commute. Then Eq. (A.17) implies that

Footnotes: 2. In order to ease the comparison with Ref. [9] we take the same sign convention for the matrices V(s) and T(s) as in that reference. 3. It is the same result as calculating g(s) in dimensional regularization, d = 4 + 2ε, and replacing the 1/ε divergence by a constant, cf. [46]. 4. Partial waves with ℓ ≠ 0 are considered in Sec. 3.2. 5. Another more complete scheme is to work explicitly with coupled-channel scattering, as done in Ref. [50], where the ρρ and ππ channels, among many others, were explicitly included. In this way resonances develop decay widths in a fully nonperturbative fashion because of the coupling between channels. 6. A comprehensive introduction to the N/D method is given in Refs. [29,53,55,56]. 7. For a general Lorentz transformation these manipulations give rise to the Wigner rotation [56]. 8. The Clebsch-Gordan coefficient (m_1 m_2 m_3|j_1 j_2 j_3) is the composition for j_1 + j_2 = j_3, with m_i referring to the third components of the spins. 9. They correspond to those indicated under the summation symbol in Eq. (A.31), both for the initial and final states.
This work is supported in part by the DFG (SFB/TR 110, "Symmetries and the Emergence of Structure in QCD"). JAO would like to acknowledge partial financial support from the MINECO (Spain) and ERDF (European Commission) grant FPA2013-40483-P and by the Spanish Excellence Network on Hadronic Physics FIS2014-57026-REDT. The work of UGM was supported in part by The Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) grant no. 2015VMA076.

The same relation in Eq. (A.41) is applied once more to the corresponding combinations of Clebsch-Gordan coefficients and rotation matrices in Eq. (A.43). This yields the final partial-wave projection formula (A.48), which expresses T^{(JI)}_{\ell S; \bar\ell \bar S} as Y^{\bar\ell}_0(\hat{z}) times a sum over the spin and isospin third components and an integral \int d\hat{p}'\, Y^{\ell}_m(\hat{p}')^* of the Clebsch-Gordan coefficients (\sigma_1 \sigma_2 M | s_1 s_2 S)(m M \bar{M} | \ell S J)(\bar\sigma_1 \bar\sigma_2 \bar{M} | s_1 s_2 \bar{S})(0 \bar{M} \bar{M} | \bar\ell \bar{S} J)(\alpha_1 \alpha_2 t_3 | \tau_1 \tau_2 I)(\bar\alpha_1 \bar\alpha_2 t_3 | \tau_1 \tau_2 I) and the plane-wave matrix element \langle p', \sigma_1 \sigma_2, \alpha_1 \alpha_2 | T |\, |p|\hat{z}, \bar\sigma_1 \bar\sigma_2, \bar\alpha_1 \bar\alpha_2 \rangle_S.

References

R. H. Dalitz and S. F. Tuan, Phys. Rev. Lett. 2 (1959) 425.
J. A. Oller and U.-G. Meißner, Phys. Lett. B 500 (2001) 263 [hep-ph/0011146].
D. Jido, J. A. Oller, E. Oset, A. Ramos and U.-G. Meißner, Nucl. Phys. A 725 (2003) 181 [nucl-th/0303062].
H. Y. Lu et al. [CLAS Collaboration], Phys. Rev. C 88 (2013) 045202 [arXiv:1307.4411].
C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40 (2016) 100001.
J. D. Weinstein and N. Isgur, Phys. Rev. D 41 (1990) 2236.
G. Janssen, B. C. Pearce, K. Holinde and J. Speth, Phys. Rev. D 52 (1995) 2690 [nucl-th/9411021].
J. A. Oller and E. Oset, Nucl. Phys. A 620 (1997) 438; (E) Nucl. Phys. A 652 (1999) 407 [hep-ph/9702314].
R. Molina, D. Nicmorus and E. Oset, Phys. Rev. D 78 (2008) 114018 [arXiv:0809.2233 [hep-ph]].
L. S. Geng and E. Oset, Phys. Rev. D 79 (2009) 074009 [arXiv:0812.1199 [hep-ph]].
H. Nagahiro, J. Yamagata-Sekihara, E. Oset, S. Hirenzaki and R. Molina, Phys. Rev. D 79 (2009) 114023 [arXiv:0809.3717 [hep-ph]]; J. J. Xie, E. Oset and L. S. Geng, Phys. Rev. C 93 (2016) 025202 [arXiv:1509.06469 [nucl-th]].
E. Oset, L. S. Geng and R. Molina, J. Phys. Conf. Ser. 348 (2012) 012004.
D. B. Lichtenberg, Unitary Symmetry and Elementary Particles, 2nd edition, Academic Press, New York, 1978.
M. Koll, R. Ricken, D. Merten, B. C. Metsch and H. R. Petry, Eur. Phys. J. A 9 (2000) 73 [hep-ph/0008220].
R. Ricken, M. Koll, D. Merten, B. C. Metsch and H. R. Petry, Eur. Phys. J. A 9 (2000) 221 [hep-ph/0008221].
Review on 'Quark Model' in PDG (2016) [5], by C. Amsler, T. DeGrand and B. Krusche.
M. Krammer and H. Krasemann, Phys. Lett. B 73 (1978) 58.
Z. P. Li, F. E. Close and T. Barnes, Phys. Rev. D 43 (1991) 2161.
L. Y. Dai and M. R. Pennington, Phys. Rev. D 90 (2014) no. 3, 036004 [arXiv:1404.7524 [hep-ph]].
T. Mori et al. [Belle Collaboration], Phys. Rev. D 75 (2007) 051101 [hep-ex/0610038]; J. Phys. Soc. Jap. 76 (2007) 074102 [arXiv:0704.3538 [hep-ex]].
K. Abe et al. [Belle Collaboration], arXiv:0711.1926 [hep-ex]; S. Uehara et al. [Belle Collaboration], Phys. Rev. D 78 (2008) 052004 [arXiv:0805.3387 [hep-ex]]; S. Uehara et al. [Belle Collaboration], Phys. Rev. D 79 (2009) 052009 [arXiv:0903.3697 [hep-ex]].
J. A. Carrasco, J. Nebreda, J. R. Pelaez and A. P. Szczepaniak, Phys. Lett. B 749 (2015) 399 [arXiv:1504.03248 [hep-ph]]; J. T. Londergan, J. Nebreda, J. R. Pelaez and A. Szczepaniak, Phys. Lett. B 729 (2014) 9 [arXiv:1311.7552 [hep-ph]]; J. R. Pelaez and F. J. Yndurain, Phys. Rev. D 69 (2004) 114001 [hep-ph/0312187]; R. Garcia-Martin, R. Kaminski, J. R. Pelaez, J. Ruiz de Elvira and F. J. Yndurain, Phys. Rev. D 83 (2011) 074004 [arXiv:1102.2183 [hep-ph]].
A. V. Anisovich, V. V. Anisovich and A. V. Sarantsev, Phys. Rev. D 62 (2000) 051502 [hep-ph/0003113].
B. Ananthanarayan, G. Colangelo, J. Gasser and H. Leutwyler, Phys. Rept. 353 (2001) 207 [hep-ph/0005297].
G. Veneziano, Nuovo Cim. A 57 (1968) 190; C. Lovelace, Phys. Lett. B 28 (1968) 264; J. A. Shapiro, Phys. Rev. 179 (1969) 1345.
U.-G. Meißner, Phys. Rept. 161 (1988) 213.
M. Bando, T. Kugo, S. Uehara, K. Yamawaki and T. Yanagida, Phys. Rev. Lett. 54 (1985) 1215.
M. Bando, T. Kugo and K. Yamawaki, Phys. Rept. 164 (1988) 217.
J. A. Oller and E. Oset, Phys. Rev. D 60 (1999) 074023 [hep-ph/9809337].
N. Kaiser, P. B. Siegel and W. Weise, Nucl. Phys. A 594 (1995) 325 [nucl-th/9505043]; Phys. Lett. B 362 (1995) 23 [nucl-th/9507036].
Z. H. Guo, J. A. Oller and J. Ruiz de Elvira, Phys. Rev. D 86 (2012) 054006 [arXiv:1206.4163 [hep-ph]].
L. Roca, E. Oset and J. Singh, Phys. Rev. D 72 (2005) 014002 [hep-ph/0503273].
L. Alvarez-Ruso, J. A. Oller and J. M. Alarcon, Phys. Rev. D 80 (2009) 054011 [arXiv:0906.0222 [hep-ph]]; Phys. Rev. D 82 (2010) 094028 [arXiv:1007.4512 [hep-ph]].
S. Sarkar, E. Oset and M. J. Vicente Vacas, Nucl. Phys. A 750 (2005) 294; Erratum: Nucl. Phys. A 780 (2006) 90 [nucl-th/0407025].
J. A. Oller, Nucl. Phys. A 725 (2003) 85.
J. A. Oller, Phys. Lett. B 477 (2000) 187 [hep-ph/9908493].
R. L. Delgado, A. Dobado and F. J. Llanes-Estrada, Phys. Rev. Lett. 114 (2015) 221803 [arXiv:1408.1193 [hep-ph]]; A. Dobado, M. J. Herrero, J. R. Pelaez and E. Ruiz Morales, Phys. Rev. D 62 (2000) 055011 [hep-ph/9912224].
U.-G. Meißner and J. A. Oller, Nucl. Phys. A 673 (2000) 311 [nucl-th/9912026]; J. M. Alarcon, J. Martin Camalich and J. A. Oller, Annals Phys. 336 (2013) 413 [arXiv:1210.4450 [hep-ph]].
M. Albaladejo, J. A. Oller and L. Roca, Phys. Rev. D 82 (2010) 094019 [arXiv:1011.1434 [hep-ph]].
P. C. Bruns, M. Mai and U.-G. Meißner, Phys. Lett. B 697 (2011) 254 [arXiv:1012.2233 [nucl-th]].
M. Mai and U.-G. Meißner, Eur. Phys. J. A 51 (2015) no. 3, 30 [arXiv:1411.7884 [hep-ph]].
D. Gamermann and E. Oset, Eur. Phys. J. A 33 (2007) 119 [arXiv:0704.2314 [hep-ph]].
J. M. Dias, F. Aceti and E. Oset, Phys. Rev. D 91 (2015) no. 7, 076001 [arXiv:1410.1785 [hep-ph]].
O. Romanets, L. Tolos, C. Garcia-Recio, J. Nieves, L. L. Salcedo and R. G. E. Timmermans, Phys. Rev. D 85 (2012) 114032 [arXiv:1202.2239 [hep-ph]].
X. W. Kang and J. A. Oller, Phys. Rev. D 94 (2016) no. 5, 054010 [arXiv:1606.06665 [hep-ph]].
L. Roca, M. Mai, E. Oset and U.-G. Meißner, Eur. Phys. J. C 75 (2015) no. 5, 218 [arXiv:1503.02936 [hep-ph]].
J. A. Oller, E. Oset and J. R. Pelaez, Phys. Rev. D 59 (1999) 074001; (E) Phys. Rev. D 60 (1999) 099906; (E) Phys. Rev. D 75 (2007) 099903 [hep-ph/9804209].
T. Hyodo, D. Jido and A. Hosaka, Phys. Rev. C 85 (2012) 015201 [arXiv:1108.5524 [nucl-th]].
F. Aceti and E. Oset, Phys. Rev. D 86 (2012) 014012 [arXiv:1202.4607 [hep-ph]].
T. Sekihara, T. Hyodo and D. Jido, PTEP 2015 (2015) 063D04 [arXiv:1411.2308 [hep-ph]].
M. Albaladejo and J. A. Oller, Phys. Rev. Lett. 101 (2008) 252002 [arXiv:0801.4929 [hep-ph]].
D. V. Bugg, Eur. Phys. J. C 52 (2007) 55 [arXiv:0706.1341 [hep-ex]].
M. Albaladejo and J. A. Oller, Phys. Rev. C 84 (2011) 054009 [arXiv:1107.3035 [nucl-th]]; Phys. Rev. C 86 (2012) 034005 [arXiv:1201.0443 [nucl-th]].
Z. H. Guo, J. A. Oller and G. Ríos, Phys. Rev. C 89 (2014) no. 1, 014002 [arXiv:1305.5790 [nucl-th]]; J. A. Oller, Phys. Rev. C 93 (2016) 024002 [arXiv:1402.2449 [nucl-th]]; D. R. Entem and J. A. Oller, arXiv:1610.01040 [nucl-th].
G. F. Chew and S. Mandelstam, Phys. Rev. 119 (1960) 467.
M. Albaladejo and J. A. Oller, Phys. Rev. C 84 (2011) 054009 [arXiv:1107.3035 [nucl-th]]; J. A. Oller, Phys. Rev. C 93 (2016) 024002 [arXiv:1402.2449 [nucl-th]].
A. D. Martin and T. D. Spearman, Elementary Particle Theory, North-Holland Publishing Company, Amsterdam, 1970.
M. E. Rose, Elementary Theory of Angular Momentum, Dover, New York, 1995.
[]
[ "Dynamical Symmetry Approach to Periodic Hamiltonians", "Dynamical Symmetry Approach to Periodic Hamiltonians" ]
[ "Hui Li [email protected] \nCenter for Theoretical Physics\nSloane Physics Laboratory\nYale University\n06520-8120New HavenCT\n", "Dimitri Kusnezov [email protected] \nCenter for Theoretical Physics\nSloane Physics Laboratory\nYale University\n06520-8120New HavenCT\n" ]
[ "Center for Theoretical Physics\nSloane Physics Laboratory\nYale University\n06520-8120New HavenCT", "Center for Theoretical Physics\nSloane Physics Laboratory\nYale University\n06520-8120New HavenCT" ]
[]
We show that dynamical symmetry methods can be applied to Hamiltonians with periodic potentials. We construct dynamical symmetry Hamiltonians for the Scarf potential and its extensions using representations of su(1, 1) and so(2, 2). Energy bands and gaps are readily understood in terms of representation theory. We compute the transfer matrices and dispersion relations for these systems, and find that the complementary series plays a central role as well as non-unitary representations.
10.1063/1.533265
[ "https://export.arxiv.org/pdf/solv-int/9912007v1.pdf" ]
15,675,122
solv-int/9912007
ddd3cfb45a9e1c70a62c30e23f2c3a713e598330
Dynamical Symmetry Approach to Periodic Hamiltonians
arXiv:solv-int/9912007v1, 6 Dec 1999 (April 1999)

Hui Li ([email protected]) and Dimitri Kusnezov ([email protected])
Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, CT 06520-8120

PACS numbers: 03.65.Fd, 02.20.-a, 02.20.Sv, 11.30.-j
Keywords: representation theory, dynamical symmetry, exactly solvable models, periodic potentials, band structure

We show that dynamical symmetry methods can be applied to Hamiltonians with periodic potentials. We construct dynamical symmetry Hamiltonians for the Scarf potential and its extensions using representations of su(1, 1) and so(2, 2). Energy bands and gaps are readily understood in terms of representation theory. We compute the transfer matrices and dispersion relations for these systems, and find that the complementary series plays a central role, as well as non-unitary representations.

I. Introduction

Lie-algebraic techniques have found wide application to physical systems and generally provide descriptions of bound states or scattering states [1,2,3]. Once an algebraic structure is identified, such as a spectrum generating algebra, exactly solvable limits of the theory, or dynamical symmetries, can be constructed [4]. Here representation theory provides a full classification of states and often transitions [5]. These dynamical symmetry limits can be intuitive guides to the more general structure and behavior of solutions of the problem. Quantum systems can be characterized by three types of spectra: discrete (bound states), continuous (scattering states) and bands (periodic potentials). The third case corresponds to spectra with energy bands and gaps.
Up to now, however, dynamical symmetry treatments have focused only on the first two, leaving the case of band structure and its connection to representation theory unclear. In this article, we extend the dynamical symmetry approach to quantum systems by showing that Lie algebras and representation theory can also be used to treat Hamiltonians with periodic potentials, allowing the calculation of dispersion relations and transfer matrices [6]. We will focus our attention here on the Scarf potential [7] and its generalizations and show how representations of so(2, 1) and so(2, 2) can be used to explain energy bands and gaps.

The representations which will be necessary are the projective representations of su(1, 1) ~ so(2, 1). These have three families, known as the discrete, principal, and complementary series. The discrete and principal series have found much application in physics. For instance, the Pöschl-Teller Hamiltonian, H = -d^2/dx^2 + g/\cosh^2 x, can be expressed as an su(1, 1) dynamical symmetry [8], with the discrete and principal series describing the bound and scattering states. The complementary series, however, with -1/2 < j < 0, has found little application in physics and is considered to be more of a curiosity. We will see that this series is precisely what is needed to describe band structure in certain periodic potentials, and further, that the unitary representations correspond to the energy gaps, rather than the bands.

II. Scarf Potential

The Scarf potential [7] provides a convenient starting point for the dynamical symmetry analysis of periodic systems. It was originally introduced as an example of an exactly solvable crystal model. The starting point is the Hamiltonian

  H_{sc} = -\frac{d^2}{dx^2} + \frac{g}{\sin^2 x} .  (1)

The potential is shown in Fig. 1. (We choose units with mass M = 1/2 and \hbar = 1.)
The strength of the potential g is usually expressed as g = s^2 - 1/4, since for g <= -1/4 one can no longer define a Hilbert space for which the Hamiltonian is self-adjoint [9]. The dispersion relation for this Hamiltonian was found to be

  E(k) = \frac{1}{\pi^2} \left[ \cos^{-1}(\sin\pi s\, \cos k\pi) \right]^2  (2)

with the band edges for the n-th band:

  E^{\pm}_n = \left( n + \tfrac{1}{2} \pm s \right)^2 .  (3)

The bands become degenerate as s -> 0. For s = 1/2, the motion is that of a free particle with E(k) = k^2. While Scarf originally showed that the potential admits band structure for 0 < s <= 1/2, it was demonstrated more recently that the Hamiltonian has bands for 1/2 <= s < 1 [9]. In our analysis, we will see that the entire range 0 < s < 1 arises naturally from representation theory.

In order to realize the Scarf problem as a dynamical symmetry, we consider the Lie algebras isomorphic to so(3). We will see that while different constructions are possible, not all are fruitful.

A. so(3) realization

The relationship of the Scarf Hamiltonian to so(3) was noted some time ago by Gürsey [10]. Consider the realization of so(3) given by the generators:

  I_{\pm} = e^{\pm i\phi} \left[ \pm \frac{\partial}{\partial\theta} + \cot\theta \left( \mp \tfrac{1}{2} + i \frac{\partial}{\partial\phi} \right) \right]  (4)

  I_3 = -i \frac{\partial}{\partial\phi}  (5)

  I^2 = I_+ I_- + I_3^2 - I_3 = -\frac{\partial^2}{\partial\theta^2} - \frac{1}{\sin^2\theta} \left( \frac{\partial^2}{\partial\phi^2} + \frac{1}{4} \right) - \frac{1}{4}  (6)

which satisfy the usual commutation relations:

  [I_3, I_+] = I_+ , \quad [I_3, I_-] = -I_- , \quad [I_+, I_-] = 2 I_3 .  (7)

Then, using the basis \psi^m_j = \sqrt{\sin\theta}\, P^m_j(\cos\theta), with the unitary representations of so(3) labeled by (j, m), the Casimir invariant I^2 can be rewritten as the Schrödinger equation:

  \left[ -\frac{d^2}{d\theta^2} + \frac{m^2 - \tfrac{1}{4}}{\sin^2\theta} \right] \psi^m_j(\theta) = \left( j + \tfrac{1}{2} \right)^2 \psi^m_j(\theta) .  (8)

While this is Scarf's Hamiltonian with g = m^2 - 1/4 (similar to g = s^2 - 1/4 in (1)), it is not a useful realization for several reasons. For instance, one cannot obtain any band structure from the discrete representations of so(3). Here the spectrum is labeled by (j + 1/2), which identifies only bound states.
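As a quick numerical illustration (ours, not part of the original paper), the dispersion relation (2) can be evaluated on the principal branch of the inverse cosine, which covers the lowest band n = 0, and checked against the band-edge formula (3) and the free-particle limit s = 1/2:

```python
import math

def scarf_dispersion(k, s):
    """Eq. (2) on the principal branch of arccos, covering the n = 0 band."""
    return (math.acos(math.sin(math.pi * s) * math.cos(math.pi * k)) / math.pi) ** 2

s = 0.3
E_lower = scarf_dispersion(0.0, s)   # band bottom at k = 0
E_upper = scarf_dispersion(1.0, s)   # band top at k = 1
print(E_lower, (0.5 - s) ** 2)       # both ~0.04: matches Eq. (3) with n = 0
print(E_upper, (0.5 + s) ** 2)       # both ~0.64
print(scarf_dispersion(0.4, 0.5))    # free particle s = 1/2: E = k^2 ~ 0.16
```

Higher bands require continuing cos^{-1} beyond its principal branch; the n = 0 band already exhibits the band edges (1/2 - s)^2 and (1/2 + s)^2 of Eq. (3).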
Further, the strength of the potential, m^2 - 1/4, is only negative for m = 0. In this case g = -1/4 and the Hamiltonian is no longer self-adjoint. Finally, since m appears in the strength g of the potential, a given representation j would correspond to different forms of the Hamiltonian, rather than the spectrum of a single Hamiltonian. For this reason, the previous realizations of H_{sc} are not useful for the discussion of band structure.

B. so(2, 1) realization

A more suitable realization of the Scarf Hamiltonian can be found using so(2, 1) ~ su(1, 1). To obtain this form, we perform the following transformations of the so(3) algebra: (i) scaling the wavefunction by 1/\sqrt{\sin\theta}, (ii) changing \cos\theta -> \tanh\theta and (iii) taking \theta -> i\theta. The result is the so(2, 1) realization

  I_{\pm} = e^{\pm i\phi} \left( \mp \sin\theta \frac{\partial}{\partial\theta} + i \cos\theta \frac{\partial}{\partial\phi} \right)  (9)

  I_3 = -i \frac{\partial}{\partial\phi}  (10)

  I^2 = -I_+ I_- + I_3^2 - I_3 = \sin^2\theta \left( \frac{\partial^2}{\partial\theta^2} - \frac{\partial^2}{\partial\phi^2} \right)  (11)

which satisfies the commutation relations

  [I_3, I_{\pm}] = \pm I_{\pm} , \quad [I_+, I_-] = -2 I_3 .  (12)

The Casimir operator, using the basis states \psi^m_j(\theta) = P^m_j(i \cot\theta), 0 < \theta < \pi/2, reduces to Scarf's Hamiltonian in the dynamical symmetry form:

  \left[ -\frac{d^2}{d\theta^2} + \frac{j(j+1)}{\sin^2\theta} \right] \psi^m_j(\theta) = m^2 \psi^m_j(\theta) .  (13)

While this Hamiltonian is more pleasing than Eq. (8) in the sense that a single representation j will account for the spectral properties, given by m^2, the standard unitary representations (given in Appendix A) are not yet sufficient to describe the bands. These come in three series: the principal series with j = -1/2 + i\rho, \rho > 0; the discrete series D^{\pm}_j, where j = -n/2 for n = 1, 2, ...; and the complementary series, -1/2 < j < 0. In order to realize band structure as a dynamical symmetry, it is clear that we must consider slightly more general representations. For Hamiltonians with periodic potentials, V(x + a) = V(x), Bloch's theorem requires the form of the wavefunctions to be [11]

  \Psi_k(x) = e^{ikx} u_k(x) , \quad u_k(x + a) = u_k(x) ,  (14)

so that \Psi_k(x + a) = e^{ika} \Psi_k(x) is not single valued.
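The algebraic structure of the realization (9)-(11) can be checked directly. Acting on a function e^{im\phi} h(\theta), the operator i\,\partial/\partial\phi reduces to the multiplication by -m after the exponential is stripped off, so I_{\pm} becomes a one-dimensional operator that shifts m by one unit, and the su(1, 1) commutator [I_+, I_-] = -2 I_3 can be verified by finite differences. The following sketch is our own illustration; the test function h, the evaluation point and the step size are arbitrary choices:

```python
import math

DELTA = 1e-4  # finite-difference step

def d_dtheta(h, theta):
    """Central finite-difference derivative of h."""
    return (h(theta + DELTA) - h(theta - DELTA)) / (2 * DELTA)

def I_pm(sign, m, h):
    """Action of I_(+/-) from Eq. (9) on exp(i m phi) h(theta):
    the azimuthal number shifts m -> m + sign, and
    h -> -sign * sin(theta) h' - m cos(theta) h."""
    def new_h(theta):
        return (-sign * math.sin(theta) * d_dtheta(h, theta)
                - m * math.cos(theta) * h(theta))
    return m + sign, new_h

m0 = 0.4
h = lambda t: math.exp(math.sin(t))      # arbitrary smooth test function

m_a, h_a = I_pm(-1, m0, h)               # I- then I+
_, h_pm = I_pm(+1, m_a, h_a)
m_b, h_b = I_pm(+1, m0, h)               # I+ then I-
_, h_mp = I_pm(-1, m_b, h_b)

theta0 = 0.7
comm = h_pm(theta0) - h_mp(theta0)       # [I+, I-] acting on the test function
expected = -2 * m0 * h(theta0)           # -2 I3 gives -2m on exp(i m phi) h
print(comm, expected)                     # agree to finite-difference accuracy
```

The same scheme with the so(3) generators (4) reproduces [I_+, I_-] = +2 I_3 instead, which is the sign distinguishing the compact and non-compact algebras.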
To obtain multi-valued functions, we pass to the projective unitary representations of su(1, 1) ~ so(2, 1) [12,13]. In contrast to the more familiar representations of so(3), which are related to the orthogonal symmetries in the vector space R^3, the projective representations are associated with equivalence classes of vectors defined up to a phase (as in Eq. (14)). The action of a group on the projective space (rather than a vector space), defined by this equivalence class of states, leads to the projective representations. While these are multi-valued representations of the group, they are proper representations of the algebra and are hence suitable. Consequently, the single-valued representations of this covering group of su(1, 1) are infinitely many-valued representations of su(1, 1). Such representations have been used to describe bound and scattering states in the Pöschl-Teller potential [8]. They fall into the same three series as the usual unitary representations of su(1, 1) discussed above (see Appendix A). We will see that for our Scarf dynamical symmetry (13), the discrete series corresponds to the band edges, the complementary series provides the bands and gaps, while the principal series is unphysical, corresponding to the regime where the Hamiltonian is not self-adjoint.

Consider first the complementary series of the projective unitary representations of so(2, 1). Here we must have

  -\tfrac{1}{2} < j < 0 , \quad \text{or} \quad -\tfrac{1}{4} < j(j+1) < 0 .  (15)

This is precisely the range of g = j(j+1) studied initially by Scarf in Eq. (1). The states are labeled by two quantum numbers j, m, with unitary representations given by the range of quantum numbers:

  m = m_0 \pm n \ (n = 0, 1, \cdots) , \quad 0 \le m_0 < 1 , \quad m_0(1 - m_0) < -j(j+1) < \tfrac{1}{4} .  (16)

The last condition provides the range:

  0 < m_0 < -j , \quad \text{and} \quad 1 + j < m_0 < 1 ,  (17)

which is illustrated in Fig. 2. For a given value of j, j(j+1) (dots) separates unitary from non-unitary representations.
The unitary representations are given by the values of m for which the periodically continued parabola (dashes and solid) lies above j(j+1). One can now see that these unitary representations correspond to the band gaps rather than the bands by taking j -> 0. In this case the Hamiltonian (13) is that of a free particle, so that the spectrum is E = m^2 >= 0. From Eqs. (16)-(17), the bands are

  (n - j)^2 < E < (1 + j + n)^2 , \quad n - j < m < 1 + j + n .  (18)

The band edges are not contained in the complementary series. In contrast to the states in the band, the edge states are periodic. They form a discrete set of states which are associated with the discrete series. These series D^{\pm}_j have the representations j < 0 with m given by

  D^+_j : m = -j, 1 - j, 2 - j, ...  (19)

  D^-_j : m = j, j - 1, j - 2, ...  (20)

When we restrict to the range of physical interest, -1/2 < j < 0, this series provides the upper and lower band edges (compare to Eq. (18)):

  D^{\pm}_j \ \text{(lower)} : E = (n - j)^2  (21)

  D^{\pm}_{-j-1} \ \text{(upper)} : E = (n + j + 1)^2 .  (22)

Eq. (22) arises from the invariance of our Hamiltonian (13) under j -> -1 - j, allowing both discrete series D^{\pm}_j and D^{\pm}_{-1-j}. Other discrete representations with j < -1 are not useful for band structure. The band spectrum of the Scarf potential, which includes both the discrete and complementary series, is shown in Fig. 3. The shaded region corresponds to the bands (non-unitary) and the unshaded to the gaps (unitary). The remaining representations, the principal series, have j = -1/2 + i\rho (\rho > 0). This gives a potential with strength g = j(j+1) < -1/4, for which the Hamiltonian is no longer self-adjoint and is of no physical interest. Note that we have explained the band structure for strengths of the potential -1/4 < g = j(j+1) < 0 and found agreement with Scarf [7]. More recently it was noted that for 0 <= g < 3/4 there is also band structure [9]. In this range the potential is strictly positive.
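The band and gap structure following from Eqs. (18) and (21)-(22) is easy to tabulate: for -1/2 < j < 0, the n-th band occupies (n - j)^2 < E < (n + 1 + j)^2, and the gaps are the complementary intervals. A short illustrative script (our own; j and the number of bands are free parameters):

```python
def scarf_bands(j, n_max):
    """Band intervals [(n - j)^2, (n + 1 + j)^2] for -1/2 < j < 0, Eqs. (21)-(22)."""
    return [((n - j) ** 2, (n + 1 + j) ** 2) for n in range(n_max)]

bands = scarf_bands(-0.3, 4)
gaps = [(bands[i][1], bands[i + 1][0]) for i in range(len(bands) - 1)]
print("bands:", [(round(a, 2), round(b, 2)) for a, b in bands])
print("gaps :", [(round(a, 2), round(b, 2)) for a, b in gaps])

# The gaps close in the free-particle limit j -> 0, and each band
# degenerates to the point (n + 1/2)^2 as j -> -1/2.
nearly_free = scarf_bands(-1e-6, 4)
print(nearly_free[1][0] - nearly_free[0][1])   # gap width ~ 0
```

Since j < 0 implies n + 1 + j < n + 1 - j, consecutive bands never overlap, so the intervals alternate band/gap exactly as in Fig. 3.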
(The origin of the band structure here is that the matching conditions on the wavefunctions around the singularity in the potential, needed to have a self-adjoint Hamiltonian, in a sense 'dilute' the infinite potential at these points and allow bands.) While our so(2, 1) realization above cannot account for this range of g, we will see in Section III that a limiting case of an so(2, 2) dynamical symmetry will account for this range using the same complementary series. For g >= 3/4 there is no band structure, and the discrete projective representations then describe the bound state spectrum.

C. Transfer matrix

The transfer matrix T for the period x in (-\pi/2, \pi/2) can be computed directly from wave functions. However, the quadratic singularity of the potential requires some care. There are two approaches one can consider, but both are equivalent [7,9,14]. In the first, we compute the transfer matrix at x = \pm\epsilon. We then match the transfer matrices on both sides of the singularity as \epsilon -> 0, which results in matching conditions on the wavefunctions. This procedure is not equivalent to an analytical continuation around the origin. The second arises in the construction of the Hilbert space of functions for which H is self-adjoint. This gives rise to equivalent matching conditions around the origin [9]. The matrix elements of the transfer matrix are related to the values of the even and odd solutions and their first derivatives at \pi/2 (see Appendix B). We find

  T = \begin{pmatrix} \alpha & \beta \\ \beta^* & \alpha^* \end{pmatrix}  (23)

where \alpha and \beta are determined by the representations of the complementary and discrete series j, m as:

  \alpha = e^{-im\pi} \left\{ \frac{\cos\pi m}{\sin\pi(j+\tfrac{1}{2})} + i \left[ -\frac{2}{m} \frac{\Gamma(\frac{1-j+m}{2})\,\Gamma(\frac{1-j-m}{2})}{\Gamma(-\frac{j-m}{2})\,\Gamma(-\frac{j+m}{2})} + \frac{m}{2} \frac{\Gamma(\frac{1+j+m}{2})\,\Gamma(\frac{1+j-m}{2})}{\Gamma(\frac{2+j+m}{2})\,\Gamma(\frac{2+j-m}{2})} \right] \frac{\cos\pi\frac{j+m}{2}\,\cos\pi\frac{j-m}{2}}{\sin\pi(j+\tfrac{1}{2})} \right\}  (24)

  \beta = i\, e^{im\pi}\, \frac{\cos\pi\frac{j+m}{2}\,\cos\pi\frac{j-m}{2}}{\sin\pi(j+\tfrac{1}{2})} \left[ \frac{2}{m} \frac{\Gamma(\frac{1-j+m}{2})\,\Gamma(\frac{1-j-m}{2})}{\Gamma(-\frac{j-m}{2})\,\Gamma(-\frac{j+m}{2})} + \frac{m}{2} \frac{\Gamma(\frac{1+j+m}{2})\,\Gamma(\frac{1+j-m}{2})}{\Gamma(\frac{2+j+m}{2})\,\Gamma(\frac{2+j-m}{2})} \right] .  (25)
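For a potential without singularities, the transfer (monodromy) matrix over one period can also be obtained by direct numerical integration of the Schrödinger equation, and the band condition is |Tr T|/2 <= 1. The Scarf potential itself requires the matching conditions described above, so the sketch below (our own illustration) uses a smooth periodic potential instead; the choice V(x) = 2 cos 2x and the step count are arbitrary, and the free-particle case V = 0, where Tr T / 2 = cos(sqrt(E) pi), serves as a check:

```python
import math

def half_trace(E, V, period=math.pi, steps=2000):
    """Half-trace of the one-period transfer matrix of -psi'' + V(x) psi = E psi.
    Energies with |half_trace| <= 1 lie in an allowed band."""
    h = period / steps

    def integrate(y, dy):
        # RK4 for the first-order system y' = dy, dy' = (V(x) - E) y
        for i in range(steps):
            x = i * h
            def f(x, y, dy):
                return dy, (V(x) - E) * y
            k1 = f(x, y, dy)
            k2 = f(x + h / 2, y + h / 2 * k1[0], dy + h / 2 * k1[1])
            k3 = f(x + h / 2, y + h / 2 * k2[0], dy + h / 2 * k2[1])
            k4 = f(x + h, y + h * k3[0], dy + h * k3[1])
            y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            dy += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        return y, dy

    y1, _ = integrate(1.0, 0.0)      # solution with psi(0) = 1, psi'(0) = 0
    _, dy2 = integrate(0.0, 1.0)     # solution with psi(0) = 0, psi'(0) = 1
    return 0.5 * (y1 + dy2)

# Free-particle check: Tr T / 2 = cos(sqrt(E) * pi).
print(half_trace(0.25, lambda x: 0.0))            # ~ cos(pi/2) = 0
# A smooth periodic potential (period pi) opens gaps:
print(half_trace(0.9, lambda x: 2.0 * math.cos(2 * x)))
```

The dispersion relation then follows from cos(pi k) = half_trace(E), which is the numerical counterpart of Eq. (30) below for this smooth example.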
Although the Scarf Hamiltonian can be obtained from the Pöschl-Teller potential V(x) = g/\cosh^2 x through a transformation, the above transfer matrix is not related to that of the Pöschl-Teller in any simple manner. The Bloch form of the so(2, 1) wave functions for the n-th period, (n - 1/2)\pi < x <= (n + 1/2)\pi, of the Scarf Hamiltonian are readily found to be

  \Psi_k(x) = f_k(x - n\pi)\, e^{ikx}  (26)

where

  f_k(z) = e^{-ik(z + \pi/2)} \left[ a\, P^m_j(i\cot z) + b\, P^{-m}_j(i\cot z) \right]  (27)

and

  a = (-)^{-j/2} \frac{\sqrt{\pi}\, 2^{-m}}{\sin m\pi} \left[ \cos\frac{k\pi}{2} \frac{\Gamma(\frac{1-j-m}{2})}{\Gamma(\frac{-j+m}{2})} - \sin\frac{k\pi}{2} \frac{\Gamma(\frac{2+j-m}{2})}{\Gamma(\frac{1+j+m}{2})} (-)^j \right]  (28)

  b = -(-)^{-j/2} \frac{\sqrt{\pi}\, 2^{m}}{\sin m\pi} \left[ \cos\frac{k\pi}{2} \frac{\Gamma(\frac{1-j+m}{2})}{\Gamma(\frac{-j-m}{2})} - \sin\frac{k\pi}{2} \frac{\Gamma(\frac{2+j+m}{2})}{\Gamma(\frac{1+j-m}{2})} (-)^j \right] .  (29)

Since -\pi/2 < z <= \pi/2, f_k(z) is made periodic, and \Psi_k(x) satisfies Bloch's theorem.

D. Dispersion relation

Once we have the transfer matrix, the dispersion relation is obtained from \alpha by the condition [11,15]:

  \cos\pi k = \mathrm{Re}(\alpha\, e^{im\pi}) = \frac{\cos\pi m}{\sin\pi(j+\tfrac{1}{2})} .  (30)

Solving for the energy E = m^2, we find:

  E(k) = m^2 = \frac{1}{\pi^2} \left[ \cos^{-1}\left( \sin\pi(j+\tfrac{1}{2})\, \cos\pi k \right) \right]^2 .  (31)

This is precisely the result (2) obtained by Scarf. Again, the values of j and m are determined from the representations given in (18) and (21)-(22). From the dispersion relation, we can also compute the group velocity V and the effective mass M*. These will depend only upon the representation labels j and m. We have:

  V(j, m) = \frac{\partial E}{\partial k} = \frac{2m \sqrt{\cos^2\pi j - \cos^2\pi m}}{\sin\pi m} .  (32)

This is plotted in Fig. 4(a) for selected values of j. V(j, m) vanishes on the band edges. For j = 0, the Hamiltonian (13) describes free motion, and we expect V = \pm k/M = \pm 2k (dots), while for j -> -1/2 we have degenerate bands, and V -> 0 at half-integer values of m. For the effective mass:

  \frac{1}{M^*(j, m)} = \frac{\partial^2 E}{\partial k^2} = 2 \left[ \frac{\cos^2 j\pi}{\sin^2 m\pi} - \cot^2 m\pi + \frac{m\pi \sin^2 j\pi \cot m\pi}{\sin^2 m\pi} \right] .  (33)

(Note that this differs slightly from the result derived in [7].) The effective mass is shown in Fig. 4(b).
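Equation (32) can be cross-checked against a numerical derivative of the dispersion relation. Using sin pi(j + 1/2) = cos(pi j), Eq. (30) inverts to k(m) = (1/pi) arccos(cos(pi m)/cos(pi j)) on the lowest band -j < m < 1 + j. The sketch below is ours; the values of j and m are arbitrary points inside that band:

```python
import math

def k_of_m(m, j):
    """Invert Eq. (30), cos(pi k) = cos(pi m)/cos(pi j), on the lowest band
    (note sin(pi (j + 1/2)) = cos(pi j))."""
    return math.acos(math.cos(math.pi * m) / math.cos(math.pi * j)) / math.pi

def group_velocity(m, j):
    """Eq. (32): V = 2 m sqrt(cos^2(pi j) - cos^2(pi m)) / sin(pi m)."""
    return (2 * m * math.sqrt(math.cos(math.pi * j) ** 2
                              - math.cos(math.pi * m) ** 2)
            / math.sin(math.pi * m))

j, m, dm = -0.3, 0.45, 1e-5       # m inside the n = 0 band (-j, 1 + j) = (0.3, 0.7)
dE = (m + dm) ** 2 - (m - dm) ** 2            # E = m^2
dk = k_of_m(m + dm, j) - k_of_m(m - dm, j)
print(dE / dk, group_velocity(m, j))          # the two values agree
```

At the band edges m -> -j and m -> 1 + j, the square root vanishes and V -> 0, consistent with the flat dispersion at the edges noted above.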
Variation of Scarf potential

In the next section we will present a dynamical symmetry Hamiltonian for a variation of the Scarf potential using so(2, 2). This potential will have several limits where the Hamiltonian reduces to the Scarf case, including the 1/\cos^2 x potential. In order to compare the transfer matrix in this limit to the Scarf result, we consider the Scarf Hamiltonian translated by \pi/2,

  \left[ -\frac{d^2}{d\theta^2} + \frac{j(j+1)}{\cos^2\theta} \right] \psi^m_j(\theta) = m^2 \psi^m_j(\theta) .  (34)

The dispersion relation E(k) and the energy band structure will remain the same as before. The transfer matrix for (-\pi/2, \pi/2), on the other hand, will change. The new transfer matrix can be calculated easily from a translation of the solutions of the Scarf case:

  \alpha = e^{-im\pi} \left\{ \frac{\cos\pi m}{\sin\pi(j+\tfrac{1}{2})} + i \left[ -m \frac{\Gamma(j+\tfrac{1}{2})\,\Gamma(\frac{1-j+m}{2})\,\Gamma(\frac{1-j-m}{2})}{\Gamma(-j-\tfrac{1}{2})\,\Gamma(\frac{2+j+m}{2})\,\Gamma(\frac{2+j-m}{2})} + \frac{1}{m} \frac{\Gamma(-j-\tfrac{1}{2})\,\Gamma(\frac{1+j+m}{2})\,\Gamma(\frac{1+j-m}{2})}{\Gamma(j+\tfrac{1}{2})\,\Gamma(-\frac{j-m}{2})\,\Gamma(-\frac{j+m}{2})} \right] \frac{\cos\pi\frac{j+m}{2}\,\cos\pi\frac{j-m}{2}}{\sin\pi(j+\tfrac{1}{2})} \right\}  (35)

  \beta = -i\, e^{im\pi}\, \frac{\cos\pi\frac{j+m}{2}\,\cos\pi\frac{j-m}{2}}{\sin\pi(j+\tfrac{1}{2})} \left[ m \frac{\Gamma(j+\tfrac{1}{2})\,\Gamma(\frac{1-j+m}{2})\,\Gamma(\frac{1-j-m}{2})}{\Gamma(-j-\tfrac{1}{2})\,\Gamma(\frac{2+j+m}{2})\,\Gamma(\frac{2+j-m}{2})} + \frac{1}{m} \frac{\Gamma(-j-\tfrac{1}{2})\,\Gamma(\frac{1+j+m}{2})\,\Gamma(\frac{1+j-m}{2})}{\Gamma(j+\tfrac{1}{2})\,\Gamma(-\frac{j-m}{2})\,\Gamma(-\frac{j+m}{2})} \right] .  (36)

III. Generalized Scarf Potential

We have now shown that band structure can arise naturally as a dynamical symmetry. We would like to build on the analysis of the Scarf problem and study a different class of periodic potentials. Consider an extension of the Scarf potential given by a generalized Pöschl-Teller Hamiltonian [16]

  \left[ -\frac{d^2}{dx^2} + \frac{g_1}{\sin^2 x} + \frac{g_2}{\cos^2 x} \right] \Psi(x) = E\, \Psi(x) , \quad (g_1, g_2 > -\tfrac{1}{4})  (37)

While this Hamiltonian is exactly solvable, we would like to see how band structure can be obtained from representation theory using dynamical symmetry considerations. We will relate this Hamiltonian to the so(4) and so(2, 2) algebras and develop the band structure from the complementary series. We plot some forms of this potential in Fig. 5 for several values of g_1 and g_2.
Our study will be restricted to the range -1/4 < g_1, g_2 <= 0.

A. so(4) realization

We start with the realization of the so(4) algebra:

  A_{\pm} = \tfrac{1}{2}\, e^{\pm i(\phi+\alpha)} \left[ \pm \frac{\partial}{\partial\theta} + \cot 2\theta \left( i\frac{\partial}{\partial\phi} + i\frac{\partial}{\partial\alpha} \mp 1 \right) - \frac{i}{\sin 2\theta} \left( \frac{\partial}{\partial\phi} - \frac{\partial}{\partial\alpha} \right) \right]

  A_3 = -\frac{i}{2} \left( \frac{\partial}{\partial\phi} + \frac{\partial}{\partial\alpha} \right)  (38)

  B_{\pm} = \tfrac{1}{2}\, e^{\pm i(\phi-\alpha)} \left[ \pm \frac{\partial}{\partial\theta} + \cot 2\theta \left( i\frac{\partial}{\partial\phi} - i\frac{\partial}{\partial\alpha} \mp 1 \right) - \frac{i}{\sin 2\theta} \left( \frac{\partial}{\partial\phi} + \frac{\partial}{\partial\alpha} \right) \right]

  B_3 = -\frac{i}{2} \left( \frac{\partial}{\partial\phi} - \frac{\partial}{\partial\alpha} \right)

which have the commutation relations:

  [A_3, A_{\pm}] = \pm A_{\pm} , \quad [A_+, A_-] = 2 A_3 , \quad [B_3, B_{\pm}] = \pm B_{\pm} , \quad [B_+, B_-] = 2 B_3 , \quad [A, B] = 0 .  (39)

Since this is the direct product of two so(3) algebras, the quadratic Casimir invariant has the form:

  C_2 = 2(A^2 + B^2) = 2(A_+ A_- + A_3^2 - A_3 + B_+ B_- + B_3^2 - B_3)
      = -\frac{\partial^2}{\partial\theta^2} + \frac{1}{\cos^2\theta} \left( -\frac{\partial^2}{\partial\phi^2} - \frac{1}{4} \right) + \frac{1}{\sin^2\theta} \left( -\frac{\partial^2}{\partial\alpha^2} - \frac{1}{4} \right) - 1 .  (40)

The representations of so(4) can be labeled by (j_1, m; j_2, c), where j_1, j_2, m, c are nonnegative integers or half integers and -j_1 <= m <= j_1, -j_2 <= c <= j_2. It is easy to check that, as differential operators, A^2 = B^2. So for this realization we only need to consider symmetric representations with j_1 = j_2 = j. Hence C_2 = 4j(j+1). The resulting Schrödinger equation is

  \left[ -\frac{d^2}{d\theta^2} + \frac{(m+c)^2 - \tfrac{1}{4}}{\cos^2\theta} + \frac{(m-c)^2 - \tfrac{1}{4}}{\sin^2\theta} \right] \psi^{m,c}_j(\theta) = (2j+1)^2\, \psi^{m,c}_j(\theta) .  (41)

While this is suitable for bound states, the discrete representations of so(4) do not explain band structure, and the strength of the potential is not in the range of physical interest.

B. so(2, 2) realization

We can derive a more suitable realization by passing to so(2, 2). Starting with the above generators, we (i) scale the wavefunctions by 1/\sqrt{\sin\theta}, (ii) transform \cos\theta -> \tanh\theta and (iii) take \theta -> i\theta.
This results in the so(2, 2) realization: A ± = 1 2 e ±i(φ+α) ± cos θ ∂ ∂θ + i 1 + sin 2 θ 2 sin θ ∂ ∂φ + ∂ ∂α +i cos 2 θ 2 sin θ ∂ ∂φ − ∂ ∂α ∓ 1 2 sin θ A 3 = − i 2 ∂ ∂φ + ∂ ∂α (42) B ± = 1 2 e ±i(φ−α) ± cos θ ∂ ∂θ + i 1 + sin 2 θ 2 sin θ ∂ ∂φ − ∂ ∂α +i cos 2 θ 2 sin θ ∂ ∂φ + ∂ ∂α ∓ 1 2 sin θ B 3 = − i 2 ∂ ∂φ − ∂ ∂α with the commutation relations [A 3 , A + ] = A + , [A 3 , A − ] = −A − , [A + , A − ] = −2A 3 , [B 3 , B + ] = B + , [B 3 , B − ] = −B − , [B + , B − ] = −2B 3 , [A, B] = 0.(43) The quadratic Casimir invariant now has the form C 2 = 2(A 2 + B 2 ) = 2(−A + A − + A 2 3 − A 3 − B + B − + B 2 3 − B 3 )(44) = cos 2 θ ∂ 2 ∂θ 2 − cos 2 θ ∂ 2 ∂α 2 + cos 2 θ sin 2 θ ( ∂ 2 ∂φ 2 + 1 4 ) − 3 4 The states of the representations of so(2, 2) can be labeled by a direct product of representations of so(2, 1), denoted (j 1 , m; j 2 , c). Again, as differential operators, A 2 = B 2 so that j 1 = j 2 = j. Replacing θ by x, this leads to the Schrödinger equation: [− d 2 dx 2 + (m + c) 2 − 1 4 sin 2 x + (2j + 1) 2 − 1 4 cos 2 x ] ψ m,c j (x) = (m − c) 2 ψ m,c j (x)(45) Two independent solutions [17,18] in the region 0 < x < π 2 are: ψ 1 (x) = (sin 2 x) 1 4 − m+c 2 (cos 2 x) −j− 1 4 2 F 1 (−c − j, −m − j; 1 − m − c; sin 2 x) (46) ψ 2 (x) = (sin 2 x) 1 4 + m+c 2 (cos 2 x) −j− 1 4 2 F 1 (m − j, c − j; 1 + m + c; sin 2 x) In order to develop the band structure of this Schrödinger equation, we must construct the complementary series of the projective representations of so(2, 2) ∼ su(1, 1)⊕su(1, 1). This direct product structure allows us to simply use the results discussed in the Scarf dynamical symmetry. The complementary series, labeled by (j, m, c), is constructed as follows. For ranges of m and c which correspond to unitary representations of (projective) complementary series su(1, 1), the resulting so(2, 2) representation is also unitary. 
For ranges of m and c which are both non-unitary, the resulting direct product becomes unitary in the strip of physical interest, 0 < |m + c| ≤ 1/2 . The remaining cases when m is unitary and c is non-unitary and the case with m and c interchanged, result in non-unitary representations of the complementary series of so(2, 2). These non-unitary representations correspond to the energy bands of the extended Scarf potential, which can be seen by taking limiting cases where (i) the potential reduces to the Scarf case (see below) and (ii) the potential vanishes and the spectrum is continuous. Since the eigenvalue of our Hamiltonian is E = (m − c) 2 , and the strength of the potential is labeled by j and m + c, it is convenient to plot the resulting unitary and non-unitary representations of so(2, 2) versus m + c for selected values of j. This is done in Fig. 6. Here the energy gaps correspond to the shaded regions and the bands to the unshaded regions. Three values of j are chosen: (a) j = −0.45, (b) j = −0.35 and (c) j = −0.25. Case (c) corresponds to the Scarf potential limit. As j → −1/2 or |m+c| → 0, the bands become degenerate. On the other hand, when j → −1/4 and |m + c| → 1/2, the spectrum becomes continuous. For the band edges, one takes the direct product of su(1, 1) discrete projective representations. The bands E = (m − c) 2 are given by the following ranges of quantum numbers in the (m, c) plane: 2n − (m o + c o ) − 2j ≤ m − c ≤ 2n + 1− | 2j + 1 − m o − c o |, (47) 2n + 1+ | 2j + 1 − m o − c o | ≤ m − c ≤ 2n + 2 + 2j + m o + c o , where n = 0, 1, 2, ... and 0 <| m + c |≤ 1 2 , 0 < 2j + 1 ≤ 1 2 .(48) C. Transfer matrix Due to the strong singularity structure of the potential, one again must introduce boundary conditions for the solutions at singularities such that the Schrödinger operator can be made self-adjoint. Such an analysis has been undertaken in Refs. [9,17]. 
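The band ranges in Eq. (47) are straightforward to tabulate. A small sketch (parameter values chosen arbitrarily inside the strip (48)) that lists the first few bands in the variable m − c, checks that they are disjoint for a generic point, and checks that adjacent bands touch in the limit j → −1/4, |m + c| → 1/2 where the spectrum becomes continuous:

```python
# Enumerate the energy bands of Eq. (47) in the variable m - c (E = (m - c)**2),
# for parameters inside the strip (48): 0 < |m0 + c0| <= 1/2, 0 < 2j + 1 <= 1/2.
def bands(j, m0c0, nmax=3):
    """Return the band intervals [lo, hi] in m - c given by Eq. (47), n = 0..nmax-1."""
    d = abs(2 * j + 1 - m0c0)
    out = []
    for n in range(nmax):
        out.append((2 * n - m0c0 - 2 * j, 2 * n + 1 - d))        # first line of (47)
        out.append((2 * n + 1 + d, 2 * n + 2 + 2 * j + m0c0))    # second line of (47)
    return out

# Generic point: non-degenerate bands separated by finite gaps.
bs = bands(j=-0.4, m0c0=0.3)
print(bs)

# Limit j -> -1/4, m0 + c0 -> 1/2: adjacent bands touch and the spectrum
# becomes continuous, as stated in the text.
bs_cont = bands(j=-0.25, m0c0=0.5)
```

This is only a bookkeeping aid for reading Fig. 6: shaded gaps correspond to the gaps between consecutive intervals returned here.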
We can then easily compute the transfer matrix for the interval x ∈ (− π 2 , π 2 ) using the boundary values and first derivatives at π 2 . The transfer matrix is: T = α β β * α *(49) where α = e −i(m−c)π cos π(m − c) + cos π(2j + 1) cos π(m + c) sin π(2j + 1) sin π(m + c) + i 1 m − c Γ(−2j − 1)Γ(1 + j − m)Γ(1 + j − c) Γ(2j + 1)Γ(−j − m)Γ(−j − c) − (m − c) Γ(2j + 1)Γ(−j + m)Γ(−j + c) Γ(−2j − 1)Γ(1 + j + m)Γ(2 + j + c) sin π(j − m) sin π(j − c) sin π(2j + 1) sin π(m + c) (50) β = −i e i(m−c)π sin π(j − m) sin π(j − c) sin π(2j + 1) sin π(m + c) 1 m − c Γ(−2j − 1)Γ(1 + j − m)Γ(1 + j − c) Γ(2j + 1)Γ(−j − m)Γ(−j − c) + (m − c) Γ(2j + 1)Γ(−j + m)Γ(−j + c) Γ(−2j − 1)Γ(1 + j + m)Γ(2 + j + c)(51) D. Dispersion relation The dispersion relation is computed as before, using cos πk = Re(αe i(m−c)π ): cos πk = cos π(m − c) + cos π(2j + 1) cos π(m + c) sin π(2j + 1) sin π(m + c) . If we denote m + = m + c, m − = m − c,(53) then E = (m − c) 2 = m 2 − , and we find: E(k) = m 2 − = 1 π 2 cos −1 (cos πk sin π(2j + 1) sin πm + − cos π(2j + 1) cos πm + ) 2(54) The band structure could be explained through the projective representations of so(2, 2) when 0 < |m + c| ≤ 1 2 and − 1 2 < j ≤ − 1 4 . Again, non-unitary representations give the energy bands while unitary representations correspond to energy gaps. The group velocity for this potential is V(j, m, c) = ∂E ∂k = 2m − sin πm − sin 2 (2πj) − cos 2 πm + − cos 2 πm −(55) +2 cos πm − cos πm + cos 2πj] 1/2 The behavior is shown in Fig. 7 for selected values of j and m + c given by the dashed lines in Fig. 6. The effective mass M * (j, m, c) is given by: 1 M * = ∂ 2 E ∂k 2 = 2m − π [cot πm − − cos 2πj cos πm + csc πm − ] +2 csc 2 πm − (1 − m − π cot πm − ) sin 2 2πj − cos 2 πm +(56) − cos 2 πm − + 2 cos πm − cos πm + cos 2πj E. Limiting cases There are three cases where the extended Scarf potential reduces to the Scarf case: (i) When 2j + 1 = 1 2 , the potential becomes the Scarf potential and the transfer matrix is equivalent to Eqs. 
Of the three limiting cases, it is case (ii) which provides something new. To compare to the Scarf results, we let 2j + 1 =j + 1 2 , so that the potential (57) becomesj(j + 1)/ sin 2 x. For the full complementary series −1/2 < j ≤ 0, we have −1/2 <j ≤ 1/2 which corresponds to potentials g/ sin 2 x with −1/4 < g < 3/4. From (47) we find that the energy bands are given by 2n −j ≤ m − c ≤ 2n + 1 +j, (58) 2n + 1 −j ≤ m − c ≤ 2n + 2 +j. This is precisely the band structure obtained in Eq. (18), (21)-(22), but now extended to positive coupling constants, 0 ≤ g < 3/4, while using the same complementary series. This agrees with the more recent observation that the Scarf potential admits band structure for ranges of the strength which are positive [9]. It also exemplifies the fact that a dynamical symmetry does not necessarily exhaust all possible regimes of band structure, and that other realizations might provide additional regions. In principle we can extend our analysis of the generalized Scarf potential to g 1 , g 2 > 0 as well, but we do not do so here. IV. Conclusions We have shown that dynamical symmetry techniques can be applied to Hamiltonians with periodic potentials, and band structure can arise naturally from representation theory. This fills a long-standing gap in the algebraic approach to quantum systems. We have constructed dynamical symmetry Hamiltonians in so(2, 1) and so(2, 2) which can be expressed as Schrödinger operators with periodic potentials. Using projective representations motivated by Bloch's theorem, we have seen that the complementary series of so(2, 1) and so(2, 2) (and their non-unitary representations) are needed to explain band structure, while the discrete representations are important for band edges. As far as we know, this is the first application of the su(1, 1) complementary series to a physical problem. 
It now seems reasonable to loosely associate the three series of projective representations, discrete, principal and complementary, with the quantum problems of bound states, scattering states and energy bands. Using our dynamical symmetries, Hamiltonians such as Scarf's and its extension can be reduced to quadratic forms of the Cartan subalgebra generators, such as H = J 2 z , which are readily solved. We are then able to derive not only the band structure, but the dispersion relation and transfer matrix as well. It would be interesting to develop higher dimensional periodic Hamiltonians connected to representations of u(n, m) or so(n, m). In this case, the inclusion of additional discrete symmetries using point groups would be possible, and extensions to non-dynamical symmetry problems could be pursued. ACKNOWLEDGMENTS We would like to thank F. Iachello for many useful discussions. This work was supported by DOE grant DE-FG02-91ER40608. where α n = −j(j + 1) + (m 0 + n − 1)(m 0 + n) and β n = −j(j + 1) + (m 0 − n)(m 0 − n + 1). The above relations imply ||I n+1 + f || 2 = (I n+1 + f, I n+1 + f ) = α n+1 ||I n + f || 2 (A7) ||I n+1 − f || 2 = (I n+1 − f, I n+1 − f ) = β n+1 ||I n − f || 2 (A8) Starting with the initial state f , we can generate the coefficients α k and β k (k > 0). Of these coefficients, only β 1 can be positive or negative. This distinguishes the unitary and non-unitary representations. For instance β 1 > 0 when m 0 (m 0 − 1) > j(j + 1), which gives the complementary series. When we are in the region −j < m 0 < 1 + j, β 1 < 0. So if we start with a state f labeled by (j, m 0 ) with −j < m 0 < 1 + j, we find that all states obtained by operating with I + will have norms of the same sign. These are related to all the states I n − f by a sign change in the norm. Consequently, the states of the non-unitary representation can be divided into two families. 
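The sign of β_1 defined above is what separates the two situations. A quick numerical check of both statements (β_1 > 0 when m_0(m_0 − 1) > j(j + 1); β_1 < 0 for −j < m_0 < 1 + j), with an arbitrarily chosen j in the complementary-series range:

```python
# From Appendix A: beta_n = -j*(j+1) + (m0 - n)*(m0 - n + 1), so
# beta_1 = -j*(j+1) + (m0 - 1)*m0, i.e. beta_1 > 0 iff m0*(m0 - 1) > j*(j + 1).
def beta1(j, m0):
    return -j * (j + 1) + (m0 - 1) * m0

j = -0.1                       # arbitrary value with -1/2 < j < 0

# m0 outside the interval (-j, 1 + j): beta_1 > 0, the unitary
# (complementary-series) case.
b_unitary = beta1(j, 0.05)

# m0 inside -j < m0 < 1 + j: beta_1 < 0, the non-unitary case tied to the bands.
b_nonunitary = beta1(j, 0.5)
print(b_unitary, b_nonunitary)
```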
In each family, the states have norms of the same sign, while the two families are related by a change in sign in the norm.

Appendix B: A Formula for the Transfer Matrix

When the potential is symmetric about the center of each period, it is convenient to consider even and odd solutions g(E, x), u(E, x) such that

g(E, 0) = 1, g'(E, 0) = 0, (B1)
u(E, 0) = 0, u'(E, 0) = 1. (B2)

Let us define [14]

g(E, −a/2) = g(E, a/2) = g_0(E), (B3)
g'(E, −a/2) = −g'(E, a/2) = g'_0(E), (B4)
u(E, −a/2) = −u(E, a/2) = u_0(E), (B5)
u'(E, −a/2) = u'(E, a/2) = u'_0(E). (B6)

According to the definition of the transfer matrix [15], we can derive a formula as follows:

T = [ α  β ; β*  α* ], (B7)

where

α = e^{−ika} [(g_0 u'_0 + g'_0 u_0) + i(u'_0 g'_0/k − u_0 g_0 k)], (B8)
β = −i e^{ika} (u'_0 g'_0/k + u_0 g_0 k), (B9)

k = √E, and a is the period.

The so(2, 1) commutation relations: [I_3, I_+] = I_+, [I_3, I_−] = −I_−, [I_+, I_−] = −2I_3.

Caption of Fig. 2: from Eqs. (16)-(17) and Fig. 2, we see that as j → 0 the allowed values of m become restricted to m = 0, ±1, ±2, .... Therefore, for a specific j, E = m² has band structure, with the range of m from unitary projective representations giving the energy gaps. The non-unitary projective representations of the complementary series give the energy bands.

Caption of panel (b): 1/M*(j, m) is shown for selected values of j. For j = 0, M* = M = 1/2, while for j → −1/2, 1/M* → 0.

Continuation of the limiting cases of Sec. III.E: ...equivalent to Eqs. (24)-(25). (ii) When m + c = 1/2, Eq. (38) reduces to the potential (57), and the transfer matrix is consistent with the results of Sec. II.E. (iii) When |m + c| = 2j + 1, the Hamiltonian reduces to the Scarf potential with twice the period.

Appendix A: Representations of so(2, 1)

First, let us recall the presentation of so(3). The algebra can be realized as differential operators on the sphere x² + y² + z² = 1. The representations are labeled by (j, m), where j is any non-negative half integer and −j ≤ m ≤ j. The so(2, 1) algebra can be realized as differential operators on a hyperboloid −x² − y² + z² = 1.
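A simple sanity check of Eqs. (B7)-(B9): for a vanishing potential, the even and odd solutions are g(E, x) = cos(kx) and u(E, x) = sin(kx)/k, and the formula should collapse to the identity matrix. A short sketch:

```python
import cmath
import math

# Free-particle check of Eqs. (B7)-(B9): with V = 0, g(E,x) = cos(k x) and
# u(E,x) = sin(k x)/k, so the transfer matrix T should reduce to the identity.
def transfer_matrix(E, a):
    k = math.sqrt(E)
    g0 = math.cos(k * a / 2)              # g(E, -a/2), Eq. (B3)
    g0p = k * math.sin(k * a / 2)         # -g'(E, a/2), Eq. (B4)
    u0 = -math.sin(k * a / 2) / k         # u(E, -a/2), Eq. (B5)
    u0p = math.cos(k * a / 2)             # u'(E, -a/2), Eq. (B6)
    alpha = cmath.exp(-1j * k * a) * ((g0 * u0p + g0p * u0)
                                      + 1j * (u0p * g0p / k - u0 * g0 * k))
    beta = -1j * cmath.exp(1j * k * a) * (u0p * g0p / k + u0 * g0 * k)
    return alpha, beta

alpha, beta = transfer_matrix(E=2.3, a=1.7)
print(alpha, beta)   # alpha ~ 1, beta ~ 0
```

For a non-trivial potential, the same function with the actual g_0, g'_0, u_0, u'_0 boundary data reproduces the dispersion relation via cos(πk_Bloch) = Re(α e^{ika}).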
The unitary representations are [12]:
• The discrete series D^+_j, where j is a negative integer or half integer and m = −j, −j + 1, . . .
• The discrete series D^−_j, j < 0, m = j, j − 1, . . .
Since we find that the non-unitary representations are important for the bands, we review their origin [13]. Assuming I_3 f = m_0 f, with 0 ≤ m_0 < 1, and using the commutation relations for so(2, 1), we have the norm relations above, where I² = j(j + 1) is the Casimir, a constant for a specific representation. Replacing f by I_+^{n−1} f and I_−^{n−1} f (n = 1, 2, . . .) in the last two equations, we get Eqs. (A7)-(A8).

References

[1] A. Barut, A. Bohm and Y. Ne'eman (Eds.), Dynamical Groups and Spectrum Generating Algebras (World Scientific, Singapore, 1987).
[2] F. Iachello, Chem. Phys. Lett. 78, 581 (1981); F. Iachello and R. D. Levine, J. Chem. Phys. 77, 3046 (1982); F. Iachello and R. Levine, Algebraic Theory of Molecules (Oxford Press, Oxford, 1995); F. Iachello and A. Arima, The Interacting Boson Model (Cambridge Press, Cambridge, 1987).
[3] Y. Alhassid, F. Gürsey and F. Iachello, Phys. Rev. Lett. 50, 873 (1983).
[4] See for example, F. Iachello, Rev. Nuovo Cimento 19, 1 (1996), and references therein.
[5] D. Kusnezov, Phys. Rev. Lett. 79, 537 (1997).
[6] H. Li and D. Kusnezov, Yale Univ. preprint (1999); ibid., in Group 22: International Colloquium on Group Theoretical Methods in Physics, Eds. S.P. Corney, R. Delbourgo and P.D.
Jarvis (International, Cambridge, MA, 1999), p. 310.
[7] F. L. Scarf, Phys. Rev. 112, 1137 (1958).
[8] Y. Alhassid, F. Gürsey and F. Iachello, Ann. Phys. (NY) 167, 181 (1983); A. Frank and K. B. Wolf, Phys. Rev. Lett. 52, 1737 (1984).
[9] F. Gesztesy and W. Kirsch, Journal für Mathematik 362, 28 (1984).
[10] F. Gürsey, in Group Theoretical Methods in Physics XI (Springer-Verlag, Berlin, 1983), p. 106.
[11] C. Kittel, Quantum Theory of Solids (Wiley, New York, 1963).
[12] V. Bargmann, Ann. Math. 48, 568 (1947).
[13] L. Pukánszky, Math. Annalen 156, 96 (1964); A. O. Barut and C. Fronsdal, Proc. Roy. Soc. London A287, 532 (1965).
[14] H. M. James, Phys. Rev. 76, 1602 (1949).
[15] C. Cohen-Tannoudji, B. Diu and F. Laloe, Quantum Mechanics (Wiley, New York, 1977), Vol. 1.
[16] G. Pöschl and E. Teller, Z. Phys. 83, 143 (1933); L. D. Salem and R. Montemayor, Phys. Rev. A 47, 105 (1993).
[17] F. Gesztesy, C. Macedo and L. Streit, J. Phys. A: Math. Gen. 18, L503 (1985).
[18] W. Miller Jr., Lie Theory and Special Functions (Academic Press, New York, 1968).
[ "Deep Learning for Road Traffic Forecasting: Does it Make a Difference?" ]
[ "Eric L. Manibardo", "Ibai Laña", "Javier Del Ser" ]
Deep Learning methods have been proven to be flexible to model complex phenomena. This has also been the case of Intelligent Transportation Systems (ITS), in which several areas such as vehicular perception and traffic analysis have widely embraced Deep Learning as a core modeling technology. Particularly in short-term traffic forecasting, the capability of Deep Learning to deliver good results has generated a prevalent inertia towards using Deep Learning models, without examining in depth their benefits and downsides. This paper focuses on critically analyzing the state of the art in what refers to the use of Deep Learning for this particular ITS research area. To this end, we elaborate on the findings distilled from a review of publications from recent years, based on two taxonomic criteria. A posterior critical analysis is held to formulate questions and trigger a necessary debate about the issues of Deep Learning for traffic forecasting. The study is completed with a benchmark of diverse short-term traffic forecasting methods over traffic datasets of different nature, aimed to cover a wide spectrum of possible scenarios. Our experimentation reveals that Deep Learning could not be the best modeling technique for every case, which unveils some caveats unconsidered to date that should be addressed by the community in prospective studies. These insights reveal new challenges and research opportunities in road traffic forecasting, which are enumerated and discussed thoroughly, with the intention of inspiring and guiding future research efforts in this field.
10.1109/tits.2021.3083957
[ "https://arxiv.org/pdf/2012.02260v1.pdf" ]
227,305,640
2012.02260
e0c8f5fd2d9a949ede49109549db323ebcf24957
Deep Learning for Road Traffic Forecasting: Does it Make a Difference?

Eric L. Manibardo, Ibai Laña and Javier Del Ser, Senior Member, IEEE

Index Terms—Machine Learning, Deep Learning, short-term traffic forecasting, data-driven traffic modeling, spatio-temporal data mining.

Abstract—Deep Learning methods have been proven to be flexible to model complex phenomena. This has also been the case of Intelligent Transportation Systems (ITS), in which several areas such as vehicular perception and traffic analysis have widely embraced Deep Learning as a core modeling technology. Particularly in short-term traffic forecasting, the capability of Deep Learning to deliver good results has generated a prevalent inertia towards using Deep Learning models, without examining in depth their benefits and downsides. This paper focuses on critically analyzing the state of the art in what refers to the use of Deep Learning for this particular ITS research area. To this end, we elaborate on the findings distilled from a review of publications from recent years, based on two taxonomic criteria. A posterior critical analysis is held to formulate questions and trigger a necessary debate about the issues of Deep Learning for traffic forecasting. The study is completed with a benchmark of diverse short-term traffic forecasting methods over traffic datasets of different nature, aimed to cover a wide spectrum of possible scenarios. Our experimentation reveals that Deep Learning could not be the best modeling technique for every case, which unveils some caveats unconsidered to date that should be addressed by the community in prospective studies. These insights reveal new challenges and research opportunities in road traffic forecasting, which are enumerated and discussed thoroughly, with the intention of inspiring and guiding future research efforts in this field.

I.
INTRODUCTION It is undeniable that the boom of the Big Data era has revolutionized most research fields [1]. The reason for this advent is that much more data are collected from a variety of sources, which must be processed and converted into various forms of knowledge for different stakeholders. Intelligent Transportation Systems (ITS), which aim to improve efficiency and security of transportation networks, embody one of the domains that has largely taken advantage of the availability of data generated by different processes and agents that interact with transportation. Some examples of ITS applications and use cases that benefit from data availability are railway passenger train delay prediction [2], airport gate assignment problem [3], adaptive control of traffic signaling in urban areas [4] and improvements of autonomous driving [5], to mention a few. Within the diversity of ITS sub-domains, this work is focused on traffic state forecasting. An accurate traffic state prediction, based on measurements of different nature (e.g. Eric average speed, occupancy, travel time, etc.), can be used to enhance traffic management and implement operational measures to relieve or prevent traffic congestion and its consequent implications [6], [7]. Motivated by this problem, a plethora of short-term traffic forecasting works is published every year, as can be seen in recent surveys on this topic [8], [9], [10]. Although there are plenty of data-driven methods that can deliver a short-term traffic prediction model, Deep Learning methods have monopolized the majority of publications of this type in recent years, becoming the reference for the community when facing new forecasting problems [11], [12]. This apogee of the application of Deep Learning methods to ITS problems is commonly justified by its theoretical capability to approximate any non-linear function [13], which is often the case of patterns underneath traffic time series [14]. 
In general, short-term prediction models estimate future time series values based on recent measurements, whereas longterm traffic forecasting solutions rather focuses on finding typical traffic profiles. However, Deep Learning models have their own drawbacks in the form of an inability to understand their behavior [15], [16], and the need for large quantities of data and specialized hardware resources. Under this premise, this work elaborates on Deep Learning for short-term traffic forecasting in order to ascertain the areas in which its implementation brings the best outcomes, as well as other scenarios where less computational expensive datadriven methods provide similar or superior performance. To shed light on this matter, we first analyze thoroughly recent literature on traffic forecasting, specifically those works that propose Deep Learning based solutions. Parting from this prior analysis of the state of the art, we enumerate and discuss a series of insights, good and poor practices followed by the community to date. Our critical analysis is supported by the results yielded by an experimental study comprising several shallow and Deep Learning models and traffic forecasting datasets. Finally, we outline several research niches, open challenges, and valuable research directions for the community, in close connection to the overall conclusions drawn from our study on the current status of the field. In summary, the main contributions of this paper can be described as follows: • We categorize recent Deep Learning based short-term traffic forecasting publications under a taxonomy that considers two criteria: 1) the specific forecasting problem; and 2) the selected techniques and methods to model the actual phenomena. • We critically examine the state of the art by following the above criteria, which allow us to detect research trends and to extract insights about overlooked issues and pitfalls. 
• We design an extensive experimentation comprising traffic data of different characteristics (e.g. highways and urban arterials) captured from several locations, covering the most common scopes of traffic forecasting, intending to show good practices when evaluating the performance of novel traffic forecasting techniques. • We enumerate a series of learned lessons drawn from the experimental setup, underscoring poor research habits that should be avoided for the sake of valuable advances in the field. • Finally, we discuss challenges and research opportunities of the field, all directed towards achieving actionable and trustworthy short-term traffic forecasting models. The rest of the paper is organized as follows: Section II provides an introduction to the evolution of short-term traffic forecasting in recent years, Deep Learning concepts, and how this technology has become the spearhead of the traffic prediction field. Section III defines the proposed taxonomic criteria, classifies and reviews the recent state of the art. A discussion on the findings and conclusions drawn by the previous review is held in Section IV. Next, a case study is conducted in Section V, and lessons learned therefrom are covered in Section VI. Challenges and research opportunities related to short-term traffic forecasting are discussed in Section VII. Finally, Section VIII ends this survey with a summary of final thoughts and an outlook. II. CONCEPTS AND PRELIMINARIES Short-term traffic forecasting has been one of the cornerstones for traffic management, as it is a reliable tool to manage and maintain traffic networks. In turn, Deep Learning comprehends a mixture of data-driven models which excellent results in many applications have stimulated their widespread adoption for short-term traffic forecasting. 
With that in mind, the trajectory of both research fields and their relationships are reviewed in this section, in order to provide a better understanding of how Deep Learning techniques have become dominant in the short-term traffic forecasting realm. A. Deep Learning Machine Learning techniques provide a compendium of tools to develop data-based mathematical representations of real-world processes. These representations allow automatizing certain tasks or even predicting future states of the processes being modeled. As a subset of Machine Learning, Deep Learning is inspired by the structure of human brains. The hierarchical composition of neural units, which are the fundamental building block of Deep Learning architectures, allows theoretically approximating any kind of non-linear function [17]. Since in nature there is an abundance of processes that can be modeled as non-linear functions, Deep Learning has quickly become the dominant approach in many applications. The capabilities of Deep Learning have been particularly relevant in natural language processing [18] and computer vision [19], among others, revolutionizing those fields. As a consequence, scholars are constantly applying these techniques to other areas of knowledge, seeking to extrapolate the benefits observed for these applications to other domains. Deep Learning models, like others belonging to different subsets of Machine Learning, can perform many tasks such as unsupervised learning, classification, or regression. But what makes them particularly relevant is their unique capabilities to automatically learn hierarchical features from data that are useful for the task under consideration. Classical Machine Learning methods are also called flat or shallow learning methods because they cannot learn data representations directly from unprocessed data. Feature extraction needs to be applied beforehand, often assisted by expert knowledge about the domain of the problem. 
Deep Learning methods, however, can learn an implicit representation of raw data for a better understanding of the process to be modeled. This capability has been proven to go beyond human reasoning limits. As a result, for many fields dealing with complex, highly dimensional data, features discovered by Deep Learning methods lead to unprecedented performance with respect to the state of the art. The other main capability of Deep Learning methods stems from their architectural flexibility: data fusion. Deep Learning flexible architectures allow for the different format data types to be merged, combining the information of multiple sources and extracting more knowledge about the process to model. Therefore, Deep Learning allows researchers to resolve complex learning problems, specially when dealing with highlydimensional data. B. Short-term traffic forecasting The development of the short-term traffic forecasting field began when researchers started to apply time series forecasting methods to characterize traffic congestion measurements [20]. Back then, one popular approach relied on the assumption that the process that generated the traffic time series could be approximated using statistical methods like auto-regressive integrated moving average (ARIMA) [21], [22]. These predictive models were only capable of predicting a single target point of a road map. With the beginning of the new millennium, the complexity of modeling techniques started to increase sharply, unleashing new research opportunities for the traffic forecasting arena. Vlahogianni et al. [9], who analyzed short-term forecasting literature from 2004 to 2012, brought up that researchers are distancing themselves from what are considered classical statistical methods (i.e. auto-regressive models), drifting towards data-driven approaches [23]. The primary motivation for this shift remains on the ineffectiveness of classical methods to forecast while facing unstable conditions. 
The nature of the traffic is not stationary or linear, as a manifold of studies have hitherto shown [24], [25], [26], [27]. Unfortunately, autoregressive models tend to focus on the average behavior, so peaks and rapid fluctuations are generally missed [8]. Further into the review in [9], the literature analyzed therein inspected the scope of application, input and output data type, prediction horizon, and proposed technique of publications. Finally, challenges identified in this seminal review stressed out the overabundance of studies focused on freeways and interurban road traffic. Models for urban road traffic data were 2014 [83] revealed to be less frequently studied. Furthermore, only a few solutions capable of predicting traffic simultaneously at different locations of the road network were known at the time [28], [29], [30], due to the scarcity of open-access traffic data for numerous points in a network, together with the high complexity of solving the interactions between the studied roads of the area. After assimilating the criticism and challenges established in [9], another survey [10], published years thereafter, proposed new insights unattended until then. The newer literature review over the 2014-2016 period showed an increase in the number of publications focused on prediction at urban roads, which evinced that the research field covers nowadays most of possible geographic contexts of traffic prediction. Also in connection with the prospects in [10], there is also an increasing interest within the community in obtaining networkwide predictions, possibly promoted by the improvement in spatial data coverage and computing capacity achieved over the years [31], [32]. Among other points, [10] also underscored the need for establishing a unified set of metrics that permit to fairly compare performance between different models. 
Absolute error metrics provide interpretable values when comparing models for a same dataset, enabling a qualitative analysis of the error, as these express the error into traffic units (for instance, vehicles per hour). However, if the benchmark comprises several traffic datasets, relative error metrics should be considered for proper model comparison. This way the magnitude of the traffic unit does not affect the comparison study. Lastly, this survey highlighted an intrinsic problem of data-driven models: concept drift [33]. Since data-driven models acquire information from large data collections in order to extract traffic patterns and provide accurate predictions, performance is affected by exogenous non-planned events such as accidents, roads works or other circumstantial changes. That same year, Ermagun et al. [34] analyzed the methodology and proposed methods for capturing spatial information over road networks. Their assumption is that present information of spatial relationships between road nodes should improve short-term predictive model performance. The study, which spans the period 1984-2016, offers an overview of the concerns of researchers in the field: 65.3% of revised works are concentrated on traffic flow, 19.2% speed, and the remaining travel time. Likewise, only 26.5% chose urban zones as the implementation area, whereas the remainder are concentrated at freeways, confirming the postulated trend of Vlahogianni et al. in [9]. Finally, the survey concludes by encouraging the community to portray road networks as graphs [35], since they ease the representation of inter-nodal relationships and their subsequent use in modeling. To round up this tour on the recent history of the field, in 2019 Angarita et al. [36] propose a general taxonomy for traffic forecasting data-driven models. The motivation of their work is not only to classify and revise learning models used to date, but also to categorize the approached traffic forecasting problems. 
Problems are characterized in terms of data source type, data granularity, input and output nature, and overall scope. On the other hand, the reviewed models are sorted by pre-processing technique, type of input/output data, and step-ahead prediction. After analyzing the state of the art, they find no data-driven approach that suits all forecasting situations. All the above surveys offer insights into the goals pursued by the field, as well as an outline of the opportunities and challenges that should be addressed in prospective studies. Vlahogianni et al. advocate for data-driven approaches, which were already gaining momentum at the time [9]. Posterior surveys confirmed this trend, and data-driven models prevail nowadays as the preferred option for short-term traffic modeling. The work of Laña et al. concludes that, whereas in the origins of the short-term traffic forecasting field there was a shortage of publications based on urban traffic data, most possible geographic scopes are now covered in the state of the art [10]. In turn, Ermagun et al. attach importance to spatio-temporal relationships between nodes of traffic networks, which are among the most exploited relationships to extract knowledge in the current literature [34]. On a closing note, the taxonomy of Angarita et al. in [36] classifies traffic forecasting publications from a supervised learning perspective, which in part inspires the criteria later adopted in this work. Table I summarizes the criteria under consideration for each survey that has been published so far on Deep Learning models for short-term traffic forecasting. C.
When Deep Learning meets traffic forecasting

As can be concluded from the most recent surveys on short-term traffic forecasting, Deep Learning models have been applied in this research area mostly since the last decade. [Table I, comparing prior surveys by traffic measurements, context, sensing technique, temporal resolution, dependencies, image and graph representation, coverage, and number of steps ahead, is not reproduced here.] Figure 1 depicts a timeline with important milestones and achievements in short-term traffic forecasting approached via Deep Learning models. Among them, recent surveys that address short-term traffic forecasting in conjunction with Deep Learning methods are analyzed in this section, in order to highlight the need for the synthesis and investigation presented in this work. Starting with [37], this work focuses on the different Deep Learning architectures applied to short-term traffic forecasting and explains their components and operation. A categorization of the reviewed models is presented, providing an overview of new modeling proposals. The second and third surveys [38], [39] analyze several Deep Learning methods for different transportation topics, including traffic signal control, autonomous driving and traffic state prediction. Therefore, their authors do not focus on the specific short-term traffic forecasting sub-domain, and only a few works concerning this topic are considered. Further away from the subject of short-term traffic forecasting, [40] revolves around spatio-temporal data mining as a general task that can be formulated in many application domains. Indeed, its authors review Deep Learning models proposed for transportation and human mobility, but also take into account other unrelated topics like neuroscience and crime analysis.
As a result, this survey only provides insights for some traffic forecasting solutions that benefit from spatio-temporal relationships. Another survey on traffic prediction is available at [41], where the authors summarize the state of the art on traffic prediction methods and comment on the different Deep Learning architectures. It is the only work among those reviewed in Table I that performs an empirical study. This experimental setup aims to compare performance among recent Deep Learning methods, but no further insights are given as to whether such performance levels are superior to those rendered by simpler learners. Next, both [42] and [43] provide an overview of existing Deep Learning methods for traffic flow forecasting. Future challenges for the research field are discussed in [42], such as the lack of well-established benchmark datasets, the inclusion of contextual data (for instance, weather data) and the development of graph-based modeling techniques. Finally, [44] constitutes a further overview of Deep Learning methods applied to short-term traffic forecasting. The authors classify published models by generation, according to the complexity and structure of the Deep Learning technique. After analyzing the works summarized in Table I, we conclude that they do not entirely provide a comprehensive, critical vision of the use of Deep Learning models for short-term traffic forecasting. Those that match the topic are restricted to an overview of the components of available Deep Learning architectures, while the remaining ones gravitate around general subjects like transportation or spatio-temporal data mining. It is our belief that a survey should go beyond an overview of recent Deep Learning techniques, towards answering other important questions such as why? and what for?. Deep Learning models lead the majority of short-term traffic forecasting benchmarks, but authors often do not discuss the caveats related to their implementation.
Some endemic features of Deep Learning do not comply with the requirements of traffic managers, including their computational complexity and black-box nature. Therefore, the adoption of such modeling techniques should be supported by evidence beyond a mere performance gain over other data-driven methods. Based on this rationale, this overview does not elaborate on the different Deep Learning architectures used in the literature, but instead focuses on classifying it according to alternative criteria more aligned with the questions formulated above.

III. LITERATURE REVIEW

In order to acquire a thorough understanding of the current use of Deep Learning techniques for short-term traffic forecasting, in this section a taxonomy for categorizing the works published in recent years is proposed. For this purpose, previous surveys serve as a starting point towards finding the common criteria that define these categories. A literature review is subsequently performed as per the defined criteria.

A. Proposed taxonomy

The proposed taxonomy follows two complementary strategies that recurrently appear as such in the literature. The first criterion determines and characterizes the traffic forecasting problem to be solved, whereas the second criterion categorizes the Deep Learning method(s) in use for tackling it. We now describe such criteria in detail. Criterion 1. How to characterize the proposed problem: research activity on short-term traffic forecasting encompasses multiple combinations of traffic measurements, which can be combined to achieve predictions of increased quality. To illustrate the taxonomy based on this first criterion, we have constructed a tree diagram (Figure 2), which represents the patterns existing in the field. The order of the splits is chosen according to their effect on the proposed problem. This way, features that yield a major discrepancy for the addressed approach are placed at higher levels of the tree, and vice versa.
Following the above guidelines, the first split is made according to the nature of the traffic measurements. After reviewing the short-term traffic forecasting literature, two main strategies can be discerned: forecasting flow, understood as the number of vehicles that pass through the location of interest during a time interval, and speed, defined as the average speed, over a certain time period, of all vehicles that traverse the target location. Other traffic measurements are travel time, occupancy, transport user demand (e.g. for taxis or bikes) and congestion level, all grouped under the category others, since the number of contributions that focus on these measurements is notably lower than for the previous categories. The second split in the tree considers the traffic context: urban or freeway. The different circumstances that occur in these contexts [195] generate more stable traffic patterns on highways, in contrast to urban routes, whose traffic flows are conditioned by traffic lights and signals, among other events. The third split is set on how vehicular data are collected. Roadside sensing gathers measurements directly from road segments by using inductive loops, radar, or computer vision. On the other hand, GPS and other positioning sensing technologies allow tracking vehicle trajectory and speed through timestamped geolocation measurements. These data collecting strategies are referred to as Roadside Car Data (RCD) and Floating Car Data (FCD), respectively. The last split addresses how the collected traffic data are aggregated. Sensors can feature different sampling frequencies, from a few seconds to several minutes. Since these sampling frequencies can impose, if high enough, a significant variability on the traffic measurement, the collected data are usually aggregated into lower temporal resolutions. Three temporal resolutions (5, 10 and 15 minutes) appear to be the most commonly used ones in the reviewed literature corpus.
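The aggregation step described above can be sketched in a few lines. The detector readings below are hypothetical, and pandas is assumed as the tooling; raw 5-minute flow counts are resampled to a 15-minute resolution.

```python
import pandas as pd

# Hypothetical raw flow counts (vehicles per 5-minute interval) from one detector.
index = pd.date_range("2021-03-01 08:00", periods=6, freq="5min")
flow_5min = pd.Series([40, 52, 47, 61, 58, 55], index=index)

# Aggregate to a 15-minute temporal resolution: flow counts are summed,
# whereas speed measurements would instead be averaged with .mean().
flow_15min = flow_5min.resample("15min").sum()
print(flow_15min)
# 08:00 -> 40 + 52 + 47 = 139, 08:15 -> 61 + 58 + 55 = 174
```

The choice of aggregation function depends on the traffic measurement: counts are additive, while speeds and occupancies are not.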
Additionally, the O symbol appended to the labels of the third split refers to other, less used data temporal resolutions (for instance, 30 minutes). Before proceeding further, it is important to note that some publications may appear in multiple leaf nodes of the tree diagram. This is due to research work matching the criteria of different categories (for example, if the proposed model predicts diverse traffic measurements, or if different kinds of data sources are addressed). Criterion 2. How to categorize a Deep Learning technique: Deep Learning architectures can be designed to adapt to diverse case studies. This design flexibility yields a heterogeneous mixture of modeling strategies. Under this premise, different features of Deep Learning methods are considered in this second criterion. A sunburst diagram (Figure 3) is selected to illustrate the different types of Deep Learning architectures proposed in the short-term traffic forecasting literature. The width of each angular sector is proportional to the number of research papers that fall within the category, relative to the total number of revised publications. The most valuable information for predicting the traffic state is usually that related to the target road. Previously collected data from the same road are in general good predictors of its short-term traffic profile. This statement is supported by the remarkable performance often offered by naive methods such as the historical average [196], which computes the next traffic prediction value as the mean of recent measurements at the considered point of the traffic network. On the other hand, historical information of the surrounding areas (i.e. nearby roads) and measurements of downstream and upstream points of the same road have lately been incorporated into the input of traffic forecasting models, as they can possess interesting correlations with the traffic of the target location [197].
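The historical average baseline mentioned above admits a very compact sketch. The window size and the flow values below are illustrative assumptions; the method simply averages the most recent measurements at the target point.

```python
import numpy as np

def historical_average(series, window=4):
    """Naive baseline: the next value is predicted as the mean of the
    last `window` observations at the considered network point."""
    recent = series[-window:]
    return float(np.mean(recent))

# Hypothetical recent flow measurements at the target road (veh / 5 min).
recent_flow = [48, 50, 47, 55, 52, 58]
prediction = historical_average(recent_flow, window=4)
print(prediction)  # mean of [47, 55, 52, 58] = 53.0
```

Despite its simplicity, this baseline is a useful sanity check: a Deep Learning model that cannot beat it consistently adds little practical value.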
The spatio-temporal relationships between neighboring areas can provide better predictors of the traffic profile to be modeled [198], [199]. Those publications that feed the forecasting model exclusively with temporal data collected from the target road are categorized as temporal, whereas those that also resort to traffic measurements from other points in the same road network are categorized as spatio-temporal. The next considered split is the format in which traffic measurements are expressed. Data related to traffic conditions are usually represented as time series, since their values are correlated through time [20]. Those publications that follow a traditional time series forecasting approach are cataloged as time series. Another possible approach consists of expressing the traffic state as an image. The great development of Deep Learning architectures (in particular convolutional networks) has led to a revolution in the image processing field [200], [201], [202]. In the context of traffic forecasting, the core idea is to develop a model that predicts an image with traffic states (e.g. an image of a traffic network colored according to congestion levels). The predicted image can be transformed to express average speed, road congestion, and other traffic descriptors. Processing image representations of traffic networks allows predicting at once the traffic state at various roads of the network. The last considered format in this second split consists of expressing traffic data as graphs. Since traffic is restricted to road networks, it can be formulated as a graph modeling problem, where the structure of the road network is abstracted as a graph G = (V, E, A) [203].
In G, V is a set of N nodes representing road locations, whereas E is a set of edges representing the roads connecting such locations, and A ∈ R^{N×N} is an adjacency matrix, in which each element a_{i,j} represents a numerical quantification of the proximity between nodes of the network in terms of traffic flow (e.g. the reachability from one node of the graph to another, or the intensity of traffic between them). [Leaf-node labels of Figure 2, mapping data type and temporal resolution combinations to their references, are not reproduced here.] This representation of a road network and its traffic, together with the use of graph embedding techniques to feed it to Deep Learning models, allows providing network-wide predictions and learning from the relationships between nodes of the graph. Further along this second split, predictive models can be designed to forecast the traffic state for one or multiple points of a traffic network. Those works that provide network-wide predictions are classified as network. In the case where models predict the traffic state of a single road, the research work at hand is labeled as point. Some studies predict different road congestion states simultaneously by using multiple models, but because the spatial coverage of each model remains limited to one road, they are also cataloged as point. The fourth considered split is the number of steps ahead predicted by the model. In the simplest case, the model forecasts a single step-ahead point of the sequence (single-step), but there are models capable of predicting multiple steps ahead (multi-step). Another approach, known as multistage prediction, consists of generating multiple steps-ahead forecasts by using a single step-ahead model, which cyclically uses the recently predicted values as input data [204].
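A minimal sketch of the graph abstraction G = (V, E, A) defined above, for a hypothetical four-node road network. The exponential distance kernel used to fill A is one common choice for quantifying proximity, assumed here for illustration rather than prescribed by the text.

```python
import numpy as np

# Hypothetical road network with N = 4 sensor locations (nodes).
# Edges connect directly reachable locations; edge values are road
# distances in km between the corresponding sensors.
nodes = ["r0", "r1", "r2", "r3"]
edges = {(0, 1): 1.2, (1, 2): 0.8, (2, 3): 2.5}

N = len(nodes)
A = np.zeros((N, N))
scale = 1.0  # kernel bandwidth: closer nodes get weights nearer to 1
for (i, j), dist in edges.items():
    weight = np.exp(-dist / scale)
    A[i, j] = A[j, i] = weight  # symmetric: both directions reachable

print(np.round(A, 3))  # zeros remain where no edge connects two nodes
```

Graph-based architectures then use A (often normalized) to propagate information between connected roads, which is what enables network-wide predictions.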
As this strategy employs single step-ahead models, the corresponding contributions are classified as single-step.

B. Understanding Deep Learning based short-term traffic forecasting literature according to the proposed taxonomy

Once the reviewed works have been categorized by the proposed problem and by the chosen Deep Learning approach, an in-depth literature review is performed, in order to objectively assess the trends followed by the community in this field of research. A first inspection of the taxonomy depicted in Figure 2 reveals that the 5-minute temporal resolution positions itself as the most common in the reviewed literature. Almost half of the distinct data collections used by the reviewed papers gather data at a 5-minute sampling frequency. In addition, this trend is strengthened by the presence of the Caltrans Performance Measurement System (PeMS) [205], which is by far the most popular traffic database and also employs this sampling frequency. The 10- and 15-minute temporal resolutions have fewer original data collections available, but authors sometimes aggregate 5-minute data to obtain these resolutions, so the number of publications in this context increases slightly. Lastly, other temporal resolutions (denoted by the O symbol) deserve a special mention. This group merges uncommon values, from 2, 3, 6, or 16 minutes to 1 or 2 hours. Some of these temporal resolutions come from data collections that have been utilized only once. The 30- and 60-minute temporal resolutions are, however, adopted in many works, usually based on FCD from taxi flow or transport user demand. Transport user demand prediction (e.g. the number of bikes expected to be rented during a time interval) usually employs low temporal resolutions, as these rates suffice for capturing the collective behavior of the population. When focusing on traffic flow forecasting models, there is a clear tendency towards using RCD from freeways.
The high cost of roadside sensors means they are typically deployed on critical road sections such as freeways, so there are more data sources of this kind than from urban arterials. However, since RCD is highly biased by the deployment location, its potential is limited when developing general-purpose traffic forecasting models. Interestingly, no reviewed FCD-based works forecast freeway flow. FCD that captures flow measurements is mainly obtained from taxis and logistics services, or from passengers carrying cell phones in the vehicle. In the case of urban flow prediction, there are several published works, yet the majority of them are conducted over taxi or bike floating data. Since this sensing technique only captures a fraction of the circulating vehicles, FCD is usually utilized to predict flow values for certain vehicle types, and is hence not suitable for general flow forecasting problems. Research contributions are more balanced towards traffic speed prediction, covering all data type and granularity combinations, except for FCD at freeways, where only one work has been found [154]. PeMS and the Los Angeles County highway dataset (METR-LA) [206] are the preferred options when looking for freeway speed RCD. For the speed prediction task, FCD provides reliable measurements, since the average speed of the sensed vehicles (even though they are only a part of the vehicle fleet) can be considered the average circulation speed on the road for a specific time interval. Lastly, the others label blends together a mixture of works that predict traffic congestion [31], [178], [179], [191], [193], expected travel time [159], [183], [194], occupancy [45] and a traffic performance index [192]. A special mention must be made of those works that predict service demand, understood as the number of vehicles necessary to cover a passenger demand.
In this context, taxi demand is the most covered scope, probably due to the high data availability [181], [182], [184], [187], [188], [190]. There are also works focused on shared bike demand [180], [189]. In either case, the others label covers different combinations of data types and temporal resolutions, so there is no clear trend in this subset of contributions. When the focus is placed on the employed methodology, Figure 3 unveils a clear increase in published studies that combine spatial and temporal information over recent years [34]. There are three times more works of this nature than those based only on temporal information. For a publication to be classified as temporal, the presented study can only take advantage of the knowledge from historical records at the point for which a prediction is issued. Therefore, the input format can only be classified as time series, since image and graph data representations always express information from multiple points of a traffic network. In turn, if we combine the number of publications based on temporal information with those based on spatio-temporal information, it can be seen that more than half of the works formulate the input data as time series, which is the basic format for expressing the traffic state. As stated in the work of Ermagun et al. [34], the number of works based on graph theory [207] has increased notably in recent years. Describing a traffic network as a graph adds spatio-temporal relational information between the different places where traffic state prediction is required, providing network-wide forecasts. Among the remaining input formats, traffic representation as an image is the least chosen option, accounting for about an eighth of the reviewed works. Some of these studies generate images from time series transformations of different points of the network expressed as matrices.
Since the model is fed with images, even if they are a representation of multiple time series, these publications are classified as image. Graph-based and image-based works, together with some time-series-based ones, represent more than half of the reviewed publications, dealing with network-wide coverage solutions. While these studies usually concentrate on performing simultaneous predictions for multiple traffic network points, publications classified as point often put their effort into other specific issues like traffic signal processing [111], [132], the exploration of new data sources [59], [118], the improvement of performance under particular situations [103], [165] or missing data [47], [160]. Finally, single-step models represent the majority of existing publications, as this is in general an easier modeling task than multi-step prediction. However, there is a surprisingly high number of contributions (17.6%) that provide network-wide multi-step prediction, considering the difficulty of predicting multiple steps ahead of traffic state values for different locations simultaneously.

IV. CRITICAL ANALYSIS

A critical look at the preceding literature review raises some questions about the suitability of Deep Learning techniques for the task of short-term traffic forecasting: is it always the best choice? In this section, the main aspects of this consideration are assessed by trying to answer eight questions, examined towards opening a debate:

A. When is a forecast considered long-term?
B. Are traffic datasets correctly selected?
C. Can Deep Learning models be trained with scarce data?
D. Does the use of contextual data yield any benefit?
E. Is data representation an objective or a circumstance?
F. Is automatic feature extraction interesting for traffic data?
G. What possibilities does data fusion offer?
H. Are comparison studies well designed?

A. When is a forecast considered long-term?
The use of Deep Learning techniques for traffic forecasting is relatively recent [37]. However, the frontier between short- and long-term predictions seems to remain largely ambiguous for many authors, thus jeopardizing the identification of Deep Learning models devised to tackle one or the other problem. This lack of consensus hinders the proper selection of modeling counterparts in the benchmarks arising in newer studies, which often feature an assorted mixture of short- and long-term approaches. The authors of some related works establish the distinction between short- and long-term forecasting in terms of the prediction horizon, claiming that a prediction further than one hour ahead should be considered long-term. This is by all means an unreliable criterion since, for a model where the time between consecutively arriving samples is one hour, a one-hour-ahead prediction problem would translate to a one-step-ahead forecasting task. There are other shared interpretations by which short-term forecasting is assumed to cover only the very first time steps (usually no more than five) disregarding the temporal resolution of the time series at hand. However, for a fixed temporal resolution, models can be prepared to directly output one particular forecasting horizon (e.g. twelve steps ahead). This case would lead some authors to classify it as long-term, while others would claim that it is short-term prediction, as the model is trained to forecast only that specific time step. In our best attempt at homogenizing the meaning of these concepts, we herein clarify the applicability of both approaches. Short-term predictions allow travelers to select quicker and more efficient routes by avoiding bottlenecks. Likewise, local authorities can quickly respond to and hopefully circumvent traffic congestion.
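The resolution-dependence of the "one hour ahead" notion discussed above can be made explicit with a trivial helper (hypothetical, for illustration only): the same wall-clock horizon corresponds to very different step-ahead tasks depending on the temporal resolution of the series.

```python
def steps_ahead(horizon_minutes, resolution_minutes):
    """Number of prediction steps that a given wall-clock horizon
    represents at a given temporal resolution of the time series."""
    if horizon_minutes % resolution_minutes != 0:
        raise ValueError("horizon must be a multiple of the resolution")
    return horizon_minutes // resolution_minutes

# The same 60-minute horizon is a very different forecasting task
# depending on the data resolution:
print(steps_ahead(60, 60))  # 1 step ahead (hourly data)
print(steps_ahead(60, 5))   # 12 steps ahead (5-minute data)
```

This is why a horizon-based threshold such as "more than one hour is long-term" cannot by itself separate the two problem families.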
They are, therefore, operational models [208], whose predictions are restricted to delimited geographical areas, since the interactions of the surroundings affect the traffic itself. On the other hand, long-term estimations allow traffic managers to prepare and implement strategic measures in case of predictable eventualities, such as sports events, weather conditions, road pricing, or general strikes [209]. The management of large areas (i.e. city-wide) may improve, for example, the design of roadside infrastructure [210], eventually leading to more fluent traffic. Based on this rationale, short-term models are usually built from recent past observations of the target road and its vicinity to estimate their immediate subsequent ones. Here is where the distinction between approaches can be made: the model construction methodology. Long-term traffic estimation models seek different traffic patterns (e.g. typical daily traffic profiles), and decide which of these patterns best suits the traffic behavior of the selected road for the chosen date [211]. The chosen pattern among all those elicited by the model becomes the prediction for the entire interval. Therefore, long-term estimation is, in general, less accurate and prone to larger errors in the presence of unexpected circumstances or when the selected output traffic pattern is inaccurate. By contrast, such models provide a general idea of the expected behavior that can be used by traffic managers to decide on strategic measures. Short-term forecasting models, on the other hand, issue their predictions by learning from recent past observations, obtaining more reliable forecasts as the model has access to better predictors for the target variable.

B. Are traffic datasets correctly selected?
The presented literature review unveils another issue: the majority of publications select only one data source, or multiple sources of the same scope (for instance, traffic collected on highways or in urban areas, but not from both in the same study). This trend is observable by paying attention to duplicated citations at different leaves of the tree diagram in Figure 2. Benchmarking over datasets of different characteristics is a good practice that should be widely adopted for assessing the performance of newly proposed Deep Learning models. As addressed in Section III-A, there are some characteristics of a traffic forecasting problem that can appreciably affect model performance, namely the data source type, the data source context, and the predicted variable. From the perspective of the data source type, RCD is an integrated count of any transportation vehicle that passes through the sensor location, while FCD is usually collected for vehicle types like taxis, buses, trucks, or bikes. The different ways in which these two data types are gathered can impact severely on the time series behavior, leading to mismatches in the performance comparison. Besides the data type, the data collection context is also relevant. Urban traffic is regulated by road signs and traffic lights, leading to a particular driving behavior with higher data dispersion. On the other hand, freeway traffic forecasting is an easier task than its urban counterpart, since traffic profiles are usually more stable in the absence of traffic signs, pedestrians and other urban circumstances. Lastly, the different predicted variables (flow, speed, travel time) can all express traffic congestion states, but have different profiles and behaviors. Traffic speed measurements conform to a stable signal over time that exhibits scarce yet deep valleys when a traffic bottleneck occurs. By contrast, traffic flow measurements often show different kinds of daily patterns, where the difficulty resides in predicting sudden spikes.
To sum up, a Deep Learning architecture providing good performance results for a certain traffic data source could fail to generalize to other traffic data sources with different characteristics. This behavior can be detected by testing a proposed Deep Learning method, along with a mixture of data-driven algorithms, on a collection of traffic data sources with varying characteristics. Otherwise, the novelty of the proposed model should be circumscribed to the characteristics of the traffic data source(s) over which it has been tested, rather than claiming a superior model for traffic forecasting in the wide sense.

C. Can Deep Learning models be trained with scarce data?

The ITS community has leaned towards Deep Learning on the premise that these techniques can extract knowledge from unprocessed data more effectively than shallow learning methods. This mindset might be mistaken, as shallow learning models are advantageous in scarce data scenarios. The main reason is that shallow learning models often require fewer parameters to be fit, leading to faster and less computationally demanding training processes, but also to less complex models. Since Deep Learning architectures have a potentially large number of trainable parameters, larger datasets are needed to prevent the algorithms from learning the detail and noise in the training data (overfitting), which, unless properly counteracted, negatively impacts the performance of the model in real-life scenarios. Therefore, in the context of scarce data, shallow learning methods may surpass the performance of Deep Learning models whenever the validation and test stages are designed and carried out correctly. Some of the works analyzed in our literature study consider very short periods of traffic data for training and testing.
It could be thought that the results of these works are biased, since one could intuitively expect traffic behavior to change between months, weekdays, and daily hours [212]. As an example, if a forecasting model is trained on February data and tested on measurements collected in March, both winter months have similar traffic behavior. This limited training data context is precisely the case where Deep Learning is prone to overfitting, leading to a higher yet biased performance on the test set. After enough training epochs, the model becomes good at the exposed scenario: forecasting traffic in winter, non-vacation months. This means that such a Deep Learning model can be proficient at forecasting in these highly specific circumstances, but will probably have trouble generalizing to other scenarios, rendering it useless. Since shallow learning methods usually have fewer trainable parameters, they can potentially outperform Deep Learning models in this scarce training data scenario, due to less overfitting of the data distribution. In order to avoid overfitting, the ratio of training samples to trainable parameters should be kept high: the more trainable parameters a model has, the more training data should be required [213]. If this availability does not hold, the results of Deep Learning modeling experiments can be excellent due to overfitting, yet far from the good generalization properties sought for realizable traffic forecasting, which can lead to inconclusive insights.

D. Does the use of contextual data yield any benefit?

The performance of predictive models can be improved with information that does not directly express the road traffic state. We refer to it as contextual data, since these data indicate temporal, meteorological, social, or other circumstances that can indirectly influence the traffic profile.
Calendar information [214], usually discretized as [weekday, saturday, sunday], is commonly used as an additional source of knowledge [60], [100], [152], [181], supported by the intuition that the traffic profile varies between workdays and weekends [215]. Another option is to provide the interval of the day, so that the learning algorithm is able to correlate the temporal instant with traffic peaks [117], [128], [129], [191]. Weather has also been shown to affect drivers' behavior, eventually having an impact on the overall traffic [216]. Precipitation, wind, fog, and extreme temperatures are considered as model inputs in many traffic forecasting publications, intended to help predict unusual traffic profiles [49], [60], [100], [159]. Along this line, air pollution can be used as a congestion predictor, based on the idea that certain pollutant gases (for instance, CO, CO2, and NOx) are expelled by exhaust systems. Therefore, air pollution should increase during traffic congestion and high occupancy periods, so models can benefit from this relationship [56], [217]. Lastly, other events like demonstrations, sports games, or accidents can be fed to forecasting models in order to identify uncommon traffic profiles [32], [62], [182], [191]. With regard to Deep Learning models, the inclusion of the previously described contextual data does not differ from its implementation with other Machine Learning models. These contextual data can be expressed as time series (e.g. temperature or air pollution), or as a discrete sequence of finite values (for instance, calendar information or a timestamp). Just by increasing the input dimensionality, both Deep and shallow Machine Learning models can append new sources of knowledge towards enhancing forecasting performance. However, within the bounds of network-wide traffic predictions, Deep Learning architectures stand out in the use of contextual data.
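As a minimal sketch of how such contextual data can be appended to a model input (the feature choices and encoding below are illustrative assumptions, not taken from any surveyed work):

```python
import numpy as np

def encode_day_type(weekday):
    """One-hot encode a weekday index (0=Monday .. 6=Sunday) following the
    [weekday, saturday, sunday] discretization mentioned in the text."""
    if weekday <= 4:
        return [1, 0, 0]  # working day
    if weekday == 5:
        return [0, 1, 0]  # Saturday
    return [0, 0, 1]      # Sunday

def build_model_input(traffic_lags, weekday, temperature_c):
    """Concatenate recent traffic measurements with contextual features,
    simply increasing the input dimensionality of the regression model."""
    return np.concatenate([traffic_lags, encode_day_type(weekday), [temperature_c]])

x = build_model_input(np.array([310.0, 295.0, 280.0]), weekday=5, temperature_c=12.5)
# x holds 3 traffic lags + 3 calendar flags + 1 weather value = 7 features
```

Any shallow or deep regressor can then be trained on such extended vectors; whether the extra features actually help must still be validated empirically.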
The model can be fed with dedicated contextual data for each node of the traffic network, such as accidents or road closures. This inherent capability of Deep Learning allows for flexible solutions where contextual data serve as input only on demand at specific points of the neural network, avoiding prediction noise at the output due to high-dimensional inputs.

E. Is data representation an objective or a circumstance?

As previously explained, short-term forecasting models are usually built upon recent past traffic state observations. The most common option, as can be observed in Figure 3, is to express traffic measurements as a vector for single-road state prediction, or as a matrix for multiple-point prediction. Some researchers transform traffic time series into images, and estimate the images that best represent the network behavior at the time horizon for which the prediction is issued. Other authors instead design graph representations of the traffic network, aiming to learn from the spatial relationships between nodes. However, the choice of data representation format does not always respond to a practical consideration. Sometimes, the actual contribution of a published work is to adapt traffic forecasting tasks to image-based Deep Learning architectures, with the method by which the traffic information is transformed into an image claimed as the cornerstone of the proposal. However, such a traffic representation does not add any valuable knowledge to the field, as it is just another way of expressing a time series. When describing a network as a matrix, its structure predetermines the connections between the analyzed roads that a Deep Learning architecture is able to model. Convolutional filters (which are commonly used for image processing) usually look for adjacent values to discover interesting high-dimensional features, so the same information arranged differently can produce contrasting performance results.
Moreover, the complexity of an actual road network can hardly be represented only by the nodes that have sensors on them (which are the ones considered in any data-based study). Thus, the picture that represents the road network is distorted with respect to the actual road network. For a convolutional filter, the adjacency of two pixels has a particular meaning in the way they are processed, but this adjacency can have very different meanings in terms of the real adjacency within the network. Hence, the claimed "spatial" awareness that this kind of method provides must be handled with caution. Nevertheless, forecasting traffic as an image can be interesting when the inputs are indeed images, for instance, screenshots from navigation services, satellite imagery, or other similar sources, as this is their original data format. On the other hand, graph theory is better suited to network representations, since it provides node relationships (in both directed and non-directed variants [207]), which are indeed supplementary information. The underlying structure of traffic data forms a non-Euclidean space, as a traffic network cannot be modeled in an n-dimensional linear space without losing information (for instance, the direction of the edges or the values associated with nodes) [218]. For this reason, graph representations are best suited for network-wide forecasting models, where the topological information of the traffic network can be fully exploited by the model. In cases where graph modeling is not an option (e.g. unclear node assignment), time series arranged as a matrix provide a flexible and straightforward format.

F. Is automatic feature extraction interesting for traffic data?

As previously stated in Section II-A, the most recognized capability of Deep Learning models is their ability to learn hierarchical data representations autonomously, overriding the need for handcrafting features from traffic data.
In many related studies, it is argued that any non-Deep-Learning-based traffic prediction model potentially achieves a lower performance, on the grounds that Deep Learning is able to model long-term dependencies in data (as opposed to handcrafted features). However, this point of view is debatable. Feature engineering is a difficult task that requires time, effort, and domain knowledge from researchers, and the predictive power of the produced features directly conditions the performance of prediction models. When input data are not self-descriptive and genuine features are not available, Deep Learning may outperform shallow learning due to its capability to learn from raw data. Nevertheless, the traffic data used as inputs for traffic forecasting directly express the traffic state. As an example, when the average speed of the road is available, the speed value determines whether drivers are facing a free-flow traffic state or different severity levels of bottlenecks. The model only needs to interpret these values to output a proper prediction, and will probably not need any additional features. Traffic observations can indeed be processed to obtain more complex and specific indicators [197], [219], but models are often trained on raw traffic data. Thus, it could be said that the features automatically extracted by Deep Learning architectures in recurrent networks are, in fact, long-term patterns, since short-term dependencies can be modeled by a multi-layer perceptron or other basic models. Furthermore, given the nature of the data handled in traffic forecasting, on many occasions the expert knows the recurrence patterns in advance, which makes the feature learning capability of Deep Learning less relevant for the prediction task.
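To illustrate why raw traffic measurements are largely self-descriptive, consider a toy rule that maps an observed average speed to a traffic state (the thresholds are our own illustrative assumptions, not values from the surveyed literature):

```python
def traffic_state(avg_speed_kmh, speed_limit_kmh):
    """Map a raw speed measurement to a coarse traffic state label.
    Thresholds are illustrative; real severity levels are road-dependent."""
    ratio = avg_speed_kmh / speed_limit_kmh
    if ratio >= 0.9:
        return "free-flow"
    if ratio >= 0.5:
        return "slow"
    return "congested"

# A single raw value already signals congestion; no engineered feature needed.
state = traffic_state(avg_speed_kmh=35.0, speed_limit_kmh=120.0)
```

The point is not the rule itself, but that the input value carries the predictive signal almost directly, leaving little for automatic feature extraction to discover.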
In summary, automated feature extraction is a powerful capability of Deep Learning, but in the context of traffic forecasting it may not be a deciding factor for selecting this modeling approach over other data-driven methods.

G. What possibilities does data fusion offer?

In addition to traffic recordings, other types of data sources may improve the prediction accuracy of traffic forecasting models. Beyond the feature mapping capacity of Deep Learning methods, a motivational driver for using these techniques should be their capability for in-model data fusion. Data fusion is defined as the capacity to automatically or semi-automatically transform information from different sources into a representation of the modeled process [220]. In this context, there are some data abstractions that cannot be processed by shallow learning methods. For instance, graph theory is able to model the traffic network topology, and therefore the relationships between neighboring interconnected roads. Researchers take advantage of this representation via graph embedding layers to enhance the overall prediction performance of the model, as it can learn the traffic stream direction directly from how the nodes of the graph are connected [73], [145], [169]. Another example is text data, which are often generated asynchronously. Some works use Twitter messages [118] or queries issued for the same destination in a navigation service [150] as congestion predictors. Images are also data representations that can be processed by Deep Learning architectures. Some studies arrange snapshots of network-wide traffic congestion maps as a time series, and resort to Deep Learning architectures for motion prediction to estimate the future trajectory of objects [55], [193]. Other works convert traffic speed time series from multiple points of a traffic network into a heatmap, where color expresses the speed value [125], [155].
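As a minimal illustration of the directed-graph abstraction just mentioned (the four-node network and its feature values are hypothetical), a plain adjacency matrix already encodes the traffic stream direction, which a simple aggregation step can exploit much like a graph embedding layer does:

```python
import numpy as np

# Hypothetical 4-sensor network; edges follow the traffic stream direction,
# so the adjacency matrix is asymmetric (information that a flat matrix of
# measurements alone would lose).
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]

idx = {n: i for i, n in enumerate(nodes)}
adj = np.zeros((len(nodes), len(nodes)))
for src, dst in edges:
    adj[idx[src], idx[dst]] = 1.0  # directed: src feeds traffic into dst

# One aggregation step: each node gathers the features (e.g. speeds) of its
# upstream neighbors, the basic operation behind graph convolution layers.
speeds = np.array([[55.0], [48.0], [30.0], [52.0]])  # one feature per node
upstream_inflow = adj.T @ speeds
```

Node C receives the aggregated features of its upstream nodes B and D, whereas node A, having no incoming edges, receives nothing; this directionality is exactly what Euclidean (vector or matrix) arrangements fail to express.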
All these examples illustrate how data fusion capabilities can be used to exploit the potential of Deep Learning methods. Finally, complex neural architectures can assimilate on-demand specific data sources like weather or air pollution, by inserting these features directly at specific layers (generally after convolutional and recurrent layers, as these data do not need feature mapping). The model would use this information only when needed (e.g. during a special event like a football match), disabling these inputs during normal operation to reduce noise at the model output. The traffic forecasting research community does not seem to have taken advantage of this capability, which could be considered even more interesting for this particular field than its feature extraction competence.

H. Are comparison studies well designed?

The heterogeneity of methodological procedures for comparing traffic forecasting models is also visible in the literature review. For a comparison to be useful for the community, methodologically principled comparisons should be performed. Otherwise, the results reported in upcoming literature might be misleading, and disguise the real performance of novel traffic forecasting methods. For instance, some works compare their proposed model only to simpler Deep Learning architectures. Other contributions choose a mixture of naive, statistical, and Deep Learning models, but fail to include any kind of shallow learning method in the comparison. This variability of comparison methodologies makes such studies inconclusive. In order to provide verifiable evidence of the performance improvement achieved by a proposed model, several baselines combined with state-of-the-art methods should be analyzed and compared to each other. Starting with the methods of lowest complexity, a few of the revised papers include a naive model as a baseline.
These low-complexity, straightforward methods have two main representatives: latest value (LV) (also referred to as persistence) and historical average (HA) [196]. Since LV uses the most recently recorded traffic value as its prediction, no further calculation is required. On the other hand, HA consists of averaging past traffic data of the same interval of the day and weekday to produce the forecast value, or of performing some sort of rolling average over the latest available values. This way, HA requires past sample values to compute the mean for every new prediction. In fact, HA should take into account the patterns that the expert knows in advance (for example, daytime and nighttime traffic patterns). Due to their low computational effort, at least one naive method should be considered in any comparison study, as naive methods establish the lowest performance expected to be surpassed by a more elaborate model. If a novel forecasting method performs only slightly better than, equal to, or even worse than naive methods, the complexity introduced during training renders the method irrelevant for the forecasting task at hand. Therefore, these naive methods allow assessing the balance between the complexity of the proposed model and the performance gap it achieves. Some works revised in our literature analysis compare a novel Deep architecture against different statistical methods (for instance, an ARIMA model). These methods can be set as a performance baseline, but their parameter tuning should be fully guaranteed to ensure that the statistical model is properly fit to the traffic data. According to [23], the comparison between statistical and neural network models is often unfair, as complex nonlinear models are compared to linear statistical models, drawing attention to performance metrics. Unfortunately, our literature study confirms that this malpractice can still be found in recent research.
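Both baselines can be written in a few lines; a minimal sketch (the same-weekday-and-interval variant of HA is one of the options described above, and the toy history is invented for illustration):

```python
import numpy as np

def latest_value(history):
    """LV (persistence) baseline: the forecast is the most recent observation."""
    return history[-1]

def historical_average(values, weekdays, intervals, target_weekday, target_interval):
    """HA baseline: average all past observations recorded at the same
    weekday and interval of the day as the slot to be predicted."""
    mask = (weekdays == target_weekday) & (intervals == target_interval)
    return values[mask].mean()

# Toy history: flows of two past Mondays and two past Wednesdays at 8:00.
values = np.array([300.0, 180.0, 320.0, 200.0])
weekdays = np.array([0, 2, 0, 2])   # 0 = Monday, 2 = Wednesday
intervals = np.array([8, 8, 8, 8])  # 8:00 interval of the day

lv_pred = latest_value(values)                                   # most recent flow
ha_pred = historical_average(values, weekdays, intervals, 0, 8)  # Monday 8:00 mean
```

Neither baseline has parameters to fit, which is precisely what makes them reliable lower bounds in a benchmark.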
The aforementioned naive methods also provide lower bounds for the performance of traffic forecasting models. As opposed to statistical methods, they have no adjustable parameters, so naive methods can provide a more reliable baseline across distinct traffic forecasting scenarios. Furthermore, the community could be overlooking other benefits of statistical methods, such as their ability to provide insights into the data and their structure. Simple neural architectures (for example, stacked auto-encoders) should not be the only ones chosen for comparison with newer Deep Learning proposals. The recent literature should be revised to elaborate comprehensive comparison studies, not only with basic Deep Learning architectures that will presumably perform worse than the proposed method, but also with the latest novel architectures, especially for spatio-temporal modeling (e.g. graph convolutional networks). Finally, it should be highlighted that almost none of the revised works provides complexity measures for the models under comparison. Complexity is usually quantified by the number of internal parameters to be fit. Another well-established metric is the raw training time, always determined under identical conditions (i.e. the same training data collection, computing resources, and software framework). After building a performance benchmark, adding complexity measures should be mandatory for the sake of fairness in comparisons. With each passing year, it becomes more difficult to surpass the performance of previous proposals, narrowing the room for improvement between the latter and emerging architectures. In this context, these measurements provide an objective tool to judge whether the complexity introduced by a novel traffic forecasting method compensates for its performance gain over the last dominating technique. Only in this way can it be verified whether the proposed model yields an effective and efficient improvement for traffic forecasting. V.
CASE STUDY

From our previous analysis we have concluded that the application of Deep Learning methods to short-term traffic forecasting has been, to a point, questionable. In some cases, authors do not justify the high computational complexity inherent to their proposed method, nor do they compare it to less complex modeling alternatives. In turn, the configuration of the comparison studies and the lack of depth in the discussion and analysis of the obtained results often fail to clarify whether newly proposed methods outperform the state of the art at the time of their publication. This section describes a case study, which serves as an informed assessment of the effects of all the particularities of the Deep Learning methods described previously. To this end, the effectiveness of these techniques when predicting short-term traffic measurements is verified and compared to modeling techniques with less computational complexity.

A. Experimental setup

A traffic forecasting case study has been designed with the aim of showcasing all the details and obstacles that come along with a Deep Learning comparison study. The critical literature analysis has demonstrated that Deep Learning is a suitable option for modeling spatio-temporal relationships (whenever enough data granularity is available for such relationships to be of predictive value for the traffic state to be predicted), or for mapping data that are not available as time series. Those solutions that address the problem as a general time series forecasting problem, disregarding the nature of the time series (i.e. the same techniques would be applied to medical or stock market time series), are defined here as conventional time series approaches. They only employ past traffic measurements as the input to the model, since these features are good descriptors of the future traffic state.
Deep Learning can predict graph or image representations that express network-wide traffic areas, but for conventional time series forecasting, the choice of such complex and computationally demanding techniques must be solidly justified. To shed light on this matter, a case study is designed where the goal is to solve a traffic time series forecasting problem. According to the proposed taxonomy, traffic forecasting setups can differ in the nature of the traffic measurements, the area under scope, the sensing technique, and the way data are aggregated. Although the intention is to emulate all possible cases, the number of possible setup combinations is high, so a representative subset of problems has been selected. As shown in Figure 2, traffic flow and speed forecasting are the traffic measurements most often addressed by the works revised in our literature study. While both time series are related by the fundamental diagram of traffic flow [221], predicting speed is in general an easier task since, most of the time, traffic circulates at the speed limit of the road (free flow). It is, therefore, a more stable (hence, more predictable) signal over time. However, traffic flow has a wider dynamic value range, and in general undergoes multiple variations throughout the day. Likewise, drivers behave differently in cities [222]. Urban trips are exposed to a manifold of factors such as roundabouts, pedestrian crossings, or traffic lights. These aspects make the data noisier and hence harder to predict. In contrast, highway traffic is not affected by such factors, so forecasting freeway traffic is in general much easier. Based on the above reasons, at least four datasets are needed to cover all possible combinations of flow and speed forecasting over urban and highway areas. Table II summarizes the attributes of each selected data source according to the taxonomy defined in Section III-A. All data sources gather traffic information by means of roadside sensors.
To the best of our knowledge, no public FCD data source covers one complete year of data, which is a requirement to gauge the performance of the model throughout all seasons of the year. The temporal resolution is kept at the original value provided by each data repository. The forecasting problem is formulated as a regression task, where the previous measurements of each target road collected at times {t − 4, . . . , t} are used as features to predict the traffic measurement at the same location and time t + h. Four prediction horizons h ∈ {1, 2, 3, 4} are considered, so that a separate single-step prediction model is trained for each h value and target location. Figure 4 describes the proposed experimental setup. For each traffic data source, 10 points of the road network are selected, always choosing locations that offer diverse traffic profiles. Then, a regression dataset covering one year of data is built for each target placement. The first three weeks of every month are used for model training, whereas the remaining days are kept for testing. This split criterion allows verifying whether models are capable of learning traffic profiles that vary between seasons and vacation days.

Fig. 4: Experimental setup used in our case study. After building regression datasets for every target location, training and testing data are reserved for every month along the year. Cross-validation provides measures to select the best hyper-parameter configuration for every model in the benchmark via Bayesian optimization. Finally, the optimized model learns from all available training data, and predictions are generated for all testing data.

In order to find the best hyper-parameter values for each regression model, three-fold cross-validation is performed: two weeks of every month are used for training, and the remaining week of the reserved training data is used for validation.
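The setup described above can be sketched end to end: building the lag features {t − 4, . . . , t} for a horizon h, applying the three-weeks-per-month split, and scoring predictions with the R² measure of Eq. (1). The synthetic series and the calendar mapping below are illustrative only:

```python
import numpy as np

def build_regression_dataset(series, h):
    """Features: measurements at {t-4, ..., t}; target: measurement at t+h."""
    X, y, target_t = [], [], []
    for t in range(4, len(series) - h):
        X.append(series[t - 4:t + 1])
        y.append(series[t + h])
        target_t.append(t + h)  # time slot of each target, used for splitting
    return np.array(X), np.array(y), np.array(target_t)

def r2_score(observed, predicted):
    """Coefficient of determination, as in Eq. (1)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

series = np.arange(40.0)  # synthetic traffic measurements, one per "day"
X, y, target_t = build_regression_dataset(series, h=2)

# First three weeks (21 days) of each 30-day "month" for training, rest for test.
day_of_month = (target_t % 30) + 1
train = day_of_month <= 21
X_train, y_train, X_test, y_test = X[train], y[train], X[~train], y[~train]

perfect = r2_score(y_test, y_test)  # a perfect forecast scores exactly 1.0
```

In the actual case study, the same train/test arrangement is repeated per location and per horizon, and a separate model is fit on each resulting dataset.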
The average of the three validation scores (one per partition) is used as the objective function of a Bayesian optimizer [226], which efficiently searches for the best hyper-parameter configuration based on this objective. After evaluating 30 possible configurations for each model, the best hyper-parameter configuration is set on the model at hand, which is then trained over all training data. Once trained, model performance scores are computed over the data held out for testing. This process reduces the chances of a bias in the later discussed comparisons due to a bad hyper-parameter configuration of the models. The purpose of the case study is to identify the model that best predicts the traffic signal for each of the prediction horizons. To this end, we compute the R² score over the testing data to measure the quality of predictions between real and predicted traffic measurements. This score is given by:

R^2 = 1 - \frac{\sum_{t \in T_{test}} (o_t - \hat{o}_t)^2}{\sum_{t \in T_{test}} (o_t - \bar{o}_t)^2},    (1)

where T_test denotes the set of time slots belonging to the test partition of the dataset at hand, o_t denotes the real observed value at test time t, \bar{o}_t its average, and \hat{o}_t the predicted one. The forecasting methods that compose the benchmark are selected from the most commonly used algorithms and architectures in the state of the art. Statistical methods are not included in this case study, since the naive LV method already provides a performance baseline that suggests interesting insights in the experimentation. Inspired by the revised works, a categorized list of learning methods is presented: a naive method (Latest Value [LV]); shallow learning models (Linear Regression [LR], k-Nearest Neighbors [KNN], Decision Tree Regression [DTR], Extreme Learning Machine [ELM], and Support Vector Regression [SVR]); ensemble methods (AdaBoost [ADA], Random Forest Regression [RFR], Extremely Randomized Trees [ETR], Gradient Boosting Regression [GBR], and eXtreme Gradient Boosting [XGBR]); and Deep Learning architectures (Feed-Forward Neural Network [FNN], Convolutional Neural Network [CNN], Long Short-Term Memory network [LSTM], Convolutional LSTM [CLSTM], and Attention mechanism based Auto-encoder with Convolutional input layers [ATT]). All datasets, Python source code, details on the hyper-parameters sought for every model in the benchmark, sizes of the Deep Learning models (number of trainable parameters), and simulation results are publicly available at https://github.com/Eric-L-Manibardo/CaseStudy2020.

B.
Results and statistical analysis

The obtained simulation results are presented and analyzed hereafter, with emphasis on the performance gaps between models and their statistical significance. The discussion begins with Figure 5, which displays the overall performance, computed as the mean R² score averaged over the 10 datasets of each data source, for every learning method and analyzed forecasting horizon h. As expected, the performance of the models degrades consistently as the prediction horizon increases. Traffic data corresponding to the California data source are stable, which can be appreciated by a simple visual inspection of their profiles: a high R² score is obtained for this data source even when predicting four steps ahead (h = 4). As stated in Section III-B, the PeMS data source is the most popular option for ITS studies, especially when novel forecasting methods are presented. In this study, we have collected only datasets from District 4 (the so-called Bay Area), as data from other districts also provide stable traffic measurements, and District 4 is the sector most commonly selected in the revised literature. The nature of the traffic measurements, jointly with the scope area of the data sources, can suggest in advance how forecasting performance degrades when the prediction horizon h is increased. Both in the city and on highways, drivers tend to maintain a nominal speed whenever possible, so the speed time series drops suddenly when congestion arises. Thereby, only the last timestamps provide information on this phenomenon [165]. Results for the New York and Seattle data sources corroborate this statement, where the performance degradation maintains a similarly decaying trend. In the case of flow data, traffic at urban roads can differ significantly depending on the selected location. Main roads maintain a nearly constant traffic flow, as trucks, taxis, and other basic service vehicles occupy the roads at night and in the early morning hours.
This is not the case in special districts like the surroundings of universities, shopping malls, and recreational areas, which impact the traffic flow trends according to the schedules of their activities. Traffic flow at highways does not face these issues, so the forecasting performance degrades more smoothly when increasing the prediction horizon, as can be observed in the California test results. With the focus set on the results of each model over the same collection of datasets, some of them render similar scores. At first glance, the five Deep Learning architectures under consideration perform similarly to the ensemble methods (except ADA). Shallow learning methods obtain a slightly lower R² score. Nevertheless, if the payoff for a minor performance degradation is a faster training time and lower computational resource requirements, shallow learning methods should be taken into consideration. SVR is an exception, as it holds by far the longest optimization time among the analyzed methods. As long as researchers do not set an iteration limit when searching for the hyperplane combination that best fits the data distribution, SVR can demand long hyper-parameter optimization periods [227]. To end with, the relatively good forecasting performance of the naive LV method for low values of the forecasting horizon h imposes a narrow gap for improvement, as evinced by the negligible R² differences noted between models. Given such small differences between the scores attained by the models, it is necessary to assess whether they are significant in the statistical sense.

(Table caption: rows stand for data source and forecasting horizon, while columns correspond to the considered forecasting models: LV, LR, KNN, DTR, ELM, SVR, ADA, RFR, ETR, GBR, XGBR, FNN, CNN, LS...)

Traditionally, standard null hypothesis testing has been adopted in this regard, including post-hoc tests and graphical representations (e.g.
critical distance plots [228]) to visually assess which counterparts in the benchmark perform best with statistical significance. However, criticism has recently arisen around the use of these tests, due to their lack of interpretability, the sensitivity of the statistical insights they contribute, and their dependence on the number of samples used for their computation. In this context, the seminal work by Benavoli et al. [229] exposed the drawbacks of standard hypothesis testing, and promoted the use of Bayesian analysis for multiple comparison analysis. We embrace this new methodological trend, and compute a Bayesian analysis between every (Deep Learning, ensemble) model pair, whose output is shown in Figure 6 (rows: Deep Learning models, columns: ensemble models). The Bayesian analysis performed on every such pair allows computing the probability that one model outperforms the other, based on the test results obtained by each of them over all locations, datasets, and h values. The obtained probability distribution can be sampled via Monte Carlo and displayed in barycentric coordinates, comprising two regions: one where the first model outperforms the second, and vice versa. Additionally, a region of practical equivalence (where results can be considered statistically equivalent) can be set by means of a parameter called rope. This parameter indicates the minimum difference between the scores of both methods for them to be considered significantly different from each other. The value of rope depends on the task being solved. For example, a forecasting error difference of one single vehicle when predicting traffic flows of 300 passing vehicles per analyzed interval at a highway can be ignored, as this margin does not affect a practical implementation of the predictive models. The results of the Bayesian analysis depicted in Figure 6 reveal that LSTM and CNN have a slightly higher probability of providing better results than the GBR and XGBR ensembles.
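The role of the rope parameter can be grasped with a simplified count (this is only a sketch of the practical-equivalence idea, not the correlated Bayesian test of [229]; the paired scores are invented for illustration):

```python
import numpy as np

def rope_regions(scores_a, scores_b, rope):
    """Fraction of paired score differences where model A wins, the models
    are practically equivalent (|difference| <= rope), or model B wins."""
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    a_wins = float(np.mean(diff > rope))
    equivalent = float(np.mean(np.abs(diff) <= rope))
    b_wins = float(np.mean(diff < -rope))
    return a_wins, equivalent, b_wins

# Invented R2 scores of two models over 5 datasets; with rope = 0.01,
# differences below one R2 point are treated as practical ties.
lstm_scores = [0.92, 0.88, 0.85, 0.90, 0.87]
rfr_scores = [0.89, 0.885, 0.85, 0.90, 0.865]
a_frac, eq_frac, b_frac = rope_regions(lstm_scores, rfr_scores, rope=0.01)
```

In this toy example most of the mass falls in the equivalence region, mirroring the kind of outcome reported in Figure 6 for several (Deep Learning, ensemble) pairs.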
However, the situation changes for RFR and ETR. The sampled probabilities of both ensembles when compared to the Deep Learning variants are skewed towards the regions of practical equivalence (e.g. RFR versus LSTM) or towards the region where the ensemble performs better than the Deep Learning model (e.g. ETR versus CLSTM). On a concluding note, the statistical analysis shows that there is no clear winner in the benchmark, nor any empirically supported reason for using Deep Learning based traffic forecasting models to the detriment of shallow modeling alternatives.

VI. LEARNED LESSONS

From the experimental results it has been concluded that Deep Learning models do not provide consistently better results than shallow modeling approaches. Furthermore, whenever hyper-parameters are properly tuned beforehand, ensemble methods outperform Deep Learning models in some cases. This fact demonstrates that hyper-parameter tuning should be mandatory in prospective studies to avoid unfair comparisons. Unfortunately, the hyper-parameter tuning stage is often neglected or mentioned very superficially, without the relevance it deserves. Besides, the training complexity of this kind of algorithm is widely overlooked. Our literature analysis unveils that short-term traffic forecasting publications are leaning towards more complex models on the understanding that their increased modeling power can improve the state of the art, often by narrow performance margins. However, such slight performance gaps do not translate into practical advantages for real traffic scenarios [146]. For a similar, and sometimes even better, result, classic Machine Learning techniques can perform as well as Deep Learning, but with less complexity and fewer computational requirements. It is also important to underscore the essential role of naive methods in establishing the minimum complexity of the designed task (Figure 5).
These baseline models should take part in any traffic forecasting benchmark. The task to be solved in the case study (i.e. predicting the traffic state at a single road) was chosen on purpose to show that, for simple tasks, complex models do not significantly improve the performance of a naive model. The most meaningful information for the target to be predicted is made available at the input of every model (recent measurements collected at the target road). Consequently, there are no complex relationships to be modeled, and ultimately, Deep Learning architectures cannot provide better results than shallow learning methods. A lower performance bound can also be established by means of autoregressive models, but they are very sensitive to parameter configuration. By contrast, the absence of parameters in naive methods makes them a better choice to ascertain the improvement margin that can be achieved by virtue of data-based models. Another relevant aspect is how train and test data are arranged. A common practice observed in the literature is that test data are carefully chosen in order to obtain the desired performance for the presented traffic forecasting method. Test data are often selected from short temporal intervals, with almost identical characteristics to the training data. This methodology neglects some of the basic notions of Machine Learning: whenever possible, test data should be different from (yet follow the same distribution as) the training data, in order to check the generalization capabilities of the developed model. Some of the analyzed papers reserve only one month of traffic data for training, and one week for testing. As a result of this partitioning criterion, the results can be misleading, as the learned traffic behavior can be identical to that present in the test subset, thereby generalizing poorly when modeling traffic belonging to other periods of the year.
In this context, different train/test partitioning choices are enabled by the amount of available data. In the best of circumstances, the data source covers at least two complete years, so researchers can train the model over the data collected in the first year, and check its generalization capabilities by testing over the data of the second year. Throughout the year, the traffic profile can change at some points of a traffic network due to e.g. road adjustments, extreme meteorological events or sociopolitical decisions. These circumstances generate unusual daily traffic patterns that modify the data distribution, inducing an additional level of difficulty for the learning and adaptation capabilities of data-based models. In this regard, it is remarkable that PeMS, arguably the most commonly used data source as it provides several years of traffic measurements, is rarely utilized over the entire time span it covers. The second option is to have only one complete year of traffic data. In this case, we suggest arranging the data as done in our case study: three weeks of every month as train data, and the remaining days of every month for testing. This configuration allows the model to learn from different traffic patterns, so that authors can check whether the model generalizes properly to unseen data using the test holdout while considering, at the very least, all traffic behaviors that can occur during the year at the location at hand. The last case corresponds to a data source that does not cover an entire year. In this scenario, the generalization of the model's performance to the overall year cannot be fully guaranteed because, depending on the time range covered by the dataset, patterns learned by the model can only be used to produce forecasts for a short period of the year.
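The one-year partitioning scheme suggested above can be sketched as follows. Taking the first 21 days of each month for training is one concrete reading of "three weeks of every month", so the exact cut-off day is an assumption of this sketch.

```python
from datetime import date, timedelta

def monthly_split(year, train_days=21):
    """Assign the first `train_days` days of each month to the train set
    and the remaining days of that month to the test set."""
    train, test = [], []
    d = date(year, 1, 1)
    while d.year == year:
        (train if d.day <= train_days else test).append(d)
        d += timedelta(days=1)
    return train, test

train, test = monthly_split(2018)
print(len(train), len(test))  # 12 months x 21 days = 252 train days, 113 test days
```

The point of the design is that every month contributes to both holdouts, so the model sees all seasonal traffic patterns while still being evaluated on unseen days of each month.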
Given the amount of traffic data available nowadays for experimentation, it should not be an issue for prospective works to find a public traffic data source that matches the desired characteristics of the study and also provides at least a full year of measurements. Finally, a good practice that unfortunately is seldom adopted in traffic forecasting is to release to the public domain the source code and data producing the results of the proposed model. This practice would ease the revision process, ensure the reproducibility of the reported results, and foster future research efforts aimed at their improvement. Clearly, this practice is strictly subject to the confidentiality of the traffic data under consideration, but whenever possible, traffic datasets, source code and results should be left in public repositories (e.g. GitHub, BitBucket and the like), so that new ideas and investigations do not have to depart from scratch, and advances in the field become more reliable, verifiable and expedited.

VII. CHALLENGES AND RESEARCH OPPORTUNITIES

As new data processing and modeling techniques flourish in the community, emerging research paths arise to yield more precise and wider-covering traffic forecasting models. This section points out challenges that need to be faced, as well as research opportunities that should be explored by the community in years to come. Figure 7 summarizes graphically our vision of the future of this research area, which we next describe in detail.

A. Actionability: adaptive models and prediction confidence

The literature review has demonstrated that there is an increasing race towards finding the best performing traffic forecasting model. However, model actionability should be the ultimate goal of works in the field, and actionability does not exclusively have to do with the precision of the forecasts [230].
If we split data-driven modeling into sequential stages, a traffic forecasting scenario covers 1) data sensing; 2) data preprocessing, ending in built regression datasets; 3) a learning and validation phase, where a model is learned from such datasets; and 4) model testing, where the performance of the trained model is verified when predicting unseen traffic data. When one of these stages is granted too much relevance, important aspects in other phases of the data pipeline can be neglected. For instance, datasets are sometimes composed of handpicked locations of the traffic network (i.e. the data source), coincidentally those with more stable patterns, which could lead to unrealistically good model performance levels. Additionally, traffic data might evolve over long time periods, which leads to a fifth and often overlooked stage: model adaptation [231]. The idea of model adaptation is conceptually simple: traffic data is continuously fed to the model, which uses the new information to adapt to contextual changes affecting its learned knowledge [232], [233]. For this purpose, online learning techniques allow for the incremental update of the model when fed with new data, whereas concept drift handling approaches permit the behavior of the forecasting model to be adapted to changing data distributions. Although the literature provides specific publications about this topic [165], [234], [235], [236], [237], it remains a largely uncharted research area in traffic forecasting. Lastly, for a model to become fully actionable, we firmly advocate for the addition of confidence metrics to predictions, so that traffic managers can trust and assess the uncertainty associated with the traffic forecasts, and thus make better informed decisions. From a strategic point of view, confidence estimation in travel demand prediction has a solid research background [238], [239], [240], [241], [242], which helps to properly design and scale road infrastructure. Confidence estimation for long-term congestion prediction has also received relevant contributions [211], [243]. However, there are no remarkable contributions on this matter for short-term traffic forecasting.
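As an illustration of the incremental-update idea (a generic sketch, not the method of any cited work), the fragment below maintains a linear forecaster that performs one stochastic-gradient step per incoming observation, so its weights can track a drifting relationship; the streamed data and the learning rate are synthetic choices.

```python
class OnlineLinear:
    """A linear model updated one observation at a time (online SGD)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        # One gradient step on the squared error of this single sample.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineLinear(n_features=1)
# Synthetic stream in which the next flow is roughly 0.9 times the last one;
# the model recovers that coefficient without ever revisiting old data.
stream = [([1.0], 0.9), ([2.0], 1.8), ([1.5], 1.35)] * 400
for x, y in stream:
    model.update(x, y)
print(round(model.w[0], 2))
```

Because each update touches a single sample, the same loop keeps running in production, letting the weights drift along with the traffic distribution.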
All in all, forecasting models are the bridge connecting raw data to reliable decisions for traffic management. This need for actionable decisions requires far more insight than a single quantitative proof of the average precision achieved by forecasting models.

B. Need for a centralized traffic data repository

The review of selected works has uncovered an increasing number and diversity of traffic data sources in use during recent years. The issue arises precisely from the number of available options. Even for a specific data source, different datasets can be furnished depending on the location of measurement, time intervals or aggregation rate, among other choices. Researchers often apply different preprocessing techniques (usually designed and implemented ad hoc for the study) to prepare the data for better modeling performance through more representative examples. For this reason, the ITS community has so far generated multiple versions of many data sources, leading to incongruities in benchmarks comprising state-of-the-art solutions. All these issues could be overcome if a single point of information were made available for the community: in short, a centralized traffic data repository. This repository would store different versions of traffic datasets in a uniform format, according to the different preprocessing techniques applied to the original traffic data sources. The repository would also publish a ranked list of the best performing models for each dataset and forecasting task, for the sake of fair comparisons between novel models. Researchers could reference datasets from third-party research works, and compare their newly proposed technique to previous ones. Interfaces enabling the submission of new data-based pipelines, datasets and results would also be released to extend the coverage of this repository, including the source code producing the results published in the corresponding publication.
Definitely, the availability of this centralized repository would accelerate the understanding of the current status of the field, favoring the development of new and more reliable model comparisons. We illustrate this idea by sharing the processed datasets employed during the case study of Section V in a freely accessible GitHub repository. We firmly believe that the integration of our repository and others scattered over the literature into a single point of information will be a long-awaited milestone for the community working in traffic forecasting.

C. Generative models for pseudo-real synthetic datasets

The vast majority of learning methods selected by the ITS community attempt to model the conditional probability P(y|x), where the desired output value y (e.g. the traffic forecast) is conditioned by the input x (the predictor variables at the input of the forecasting model). On the other hand, generative models estimate P(x|y), as they try to learn the conditional distribution of the data [244]. As their name suggests, these models can generate new synthetic data instances, opening an interesting research path towards augmenting the amount of traffic data with which models are trained. Although researchers have access to traffic simulators like CORSIM [245], VISSIM [246], or SUMO [247], these tools serve a specific purpose: to provide simulated traffic environments with a concrete collection of features. Here, the fictional traffic network is designed and shaped by selecting parameters such as the number of vehicles, speed, road design, etc. Due to this tuning, the environment is conditioned by the investigation requirements and loses its realistic nature. Along this line, generative models could instead provide synthetic data that resemble real traffic networks. With them, scarce data sources from key locations could be extended, covering scenarios where the test holdout does not span all possible traffic states.
In particular, Generative Adversarial Networks (GANs) [248] have demonstrated notable results at learning to synthesize new data instances that highly resemble real data. There are hundreds of publications reported in recent times using GANs for spatio-temporal data [249]. We foresee that these generative models will acquire capital importance in traffic forecasting, especially in scenarios with scarce data. Some recent achievements have already showcased the potential of GANs for this purpose [189], [250], paving the way towards massively incorporating these models for traffic forecasting under data availability constraints.

D. New modeling techniques for traffic forecasting

Another research path garnering significant interest in recent times aims at the application of alternative data-based modeling approaches to traffic forecasting, mainly towards advancing over the state of the art in terms of design factors beyond the precision of the produced predictions (e.g. the computational complexity of the underlying training process). This is the case of recent attempts at incorporating elements from Reservoir Computing [251] and randomization-based Machine Learning into the traffic prediction realm, including echo state networks [252], extreme learning machines [253], or more elaborate variants of these modeling alternatives [254], [255]. The extremely efficient learning procedure of these models makes them particularly appropriate for traffic forecasting over large datasets. On the other hand, the high parametric sensitivity of models currently utilized for traffic forecasting has also motivated the renaissance of bagging and boosting tree ensembles for this purpose, which are known to be more robust against the variability of their hyper-parameters and less prone to overfitting [256], [257], [258].
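To make the reservoir-computing idea concrete, here is a minimal echo state network sketch; the reservoir size, scaling constants and the sine-wave task are all illustrative choices of this sketch, not taken from the cited works. Only the linear readout is trained (a single least-squares solve), which is what makes these models so cheap to fit.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, alpha = 50, 0.5                       # reservoir size and leaking rate
W_in = rng.uniform(-0.5, 0.5, n_res)         # input weights (fixed, random)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))   # recurrent weights (fixed, random)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale spectral radius to 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = (1 - alpha) * x + alpha * np.tanh(W_in * ut + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.1 * np.arange(400))
X, y = run_reservoir(u[:-1]), u[1:]
W_out, *_ = np.linalg.lstsq(X[50:], y[50:], rcond=None)  # drop warm-up states
mse = float(np.mean((X[50:] @ W_out - y[50:]) ** 2))
print(mse)
```

The recurrent weights are never trained; all the modeling effort is a single linear fit over the reservoir states, which scales gracefully to large traffic datasets.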
Finally, initial evidence of the applicability of automated machine learning tools for efficiently finding precise traffic forecasting models has been recently reported in [259]. All in all, there is little doubt that most discoveries and innovations in data-based modeling are nowadays related to Deep Learning. However, beyond the lessons and good practices exposed previously for embracing their use, we advocate for a closer look at other modern modeling choices, such as the Generalized Operational Perceptron [260], Liquid State Machines [261], or models encompassing a hybridization of traffic flow models and machine learning techniques [262]. Likewise, other design objectives that do not relate strictly to the accuracy of the issued predictions should increasingly be targeted, especially considering the huge scales attained today by traffic data. A major shift towards efficiency is needed for data-based traffic forecasting models, making use of new evaluation metrics that take into account the amount of data and/or the number of operations required for model training.

E. Understanding and explaining Deep Learning models

Once trained, Deep Learning models are black-boxes that do not grant the general user any chance to understand how their predictions are made [16], [263]. In the case of traffic operators, the reasons why a neural network produces a particular prediction are of utmost necessity for making informed decisions. In a situation of disagreement, in which the operator of the traffic network does not trust the model prediction, Deep Learning does not offer any means to explain the captured knowledge that led to its forecasts. Similarly to other fields of knowledge (e.g. medical diagnosis), this lack of transparency of Deep Learning models makes it hard for humans to accept their predictions, and they often opt for worse performing yet transparent alternatives (e.g. regression trees).
To the best of our knowledge, very few publications have tackled traffic forecasting from an eXplainable Artificial Intelligence (XAI) perspective. One example is [264], which studies the cause-effect relationship between nodes of a traffic network, attempting to learn how upstream and downstream traffic influence the traffic prediction at the target road. A model based on a stacked auto-encoder for missing and corrupt data imputation is presented in [92], where the features extracted by the first hidden layer are analyzed towards improving the interpretability of model decisions. In [80], the authors develop an attention-based traffic forecasting model. Then, for a better understanding of the propagation mechanism learned by the model, they examine the evolution of these attention scores with respect to spatial and temporal input data. The last example is [265], where knowledge from two surrounding roads is studied by analyzing the importance of the traffic features (i.e. flow values from different time steps of the time series from these roads) using a post-hoc XAI technique. Most cause-effect relationships in traffic data are studied theoretically [266], [267], without considering the complexity that comes from the use of Deep Learning techniques. Even with correct predictions, a model that is not understandable can be of no practical value for traffic managers willing to obtain insights beyond its predicted output. In recent years, the family of Fuzzy Rule Based Systems (FRBS) has experienced a renaissance thanks to its envisaged relevance within the XAI paradigm [268]. FRBS learn a set of human-readable if-then rules defined on a fuzzy domain that best correlate the predictors and the target variable. We envision that these models, along with post-hoc XAI techniques specific to Deep Learning models, will be central for the acceptance of shallow and Deep Learning models in traffic management processes.
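A simple example of a post-hoc, model-agnostic XAI technique of the kind mentioned above is permutation importance: shuffle one input feature at a time and measure how much a fitted model's error grows. In the sketch below, the "trained" model and the data are toys built so that only the first feature matters.

```python
import random

random.seed(0)

def model(x):
    # Stand-in for a trained black-box: uses feature 0, ignores feature 1.
    return 2.0 * x[0] + 0.0 * x[1]

X = [[random.random(), random.random()] for _ in range(500)]
y = [model(x) for x in X]

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Error increase when the values of one feature are shuffled."""
    base = mse(X, y)
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return mse(X_perm, y) - base

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
print(imp0 > imp1)  # the relied-upon feature gets the larger importance
```

Because the technique only needs predictions, the same procedure applies unchanged to a deep network, an ensemble, or any other black-box forecaster.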
Specifically, fuzzy rules built for explaining the knowledge captured by black-boxes, and other forms of visualizing local explanations of the produced forecasts, will surely contribute to their use in practical deployments, further contributing to the actionability of their issued predictions.

VIII. CONCLUSIONS

This critical survey has departed from the abundance of contributions dealing with Deep Learning techniques for road traffic forecasting. In the mid 80's, the community began to model traffic distributions using data-driven methods, replacing the statistical approaches prevailing at the time. Years thereafter, Deep Learning based models have taken the lead in the field, spurred by the unprecedented performance increases observed in other application domains. Their renowned superior modeling capability made the community steer towards Deep Learning based traffic forecasting models, yet without pausing and profoundly reflecting on their benefits and downsides. Our literature review, which comprises more than 150 works at the crossroads between Deep Learning and short-term traffic forecasting, has revealed the lights and shadows of the current state of this research area. As a result, we have identified a number of questionable methodological practices and points of improvement, prescribing a set of recommendations for future studies:

• An adequate selection of traffic datasets, complemented by a preprocessing stage that yields properly partitioned train and test subsets.
• An appropriate justification of the choice of Deep Learning models, supported by the need for fusing heterogeneous contextual and/or spatio-temporal data.
• A principled comparison study, encompassing baseline models, different metrics beyond precision (e.g. computational efficiency) and a statistical study aimed at concluding whether the metric gaps are statistically significant.
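The last recommendation can be illustrated in its simplest form. Instead of a full Bayesian analysis, the sketch below classifies paired per-dataset score differences between two models against a region of practical equivalence (ROPE); all scores and the 0.01 threshold are invented for illustration, and the returned fractions are only a coarse stand-in for a posterior over the three outcomes.

```python
# Hypothetical per-dataset R^2 scores of two models (illustrative values).
scores_a = [0.812, 0.790, 0.845, 0.773, 0.826, 0.801]
scores_b = [0.809, 0.793, 0.842, 0.776, 0.823, 0.804]

def rope_summary(a, b, rope=0.01):
    """Fraction of paired differences where model a wins, where the two are
    practically equivalent (inside the ROPE), and where model b wins."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    a_wins = sum(d > rope for d in diffs) / n
    ties = sum(-rope <= d <= rope for d in diffs) / n
    b_wins = sum(d < -rope for d in diffs) / n
    return a_wins, ties, b_wins

print(rope_summary(scores_a, scores_b))              # all mass inside the ROPE
print(rope_summary(scores_a, scores_b, rope=0.001))  # a stricter threshold
```

Reporting where the probability mass falls, rather than a single average gap, makes it explicit whether a metric difference is practically meaningful.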
To further clarify whether Deep Learning makes a difference in traffic forecasting, we have designed a case study intended to serve as an argument for our claims. The obtained results render empirical evidence about two main facts: 1) the nature and scope of the selected traffic dataset establishes the complexity of the forecasting task, so challenging traffic datasets are recommended for model comparison purposes; and 2) when choosing a time series regression model for traffic forecasting, Deep Learning provides performance levels similar to those of shallow learning models, or at least not statistically better. We have summarized our conclusions as a set of learned lessons, which sets forth good practices for future short-term traffic forecasting studies. Our overall analysis ends with an outlook on the challenges that remain unaddressed in the ITS field in what refers to traffic forecasting. Research opportunities are also given for approaching such challenges, partly inspired by recent achievements in data-based modeling. Among them, we have highlighted the need for taking a step beyond accuracy, to account for other aspects that favor the actionability of traffic forecasts (e.g. confidence estimation and model explainability). Besides, we envision that a centralized traffic data repository would allow researchers to use the same traffic datasets and to reproduce results reported in the literature. Finally, the use of generative models for creating realistic traffic data will open further opportunities for data augmentation. Despite the constructive criticism exposed throughout the paper, we acknowledge the flexibility that makes Deep Learning excel at modeling diverse phenomena and outperform other data-driven models. However, it is our belief that, as in other disciplines, the adoption of Deep Learning for traffic forecasting should be grounded on a fair assessment of the benefits and drawbacks it may yield.
Our experiments have proven that shallow learning methods provide similar results to Deep Learning architectures at a lower computational complexity, whenever comparisons are done in a principled manner. Nevertheless, far from proposing to leave it aside, we firmly defend that Deep Learning should be embraced only when its singular capabilities provide performance gains worth the extra computational cost.

Fig. 2: Contributions on Deep Learning based short-term traffic forecasting reported in the literature, classified according to Criterion 1. From left to right, each branch level stands for nature of traffic measurements, traffic context, data collecting strategy, and temporal resolution in minutes. The O indicator refers to other less used data temporal resolutions, such as 30 minutes or 1 hour.

Fig. 3: Deep Learning models for short-term traffic forecasting in examined works, classified according to Criterion 2. From inside out, each ring level stands for: considered dependencies, data representation format, range of coverage, and number of steps ahead prediction.

Fig. 5: Heatmap showing the average R^2 test score obtained by each model and dataset. Values are computed as the mean value of the test scores obtained for the 10 locations selected for each data source and value of the forecasting horizon.

Eric L. Manibardo, Ibai Laña and Javier Del Ser are with TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Bizkaia, Spain. Javier Del Ser is also with the University of the Basque Country (UPV/EHU), 48013 Bilbao, Bizkaia, Spain. E-mail: [eric.lopez, ibai.lana, javier.delser]@tecnalia.com

Fig. 1: Milestones in the history of Deep Learning based short-term traffic forecasting. Publications are ordered according to their publication date.
Horizontal red bars denote the number of works that were published every year concerning this topic (retrieved from Scopus in November 2020 with query terms: [neural network OR deep learning OR deep neural network OR LSTM network OR deep spatio-temporal] AND [traffic prediction OR traffic forecasting OR traffic congestion OR traffic state prediction], in title, abstract or keywords).

[Figure 1 timeline milestones, 2015-2020: SAE; [126] LSTM; [11] LSTM-2D network-wide; [92] DSAE for missing/corrupted values; [32] ST-ResNet for city-wide crowd flow; [155] CNN with traffic as images; [170] STGCN and DCRNN with traffic as graphs; [182] DMVST for taxi demand; [158] T-GCN and TGC-LSTM with traffic as graphs; [77] ASTGCN with traffic as graphs.]

TABLE I: Published surveys that address short-term traffic forecasting based on Deep Learning methods. Column headers denote the citation reference of each publication. Rows correspond to different characteristics and content included in each survey.

Survey               [37]       [38]       [39]       [40]       [41]       [42]       [43]       [44]       Ours
Period               1994-2018  1997-2017  1999-2018  2012-2018  2015-2020  2014-2019  2014-2019  2014-2020  2015-2020
# of reviewed works  ~70        ~15        ~35        ~80        ~65        ~40        ~10        ~100       ~150
Measurement

TABLE II: Selected data sources and their characteristics.

Location          Traffic variable  Scope    Sensor  Time resolution  Year
Madrid [223]      Flow              Urban    RCD     15 min           2018
California [205]  Flow              Freeway  RCD     5 min            2017
New York [224]    Speed             Urban    RCD     5 min            2016
Seattle [225]     Speed             Freeway  RCD     5 min            2015

Fig. 7: Schematic overview of identified challenges and suggested research opportunities.
ACKNOWLEDGMENTS

The authors would like to thank the Basque Government for its funding support through the EMAITEK and ELKARTEK programs (3KIA project, KK-2020/00049). Eric L. Manibardo receives funding support from the Basque Government through its BIKAINTEK PhD support program (grant no. 48AFW22019-00002). Javier Del Ser also thanks the same institution for the funding support received through the consolidated research group MATHMODE (ref. T1294-19).

REFERENCES

[1] F. Suchanek and G. Weikum, "Knowledge harvesting in the big-data era," in ACM SIGMOD International Conference on Management of Data, 2013, pp. 933-938.
[2] M. Yaghini, M. M. Khoshraftar, and M. Seyedabadi, "Railway passenger train delay prediction via neural network model," Journal of Advanced Transportation, vol. 47, no. 3, pp. 355-368, 2013.
[3] J. Xu and G. Bailey, "The airport gate assignment problem: mathematical model and a tabu search algorithm," in Annual Hawaii International Conference on System Sciences. IEEE, 2001, p. 10.
[4] P. Mannion, J. Duggan, and E. Howley, "An experimental review of reinforcement learning algorithms for adaptive traffic signal control," in Autonomic Road Transport Support Systems. Springer, 2016, pp. 47-66.
[5] K. Muhammad, A. Ullah, J. Lloret, J. Del Ser, and V. H. C. de Albuquerque, "Deep learning for safe autonomous driving: Current challenges and future directions," IEEE Transactions on Intelligent Transportation Systems, to appear, 2020.
[6] T. Litman, "Transportation cost and benefit analysis," Victoria Transport Policy Institute, vol. 31, 2009.
[7] J. I. Levy, J. J. Buonocore, and K. von Stackelberg, "Evaluation of the public health impacts of traffic congestion: a health risk assessment," Environmental Health, vol. 9, no. 1, p. 65, 2010.
[8] E. I. Vlahogianni, J. C. Golias, and M. G. Karlaftis, "Short-term traffic forecasting: Overview of objectives and methods," Transport Reviews, vol. 24, no. 5, pp. 533-557, 2004.
[9] E. I. Vlahogianni, M. G. Karlaftis, and J. C. Golias, "Short-term traffic forecasting: Where we are and where we're going," Transportation Research Part C: Emerging Technologies, vol. 43, pp. 3-19, 2014.
[10] I. Laña, J. Del Ser, M. Velez, and E. I. Vlahogianni, "Road traffic forecasting: Recent advances and new challenges," IEEE Intelligent Transportation Systems Magazine, vol. 10, no. 2, pp. 93-109, 2018.
[11] Z. Zhao, W. Chen, X. Wu, P. C. Chen, and J. Liu, "LSTM network: a deep learning approach for short-term traffic forecast," IET Intelligent Transport Systems, vol. 11, no. 2, pp. 68-75, 2017.
[12] N. G. Polson and V. O. Sokolov, "Deep learning for short-term traffic flow prediction," Transportation Research Part C: Emerging Technologies, vol. 79, pp. 1-17, 2017.
[13] A. M. Saxe, J. L. McClelland, and S. Ganguli, "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks," in International Conference on Learning Representations, 2013.
[14] A. S. Nair, J.-C. Liu, L. Rilett, and S. Gupta, "Non-linear analysis of traffic flow," in Intelligent Transportation Systems. IEEE, 2001, pp. 681-685.
[15] D. Gunning, "Explainable artificial intelligence (XAI)," Defense Advanced Research Projects Agency (DARPA), nd Web, vol. 2, no. 2, 2017.
[16] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins et al., "Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI," Information Fusion, vol. 58, pp. 82-115, 2020.
[17] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[18] D. W. Otter, J. R. Medina, and J. K. Kalita, "A survey of the usages of deep learning for natural language processing," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-21, 2020.
[19] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, "Deep learning for generic object detection: A survey," International Journal of Computer Vision, vol. 128, no. 2, pp. 261-318, 2020.
[20] M. S. Ahmed and A. R. Cook, Analysis of freeway traffic time-series data by using Box-Jenkins techniques. Transportation Research Board, 1979, no. 722.
[21] M. Levin and Y.-D. Tsao, "On forecasting freeway occupancies and volumes (abridgment)," Transportation Research Record, no. 773, 1980.
[22] H. Kawashima, "Long term prediction of traffic flow," IFAC Proceedings Volumes, vol. 20, no. 3, pp. 75-82, 1987.
[23] M. G. Karlaftis and E. I. Vlahogianni, "Statistical methods versus neural networks in transportation research: differences, similarities and some insights," Transportation Research Part C: Emerging Technologies, vol. 19, no. 3, pp. 387-399, 2011.
[24] E. I. Vlahogianni, M. G. Karlaftis, and J. C. Golias, "Statistical methods for detecting non-linearity and non-stationarity in univariate short-term time-series of traffic volume," Transportation Research Part C: Emerging Technologies, vol. 14, no. 5, pp. 351-367, 2006.
[25] E. Vlahogianni and M. Karlaftis, "Temporal aggregation in traffic data: implications for statistical characteristics and model choice," Transportation Letters, vol. 3, no. 1, pp. 37-49, 2011.
[26] Y. Kamarianakis, H. O. Gao, and P. Prastacos, "Characterizing regimes in daily cycles of urban traffic using smooth-transition regressions," Transportation Research Part C: Emerging Technologies, vol. 18, no. 5, pp. 821-840, 2010.
[27] J. Tang, Y. Wang, H. Wang, S. Zhang, and F. Liu, "Dynamic analysis of traffic time series at different temporal scales: A complex networks approach," Physica A: Statistical Mechanics and its Applications, vol. 405, pp. 303-315, 2014.
[28] T. Cheng, J. Haworth, and J. Wang, "Spatio-temporal autocorrelation of road network data," Journal of Geographical Systems, vol. 14, no. 4, pp. 389-413, 2012.
[29] Y. Kamarianakis, W. Shen, and L. Wynter, "Real-time road traffic forecasting using regime-switching space-time models and adaptive LASSO," Applied Stochastic Models in Business and Industry, vol. 28, no. 4, pp. 297-315, 2012.
[30] S. Sun, R. Huang, and Y. Gao, "Network-scale traffic modeling and forecasting with graphical lasso and neural networks," Journal of Transportation Engineering, vol. 138, no. 11, pp. 1358-1367, 2012.
[31] X. Ma, H. Yu, Y. Wang, and Y. Wang, "Large-scale transportation network congestion evolution prediction using deep learning theory," PLoS ONE, vol. 10, no. 3, 2015.
[32] J. Zhang, Y. Zheng, D. Qi, R. Li, and X. Yi, "DNN-based prediction model for spatio-temporal data," in International Conference on Advances in Geographic Information Systems, 2016, pp. 1-4.
[33] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, "A survey on concept drift adaptation," ACM Computing Surveys, vol. 46, no. 4, pp. 1-37, 2014.
[34] A. Ermagun and D. Levinson, "Spatiotemporal traffic forecasting: review and proposed directions," Transport Reviews, vol. 38, no. 6, pp. 786-814, 2018.
[35] J.-P. Rodrigue, The geography of transport systems. Taylor & Francis, 2016.
[36] J. S. Angarita-Zapata, A. D. Masegosa, and I. Triguero, "A taxonomy of traffic forecasting regression problems from a supervised learning perspective," IEEE Access, vol. 7, pp. 68185-68205, 2019.
[37] L. N. Do, N. Taherifar, and H. L. Vu, "Survey of neural network-based models for short-term traffic state prediction," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 9, no. 1, p. e1285, 2019.
[38] H. Nguyen, L.-M. Kieu, T. Wen, and C. Cai, "Deep Learning methods in transportation domain: a review," IET Intelligent Transport Systems, vol. 12, no. 9.
Cai, "Deep Learning methods in transportation domain: a review," IET Intelligent Transport Systems, vol. 12, no. 9, pp. 998-1004, 2018. Enhancing transportation systems via Deep Learning: A survey. Y Wang, D Zhang, Y Liu, B Dai, L H Lee, Transportation Research Part C: Emerging Technologies. 99Y. Wang, D. Zhang, Y. Liu, B. Dai, and L. H. Lee, "Enhancing transportation systems via Deep Learning: A survey," Transportation Research Part C: Emerging Technologies, vol. 99, pp. 144-163, 2019. Deep Learning for spatio-temporal data mining: A survey. S Wang, J Cao, P Yu, IEEE Transactions on Knowledge and Data Engineering. S. Wang, J. Cao, and P. Yu, "Deep Learning for spatio-temporal data mining: A survey," IEEE Transactions on Knowledge and Data Engineering, 2020. A comprehensive survey on traffic prediction. X Yin, G Wu, J Wei, Y Shen, H Qi, B Yin, arXiv:2004.08555arXiv preprintX. Yin, G. Wu, J. Wei, Y. Shen, H. Qi, and B. Yin, "A comprehensive survey on traffic prediction," arXiv preprint arXiv:2004.08555, 2020. A survey on modern deep neural network for traffic prediction: Trends, methods and challenges. D A Tedjopurnomo, Z Bao, B Zheng, F Choudhury, A Qin, IEEE Transactions on Knowledge and Data Engineering. D. A. Tedjopurnomo, Z. Bao, B. Zheng, F. Choudhury, and A. Qin, "A survey on modern deep neural network for traffic prediction: Trends, methods and challenges," IEEE Transactions on Knowledge and Data Engineering, 2020. Machine learning and Deep Learning models for traffic flow prediction: A survey. A Gobezie, M S Fufa, Research Square preprintA. Gobezie and M. S. Fufa, "Machine learning and Deep Learning models for traffic flow prediction: A survey," Research Square preprint, 2020. Short-term traffic prediction with deep neural networks: A survey. K Lee, M Eo, E Jung, Y Yoon, W Rhee, arXiv:2009.00712arXiv preprintK. Lee, M. Eo, E. Jung, Y. Yoon, and W. 
Rhee, "Short-term traf- fic prediction with deep neural networks: A survey," arXiv preprint arXiv:2009.00712, 2020. VDS data-based deep learning approach for traffic forecasting using LSTM network. H Yi, K.-H N Bui, AAAI Conference on Artificial Intelligence. SpringerH. Yi and K.-H. N. Bui, "VDS data-based deep learning approach for traffic forecasting using LSTM network," in AAAI Conference on Artificial Intelligence. Springer, 2019, pp. 547-558. Short-term urban traffic forecasting using deep learning. G Albertengo, W Hassan, Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences. 4G. Albertengo and W. Hassan, "Short-term urban traffic forecasting using deep learning," Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, vol. 4, 2018. Impact of data loss for prediction of traffic flow on an urban road using neural networks. T Pamuła, IEEE Transactions on Intelligent Transportation Systems. 203T. Pamuła, "Impact of data loss for prediction of traffic flow on an urban road using neural networks," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 1000-1009, 2018. Big data analytics-based urban traffic prediction using deep learning in ITS. K.-H N Bui, H Yi, H Jung, J Seo, International Conference on Artificial Intelligence. The Steering Committee of The World Congress in Computer Science. K.-H. N. Bui, H. Yi, H. Jung, and J. Seo, "Big data analytics-based urban traffic prediction using deep learning in ITS," in International Conference on Artificial Intelligence. The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing, 2019, pp. 270-273. Deep-PRESIMM: Integrating deep learning with microsimulation for traffic prediction. A E Essien, I Petrounias, P Sampaio, S Sampaio, IEEE International Conference on Systems, Man and Cybernetics. IEEEA. E. Essien, I. Petrounias, P. Sampaio, and S. 
Sampaio, "Deep- PRESIMM: Integrating deep learning with microsimulation for traffic prediction," in IEEE International Conference on Systems, Man and Cybernetics. IEEE, 2019, pp. 4257-4262. A novel passenger flow prediction model using deep learning methods. L Liu, R.-C Chen, Transportation Research Part C: Emerging Technologies. 84L. Liu and R.-C. Chen, "A novel passenger flow prediction model using deep learning methods," Transportation Research Part C: Emerging Technologies, vol. 84, pp. 74-91, 2017. Periodic-CRN: A convolutional recurrent model for crowd density prediction with recurring periodic patterns. A Zonoozi, J Kim, X.-L Li, G Cong, in IJCAI. A. Zonoozi, J.-j. Kim, X.-L. Li, and G. Cong, "Periodic-CRN: A con- volutional recurrent model for crowd density prediction with recurring periodic patterns." in IJCAI, 2018, pp. 3732-3738. Predicting indoor crowd density using column-structured deep neural network. A Sudo, T.-H Teng, H C Lau, Y Sekimoto, Workshop on Prediction of Human Mobility. A. Sudo, T.-H. Teng, H. C. Lau, and Y. Sekimoto, "Predicting indoor crowd density using column-structured deep neural network," in Work- shop on Prediction of Human Mobility, 2017, pp. 1-7. A graph deep learning method for short-term traffic forecasting on large road networks. Y Zhang, T Cheng, Y Ren, Computer-Aided Civil and Infrastructure Engineering. 3410Y. Zhang, T. Cheng, and Y. Ren, "A graph deep learning method for short-term traffic forecasting on large road networks," Computer-Aided Civil and Infrastructure Engineering, vol. 34, no. 10, pp. 877-896, 2019. Kernel-weighted graph convolutional network: A deep learning approach for traffic forecasting. Q Zhang, Q Jin, J Chang, S Xiang, C Pan, International Conference on Pattern Recognition. IEEEQ. Zhang, Q. Jin, J. Chang, S. Xiang, and C. Pan, "Kernel-weighted graph convolutional network: A deep learning approach for traffic fore- casting," in International Conference on Pattern Recognition. IEEE, 2018, pp. 1018-1023. 
Predicting citywide road traffic flow using deep spatio-temporal neural networks. T Jia, P Yan, IEEE Transactions on Intelligent Transportation Systems. T. Jia and P. Yan, "Predicting citywide road traffic flow using deep spatio-temporal neural networks," IEEE Transactions on Intelligent Transportation Systems, 2020. Deep learning architecture for short-term passenger flow forecasting in urban rail transit. J Zhang, F Chen, Z Cui, Y Guo, Y Zhu, IEEE Transactions on Intelligent Transportation Systems. J. Zhang, F. Chen, Z. Cui, Y. Guo, and Y. Zhu, "Deep learning architecture for short-term passenger flow forecasting in urban rail transit," IEEE Transactions on Intelligent Transportation Systems, 2020. City-wide traffic flow forecasting using a deep convolutional neural network. S Sun, H Wu, L Xiang, Sensors. 202421S. Sun, H. Wu, and L. Xiang, "City-wide traffic flow forecasting using a deep convolutional neural network," Sensors, vol. 20, no. 2, p. 421, 2020. Diffusion convolutional recurrent neural network with rank influence learning for traffic forecasting. Y Huang, Y Weng, S Yu, X Chen, International Conference On Trust, Security And Privacy In Computing And Communications. IEEEY. Huang, Y. Weng, S. Yu, and X. Chen, "Diffusion convolutional recurrent neural network with rank influence learning for traffic fore- casting," in International Conference On Trust, Security And Privacy In Computing And Communications. IEEE, 2019, pp. 678-685. Traffic jam probability estimation based on blockchain and deep neural networks. V Hassija, V Gupta, S Garg, V Chamola, IEEE Transactions on Intelligent Transportation Systems. V. Hassija, V. Gupta, S. Garg, and V. Chamola, "Traffic jam probability estimation based on blockchain and deep neural networks," IEEE Transactions on Intelligent Transportation Systems, 2020. A hybrid integrated deep learning model for the prediction of citywide spatio-temporal flow volumes. 
Y Ren, H Chen, Y Han, T Cheng, Y Zhang, G Chen, International Journal of Geographical Information Science. 344Y. Ren, H. Chen, Y. Han, T. Cheng, Y. Zhang, and G. Chen, "A hybrid integrated deep learning model for the prediction of citywide spatio-temporal flow volumes," International Journal of Geographical Information Science, vol. 34, no. 4, pp. 802-823, 2020. Revisiting spatialtemporal similarity: A deep learning framework for traffic prediction. H Yao, X Tang, H Wei, G Zheng, Z Li, AAAI Conference on Artificial Intelligence. 33H. Yao, X. Tang, H. Wei, G. Zheng, and Z. Li, "Revisiting spatial- temporal similarity: A deep learning framework for traffic prediction," in AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 5668- 5675. Deep spatio-temporal residual networks for citywide crowd flows prediction. J Zhang, Y Zheng, D Qi, AAAI Conference on Artificial Intelligence. J. Zhang, Y. Zheng, and D. Qi, "Deep spatio-temporal residual net- works for citywide crowd flows prediction," in AAAI Conference on Artificial Intelligence, 2017, pp. 1655-1661. Flow prediction in spatiotemporal networks based on multitask deep learning. J Zhang, Y Zheng, J Sun, D Qi, IEEE Transactions on Knowledge and Data Engineering. 323J. Zhang, Y. Zheng, J. Sun, and D. Qi, "Flow prediction in spatio- temporal networks based on multitask deep learning," IEEE Transac- tions on Knowledge and Data Engineering, vol. 32, no. 3, pp. 468-478, 2019. Cross-city transfer learning for deep spatio-temporal prediction. L Wang, X Geng, X Ma, F Liu, Q Yang, International Joint Conference on Artificial Intelligence. AAAI PressL. Wang, X. Geng, X. Ma, F. Liu, and Q. Yang, "Cross-city transfer learning for deep spatio-temporal prediction," in International Joint Conference on Artificial Intelligence. AAAI Press, 2019, pp. 1893- 1899. Deep spatial-temporal 3d convolutional neural networks for traffic data forecasting. 
S Guo, Y Lin, S Li, Z Chen, H Wan, IEEE Transactions on Intelligent Transportation Systems. 2010S. Guo, Y. Lin, S. Li, Z. Chen, and H. Wan, "Deep spatial-temporal 3d convolutional neural networks for traffic data forecasting," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 10, pp. 3913-3926, 2019. Prediction of city-scale dynamic taxi origin-destination flows using a hybrid deep neural network combined with travel time. Z Duan, K Zhang, Z Chen, Z Liu, L Tang, Y Yang, Y Ni, IEEE Access. 7Z. Duan, K. Zhang, Z. Chen, Z. Liu, L. Tang, Y. Yang, and Y. Ni, "Prediction of city-scale dynamic taxi origin-destination flows using a hybrid deep neural network combined with travel time," IEEE Access, vol. 7, pp. 127 816-127 832, 2019. Densely connected convolutional networks with attention LSTM for crowd flows prediction. W Li, W Tao, J Qiu, X Liu, X Zhou, Z Pan, IEEE Access. 7W. Li, W. Tao, J. Qiu, X. Liu, X. Zhou, and Z. Pan, "Densely connected convolutional networks with attention LSTM for crowd flows prediction," IEEE Access, vol. 7, pp. 140 488-140 498, 2019. ASTIR: Spatio-temporal data mining for crowd flow prediction. L Mourad, H Qi, Y Shen, B Yin, IEEE Access. 7L. Mourad, H. Qi, Y. Shen, and B. Yin, "ASTIR: Spatio-temporal data mining for crowd flow prediction," IEEE Access, vol. 7, pp. 175 159- 175 165, 2019. ST-Attn: Spatialtemporal attention mechanism for multi-step citywide crowd flow prediction. Y Zhou, H Chen, J Li, Y Wu, J Wu, L Chen, International Conference on Data Mining Workshops. IEEEY. Zhou, H. Chen, J. Li, Y. Wu, J. Wu, and L. Chen, "ST-Attn: Spatial- temporal attention mechanism for multi-step citywide crowd flow prediction," in International Conference on Data Mining Workshops. IEEE, 2019, pp. 609-614. Exploiting spatio-temporal correlations with multiple 3d convolutional neural networks for citywide vehicle flow prediction. 
C Chen, K Li, S G Teo, G Chen, X Zou, X Yang, R C Vijay, J Feng, Z Zeng, International Conference on Data Mining. IEEEC. Chen, K. Li, S. G. Teo, G. Chen, X. Zou, X. Yang, R. C. Vijay, J. Feng, and Z. Zeng, "Exploiting spatio-temporal correlations with multiple 3d convolutional neural networks for citywide vehicle flow prediction," in International Conference on Data Mining. IEEE, 2018, pp. 893-898. Explore uncertainty in residual networks for crowds flow prediction. B Wang, Z Yan, J Lu, G Zhang, T Li, International Joint Conference on Neural Networks. IEEEB. Wang, Z. Yan, J. Lu, G. Zhang, and T. Li, "Explore uncertainty in residual networks for crowds flow prediction," in International Joint Conference on Neural Networks. IEEE, 2018, pp. 1-7. Improved deep hybrid networks for urban traffic flow prediction using trajectory data. Z Duan, Y Yang, K Zhang, Y Ni, S Bajgain, IEEE Access. 6Z. Duan, Y. Yang, K. Zhang, Y. Ni, and S. Bajgain, "Improved deep hybrid networks for urban traffic flow prediction using trajectory data," IEEE Access, vol. 6, pp. 31 820-31 827, 2018. Optimized graph convolution recurrent neural network for traffic prediction. K Guo, Y Hu, Z Qian, H Liu, K Zhang, Y Sun, J Gao, B Yin, IEEE Transactions on Intelligent Transportation Systems. K. Guo, Y. Hu, Z. Qian, H. Liu, K. Zhang, Y. Sun, J. Gao, and B. Yin, "Optimized graph convolution recurrent neural network for traffic prediction," IEEE Transactions on Intelligent Transportation Systems, pp. 1-12, 2020. Mode decomposition based deep learning model for multi-section traffic prediction. K Pholsena, L Pan, Z Zheng, World Wide WebK. Pholsena, L. Pan, and Z. Zheng, "Mode decomposition based deep learning model for multi-section traffic prediction," World Wide Web, pp. 1-15, 2020. Deep-Trend 2.0: A light-weighted multi-scale traffic prediction model using detrending. X Dai, R Fu, E Zhao, Z Zhang, Y Lin, F.-Y. Wang, L Li, Transportation Research Part C: Emerging Technologies. 103X. Dai, R. Fu, E. 
Zhao, Z. Zhang, Y. Lin, F.-Y. Wang, and L. Li, "Deep- Trend 2.0: A light-weighted multi-scale traffic prediction model using detrending," Transportation Research Part C: Emerging Technologies, vol. 103, pp. 142-157, 2019. Graphpartitioning-based diffusion convolutional recurrent neural network for large-scale traffic forecasting. T Mallick, P Balaprakash, E Rask, J Macfarlane, Transportation Research Record. 26749T. Mallick, P. Balaprakash, E. Rask, and J. Macfarlane, "Graph- partitioning-based diffusion convolutional recurrent neural network for large-scale traffic forecasting," Transportation Research Record, vol. 2674, no. 9, pp. 473-488, 2020. Attention based spatialtemporal graph convolutional networks for traffic flow forecasting. S Guo, Y Lin, N Feng, C Song, H Wan, AAAI Conference on Artificial Intelligence. 33S. Guo, Y. Lin, N. Feng, C. Song, and H. Wan, "Attention based spatial- temporal graph convolutional networks for traffic flow forecasting," in AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 922-929. TrafficWave: Generative deep learning architecture for vehicular traffic flow prediction. D Impedovo, V Dentamaro, G Pirlo, L Sarcinella, Applied Sciences. 9245504D. Impedovo, V. Dentamaro, G. Pirlo, and L. Sarcinella, "TrafficWave: Generative deep learning architecture for vehicular traffic flow predic- tion," Applied Sciences, vol. 9, no. 24, p. 5504, 2019. A novel residual graph convolution deep learning model for short-term network-based traffic forecasting. Y Zhang, T Cheng, Y Ren, K Xie, International Journal of Geographical Information Science. 345Y. Zhang, T. Cheng, Y. Ren, and K. Xie, "A novel residual graph convolution deep learning model for short-term network-based traffic forecasting," International Journal of Geographical Information Sci- ence, vol. 34, no. 5, pp. 969-995, 2020. A hybrid deep learning based traffic flow prediction method and its understanding. 
Y Wu, H Tan, L Qin, B Ran, Z Jiang, Transportation Research Part C: Emerging Technologies. 90Y. Wu, H. Tan, L. Qin, B. Ran, and Z. Jiang, "A hybrid deep learning based traffic flow prediction method and its understanding," Transportation Research Part C: Emerging Technologies, vol. 90, pp. 166-180, 2018. A regularized LSTM network for short-term traffic flow prediction. Z Wang, R Zhu, M Zheng, X Jia, R Wang, T Li, International Conference on Information Science and Control Engineering. IEEEZ. Wang, R. Zhu, M. Zheng, X. Jia, R. Wang, and T. Li, "A regularized LSTM network for short-term traffic flow prediction," in International Conference on Information Science and Control Engineering. IEEE, 2019, pp. 100-105. Multirange attentive bicomponent graph convolutional network for traffic forecasting. W Chen, L Chen, Y Xie, W Cao, Y Gao, X Feng, arXiv:1911.12093arXiv preprintW. Chen, L. Chen, Y. Xie, W. Cao, Y. Gao, and X. Feng, "Multi- range attentive bicomponent graph convolutional network for traffic forecasting," arXiv preprint arXiv:1911.12093, 2019. Traffic flow prediction with big data: a deep learning approach. Y Lv, Y Duan, W Kang, Z Li, F.-Y. Wang, IEEE Transactions on Intelligent Transportation Systems. 162Y. Lv, Y. Duan, W. Kang, Z. Li, and F.-Y. Wang, "Traffic flow prediction with big data: a deep learning approach," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 2, pp. 865-873, 2014. DeepTrend: A deep hierarchical neural network for traffic flow prediction. X Dai, R Fu, Y Lin, L Li, F.-Y. Wang, arXiv:1707.03213arXiv preprintX. Dai, R. Fu, Y. Lin, L. Li, and F.-Y. Wang, "DeepTrend: A deep hierarchical neural network for traffic flow prediction," arXiv preprint arXiv:1707.03213, 2017. A spatio-temporal decomposition based deep neural network for time series forecasting. R Asadi, A C Regan, Applied Soft Computing. 87105963R. Asadi and A. C. 
Regan, "A spatio-temporal decomposition based deep neural network for time series forecasting," Applied Soft Comput- ing, vol. 87, p. 105963, 2020. A convolutional recurrent autoencoder for spatio-temporal missing data imputation. R Asadi, A Regan, AAAI Conference on Artificial Intelligence. R. Asadi and A. Regan, "A convolutional recurrent autoencoder for spatio-temporal missing data imputation," in AAAI Conference on Artificial Intelligence, 2019, pp. 206-212. Graph attention LSTM network: A new model for traffic flow forecasting. T Wu, F Chen, Y Wan, International Conference on Information Science and Control Engineering. IEEET. Wu, F. Chen, and Y. Wan, "Graph attention LSTM network: A new model for traffic flow forecasting," in International Conference on Information Science and Control Engineering. IEEE, 2018, pp. 241-245. Using LSTM and GRU neural network methods for traffic flow prediction. R Fu, Z Zhang, L Li, Youth Academic Annual Conference of Chinese Association of Automation. IEEER. Fu, Z. Zhang, and L. Li, "Using LSTM and GRU neural network methods for traffic flow prediction," in Youth Academic Annual Confer- ence of Chinese Association of Automation. IEEE, 2016, pp. 324-328. Traffic flow forecasting based on hybrid deep learning framework. S Du, T Li, X Gong, Y Yang, S J Horng, International Conference on Intelligent Systems and Knowledge Engineering. IEEES. Du, T. Li, X. Gong, Y. Yang, and S. J. Horng, "Traffic flow forecasting based on hybrid deep learning framework," in International Conference on Intelligent Systems and Knowledge Engineering. IEEE, 2017, pp. 1-6. Short-term traffic flow prediction with LSTM recurrent neural network. D Kang, Y Lv, Y.-Y Chen, International Conference on Intelligent Transportation Systems. IEEED. Kang, Y. Lv, and Y.-y. Chen, "Short-term traffic flow prediction with LSTM recurrent neural network," in International Conference on Intelligent Transportation Systems. IEEE, 2017, pp. 1-6. 
Short-term traffic flow prediction with Conv-LSTM. Y Liu, H Zheng, X Feng, Z Chen, International Conference on Wireless Communications and Signal Processing. IEEEY. Liu, H. Zheng, X. Feng, and Z. Chen, "Short-term traffic flow prediction with Conv-LSTM," in International Conference on Wireless Communications and Signal Processing. IEEE, 2017, pp. 1-6. An efficient realization of deep learning for traffic data imputation. Y Duan, Y Lv, Y.-L Liu, F.-Y. Wang, Transportation research part C: emerging technologies. 72Y. Duan, Y. Lv, Y.-L. Liu, and F.-Y. Wang, "An efficient realization of deep learning for traffic data imputation," Transportation research part C: emerging technologies, vol. 72, pp. 168-181, 2016. Short-term traffic flow forecasting with spatialtemporal correlation in a hybrid deep learning framework. Y Wu, H Tan, arXiv:1612.01022arXiv preprintY. Wu and H. Tan, "Short-term traffic flow forecasting with spatial- temporal correlation in a hybrid deep learning framework," arXiv preprint arXiv:1612.01022, 2016. A noiseimmune LSTM network for short-term traffic flow forecasting. L Cai, M Lei, S Zhang, Y Yu, T Zhou, J Qin, Chaos: An Interdisciplinary Journal of Nonlinear Science. 30223135L. Cai, M. Lei, S. Zhang, Y. Yu, T. Zhou, and J. Qin, "A noise- immune LSTM network for short-term traffic flow forecasting," Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 30, no. 2, p. 023135, 2020. Shortterm traffic prediction using long short-term memory neural networks. Z Abbas, A Al-Shishtawy, S Girdzijauskas, V Vlassov, International Congress on Big Data. IEEEZ. Abbas, A. Al-Shishtawy, S. Girdzijauskas, and V. Vlassov, "Short- term traffic prediction using long short-term memory neural networks," in International Congress on Big Data. IEEE, 2018, pp. 57-65. A parallel-res GRU architecture and its application to road network traffic flow forecasting. B Zhao, X Zhang, International Conference on Big Data Technologies. B. Zhao and X. 
Zhang, "A parallel-res GRU architecture and its application to road network traffic flow forecasting," in International Conference on Big Data Technologies, 2018, pp. 79-83. Traffic flow prediction with rainfall impact using a deep learning method. Y Jia, J Wu, M Xu, Journal of advanced transportation. 2017Y. Jia, J. Wu, and M. Xu, "Traffic flow prediction with rainfall impact using a deep learning method," Journal of advanced transportation, vol. 2017, 2017. Predicting short-term traffic flow by long shortterm memory recurrent neural network. Y Tian, L Pan, International Conference on Smart City. IEEEY. Tian and L. Pan, "Predicting short-term traffic flow by long short- term memory recurrent neural network," in International Conference on Smart City. IEEE, 2015, pp. 153-158. MSAE: A multitask learning approach for traffic flow prediction using deep neural network. D Yang, H.-M Yang, P Wang, S.-J Li, Advances in Intelligent Information Hiding and Multimedia Signal Processing. SpringerD. Yang, H.-M. Yang, P. Wang, and S.-J. Li, "MSAE: A multitask learning approach for traffic flow prediction using deep neural net- work," in Advances in Intelligent Information Hiding and Multimedia Signal Processing. Springer, 2020, pp. 153-161. MF-CNN: Traffic flow prediction using convolutional neural network and multifeatures fusion. D Yang, S Li, Z Peng, P Wang, J Wang, H Yang, IEICE Transactions on Information and Systems. 1028D. Yang, S. Li, Z. Peng, P. Wang, J. Wang, and H. Yang, "MF-CNN: Traffic flow prediction using convolutional neural network and multi- features fusion," IEICE Transactions on Information and Systems, vol. 102, no. 8, pp. 1526-1536, 2019. Deep temporal convolutional networks for short-term traffic flow forecasting. W Zhao, Y Gao, T Ji, X Wan, F Ye, G Bai, IEEE Access. 7W. Zhao, Y. Gao, T. Ji, X. Wan, F. Ye, and G. Bai, "Deep temporal convolutional networks for short-term traffic flow forecasting," IEEE Access, vol. 7, pp. 114 496-114 507, 2019. 
An LSTM based encoder-decoder model for multistep traffic flow prediction. S Du, T Li, Y Yang, X Gong, S.-J Horng, International Joint Conference on Neural Networks. IEEES. Du, T. Li, Y. Yang, X. Gong, and S.-J. Horng, "An LSTM based encoder-decoder model for multistep traffic flow prediction," in International Joint Conference on Neural Networks. IEEE, 2019, pp. 1-8. Transfer learning and online learning for traffic forecasting under different data availability conditions: Alternatives and pitfalls. E L Manibardo, I Laña, J Del Ser, IEEE Intelligent Transportation Systems Conference. IEEEE. L. Manibardo, I. Laña, and J. Del Ser, "Transfer learning and online learning for traffic forecasting under different data availability con- ditions: Alternatives and pitfalls," in IEEE Intelligent Transportation Systems Conference. IEEE, 2020. A multitask learning model for traffic flow and speed forecasting. K Zhang, L Wu, Z Zhu, J Deng, IEEE Access. 8K. Zhang, L. Wu, Z. Zhu, and J. Deng, "A multitask learning model for traffic flow and speed forecasting," IEEE Access, vol. 8, pp. 80 707- 80 715, 2020. A hybrid method for traffic flow forecasting using multimodal deep learning. S Du, T Li, X Gong, S.-J Horng, International Journal of Computational Intelligence Systems. 131S. Du, T. Li, X. Gong, and S.-J. Horng, "A hybrid method for traffic flow forecasting using multimodal deep learning," International Journal of Computational Intelligence Systems, vol. 13, no. 1, pp. 85- 97, 2020. Traffic flow prediction model based on deep belief network and genetic algorithm. Y Zhang, G Huang, Intelligent Transport Systems. 126Y. Zhang and G. Huang, "Traffic flow prediction model based on deep belief network and genetic algorithm," Intelligent Transport Systems, vol. 12, no. 6, pp. 533-541, 2018. Stretch-wide traffic state prediction using discriminatively pre-trained deep neural networks. M Elhenawy, H Rakha, International Conference on Intelligent Transportation Systems. IEEEM. 
Empirical study of traffic features at a freeway lane drop. R L Bertini, M T Leal, Journal of Transportation Engineering. 1316R. L. Bertini and M. T. Leal, "Empirical study of traffic features at a freeway lane drop," Journal of Transportation Engineering, vol. 131, no. 6, pp. 397-407, 2005. A survey on machine learning for data fusion. T Meng, X Jing, Z Yan, W Pedrycz, Information Fusion. 57T. Meng, X. Jing, Z. Yan, and W. Pedrycz, "A survey on machine learning for data fusion," Information Fusion, vol. 57, pp. 115-129, 2020. Properties of a well-defined macroscopic fundamental diagram for urban traffic. N Geroliminis, J Sun, Transportation Research Part B: Methodological. 453N. Geroliminis and J. Sun, "Properties of a well-defined macroscopic fundamental diagram for urban traffic," Transportation Research Part B: Methodological, vol. 45, no. 3, pp. 605-617, 2011. Effects of controlling aggressive driving behavior on network-wide traffic flow and emissions. F K Adamidis, E G Mantouka, E I Vlahogianni, International Journal of Transportation Science and Technology. F. K. Adamidis, E. G. Mantouka, and E. I. Vlahogianni, "Effects of controlling aggressive driving behavior on network-wide traffic flow and emissions," International Journal of Transportation Science and Technology, 2020. Madrid Open Data Portal. "Madrid Open Data Portal," http://datos.madrid.es, accessed: 2020-11- 06. NYC Real Time Traffic Speed Data Feed. "NYC Real Time Traffic Speed Data Feed," https://www.kaggle.com/ crailtap/nyc-real-time-traffic-speed-data-feed, accessed: 2020-11-06. Seattle Inductive Loop Detector Dataset. "Seattle Inductive Loop Detector Dataset," https://github.com/ zhiyongc/Seattle-Loop-Data, accessed: 2020-11-06. Hyperopt: A Python library for optimizing the hyperparameters of machine learning algorithms. J Bergstra, D Yamins, D D Cox, Python in Science Conference. Citeseer. J. Bergstra, D. Yamins, and D. D. 
Cox, "Hyperopt: A Python library for optimizing the hyperparameters of machine learning algorithms," in Python in Science Conference. Citeseer, 2013, pp. 13-20. A tutorial on support vector regression. A J Smola, B Schölkopf, Statistics and computing. 143A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and computing, vol. 14, no. 3, pp. 199-222, 2004. Statistical comparisons of classifiers over multiple data sets. J Demšar, Journal of Machine learning research. 7J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine learning research, vol. 7, no. Jan, pp. 1-30, 2006. Time for a change: a tutorial for comparing multiple classifiers through bayesian analysis. A Benavoli, G Corani, J Demšar, M Zaffalon, The Journal of Machine Learning Research. 181A. Benavoli, G. Corani, J. Demšar, and M. Zaffalon, "Time for a change: a tutorial for comparing multiple classifiers through bayesian analysis," The Journal of Machine Learning Research, vol. 18, no. 1, pp. 2653-2688, 2017. From data to actions in intelligent transportation systems: a prescription of functional requirements for model actionability. I Laña, J J Sanchez-Medina, E I Vlahogianni, J Del Ser, arXiv:2002.02210arXiv preprintI. Laña, J. J. Sanchez-Medina, E. I. Vlahogianni, and J. Del Ser, "From data to actions in intelligent transportation systems: a prescription of functional requirements for model actionability," arXiv preprint arXiv:2002.02210, 2020. On improving operational planning and control in public transportation networks using streaming data: A machine learning approach. L Moreira-Matias, J Mendes-Moreira, J Gama, M Ferreira, 41L. Moreira-Matias, J. Mendes-Moreira, J. Gama, and M. Ferreira, "On improving operational planning and control in public transporta- tion networks using streaming data: A machine learning approach," ECML/PKDD 2014, p. 41, 2014. Traffic in Towns: A study of the long term problems of traffic in urban areas. 
C Buchanan, Routledge. C. Buchanan, Traffic in Towns: A study of the long term problems of traffic in urban areas. Routledge, 2015. Crowd sensing of traffic anomalies based on human mobility and social media. B Pan, Y Zheng, D Wilkie, C Shahabi, ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. B. Pan, Y. Zheng, D. Wilkie, and C. Shahabi, "Crowd sensing of traffic anomalies based on human mobility and social media," in ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2013, pp. 344-353. Adaptive long-term traffic state estimation with evolving spiking neural networks. I Laña, J L Lobo, E Capecci, J Del Ser, N Kasabov, Transportation Research Part C: Emerging Technologies. 101I. Laña, J. L. Lobo, E. Capecci, J. Del Ser, and N. Kasabov, "Adaptive long-term traffic state estimation with evolving spiking neural net- works," Transportation Research Part C: Emerging Technologies, vol. 101, pp. 126-144, 2019. Improving adaptation and interpretability of a short-term traffic forecasting system. R Mena Yedra, J Casas, T Vilaró, R Djukic, Gavaldà Mestre, Australasian Transport Research Forum 2017 Proceedings. Auckland, New ZealandR. Mena Yedra, J. Casas Vilaró, T. Djukic, and R. Gavaldà Mestre, "Improving adaptation and interpretability of a short-term traffic forecasting system," in Australasian Transport Research Forum 2017 Proceedings: 27-29 November 2017, Auckland, New Zealand, 2017, pp. 1-15. A online boosting approach for traffic flow forecasting under abnormal conditions. T Wu, K Xie, D Xinpin, G Song, International Conference on Fuzzy Systems and Knowledge Discovery. IEEET. Wu, K. Xie, D. Xinpin, and G. Song, "A online boosting approach for traffic flow forecasting under abnormal conditions," in International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, 2012, pp. 2555-2559. 
Learning terrain segmentation with classifier ensembles for autonomous robot navigation in unstructured environments. M J Procopio, J Mulligan, G Grudic, Journal of Field Robotics. 262M. J. Procopio, J. Mulligan, and G. Grudic, "Learning terrain segmen- tation with classifier ensembles for autonomous robot navigation in unstructured environments," Journal of Field Robotics, vol. 26, no. 2, pp. 145-175, 2009. Ex-post evaluations of demand forecast accuracy: A literature review. M S Nicolaisen, P A Driscoll, Transport Reviews. 344M. S. Nicolaisen and P. A. Driscoll, "Ex-post evaluations of demand forecast accuracy: A literature review," Transport Reviews, vol. 34, no. 4, pp. 540-557, 2014. Post-construction evaluation of traffic forecast accuracy. P Parthasarathi, D Levinson, Transport Policy. 176P. Parthasarathi and D. Levinson, "Post-construction evaluation of traffic forecast accuracy," Transport Policy, vol. 17, no. 6, pp. 428- 443, 2010. Sensitivity-based uncertainty analysis of a combined travel demand model. C Yang, A Chen, X Xu, S Wong, Transportation Research Part B: Methodological. 57C. Yang, A. Chen, X. Xu, and S. Wong, "Sensitivity-based uncertainty analysis of a combined travel demand model," Transportation Research Part B: Methodological, vol. 57, pp. 225-244, 2013. Using ensembles of decision trees to predict transport mode choice decisions: Effects on predictive success and uncertainty estimates. S Rasouli, H J Timmermans, European Journal of Transport and Infrastructure Research. 144S. Rasouli and H. J. Timmermans, "Using ensembles of decision trees to predict transport mode choice decisions: Effects on predictive success and uncertainty estimates," European Journal of Transport and Infrastructure Research, vol. 14, no. 4, 2014. Do planners get it right? The accuracy of travel demand forecasting in Norway. M Welde, J Odeck, European Journal of Transport and Infrastructure Research. 111M. Welde and J. Odeck, "Do planners get it right? 
The accuracy of travel demand forecasting in Norway," European Journal of Transport and Infrastructure Research, vol. 11, no. 1, 2011. Traffic forecasts under uncertainty and capacity constraints. A Matas, J.-L Raymond, A Ruiz, Transportation. 391A. Matas, J.-L. Raymond, and A. Ruiz, "Traffic forecasts under uncertainty and capacity constraints," Transportation, vol. 39, no. 1, pp. 1-17, 2012. Comment on "on discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. J.-H Xue, D M Titterington, Neural processing letters. 283169J.-H. Xue and D. M. Titterington, "Comment on "on discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes"," Neural processing letters, vol. 28, no. 3, p. 169, 2008. CORSIM corridor traffic simulation model," in Traffic Congestion and Traffic Safety in the 21st Century: Challenges, Innovations, and Opportunities Urban Transportation Division, ASCE; Highway Division, ASCE; Federal Highway Administration, USDOT; and National Highway Traffic Safety Administration. A Halati, H Lieu, S Walker, A. Halati, H. Lieu, and S. Walker, "CORSIM corridor traffic simulation model," in Traffic Congestion and Traffic Safety in the 21st Century: Challenges, Innovations, and Opportunities Urban Transportation Di- vision, ASCE; Highway Division, ASCE; Federal Highway Adminis- tration, USDOT; and National Highway Traffic Safety Administration, USDOT., 1997. Microscopic traffic flow simulator VISSIM. M Fellendorf, P Vortisch, Fundamentals of traffic simulation. SpringerM. Fellendorf and P. Vortisch, "Microscopic traffic flow simulator VISSIM," in Fundamentals of traffic simulation. Springer, 2010, pp. 63-93. SUMO simulation of urban mobility: an overview. M Behrisch, L Bieker, J Erdmann, D Krajzewicz, International Conference on Advances in System Simulation. ThinkMind. M. Behrisch, L. Bieker, J. Erdmann, and D. 
Krajzewicz, "SUMO simulation of urban mobility: an overview," in International Conference on Advances in System Simulation. ThinkMind, 2011. I Goodfellow, arXiv:1701.00160NIPS 2016 tutorial: Generative adversarial networks. arXiv preprintI. Goodfellow, "NIPS 2016 tutorial: Generative adversarial networks," arXiv preprint arXiv:1701.00160, 2016. Generative adversarial networks for spatiotemporal data: A survey. N Gao, H Xue, W Shao, S Zhao, K K Qin, A Prabowo, M S Rahaman, F D Salim, arXiv:2008.08903arXiv preprintN. Gao, H. Xue, W. Shao, S. Zhao, K. K. Qin, A. Prabowo, M. S. Rahaman, and F. D. Salim, "Generative adversarial networks for spatio- temporal data: A survey," arXiv preprint arXiv:2008.08903, 2020. Multivariate time series imputation with generative adversarial networks. Y Luo, X Cai, Y Zhang, J Xu, X Yuan, Advances in Neural Information Processing Systems. Y. Luo, X. Cai, Y. Zhang, J. Xu, and X. Yuan, "Multivariate time series imputation with generative adversarial networks," in Advances in Neural Information Processing Systems, 2018, pp. 1596-1607. Reservoir computing approaches to recurrent neural network training. M Lukoševičius, H Jaeger, Computer Science Review. 33M. Lukoševičius and H. Jaeger, "Reservoir computing approaches to recurrent neural network training," Computer Science Review, vol. 3, no. 3, pp. 127-149, 2009. Short-term traffic flow prediction based on echo state networks. F Yang, C Wang, X Zuo, R Zhong, F Xiang, Adv. Inf. Sci. Serv. Sci. 49F. Yang, C. Wang, X. Zuo, R. Zhong, and F. Xiang, "Short-term traffic flow prediction based on echo state networks," Adv. Inf. Sci. Serv. Sci., vol. 4, no. 9, pp. 269-277, 2012. Probabilistic regularized extreme learning for robust modeling of traffic flow forecasting. J Lou, Y Jiang, Q Shen, R Wang, Z Li, IEEE Transactions on Neural Networks and Learning Systems. in pressJ. Lou, Y. Jiang, Q. Shen, R. Wang, and Z. 
Li, "Probabilistic regularized extreme learning for robust modeling of traffic flow forecasting," IEEE Transactions on Neural Networks and Learning Systems, in press, 2020. Road traffic forecasting using stacking ensembles of echo state networks. J Ser, I Laña, M N Bilbao, E I Vlahogianni, 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEEJ. Del Ser, I. Laña, M. N. Bilbao, and E. I. Vlahogianni, "Road traffic forecasting using stacking ensembles of echo state networks," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, 2019, pp. 2591-2597. Deep echo state networks for short-term traffic forecasting: Performance comparison and statistical assessment. J Ser, I Laña, E L Manibardo, I Oregi, E Osaba, J L Lobo, M N Bilbao, E I Vlahogianni, IEEE Intelligent Transportation Systems Conference. IEEEJ. Del Ser, I. Laña, E. L. Manibardo, I. Oregi, E. Osaba, J. L. Lobo, M. N. Bilbao, and E. I. Vlahogianni, "Deep echo state networks for short-term traffic forecasting: Performance comparison and statistical assessment," in IEEE Intelligent Transportation Systems Conference. IEEE, 2020. Ensemble learning for short-term traffic prediction based on gradient boosting machine. S Yang, J Wu, Y Du, Y He, X Chen, Journal of Sensors. 2017S. Yang, J. Wu, Y. Du, Y. He, and X. Chen, "Ensemble learning for short-term traffic prediction based on gradient boosting machine," Journal of Sensors, vol. 2017, 2017. Short-term traffic flow forecasting via multi-regime modeling and ensemble learning. Z Lu, J Xia, M Wang, Q Nie, J Ou, Applied Sciences. 101356Z. Lu, J. Xia, M. Wang, Q. Nie, and J. Ou, "Short-term traffic flow forecasting via multi-regime modeling and ensemble learning," Applied Sciences, vol. 10, no. 1, p. 356, 2020. Short-term traffic forecasting using high-resolution traffic data. W Li, C Yang, S E Jabari, arXiv:2006.12292arXiv preprintW. Li, C. Yang, and S. E. 
Jabari, "Short-term traffic forecasting using high-resolution traffic data," arXiv preprint arXiv:2006.12292, 2020. Evaluating automated machine learning on supervised regression traffic forecasting problems. J S Angarita-Zapata, A D Masegosa, I Triguero, Computational Intelligence in Emerging Technologies for Engineering Applications. SpringerJ. S. Angarita-Zapata, A. D. Masegosa, and I. Triguero, "Evaluating automated machine learning on supervised regression traffic forecasting problems," in Computational Intelligence in Emerging Technologies for Engineering Applications. Springer, 2020, pp. 187-204. Heterogeneous multilayer generalized operational perceptron. D T Tran, S Kiranyaz, M Gabbouj, A Iosifidis, IEEE Transactions on Neural Networks and Learning Systems. 313D. T. Tran, S. Kiranyaz, M. Gabbouj, and A. Iosifidis, "Heterogeneous multilayer generalized operational perceptron," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 3, pp. 710-724, 2019. Liquid state machines: motivation, theory, and applications. W Maass, Computability in context: computation and logic in the real world. World ScientificW. Maass, "Liquid state machines: motivation, theory, and applica- tions," in Computability in context: computation and logic in the real world. World Scientific, 2011, pp. 275-296. A hybrid machine learning approach for freeway traffic speed estimation. Z Zhang, Y Yuan, X Yang, Transportation Research Record. 267410Z. Zhang, Y. Yuan, and X. Yang, "A hybrid machine learning approach for freeway traffic speed estimation," Transportation Research Record, vol. 2674, no. 10, pp. 68-78, 2020. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). A Adadi, M Berrada, IEEE Access. 6A. Adadi and M. Berrada, "Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)," IEEE Access, vol. 6, pp. 52 138-52 160, 2018. A bayesian network approach to traffic flow forecasting. 
S Sun, C Zhang, G Yu, IEEE Transactions on Intelligent Transportation Systems. 71S. Sun, C. Zhang, and G. Yu, "A bayesian network approach to traffic flow forecasting," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 124-132, 2006. What lies beneath: A note on the explainability of black-box machine learning models for road traffic forecasting. A Barredo-Arrieta, I Laña, J Del Ser, Intelligent Transportation Systems Conference. IEEEA. Barredo-Arrieta, I. Laña, and J. Del Ser, "What lies beneath: A note on the explainability of black-box machine learning models for road traffic forecasting," in Intelligent Transportation Systems Conference. IEEE, 2019, pp. 2232-2237. Congested traffic flow: Observations and theory. B S Kerner, Transportation Research Record. 16781B. S. Kerner, "Congested traffic flow: Observations and theory," Trans- portation Research Record, vol. 1678, no. 1, pp. 160-167, 1999. Explanation of observed features of selforganization in traffic flow. M Treiber, D Helbing, cond-mat/9901239arXiv preprintM. Treiber and D. Helbing, "Explanation of observed features of self- organization in traffic flow," arXiv preprint cond-mat/9901239, 1999. Evolutionary fuzzy systems for explainable artificial intelligence: why, when, what for, and where to?. A Fernandez, F Herrera, O Cordon, M J Jesus, F Marcelloni, IEEE Computational Intelligence Magazine. 141A. Fernandez, F. Herrera, O. Cordon, M. J. del Jesus, and F. Marcelloni, "Evolutionary fuzzy systems for explainable artificial intelligence: why, when, what for, and where to?" IEEE Computational Intelligence Magazine, vol. 14, no. 1, pp. 69-81, 2019. He is currently a junior researcher at TECNALIA (Spain), pursuing his PhD in Artificial Intelligence. His research interest combine machine learning and signal processing within the context of Intelligent Transportation Systems (ITS). L Eric, Manibardo received his B.Sc. degree in Telecommunication Engineering in 2017, and M.Sc. 
degree also in Telecommunications Engineering in 2019 from the University of the Basque Country, Spain. with an emphasis on traffic forecastingEric L. Manibardo received his B.Sc. degree in Telecommunication Engineering in 2017, and M.Sc. degree also in Telecommunications Engineering in 2019 from the University of the Basque Coun- try, Spain. He is currently a junior researcher at TECNALIA (Spain), pursuing his PhD in Artificial Intelligence. His research interest combine machine learning and signal processing within the context of Intelligent Transportation Systems (ITS), with an emphasis on traffic forecasting. His research interests fall within the intersection of Intelligent Transportation Systems (ITS), machine learning, traffic data analysis and data science. He has dealt with urban traffic forecasting problems, where he has applied machine learning models and evolutionary algorithms to obtain longer term and more accurate predictions. He is currently researching methods to measure the confidence of traffic and other time series data. He also has interest in other traffic related challenges. 2006, the M.Sc. degree in Advanced Artificial Intelligence from UNED, Spain, in 2014, and the PhD in Artificial Intelligence from the University of the Basque Country. Ibai Laña received his B.Sc. degree in Computer Engineering from Deusto University, SpainHe is currently a senior researcher at TECNALIA (Spain). such as origin-destination matrix estimation or point of interest and trajectory detectionIbai Laña received his B.Sc. degree in Computer Engineering from Deusto University, Spain, in 2006, the M.Sc. degree in Advanced Artificial Intelligence from UNED, Spain, in 2014, and the PhD in Artifi- cial Intelligence from the University of the Basque Country in 2018. He is currently a senior researcher at TECNALIA (Spain). 
His research interests fall within the intersection of Intelligent Transportation Systems (ITS), machine learning, traffic data analy- sis and data science. He has dealt with urban traffic forecasting problems, where he has applied machine learning models and evolutionary algorithms to obtain longer term and more accurate predictions. He is currently researching methods to measure the confidence of traffic and other time series data. He also has interest in other traffic related challenges, such as origin-destination matrix estimation or point of interest and trajectory detection. 2006, and a second PhD degree (cum laude, extraordinary PhD prize) in Computational Intelligence from the University of Alcala (Spain) in 2013. He is currently a Research Professor in Artificial Intelligence and leading scientist of the OPTIMA (Optimization, Modeling and Analytics) research area at TECNALIA, Spain. He is also an adjunct professor at the University of the Basque Country. Javier Del Ser, SM'12) received his first PhD degree. cum laude) in Electrical Engineering from the University of Navarra (Spaininvited research fellow at the Basque Center for Applied Mathematics (BCAMJavier Del Ser (SM'12) received his first PhD de- gree (cum laude) in Electrical Engineering from the University of Navarra (Spain) in 2006, and a second PhD degree (cum laude, extraordinary PhD prize) in Computational Intelligence from the University of Alcala (Spain) in 2013. He is currently a Research Professor in Artificial Intelligence and leading sci- entist of the OPTIMA (Optimization, Modeling and Analytics) research area at TECNALIA, Spain. He is also an adjunct professor at the University of the Basque Country (UPV/EHU), and an invited research fellow at the Basque Center for Applied Mathematics (BCAM). 
His research interests are in the design of Artificial Intelligence methods for data mining and optimization applied to problems emerging from Intelligent Transportation Systems, Smart Mobility, Logistics and Autonomous Driving, among specific interests in other domains. He has published more than 380 scientific articles, co-supervised 10 Ph.D. theses, edited 7 books, co-authored 9 patents and participated/led more than 40 research projects. He is an Associate Editor of tier-one journals from areas related to Artificial Intelligence, such as Information Fusion. Swarm and Evolutionary Computation and Cognitive Computation, as well as an Associate Editor of IEEE Transactions on Intelligent Transportation Systems. His research interests are in the design of Artificial Intelligence methods for data mining and optimization applied to problems emerging from Intelligent Transportation Systems, Smart Mobility, Logistics and Autonomous Driving, among specific interests in other domains. He has published more than 380 scientific articles, co-supervised 10 Ph.D. theses, edited 7 books, co-authored 9 patents and participated/led more than 40 research projects. He is an Associate Editor of tier-one journals from areas related to Artificial Intelligence, such as Information Fusion, Swarm and Evolutionary Computation and Cognitive Computation, as well as an Associate Editor of IEEE Transactions on Intelligent Transportation Systems.
[]
[ "Independent Prototype Propagation for Zero-Shot Compositionality", "Independent Prototype Propagation for Zero-Shot Compositionality" ]
[ "Frank Ruis \nUniversity of Twente & TNO\nTNO\nUniversity of Twente\n\n", "Gertjan J Burghouts [email protected] \nUniversity of Twente & TNO\nTNO\nUniversity of Twente\n\n", "Doina Bucur [email protected] \nUniversity of Twente & TNO\nTNO\nUniversity of Twente\n\n" ]
[ "University of Twente & TNO\nTNO\nUniversity of Twente\n", "University of Twente & TNO\nTNO\nUniversity of Twente\n", "University of Twente & TNO\nTNO\nUniversity of Twente\n" ]
[]
Independent Prototype Propagation for Zero-Shot Compositionality

Frank Ruis, Gertjan J. Burghouts, Doina Bucur
University of Twente & TNO

arXiv:2106.00305

Abstract

Humans are good at compositional zero-shot reasoning; someone who has never seen a zebra before could nevertheless recognize one when we tell them it looks like a horse with black and white stripes. Machine learning systems, on the other hand, usually leverage spurious correlations in the training data, and while such correlations can help recognize objects in context, they hurt generalization. To be able to deal with underspecified datasets while still leveraging contextual clues during classification, we propose ProtoProp, a novel prototype propagation graph method. First we learn prototypical representations of objects (e.g., zebra) that are conditionally independent w.r.t. their attribute labels (e.g., stripes) and vice versa. Next we propagate the independent prototypes through a compositional graph, to learn compositional prototypes of novel attribute-object combinations that reflect the dependencies of the target distribution. The method does not rely on any external data, such as class hierarchy graphs or pretrained word embeddings. We evaluate our approach on AO-Clevr, a synthetic and strongly visual dataset with clean labels, and UT-Zappos, a noisy real-world dataset of fine-grained shoe types. We show that in the generalized compositional zero-shot setting we outperform state-of-the-art results, and through ablations we show the importance of each part of the method and their contribution to the final results.

Preprint. Under review.
Introduction

As humans, hearing the phrase 'a tiny pink penguin reading a book' can conjure up a vivid image, even though we have likely never seen such a creature before. This is because humans can compose their knowledge of a small number of visual primitives to recognize novel concepts [1], a property which Lake et al. [2] argue is one of the key building blocks for human intelligence missing in current artificial intelligence systems. Machines, on the other hand, are largely data-driven, and usually require many labeled examples from various viewpoints and lighting conditions in order to recognize novel concepts. Since visual concepts follow a long-tailed distribution [3,4], such an approach makes it near impossible to gather sufficient examples for all possible concepts. Compounded by that, in the absence of sufficient data, vanilla convolutional neural networks will use any correlation they can find to classify training samples, even when they are spurious [5]. In this work, we aim to tackle both of these issues.

Compositional Zero-Shot Learning (CZSL) [6] is the problem of learning to model novel objects and their attributes as a composition of visual primitives. Previous works in CZSL [6,7,8] largely ignore the dependencies between classes with shared visual primitives, and the spurious correlations between attributes and objects. More recently, Atzmon et al. [9] tackle the latter by ensuring conditional independence between attribute and object representations, while Naeem et al. [10] explicitly promote dependencies between the primitives and their compositions. While the independence approach improves generalization, it hurts accuracy on seen classes by removing useful correlations. The explicit dependencies, on the other hand, can share these useful correlations with unseen classes, but there will always be some that are spurious, hurting generalization. In this work, we propose to take advantage of the strengths of both approaches, respectively, by learning independent visual representations of objects and attributes, and by learning their compositions for the target classes.

Figure 1: ProtoProp. A sketch of our proposed method: we learn conditionally independent prototypical representations of visual primitives in the form of objects (e.g., horse) and attributes (e.g., stripes). The prototypes are then propagated through a compositional graph, where they are combined into novel compositional prototypes to recognize both seen and unseen classes (e.g., zebra).

First, we represent visual primitives by learning local independent prototypical representations. Prototype networks [11] learn an embedding function, where inputs of the same class cluster around one prototypical representation of that class. Here we adopt such a function for learning prototypes of objects and attributes. Next, we leverage a compositional graph to learn the dependencies between the independent prototypes on the one hand, and the desired seen and unseen classes on the other, by propagating the prototypes to compositional nodes. Here, the compositional graph allows some information to be shared between objects that share attributes, e.g., between tigers and zebras. The proposed method, ProtoProp, is outlined in Figure 1.

Our main contributions are:
1) We propose a novel graph propagation method that learns to combine local, independent, attribute and object prototypes into one compositional prototype that can accurately detect unseen compositional classes.
2) A spatial attention-based pooling method that allows us to obtain differentiable attribute and object patches for use in an independence loss function.
3) Our method effectively deals with bias from an underspecified dataset by learning conditionally independent representations that then take on the dependencies of the desired target distribution.
4) We validate through ablations the importance of each part of the method (local prototypes vs semantic embeddings, independence loss, backbone finetuning) and their contribution to the final results.
5) We show that we improve on state-of-the-art results on two challenging compositional zero-shot learning benchmarks: 2.5 to 20.2% harmonic mean improvement on AO-Clevr [9] and 3.1% harmonic mean improvement on UT-Zappos [12] compared to the best existing method.

Related work

Compositional zero-shot learning (CZSL) methods aim to recognize unseen compositions from known visual primitives in the form of attributes and objects. One line of work considers embedding the visual primitives in the image feature space. Misra et al. [6] use the weight vectors of linear SVMs as embeddings for the visual primitives, which they transform to recognize unseen compositions. Li et al. [8] look at attribute-object compositions through the lens of symmetry inspired by group theory. A different line of work considers a joint embedding function on the image, attribute, and object triplet, allowing the model to learn dependencies between the image and its visual primitives. Purushwalkam et al. [7] train a set of modular networks together with a gating network that can 'rewire' the classifier conditioned on the input attribute and object pair. Atzmon et al. [9] take a causal view of CZSL, trying to answer which intervention caused the image. They apply conditional independence constraints to the representations of the visual primitives, sacrificing accuracy on the seen data but performing well on unseen data by removing correlations that are useful but hurt generalization to novel compositions. Naeem et al. [10], on the other hand, explicitly promote the dependency between all primitives and their compositions within a graph structure, though they rely on pretrained semantic embeddings which may differ in distribution from the visual concepts they describe.
Our proposed method combines the strengths from Atzmon et al. [9] and Naeem et al. [10] by first learning conditionally independent representations of the visual primitives, which are then propagated through a compositional graph to learn the dependency structure of the desired target distribution. Unlike most other methods, we are not reliant on pretrained word embeddings, but instead learn the representations for our visual primitives directly from the training data. Instead of a linear kernel like Atzmon et al. [9], we use a Gaussian kernel for our independence loss as we find that its ability to capture higher-order statistics is beneficial in terms of accuracy. Like Naeem et al. [10], we train our model fully end-to-end, including the feature extractor, as learning a good embedding is often more beneficial than an overly complicated method applied to suboptimal embeddings [13]. Prototypical networks [11] aim to learn an embedding function where inputs of the same class cluster around one prototypical representation of that class. They measure L2 distance between samples in pixel space, which is sensitive to non-semantic similarities between images, such as objects from different classes with a similar background and light conditions. Li et al. [14] move the similarity metric to latent space and utilise an autoencoder to directly visualize the prototypes, but they only test on simple MNIST-like benchmarks. Chen et al. [15] extend the method to multiple local prototypes per class, where prototypes are compared to local image patches instead of the entire average-pooled output of a CNN, and evaluate on the more complicated fine-grained bird classification dataset CUB [16]. We take a similar local-prototype approach, but we use direct attribute and object supervision, enabling parameter sharing between classes and allowing the prototypes to be used for zero-shot classification. 
While for most methods these prototypes are used for the final classification, in our case they are an intermediate representation. Graph neural networks (GNNs) are models that can work directly on the structure of a graph. They have been first proposed by Gori et al. [17], later elaborated upon by Scarselli et al. [18] and popularised by Kipf and Welling [19] in their work on the Graph Convolutional Neural Network (GCN). GNNs grow more popular every year, and a lot of improvements have been proposed recently [20]. Just like the GCNs were inspired by CNNs, most improvements are inspired by other areas of deep learning such as attention mechanisms in Graph Attention Networks [21], but this jump from existing deep learning fields to GNNs has overlooked simpler methods. Wu et al. [22] have taken a step back and stripped down GNNs to their simplest parts by removing nonlinearities and collapsing weight matrices until all that was left was feature propagation followed by a linear model. In the same vein, Huang et al. [23] show that a simple Multilayer Perceptron (MLP) ignoring graph structure followed by a label propagation post-processing step often greatly outperforms GNNs. These simplified methods are mostly limited to transductive settings (test nodes are available at train time) and graphs that exhibit strong homophily (similar nodes are connected). Many real-world graphs fit those criteria, including the graph we use in this work. Method In compositional zero-shot learning (CZSL), we have a set of images X and compositional classes with compositional labels Y ⊆ A × O, where A is a set of attribute labels and O a set of object labels. The labels are subdivided into Y = Y s ∪ Y u , where Y s are the seen labels for the training set and Y u unseen labels for the validation and test sets, with Y s ∩ Y u = ∅. We denote the training data consisting of only seen compositional classes as X s . 
Finally, CZSL assumes that each attribute and object is seen in at least one training data point, or more formally ∀p ∈ A ∪ O ∃y ∈ Y^s s.t. p ∩ y ≠ ∅.

3.1 Prototype-based representations

In this section we describe how we learn local attribute and object prototypes, before they are propagated and combined into compositional prototypes in Section 3.3. Prototypes are an average representation of a target class, with strong generalization and interpretability properties. We employ a local prototype approach similar to Chen et al. [15], though we use direct supervision, encouraging the feature extractor to learn local image representations that encode the attribute and object labels. We use a ResNet-18 [24] backbone for fair comparison to other CZSL methods, but the method is backbone agnostic. Figure 2 shows an example of a prototype layer.

Figure 2: Local prototypes with softmax pooling. Local image patches x_ij from the layer just before the average pooling layer of a ResNet-18 are compared to prototype vectors p_k through a similarity function, here a dot product. This outputs a compatibility score s, which is optimized via L_CE (cf. Section 3.1). Similar to spatial attention, these patch-prototype compatibility scores are passed through a softmax function and, via a Hadamard (pointwise) product, a weighted sum z_k of the original feature map is calculated, which is passed to the HSIC function to promote independence between object and attribute prototypes (cf. Section 3.2).

The layer has a set of prototype vectors P = {p_j}_{j=1}^{k}, p_j ∈ R^C, where each target class is represented by one prototype. k = |Y_i| is the number of attribute or object targets, and i ∈ {0, 1} indicates whether we are training on attribute or object labels respectively.
The layer takes as input the H × W × C output of a CNN just before its final pooling layer, and calculates a compatibility score s = ⟨x_ij, p_k⟩ (e.g., cosine similarity, L2 norm, dot product) between the input patches x_ij ∈ R^C over the spatial dimensions H × W and the prototypes. We find that a dot product similarity metric leads to better generalization. To save on parameters and avoid overfitting, we do not use a fully connected layer. Instead, the maximum patch-prototype similarity score is used directly as the compatibility score for the corresponding prototype's class. This compatibility score is optimized through a cross-entropy loss function:

$$ \mathcal{L}_{CE} = \frac{1}{|S|} \sum_{(s, y) \in S} -\log \frac{\exp(s[y_i])}{\sum_{j}^{|s|} \exp(s[j])} \quad (1) $$

Here, with f_p as our prototype layer, S = {(s, y) ∈ (f_p(X^s), Y^s)} is the set of prototype similarity scores and target labels for the training data, and i ∈ {0, 1} indicates whether we are training on attribute or object labels respectively. Cross-entropy loss naturally encourages clustering between samples of the same class and separation from other classes, but we find that an additional loss to push these properties even more is beneficial. For that purpose we adopt the cluster and separation costs proposed by Chen et al. [15]:

$$ \mathrm{Clst} = \frac{1}{|X|} \sum_{i=1}^{|X|} \min_{j : p_j \in P_{y_i}} \min_{z \in x_{ij}} \lVert z - p_j \rVert_2^2 \qquad \mathrm{Sep} = -\frac{1}{|X|} \sum_{i=1}^{|X|} \min_{j : p_j \notin P_{y_i}} \min_{z \in x_{ij}} \lVert z - p_j \rVert_2^2 \quad (2) $$

The costs Clst and Sep encourage a higher degree of clustering between prototypes of the same class and a greater distance to prototypes of different classes, respectively. The separation cost is only applied to the object prototypes, as the attribute prototypes benefit from less distant representations. Our Hilbert-Schmidt Independence Criterion (HSIC) loss (details in Section 3.2) requires the input patches with the highest prototype similarity scores as its input, but that would require the use of the argmax function, which has a gradient of 0 almost everywhere.
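The scoring and pooling just described can be sketched in a few lines of pure Python (a minimal illustration with function names of our own choosing; the actual model operates on PyTorch tensors):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def prototype_layer(patches, prototypes):
    """patches: list of H*W patch vectors x_ij; prototypes: list of k vectors p_k.
    Returns (scores, pooled): per-class compatibility scores s[k] = max_ij <x_ij, p_k>,
    and softmax-pooled patch vectors z_k (a differentiable proxy for the argmax patch)."""
    scores, pooled = [], []
    for p in prototypes:
        sims = [dot(x, p) for x in patches]      # patch-prototype similarity map
        scores.append(max(sims))                 # spatial max-pool -> class score
        w = softmax(sims)                        # spatial-attention-style weights
        z = [sum(wi * x[c] for wi, x in zip(w, patches))
             for c in range(len(patches[0]))]    # weighted sum of patches
        pooled.append(z)
    return scores, pooled
```

For two orthogonal patches and matching prototypes, each prototype's score comes from its own patch, and the pooled vector z_k leans toward that patch.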
Similar to spatial attention, we use the softmax function over the similarity map as a differentiable proxy. The softmax outputs, used as weights in a weighted sum of the input patches x_ij, serve as an approximation of the input patch with the highest similarity score (cf. Figure 2), and ensure that each patch containing information about attribute or object labels is affected by the independence loss in proportion to the strength of its appearance (i.e., its similarity score).

3.2 Attribute-object independence

To combat the bias in the training data we leverage a differentiable independence metric, such that the model can learn representations for the attributes of images that are uniformly distributed w.r.t. their object labels and vice versa. The Hilbert-Schmidt Independence Criterion (HSIC) [25] is a kernel statistical test of independence between two random variables A and B. In the infinite sample limit, HSIC(A, B) = 0 ⟺ A ⊥⊥ B, as long as the chosen kernel is universal in the sense of Steinwart [26]. We use a Gaussian kernel, as we find that its ability to capture higher-order statistics is beneficial in terms of accuracy, and follow Gretton et al. [25] in setting the kernel size to the median distance between points. See the Appendix for more details. We define our independence loss function as follows, slightly deviating from Atzmon et al. [9]:

$$ \mathcal{L}_{hsic} = \lambda_h \frac{\mathrm{HSIC}(z_a, O) + \mathrm{HSIC}(z_o, A)}{2} \quad (3) $$

where λ_h is a hyperparameter controlling the contribution of L_hsic to the final loss, z_a is the softmax-pooled output of the attribute prototype layer (cf. Section 3.1), z_o the softmax-pooled output of the object prototype layer, and O and A the corresponding one-hot encodings of the object and attribute labels respectively. Figure 3 shows the full architecture. Two types of prototype layers are trained, one on the object labels and one on the attribute labels, leaving us with centroids with confounding information removed.
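The HSIC term itself is the biased empirical estimator of Gretton et al. [25], HSIC = (m−1)⁻² tr(KHLH), detailed in the Appendix. A tiny pure-Python sketch for 1-D samples, assuming a fixed kernel size rather than the median heuristic used in the paper:

```python
import math

def gaussian_kernel(xs, sigma=1.0):
    """k_ij = exp(-||x_i - x_j||^2 / sigma^2) for 1-D samples xs."""
    return [[math.exp(-((a - b) ** 2) / sigma ** 2) for b in xs] for a in xs]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def hsic(u, v, sigma=1.0):
    """Biased empirical HSIC estimator: (m-1)^-2 * tr(K H L H),
    with H = I - (1/m) 1 1^T the centering matrix."""
    m = len(u)
    K, L = gaussian_kernel(u, sigma), gaussian_kernel(v, sigma)
    H = [[(1.0 if i == j else 0.0) - 1.0 / m for j in range(m)] for i in range(m)]
    KHLH = matmul(matmul(matmul(K, H), L), H)
    return sum(KHLH[i][i] for i in range(m)) / (m - 1) ** 2
```

A variable paired with itself yields a strictly positive estimate, while pairing it with a constant (which carries no information) yields zero, matching the intuition behind using HSIC as an independence penalty.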
3.3 Prototype propagation graph

The prior knowledge about the compositional target classes and the attributes they afford can be represented as a compositional graph G = (P_a ∪ P_o ∪ C_y, E). It consists of the attribute prototypes P_a, object prototypes P_o, and compositional classes C_y ⊆ A × O (a subset of which have no training samples). The edges are undirected, E = {{x, y} | x ∈ P_a ∪ P_o, y ∈ C_y : x ∈ y}, i.e., all attribute (e.g., red) and object (e.g., cube) nodes are connected to the compositional classes they are a part of (e.g., red cube). This corresponds to a bipartite graph with the attribute and object prototypes in one set and the compositional classes in the other. In the case of AO-Clevr it is a complete bipartite graph, but for real-world datasets such as UT-Zappos this is not the case as, e.g., not all shoes are available in all materials. As defined above, the conditionally independent prototypes are mapped directly to the nodes of the graph (cf. Figure 3), and through shared weights a Graph Neural Network (GNN) learns to propagate the prototypes to the compositional nodes to form new compositional prototypes. The compositional nodes are initialized with zeros. Our GNN is a 2-layer GCN [19]; inspired by SGC [22] we remove all nonlinearities, as we find that simple linear combinations lead to better generalization:

$$ X' = \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} X \Theta \quad (4) $$

Here X are the input node features, Θ a learnable weight matrix, Â = A + I denotes the adjacency matrix with inserted self-loops, and D̂_ii = Σ_j Â_ij its diagonal degree matrix. We then compute another dot product similarity map s_c between the compositional prototypes and the average-pooled output of the backbone, which is again optimized via a cross-entropy loss function. Our final compositional classification is y = argmax(s_c), i.e., the class corresponding to the compositional prototype with the maximum similarity score.
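A toy version of one such propagation step (Eq. 4) on a minimal compositional graph, with the weight matrix Θ omitted (i.e., taken as the identity) and node names of our own choosing:

```python
import math

def propagate(adj, X):
    """One simplified-GCN step: X' = D^-1/2 (A + I) D^-1/2 X (weight matrix omitted)."""
    n = len(adj)
    A_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_hat]
    return [[sum(A_hat[i][j] / math.sqrt(deg[i] * deg[j]) * X[j][c] for j in range(n))
             for c in range(len(X[0]))] for i in range(n)]

# Nodes: 0=red, 1=blue (attribute prototypes), 2=cube (object prototype),
#        3=red-cube, 4=blue-cube (compositional nodes, initialized with zeros)
edges = [(0, 3), (2, 3), (1, 4), (2, 4)]
adj = [[0] * 5 for _ in range(5)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0], [0, 0, 0]]  # prototype features
X2 = propagate(adj, propagate(adj, X))  # 2-layer propagation as in the paper
```

After two propagation steps, the red-cube node (initialized with zeros) has picked up feature mass from the red and cube prototypes but none from blue, which is the behavior the compositional graph is meant to encode.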
The shared weights in the GNN ensure that the model learns a general composition of attributes and objects that can generalize to unseen compositional classes. By initializing the node features with independent prototypes, the graph is able to learn the dependencies encoded by the compositional graph, including the novel zero-shot classes, instead of the biases of the training data.

Experiments

Implementation. We implement our model using the PyTorch library [27], using some of the boilerplate code provided by Nagarajan and Grauman [28] and Purushwalkam et al. [7]. The GNN is implemented using PyTorch Geometric [29]. For our independence loss we use a normalized implementation of HSIC provided by Ma et al. [30]. We use the Adam [31] optimizer. Experiments have been run on an 11GB GeForce GTX 1080 Ti graphics card. On AO-Clevr we reach our best result in 1-5 epochs on all splits, at ~5 minutes per epoch. On UT-Zappos we reach our best result in ~30 epochs at ~1.5 minutes per epoch. See the Appendix for detailed hyperparameters and grid search ranges.

Evaluation. We evaluate our approach on two CZSL datasets. Previous works also evaluate on MIT-States [32], but Atzmon et al. [9] conclude through a large-scale user study that the dataset has a level of ~70% label noise, making it too noisy for evaluating compositionality. AO-Clevr [9,33] is a synthetic dataset consisting of 3 types of objects (sphere, cube, cylinder) and 8 attributes (red, purple, yellow, blue, green, cyan, gray, brown), with 24 compositional classes in total. There are 6 splits with a varying ratio of unseen to seen classes, ranging from 2:8 to 7:3, allowing insights into the performance of models as the proportion of unseen classes increases. UT-Zappos [12] is a fine-grained dataset of types of shoes with 12 objects (e.g., boat shoes), 16 attributes (e.g., leather), and 116 compositional classes. We use the split proposed by Purushwalkam et al.
[7], with 83 seen classes in the training set, 15 seen and 15 unseen classes in the validation set, and 18 seen and 18 unseen classes in the test set.

Metrics. Like other recent works, we adopt the generalized zero-shot evaluation protocol proposed by Purushwalkam et al. [7] and Chao et al. [34]. Instead of evaluating only on the unseen classes as in the closed setting, the generalized setting considers both seen and unseen classes in the validation and test sets. To account for the inherent bias towards seen classes, Chao et al. [34] add a calibration bias term to the activations of unseen classes. As the value of the bias is varied between −∞ and +∞, they draw a curve with the accuracy on the unseen classes on the y-axis and the accuracy on the seen classes on the x-axis, and report the Area Under the Curve (AUC). The harmonic mean is defined as 2 · (Acc_s · Acc_u) / (Acc_s + Acc_u), where Acc_s is the accuracy on the seen classes and Acc_u the accuracy on the unseen classes. It penalizes large differences between the two metrics and as such indicates how well the model performs on both seen and unseen classes simultaneously. We also follow prior works in reporting the closed seen accuracy, where only the seen classes are considered (corresponding to a bias of −∞), and the closed unseen accuracy, where only the unseen classes are considered (corresponding to a bias of +∞). For the results on AO-Clevr we report the seen and unseen accuracy components of the harmonic mean in accordance with Atzmon et al. [9]. Atzmon et al. [9] do not perform post-hoc bias calibration, but instead handle the seen-unseen bias during training. Like Naeem et al. [10], we train our feature extractor end-to-end with the rest of our model. Other methods keep the backbone fixed, but Naeem et al. [10] have shown that they perform worse when allowed to finetune, as they will then start to overfit.
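The calibration-bias mechanics can be illustrated as follows (a simplified sketch with hypothetical class names and scores, not the benchmark code):

```python
def harmonic_mean(acc_seen, acc_unseen):
    """Harmonic mean of seen/unseen accuracy: 2 * s * u / (s + u)."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

def predict(scores, unseen_classes, bias):
    """Add a calibration bias to the activations of unseen classes, then argmax."""
    adjusted = {c: s + (bias if c in unseen_classes else 0.0) for c, s in scores.items()}
    return max(adjusted, key=adjusted.get)
```

Sweeping the bias from −∞ to +∞ moves predictions from purely seen classes (closed seen accuracy) to purely unseen ones (closed unseen accuracy); recording the (seen, unseen) accuracy pair at each bias value traces out the curve whose area is the reported AUC.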
While our method is designed to take full advantage of the backbone, we show our results with a fixed backbone in our ablations. Results For each of the results we select the best model by the best harmonic mean on the validation set and report the results for all metrics on the test set. The results with error bars come from 5 training runs with random initializations. We note that, unlike other methods, Causal [9] does not perform post-hoc bias calibration, but instead tackles the bias against seen classes during training, which is a benefit of their approach. Figure 4 shows the results on AO-Clevr. We also report the results for CGE [10] (after a grid search using their own codebase) as they have not reported experiments on this dataset. See the Appendix for the exact values and error bars. We observe that for the 3:7, 5:5, 6:4 and 7:3 splits, one or more colors are not part of the training data, making a large portion of the validation and test classes impossible to classify through compositional methods on those splits. Our method performs better than the compared methods on all splits, with minimal impact on the seen accuracy. The improvements are in the range of 2.5 to 20.2% for the harmonic mean and 3.3 to 17.0% for the unseen accuracy, with higher gains as the proportion of unseen classes increases significantly. UT-Zappos Table 1 shows the results on UT-Zappos. Because earlier works only report a single best run, we report the average over 5 random initializations including standard error for the two best performing previous models (CGE and TMN), using their own codebase and reported hyperparameters. We find that, especially for this dataset, the error bars are important since it is highly susceptible to random initialization; a rare few of our runs could reach a harmonic mean of 60+ while the average is around 50, with similar observations for other methods. 
Our method outperforms earlier methods on all metrics except for the closed unseen accuracy, where CGE performs slightly better. Our method performs especially well when both seen and unseen classes are taken into account, as evidenced by the AUC and harmonic mean improvements.

Ablations

To determine the effect of our visual features in the form of prototypes, in contrast to the semantic features in most prior works, we perform ablations where we replace our prototypes with pretrained word embeddings. We also take a look at the importance of the independence loss function and finetuning the backbone.

Importance of prototypes. Table 2 shows the results of our ablations on the 4:6 split of AO-Clevr. Row 1 shows the results for our proposed method without the independence loss function, in row 2 we add the independence loss but freeze the backbone, and row 3 shows the results with both independence and finetuning, as described in the method section. The prototypes perform worse than semantic embeddings when used as node features without the independence loss, but with the independence loss they receive significant gains and come out on top with an 8.3% increase in harmonic mean accuracy over the semantic features. With a frozen backbone the harmonic mean for our method is just slightly higher (0.8%) than Causal on the same split, though it reaches that accuracy in a fraction of the time.

Visual vs. semantic. For the ablation in row 4 we use semantic word embeddings (word2vec [35]) as node features, which is equivalent to Naeem et al. [10] with a simplified GCN. In row 5 we also train our local prototypes but don't use them during classification, only to influence the local features the feature extractor learns. Training the local prototypes improves the results slightly even when they are not used for the final classification, by encouraging the backbone to learn local features that represent the target attributes and objects.
Effect of independence. Table 2 shows that the prediction accuracy breaks down when not using the independence loss. Figure 5 provides some intuition for this through a t-SNE [36] plot of softmax-pooled attribute (in this case color) representations from our model on AO-Clevr.

Figure 5: t-SNE plot of softmax-pooled color patches for AO-Clevr that were trained with (left) and without (right) HSIC loss, where with the HSIC loss we get a homogeneous grouping of the attributes and no fragmentation by spurious confounders.

Without the loss there are three strongly separated clusters per color (one per shape), and many colors are embedded closer to different colors of the same shape. When we do use the independence loss, each of the colors is part of its own single cluster and the only correlations left are those between visually similar colors, improving the accuracy of the compositional prototypes after the propagation step.

Limitations

Like typical CZSL works, our method is limited to a single attribute and object label per image, and each individual attribute and object needs to be seen in at least one training data point. Existing attribute datasets are often either noisy or do not work in the aforementioned single-attribute setting. But if this initial hurdle of quality datasets is overcome, compositional methods make it easier to handle the addition of new classes, especially for rare objects with little to no available images. To extend the scope to more realistic settings and other datasets, in future work we would extend the method to multiple attributes, e.g., through attribute prototype regression [37], or by leveraging textual descriptions that can be scraped from the web in a semi-supervised fashion [38]. The hyperparameters can be difficult to tune, as there are hyperparameters for the backbone, prototype layer, independence loss, and compositional GNN.
We do find, however, that the prototype layers can be tuned separately from the rest, as the best performing individual prototypes and the hyperparameters that led there also perform best when used in conjunction with the GNN. As such, most hyperparameters can be adopted from existing prototype-based classification works that have been trained on a similar dataset.

Conclusion

In order to have a chance at recognizing the long-tailed distribution of visual concepts, our models need to become better at recognition through shared visual primitives. To this end, we have proposed a novel prototype propagation method for compositional zero-shot learning. Our method learns prototypes of visual primitives that are independent from the other visual primitives they appear with, and propagates those prototypes through a compositional graph in order to recognize unseen compositions. The method works with just the attribute and object annotations and, as such, is not reliant on external sources of information, such as hierarchy graphs or pretrained semantic embeddings. We evaluate our work on two CZSL benchmarks, and improve on state-of-the-art results, especially with large fractions of unseen classes, with minimal impact on the accuracy of the seen classes.

Figure 3: ProtoProp: overview of the proposed method.
Local attribute and object prototypes are trained with independence (HSIC) loss and mapped to nodes in a simplified graph neural network, which learns to combine them into compositional prototypes of both seen and unseen classes. The maximum score in the dot product similarity map between the compositional prototypes and average-pooled backbone output is the final classification prediction.

Figure 4: AO-Clevr results. Plots of the seen and unseen accuracy and their harmonic mean (y-axis) as the ratio of unseen:seen compositional classes (x-axis) increases. ProtoProp consistently outperforms state-of-the-art methods, especially when the portion of unseen classes grows.

Table 1: UT-Zappos results. ProtoProp improves on state-of-the-art results in the AUC, closed seen, and harmonic mean metrics.

Method            AUC         Closed seen  Closed unseen  Harmonic
AttOp [28]        25.9        59.8         54.2           40.8
LE+ [6]           25.7        53.0         61.9           41.0
SymNet [8]        23.9        53.3         57.9           39.2
TMN [7]           24.7 ± 4.4  58.8 ± 1.4   49.4 ± 4.7     40.7 ± 2.0
Causal [9]        23.3 ± 0.3  -            55.4 ± 0.8     31.8 ± 1.7
CGE [10]          32.5 ± 1.5  61.0 ± 0.9   65.9 ± 1.2     47.1 ± 1.7
ProtoProp (ours)  34.7 ± 0.8  62.1 ± 0.9   65.5 ± 0.2     50.2 ± 1.3

Table 2: Ablation on the AO-Clevr 4:6 split. Rows indicate whether prototypes or semantic vectors are used as node features, whether local prototypes are trained with the independence loss, and whether the backbone is finetuned. Prototypes trained when semantic node features are used are not part of the final classification but still improve the feature extractor output.

Row  Node features  Configuration                             Seen        Unseen      Harmonic
1    Visual         no independence loss, finetuned           94.5 ± 0.1  77.0 ± 2.8  84.8 ± 1.7
2    Visual         independence loss, frozen backbone        78.2 ± 1.1  73.0 ± 1.0  75.5 ± 0.8
3    Visual         independence loss, finetuned (full)       97.9 ± 1.0  95.5 ± 0.9  96.7 ± 0.7
4    Semantic       word2vec node features                    95.4 ± 1.1  82.4 ± 0.7  88.4 ± 0.1
5    Semantic       word2vec + trained local prototypes       97.3 ± 1.2  84.5 ± 1.4  90.4 ± 0.9

Table 3: AO-Clevr results. ProtoProp consistently outperforms state-of-the-art methods, especially when the portion of unseen classes grows.
U:S   ProtoProp (ours)                       Causal [9]
      Seen        Unseen      Harmonic       Seen        Unseen      Harmonic
2:8   98.6 ± 0.6  99.3 ± 0.4  98.9 ± 0.3     89.7 ± 1.9  77.7 ± 1.4  83.2 ± 1.2
3:7   96.3 ± 0.9  81.7 ± 0.1  88.4 ± 0.4     80.9 ± 3.6  72.2 ± 1.0  75.7 ± 2.3
4:6   97.9 ± 1.0  95.5 ± 0.9  96.7 ± 0.7     84.1 ± 1.8  67.4 ± 2.0  74.7 ± 1.7
5:5   96.7 ± 0.2  63.1 ± 0.4  76.4 ± 0.3     83.8 ± 0.8  47.1 ± 4.5  59.8 ± 3.9
6:4   95.6 ± 3.7  38.6 ± 4.7  54.6 ± 3.9     86.1 ± 2.9  26.9 ± 0.5  40.9 ± 1.0
7:3   91.5 ± 4.6  39.3 ± 1.6  54.9 ± 0.6     69.3 ± 6.1  22.8 ± 3.0  33.7 ± 4.2

U:S   CGE [10]                               TMN [7]
      Seen        Unseen      Harmonic       Seen        Unseen      Harmonic
2:8   96.7 ± 2.0  96.0 ± 1.1  96.4 ± 1.5     85.8 ± 0.9  79.7 ± 4.4  82.4 ± 2.1
3:7   98.2 ± 0.7  74.4 ± 3.4  84.6 ± 2.5     86.5 ± 0.3  62.2 ± 4.9  72.0 ± 3.4
4:6   95.5 ± 1.1  80.4 ± 1.3  87.3 ± 0.7     83.8 ± 2.6  68.1 ± 4.1  74.5 ± 1.6
5:5   94.1 ± 2.5  55.4 ± 0.4  69.7 ± 0.9     84.7 ± 2.2  38.0 ± 3.0  51.5 ± 2.2
6:4   95.4 ± 0.1  33.2 ± 0.7  49.3 ± 0.7     83.7 ± 0.4  18.1 ± 2.9  29.1 ± 3.7
7:3   77.9 ± 0.1  22.3 ± 0.2  34.7 ± 0.3     88.1 ± 2.5  5.8 ± 0.9   10.8 ± 1.6

1 The code will be available upon acceptance.

A Appendix

A.1 Hyperparameters

AO-Clevr. Like Atzmon et al. [9], we performed grid searches over the following splits: {2:8, 5:5, 6:4, 7:3}. We used the largest batch size that could fit in memory on our limited hardware, which was 256 for an image size of 224x224. For the learning rate (Adam [31] optimizer) we searched in the range of {0.001, 0.0001, 1e-04, 5e-4, 5e-5}, with weight decay {0, 5e-4, 5e-5}. We chose a weight decay of 5e-5, and a learning rate of 5e-4 up to the 4:6 split and 1e-4 afterwards.

UT-Zappos. We again used the Adam optimizer, with learning rate in the ranges {5e-5, 5e-4, 5e-3} and weight decay {0, 5e-4, 5e-5}, where we chose a learning rate and weight decay of 5e-5 and a batch size of 128.
For the rest of the parameters we searched the same ranges as above, where the same choices were optimal as for AO-Clevr.

A.2 Hilbert-Schmidt Independence Criterion

The (biased) empirical HSIC estimator [25] is defined as:

$$ \widehat{\mathrm{HSIC}}(U, V) = \frac{1}{(m-1)^2} \operatorname{tr}(KHLH) $$

where K and L are m × m matrices with entries k_ij and l_ij, H = I − (1/m)𝟙𝟙ᵀ, and 𝟙 is an m × 1 vector of ones. The elements of K and L are outputs of a kernel function over the inputs U and V, such as the (universal) Gaussian kernel k_ij := exp(−σ⁻²‖u_i − u_j‖²), where σ is the kernel size. We follow Gretton et al. [25] in setting the kernel size to the median distance between points, but universality of the Gaussian kernel holds for any kernel size. The empirical estimator has a bias in the order of O(m⁻¹), which is negligible at even moderate sample sizes.

A.3 AO-Clevr Results

The full AO-Clevr results with error bars are reported in Table 3.

References

[1] Martin N Hebart, Charles Y Zheng, Francisco Pereira, and Chris I Baker. Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. Nature Human Behaviour, 4(11):1173-1185, 2020.

[2] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017.

[3] Ruslan Salakhutdinov, Antonio Torralba, and Josh Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR 2011, pages 1481-1488. IEEE, 2011.
Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, Stella X Yu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZiwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large- scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2537-2546, 2019. Learning not to learn: Training deep neural networks with biased data. Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, Junmo Kim, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionByungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, and Junmo Kim. Learning not to learn: Training deep neural networks with biased data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9012-9020, 2019. From red wine to red tomato: Composition with context. Ishan Misra, Abhinav Gupta, Martial Hebert, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionIshan Misra, Abhinav Gupta, and Martial Hebert. From red wine to red tomato: Composi- tion with context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1792-1801, 2017. Taskdriven modular networks for zero-shot compositional learning. Senthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, Marc&apos;aurelio Ranzato, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionSenthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, and Marc'Aurelio Ranzato. Task- driven modular networks for zero-shot compositional learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3593-3602, 2019. 
Symmetry and group in attribute-object compositions. Yong-Lu Li, Yue Xu, Xiaohan Mao, Cewu Lu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionYong-Lu Li, Yue Xu, Xiaohan Mao, and Cewu Lu. Symmetry and group in attribute-object compositions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11316-11325, 2020. A causal view of compositional zero-shot recognition. Yuval Atzmon, Felix Kreuk, Uri Shalit, Gal Chechik, Advances in Neural Information Processing Systems. 33Yuval Atzmon, Felix Kreuk, Uri Shalit, and Gal Chechik. A causal view of compositional zero-shot recognition. Advances in Neural Information Processing Systems, 33, 2020. Learning graph embeddings for compositional zero-shot learning. Yongqin Muhammad Ferjad Naeem, Federico Xian, Zeynep Tombari, Akata, arXiv:2102.01987arXiv preprintMuhammad Ferjad Naeem, Yongqin Xian, Federico Tombari, and Zeynep Akata. Learning graph embeddings for compositional zero-shot learning. arXiv preprint arXiv:2102.01987, 2021. Prototypical networks for few-shot learning. Jake Snell, Kevin Swersky, Richard Zemel, Proceedings of the 31st International Conference on Neural Information Processing Systems. the 31st International Conference on Neural Information Processing SystemsJake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 4080-4090, 2017. Fine-grained visual comparisons with local learning. Aron Yu, Kristen Grauman, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionAron Yu and Kristen Grauman. Fine-grained visual comparisons with local learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 192-199, 2014. 
Rethinking few-shot image classification: a good embedding is all you need?. Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, Phillip Isola, arXiv:2003.11539arXiv preprintYonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, and Phillip Isola. Re- thinking few-shot image classification: a good embedding is all you need? arXiv preprint arXiv:2003.11539, 2020. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence32Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. This looks like that: Deep learning for interpretable image recognition. Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan K Su, Advances in Neural Information Processing Systems. 32Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: Deep learning for interpretable image recognition. Advances in Neural Information Processing Systems, 32:8930-8941, 2019. C Wah, S Branson, P Welinder, P Perona, S Belongie, The Caltech-UCSD Birds. C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 . Dataset, CNS-TR-2011-001California Institute of TechnologyTechnical ReportDataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. A new model for learning in graph domains. Marco Gori, Gabriele Monfardini, Franco Scarselli, Proceedings. 2005 IEEE International Joint Conference on Neural Networks. 2005 IEEE International Joint Conference on Neural NetworksIEEE2Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings. 
2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2, pages 729-734. IEEE, 2005. The graph neural network model. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, Gabriele Monfardini, IEEE Transactions on Neural Networks. 201Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008. Semi-supervised classification with graph convolutional networks. N Thomas, Max Kipf, Welling, International Conference on Learning Representations (ICLR. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. A comprehensive survey on graph neural networks. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, S Yu Philip, IEEE Transactions on Neural Networks and Learning Systems. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio, Graph Attention Networks. International Conference on Learning Representations. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ. Simplifying graph convolutional networks. Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, Kilian Weinberger, Proceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine LearningPMLRFelix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. 
In Proceedings of the 36th International Conference on Machine Learning, pages 6861-6871. PMLR, 2019. Combining label propagation and simple models out-performs graph neural networks. Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin Benson, International Conference on Learning Representations. Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin Benson. Combining label propagation and simple models out-performs graph neural networks. In International Con- ference on Learning Representations, 2021. URL https://openreview.net/forum?id= 8E1-f3VhX1o. Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. A kernel statistical test of independence. Arthur Gretton, Kenji Fukumizu, Choon Hui Teo, Le Song, Bernhard Schölkopf, Alexander J Smola, Nips. Citeseer20Arthur Gretton, Kenji Fukumizu, Choon Hui Teo, Le Song, Bernhard Schölkopf, Alexander J Smola, et al. A kernel statistical test of independence. In Nips, volume 20, pages 585-592. Citeseer, 2007. On the influence of the kernel on the consistency of support vector machines. Ingo Steinwart, Journal of machine learning research. 2Ingo Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of machine learning research, 2(Nov):67-93, 2001. Pytorch: An imperative style, highperformance deep learning library. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary Devito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala, Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. GarnettCurran Associates, Inc32Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high- performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché- Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/ 9015-pytorch-an-imperative-style-high-performance-deep-learning-library. pdf. Attributes as operators: factorizing unseen attributeobject compositions. Tushar Nagarajan, Kristen Grauman, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Tushar Nagarajan and Kristen Grauman. Attributes as operators: factorizing unseen attribute- object compositions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 169-185, 2018. Fast graph representation learning with PyTorch Geometric. Matthias Fey, Jan E Lenssen, ICLR Workshop on Representation Learning on Graphs and Manifolds. Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. The HSIC bottleneck: Deep learning without back-propagation. 
Kurt Wan-Duo Ma, J P Lewis, W Bastiaan Kleijn, The Thirty-Second Innovative Applications of Artificial Intelligence Conference. New York, NY, USAAAAI Press2020The Tenth AAAI Symposium on Educational Advances in Artificial IntelligenceKurt Wan-Duo Ma, J. P. Lewis, and W. Bastiaan Kleijn. The HSIC bottleneck: Deep learning without back-propagation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 5085-5092. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/5950. Auto-Encoding Variational Bayes. P Diederik, Max Kingma, Welling, 2nd International Conference on Learning Representations. Banff, AB, CanadaConference Track ProceedingsDiederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Discovering states and transformations in image collections. Phillip Isola, J Joseph, Edward H Lim, Adelson, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionPhillip Isola, Joseph J Lim, and Edward H Adelson. Discovering states and transformations in image collections. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1383-1391, 2015. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, Lawrence Zitnick, Ross Girshick, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
the IEEE Conference on Computer Vision and Pattern RecognitionJustin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901-2910, 2017. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, Fei Sha, European conference on computer vision. SpringerWei-Lun Chao, Soravit Changpinyo, Boqing Gong, and Fei Sha. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In European conference on computer vision, pages 52-68. Springer, 2016. Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, Advances in Neural Information Processing Systems. 26Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed repre- sentations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26:3111-3119, 2013. Visualizing data using t-sne. Laurens Van Der Maaten, Geoffrey Hinton, Journal of machine learning research. 911Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008. Attribute prototype network for zero-shot learning. Wenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, Zeynep Akata, 34th Conference on Neural Information Processing Systems. Curran Associates, IncWenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, and Zeynep Akata. Attribute prototype network for zero-shot learning. In 34th Conference on Neural Information Processing Systems. Curran Associates, Inc., 2020. A generative adversarial approach for zero-shot learning from noisy texts. 
Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, Ahmed Elgammal, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionYizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, and Ahmed Elgammal. A generative adversarial approach for zero-shot learning from noisy texts. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1004-1013, 2018.
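As a supplement to the HSIC estimator described in Appendix A.2, the following is a minimal numerical sketch (ours, not from the paper) of the biased empirical HSIC with Gaussian kernels and the median-distance bandwidth heuristic of Gretton et al. [25]; the function names are our own.

```python
import numpy as np

def gaussian_gram(X, sigma):
    # k_ij = exp(-sigma^-2 ||x_i - x_j||^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / sigma ** 2)

def median_bandwidth(X):
    # kernel size = median pairwise distance between points
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    med = np.median(d[np.triu_indices_from(d, k=1)])
    return med if med > 0 else 1.0

def hsic_biased(U, V):
    # HSIC(U, V) = (m - 1)^-2 tr(K H L H), with H = I - (1/m) 1 1^T
    m = U.shape[0]
    K = gaussian_gram(U, median_bandwidth(U))
    L = gaussian_gram(V, median_bandwidth(V))
    H = np.eye(m) - np.ones((m, m)) / m
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2

rng = np.random.default_rng(0)
u = rng.normal(size=(200, 1))
dependent = hsic_biased(u, u ** 2)                    # strongly dependent pair
independent = hsic_biased(u, rng.normal(size=(200, 1)))  # independent pair
assert dependent > independent >= 0.0
```

Because HKH and HLH are positive semidefinite, the estimate is nonnegative; dependent pairs yield markedly larger values than independent ones at moderate sample sizes, consistent with the O(m^-1) bias remark above.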
SYMMETRIC AUTOMORPHISMS OF FREE GROUPS, BNSR-INVARIANTS, AND FINITENESS PROPERTIES

Matthew C. B. Zaremsky

Abstract. The BNSR-invariants of a group G are a sequence Σ 1 (G) ⊇ Σ 2 (G) ⊇ · · · of geometric invariants that reveal important information about finiteness properties of certain subgroups of G. We consider the symmetric automorphism group ΣAut n and pure symmetric automorphism group PΣAut n of the free group F n , and inspect their BNSR-invariants. We prove that for n ≥ 2, all the "positive" and "negative" character classes of PΣAut n lie in Σ n−2 (PΣAut n ) \ Σ n−1 (PΣAut n ). We use this to prove that for n ≥ 2, Σ n−2 (ΣAut n ) equals the full character sphere S 0 of ΣAut n but Σ n−1 (ΣAut n ) is empty, so in particular the commutator subgroup ΣAut ′ n is of type F n−2 but not F n−1 . Our techniques involve applying Morse theory to the complex of symmetric marked cactus graphs.

DOI: 10.1307/mmj/1516330971
arXiv: 1607.03043
PDF: https://arxiv.org/pdf/1607.03043v1.pdf
arXiv:1607.03043v1 [math.GR] 11 Jul 2016

Introduction

The Bieri-Neumann-Strebel-Renz (BNSR) invariants Σ m (G) of a group G are a sequence of geometric invariants Σ 1 (G) ⊇ Σ 2 (G) ⊇ · · · that encode a large amount of information about the subgroups of G containing the commutator subgroup G ′ . For example if G is of type F n and m ≤ n then Σ m (G) reveals precisely which such subgroups are of type F m . Recall that a group is of type F n if it admits a classifying space with compact n-skeleton; these finiteness properties are an important class of quasi-isometry invariants of groups. The BNSR-invariants are in general very difficult to compute; a complete description is known for the class of right-angled Artin groups [MMV98, BG99], but not many other substantial families of groups.
A complete picture also exists for the generalized Thompson groups F n,∞ [BGK10, Koc12, Zar15], and the first invariant Σ 1 is also known for some additional classes of groups, e.g., one-relator groups [Bro87], pure braid groups [KMM15] and pure symmetric automorphism groups of right-angled Artin groups [OK00, KP14], among others. In this paper, we focus on the groups ΣAut n and PΣAut n of symmetric and pure symmetric automorphisms of the free group F n . An automorphism of F n is symmetric if it takes each basis element to a conjugate of a basis element, and pure symmetric if it takes each basis element to a conjugate of itself. These are also known as the (pure) loop braid groups, and are the groups of motions of n unknotted unlinked oriented loops in 3-space; an element describes these loops moving around and through each other, ending up back where they started, either individually in the pure case or just as a set in the non-pure case (but preserving orientation in both cases). Other names for these and closely related groups include welded braid groups, permutation-conjugacy automorphism groups, braid-permutation groups and more. See [Dam16] for a discussion of the many guises of these groups. Some topological properties known for PΣAut n include that it has cohomological dimension n − 1 [Col89], it is a duality group [BMMM01] and its cohomology ring has been computed [JMM06]. The first invariant Σ 1 (PΣAut n ) was fully computed by Orlandi-Korner [OK00] (she denotes the group by P Σ n ).
One reason that the question of BNSR-invariants is interesting for PΣAut n is that PΣAut n is similar to a right-angled Artin group, for instance it admits a presentation in which the relations are all commutators (see Section 2), but for n ≥ 3 it is not a right-angled Artin group [KP14], and it is not known whether it is a CAT(0) group [BMMM01,Question 6.4]. The BNSR-invariants are completely known for right-angled Artin groups, but the Morse theoretic proof of this fact in [BG99] made essential use of the CAT(0) geometry of the relevant complexes. Our approach here is to use Morse theory applied to the complex of symmetric marked cactus graphs ΣK n to prove the following main results: Theorem A. For n ≥ 2, if χ is a positive or negative 1 character of PΣAut n then [χ] ∈ Σ n−2 (PΣAut n ) \ Σ n−1 (PΣAut n ). Theorem B. For n ≥ 2, we have Σ n−2 (ΣAut n ) = S(ΣAut n ) = S 0 and Σ n−1 (ΣAut n ) = ∅. In particular the commutator subgroup ΣAut ′ n is of type F n−2 but not F n−1 . For example this shows that ΣAut ′ n is finitely generated if and only if n ≥ 3, and finitely presentable if and only if n ≥ 4. It appears that these are already new results (except for the fact that ΣAut ′ 2 is not finitely generated, which is easy to see since ΣAut 2 ∼ = F 2 ⋊ S 2 ). Theorem B also provides what could be viewed as the first examples for m ≥ 2 of "naturally occurring" groups G of type F ∞ such that Σ m−1 (G) = S(G) but Σ m (G) = ∅, and of groups of type F ∞ whose commutator subgroups have arbitrary finiteness properties. (One can also construct more ad hoc examples: we have noticed that taking a semidirect product of F n 2 with the Coxeter group of type B n = C n also produces a group with these properties.) As a remark, contrasting the loop braid group ΣAut n with the classical braid group B n , it is easy to see that Σ m (B n ) = S(B n ) = S 0 for all m and n, and B ′ n is of type F ∞ for all n. 
For the case n = 3 we can actually get a full computation of Σ m (PΣAut 3 ); Orlandi-Korner already computed Σ 1 (PΣAut 3 ) (it is dense in S(PΣAut 3 ); see Citation 2.2), and we prove that Σ 2 (PΣAut 3 ) = ∅ (see Theorem 4.23). We tentatively conjecture that Σ n−2 (PΣAut n ) is always dense in S(PΣAut n ) and Σ n−1 (PΣAut n ) is always empty, but for n ≥ 4 it seems this cannot be proved using our techniques, as discussed in Remark 4.24.

As a remark, there is a result of Pettet involving finiteness properties of some other normal subgroups of PΣAut n . Namely she found that the kernel of the natural projection PΣAut n → PΣAut n−1 is finitely generated but not finitely presentable when n ≥ 3 [Pet10]. This is in contrast to the pure braid situation, where the kernel of the "forget a strand" map P B n → P B n−1 is of type F ∞ (in fact it is the free group F n−1 ).

This paper is organized as follows. In Section 1 we recall the background on BNSR-invariants and Morse theory. In Section 2 we discuss the groups of interest, and in Section 3 we discuss the complex ΣK n . We prove Theorem A in Section 4, and with Theorem A in hand we quickly prove Theorem B in Section 5.

Acknowledgments. I am grateful to Alex Schaefer for a helpful conversation about graph theory that in particular helped me figure out how to prove that Σ 2 (PΣAut 3 ) = ∅ (see Subsection 4.3), to Celeste Damiani for pointing me toward the paper [Sav96], and to Robert Bieri for enlightening discussions about the novelty of the behavior of the BNSR-invariants found in Theorem B.

1. BNSR-invariants and Morse theory

In this rather technical section we recall the definition of the BNSR-invariants, and set up the Morse theoretic approach that we will use. The results in Subsection 1.3 are general enough that we expect they should be useful in the future to compute BNSR-invariants of other interesting groups.

1.1. BNSR-invariants.
A CW-complex Z is called a classifying space for G, or K(G, 1), if π 1 (Z) ∼ = G and π k (Z) = 0 for all k ≠ 1. We say that G is of type F n if it admits a K(G, 1) with compact n-skeleton. For example G is of type F 1 if and only if it is finitely generated and of type F 2 if and only if it is finitely presentable. If G is of type F n for all n we say it is of type F ∞ . If G acts properly and cocompactly on an (n − 1)-connected CW-complex, then G is of type F n .

Definition 1.1 (BNSR-invariants). Let G be a group acting properly and cocompactly on an (n − 1)-connected CW-complex Y (so G is of type F n ). Let χ : G → R be a character of G, i.e., a homomorphism to R. There exists a map h χ : Y → R, which we will call a character height function, such that h χ (g.y) = χ(g) + h χ (y) for all g ∈ G and y ∈ Y . For t ∈ R let Y χ≥t be the full subcomplex of Y supported on those 0-cells y with h χ (y) ≥ t. Let [χ] be the equivalence class of χ under scaling by positive real numbers. The character sphere S(G) is the set of non-trivial character classes [χ]. For m ≤ n, the mth BNSR-invariant Σ m (G) is defined to be

Σ m (G) := {[χ] ∈ S(G) | (Y χ≥t ) t∈R is essentially (m − 1)-connected}.

Recall that (Y χ≥t ) t∈R is said to be essentially (m − 1)-connected if for all t ∈ R there exists −∞ < s ≤ t such that the inclusion of Y χ≥t into Y χ≥s induces the trivial map in π k for all k ≤ m − 1. It turns out Σ m (G) is well defined up to the choice of Y and h χ (see for example [Bux04, Definition 8.1]). As a remark, the definition there used the filtration by sets (h χ −1 ([t, ∞))) t∈R , but thanks to cocompactness this filtration is essentially (m − 1)-connected if and only if our filtration (Y χ≥t ) t∈R is.

One important application of BNSR-invariants is the following:

Citation 1.2 (Bieri-Renz). Let G be a group of type F m and let H be a subgroup of G containing the commutator subgroup G ′ . Then H is of type F m if and only if [χ] ∈ Σ m (G) for every character χ of G with χ(H) = 0.

1.2. Morse theory. The star st Y v of a 0-cell v in Y is the subcomplex of Y consisting of cells that are faces of cells containing v. The link lk Y v of v is the simplicial complex of directions out of v into st Y v.
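To ground the definitions above, here is a toy illustration (ours, not from the paper) in the simplest possible case:

```latex
% Toy illustration of the BNSR-invariant definition (ours, not from the paper).
Let $G = \mathbb{Z}$ act on $Y = \mathbb{R}$ by integer translations, with
$0$-cells at the integers, and let $\chi = \operatorname{id}_{\mathbb{Z}}$.
Then $h_\chi = \operatorname{id}_{\mathbb{R}}$ is a character height function:
\[
  h_\chi(g.y) \;=\; g + y \;=\; \chi(g) + h_\chi(y).
\]
For every $t \in \mathbb{R}$ the subcomplex
\[
  Y_{\chi \ge t} \;=\; [\lceil t \rceil, \infty)
\]
is contractible, so the filtration $(Y_{\chi \ge t})_{t \in \mathbb{R}}$ is
essentially $(m-1)$-connected for every $m$, giving
$[\chi] \in \Sigma^m(\mathbb{Z})$. The same argument applies to $-\chi$, so
$\Sigma^m(\mathbb{Z}) = S(\mathbb{Z}) = S^0$ for all $m$.
```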
We will suppress the subscript Y from the notation when it is clear from context. If v and w are distinct 0-cells sharing a 1-cell we will call v and w adjacent and write v adj w. In [BB97], Bestvina and Brady defined a Morse function on an affine cell complex Y to be a map Y → R that is affine on cells, takes discretely many values on the 0-cells, and is non-constant on 1-cells. When using Morse theory to compute BNSR-invariants though, these last two conditions are often too restrictive. The definition of Morse function that will prove useful for our purposes is as follows.

Definition 1.3 (Morse function). Let Y be an affine cell complex and let h : Y → R and f : Y → R be functions that are affine on cells. We call (h, f ) : Y → R × R a Morse function if the set {h(v) − h(w) | v, w ∈ Y (0) , v adj w} does not have 0 as a limit point (we will call it discrete near 0), the set {f (v) | v ∈ Y (0) } is finite, and if v, w ∈ Y (0) with v adj w and h(v) = h(w) then f (v) ≠ f (w).

For example if h takes discrete values on 0-cells and distinct values on adjacent 0-cells, then (taking f to be constant and ignoring it) we recover Bestvina and Brady's notion of "Morse function". Using the usual order on R and the lexicographic order on R × R, it makes sense to compare (h, f ) values of 0-cells. On a given cell c, since h and f are affine on c it is clear that (h, f ) achieves its maximum and minimum values at unique faces of c, and the last assumption in Definition 1.3 ensures these will be 0-cells.

Definition 1.4 (Ascending star/link). Let (h, f ) be a Morse function on an affine cell complex Y . For a 0-cell v, the ascending star st ↑ v is the subcomplex of st v consisting of those cells c (together with their faces) such that (h, f ) achieves its minimum value on c at v. The ascending link lk ↑ v is the link of v in st ↑ v.

For Y an affine cell complex, (h, f ) a Morse function on Y and t ∈ R, denote by Y h≥t the subcomplex of Y supported on those 0-cells v with h(v) ≥ t.

Lemma 1.5 (Morse Lemma). Let Y be an affine cell complex and (h, f ) : Y → R × R a Morse function. Let t ∈ R and s ∈ [−∞, t). If for all 0-cells v with h(v) ∈ [s, t) the ascending link lk ↑ v is (m − 1)-connected, then the inclusion Y h≥t → Y h≥s induces an isomorphism in π k for k ≤ m − 1 and an epimorphism in π m .

Proof.
The essential parts of the proof are the same as in [BB97]. Choose ε > 0 such that for any v adj w, |h(v) − h(w)| ∉ (0, ε) (this is possible since the set of values h(v) − h(w) for v adj w is discrete near 0). We can assume by induction (and by compactness of spheres if s = −∞) that t − s ≤ ε. In particular if adjacent 0-cells v and w both lie in Y h≥s \ Y h≥t , then h(v) = h(w) and f (v) ≠ f (w). To build up from Y h≥t to Y h≥s , we need to glue in the 0-cells of Y h≥s \ Y h≥t along their relative links in some order such that upon gluing in v, all of lk ↑ v is already present, but nothing else in lk v, so the relative link is precisely the ascending link. To do this, we put any order we like on each set F i := {v ∈ Y (0) h≥s \ Y (0) h≥t | f (v) = i} for i ∈ f (Y (0) ), and then extend these to an order on the 0-cells of Y h≥s \ Y h≥t by declaring that everything in F i comes after everything in F j whenever i < j. Now when we glue in v, for w ∈ lk v we have w ∈ lk ↑ v if and only if either h(w) > h(v), in which case h(w) ≥ t and w is already present, or h(w) = h(v) and f (w) > f (v), in which case w ∈ F f (w) is also already present. Since the relevant ascending links are (m − 1)-connected by assumption, the result follows from the Seifert-van Kampen, Mayer-Vietoris and Hurewicz Theorems.

As a corollary to the proof, we have:

Corollary 1.6. With the setup of Lemma 1.5, if additionally for all 0-cells v with s ≤ h(v) < t we have H m+1 (lk ↑ v) = 0, then the inclusion Y h≥t → Y h≥s induces an injection in H m+1 .

Proof. In the proof of the Morse Lemma, we saw that Y h≥s is obtained from Y h≥t by coning off the ascending links of 0-cells v with s ≤ h(v) < t, so this is immediate from the Mayer-Vietoris sequence.

For example if Y is (m + 1)-dimensional, so the links are at most m-dimensional, then this additional condition will always be satisfied. When dealing with BNSR-invariants, the following is particularly useful:

Corollary 1.7. Let Y be an (m − 1)-connected affine cell complex with a Morse function (h, f ).
Suppose there exists q such that, for every 0-cell v of Y with h(v) < q, lk ↑ v is (m − 1)-connected. Then the filtration (Y h≥t ) t∈R is essentially (m − 1)-connected. Now assume additionally that H m+1 (Y ) = 0 and for every 0-cell v of Y with h(v) < q, H m+1 (lk ↑ v) = 0, and that for all p there exists a 0-cell v with h(v) < p such that H m (lk ↑ v) ≠ 0. Then the filtration (Y h≥t ) t∈R is not essentially m-connected.

Proof. By the Morse Lemma, for any r ≤ q the inclusion Y h≥r → Y = Y h≥−∞ induces an isomorphism in π k for k ≤ m − 1. Since Y is (m − 1)-connected, so is Y h≥r . Now for any t ∈ R we just need to choose s = min{q, t} and we get that the inclusion Y h≥t → Y h≥s induces the trivial map in π k for k ≤ m − 1, simply because Y h≥s is (m − 1)-connected.

For the second claim, suppose that (Y h≥t ) t∈R is essentially m-connected. Say t < q, and choose s ≤ t such that the inclusion Y h≥t → Y h≥s induces the trivial map in π k for k ≤ m. Also, since t < q, this inclusion induces a surjection in these π k by the Morse Lemma, so in fact Y h≥s itself is m-connected, as are all Y h≥r for r ≤ s (for the same reason). Now choose v such that h(v) < s and H m (lk ↑ v) ≠ 0. Since H m (Y h≥r ) = 0 for all r ≤ s, Mayer-Vietoris and Corollary 1.6 say that H m+1 (Y h≥q ) ≠ 0 for any q ≤ h(v). But this includes q = −∞, which contradicts our assumption that H m+1 (Y ) = 0.

1.3. BNSR-invariants via Morse theory. We now return to the situation in Definition 1.1, so Y is an (n − 1)-connected CW-complex on which G acts properly and cocompactly (and, we assume, cellularly), χ is a character of G, and h χ is a character height function on Y . The goal of this subsection is to establish a Morse function on Y using h χ . Let us make two additional assumptions. First, assume Y is simplicial (this is just to ensure that any function on Y (0) can be extended to a function on Y that is affine on cells).
Second, assume that no adjacent 0-simplices in Y share a G-orbit (if this is not the case, it can be achieved by subdividing). Let f̄ : Y^{(0)}/G → R be any function that takes distinct values on adjacent 0-cells, where the cell structure on Y/G is induced from Y. (Just to give some examples, one could construct f̄ by randomly assigning distinct values to the 0-cells in Y/G, or one could take the barycentric subdivision and have f̄ read the dimension.) Define f : Y^{(0)} → R via f(v) := f̄(G.v), and extend f to a map (also called f) on all of Y by extending affinely to each simplex.

Lemma 1.8. With Y, h_χ and f as above, (h_χ, f) : Y → R × R is a Morse function.

Proof. The functions h_χ and f are affine on cells by construction. The set {f(v) | v ∈ Y^{(0)}} equals the set {f̄(G.v) | G.v ∈ Y^{(0)}/G}, which is finite since Y/G is compact. For any g ∈ G we have h_χ(g.v) − h_χ(g.w) = χ(g) + h_χ(v) − χ(g) − h_χ(w) = h_χ(v) − h_χ(w), so by compactness of Y/G the set {h_χ(v) − h_χ(w) | v, w ∈ Y^{(0)}, v adj w} is finite (and hence discrete near 0). Finally, since f̄ takes distinct values on adjacent 0-cells in Y/G, and no adjacent 0-cells in Y share an orbit, we see f takes distinct values on adjacent 0-cells in Y.

In particular Corollary 1.7 can now potentially be used to prove that (Y_{χ≥t})_{t∈R} is or is not essentially (m − 1)-connected, and hence that [χ] is or is not in Σ^m(G). While any f constructed as above will make (h_χ, f) a Morse function, this does not mean every f may be useful, for instance if the ascending links are not as highly connected as one would hope. In fact it seems likely that situations exist where every choice of f yields a "useless" Morse function. Hence, in practice one hopes to have a concrete space Y with a natural choice of f that produces nice ascending links.

2. (Pure) symmetric automorphism groups

We now turn to our groups of interest. Let F_n be the free group with basis S := {x_1, . . . , x_n}.
An automorphism α ∈ Aut(F_n) is called symmetric if for each i ∈ [n] := {1, . . . , n}, (x_i)α is conjugate to x_j for some j; if each (x_i)α is conjugate to x_i we call α pure symmetric. (Automorphisms will be acting on the right here, and we reflect this in the notation.) Note that in some texts, "symmetric" allows for (x_i)α to be conjugate to some x_j^{-1}, but we do not allow that here. Denote by ΣAut_n the group of all symmetric automorphisms of F_n, and by PΣAut_n the group of pure symmetric automorphisms. The abelianization F_n → Z^n induces a surjection Aut(F_n) → GL_n(Z), and the restriction of this map to ΣAut_n yields a splitting ΣAut_n ≅ PΣAut_n ⋊ S_n. An equivalent description of ΣAut_n (and PΣAut_n) is as the (pure) loop braid group, i.e., the group of (pure) motions of n unknotted, unlinked oriented circles in 3-space. The subgroup of ΣAut_n consisting of those automorphisms taking x_1 · · · x_n to itself is isomorphic to the classical braid group B_n, and the intersection of this with PΣAut_n is the classical pure braid group PB_n [Sav96]. Other names for ΣAut_n and closely related groups include welded braid groups, permutation-conjugacy automorphism groups, braid-permutation groups and more. Details on the various viewpoints for these groups can be found for example in [Dam16]. In [McC86], McCool found a (finite) presentation for PΣAut_n. The generators are the automorphisms α_{i,j} (i ≠ j) given by (x_i)α_{i,j} = x_j^{-1} x_i x_j and (x_k)α_{i,j} = x_k for k ≠ i, and the defining relations are [α_{i,j}, α_{k,ℓ}] = 1, [α_{i,j}, α_{k,j}] = 1 and [α_{i,j} α_{k,j}, α_{i,k}] = 1, for distinct i, j, k, ℓ. (In particular note that this implies PΣAut_2 ≅ F_2.) It will also be convenient later to consider automorphisms α_{I,j} for I ⊆ [n] \ {j}, defined via α_{I,j} := ∏_{i∈I} α_{i,j}, where the product can be taken in any order thanks to the relation [α_{i,j}, α_{k,j}] = 1.
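McCool's defining relations can be spot-checked by direct computation with reduced words. The following sketch is ours, not from [McC86]: it realizes each generator α_{i,j} as a map on words in F_4 (a word is a tuple of nonzero integers, with g > 0 meaning x_g and g < 0 meaning x_g^{-1}) and tests each relation family on the basis.

```python
# Sketch (ours): verify McCool's relations for the generators
# alpha_{i,j} of PSigmaAut_n by computing with reduced words in F_4.

def reduce_word(letters):
    """Freely reduce a word, cancelling adjacent inverse pairs."""
    out = []
    for g in letters:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def invert(w):
    return tuple(-g for g in reversed(w))

def alpha(i, j):
    """McCool generator: x_i -> x_j^{-1} x_i x_j, all other x_k fixed."""
    def act(w):
        out = []
        for g in w:
            if abs(g) == i:
                img = (-j, i, j)
                out.extend(img if g > 0 else invert(img))
            else:
                out.append(g)
        return reduce_word(out)
    return act

def compose(a, b):
    """Right action: the product a*b acts by applying a, then b."""
    return lambda w: b(a(w))

def equal_auts(a, b, n=4):
    """Automorphisms of F_n agree iff they agree on the basis."""
    return all(a((x,)) == b((x,)) for x in range(1, n + 1))

# The three families of defining relations, for distinct i, j, k, l:
i, j, k, l = 1, 2, 3, 4
r1 = equal_auts(compose(alpha(i, j), alpha(k, l)),
                compose(alpha(k, l), alpha(i, j)))
r2 = equal_auts(compose(alpha(i, j), alpha(k, j)),
                compose(alpha(k, j), alpha(i, j)))
prod = compose(alpha(i, j), alpha(k, j))
r3 = equal_auts(compose(prod, alpha(i, k)),
                compose(alpha(i, k), prod))
print(r1, r2, r3)  # True True True
```

Since the relations are commutators, each check reduces to comparing the two orders of composition on the basis elements.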
Following Collins [Col89] we call these symmetric Whitehead automorphisms. Since the defining relations in McCool's presentation are commutators, we immediately see that PΣAut_n has abelianization Z^{n(n−1)}, with basis (the images of) {α_{i,j} | i ≠ j}. Since S_n acts transitively on the α_{i,j}, we also quickly compute that ΣAut_n abelianizes to Z × (Z/2Z) for all n ≥ 2. A natural basis for the vector space Hom(PΣAut_n, R) ≅ R^{n(n−1)} is the dual of {α_{i,j} | i ≠ j}. This dual basis has a nice description that we will now work up to. For α ∈ PΣAut_n let w_{i,α} ∈ F_n be the elements such that (x_i)α = w_{i,α}^{-1} x_i w_{i,α}. For each i ≠ j define χ_{i,j} : PΣAut_n → Z by sending α to φ_j(w_{i,α}), where φ_j : F_n → Z is the projection sending x_j to 1 and the other generators to 0.

Lemma 2.1. Each χ_{i,j} is a homomorphism.

Proof. Let α, β ∈ PΣAut_n and i ∈ [n]. Write w_{i,α} = x_{k_1}^{ε_1} · · · x_{k_r}^{ε_r} for k_1, . . . , k_r ∈ [n] and ε_1, . . . , ε_r ∈ {±1}, so we have

(x_i)α•β = (x_{k_r}^{−ε_r})β · · · (x_{k_1}^{−ε_1})β w_{i,β}^{-1} x_i w_{i,β} (x_{k_1}^{ε_1})β · · · (x_{k_r}^{ε_r})β.

In particular w_{i,α•β} = w_{i,β} (x_{k_1}^{ε_1})β · · · (x_{k_r}^{ε_r})β. Note that φ_j((x_{k_1}^{ε_1})β · · · (x_{k_r}^{ε_r})β) = φ_j(x_{k_1}^{ε_1} · · · x_{k_r}^{ε_r}), so φ_j(w_{i,α•β}) = φ_j(w_{i,α}) + φ_j(w_{i,β}), as desired.

Clearly χ_{i,j}(α_{k,ℓ}) = δ_{(i,j),(k,ℓ)} (the Kronecker delta), so {χ_{i,j} | i ≠ j} is the basis of Hom(PΣAut_n, R) dual to {α_{i,j} | i ≠ j}. Since Hom(PΣAut_n, R) ≅ R^{n(n−1)} we know that the character sphere S(PΣAut_n) is S(PΣAut_n) = S^{n(n−1)−1}. For the group ΣAut_n, Hom(ΣAut_n, R) ≅ R for all n, so to find a basis we just need a non-trivial character. We know ΣAut_n = PΣAut_n ⋊ S_n, so the most natural candidate is the character reading 1 on each α_{i,j} and 0 on S_n. Note that S(ΣAut_n) = S^0 for all n ≥ 2. Writing an arbitrary character of PΣAut_n as χ = ∑_{i≠j} a_{i,j} χ_{i,j}, we recall Orlandi-Korner's computation of Σ^1(PΣAut_n):

Citation 2.2.
[OK00] We have [χ] ∈ Σ^1(PΣAut_n) unless either (i) there exist distinct i and j such that a_{p,q} = 0 whenever {p, q} ⊈ {i, j}, or (ii) there exist distinct i, j and k such that a_{p,q} = 0 whenever {p, q} ⊈ {i, j, k} and moreover a_{p,q} = −a_{p′,q} whenever {p, p′, q} = {i, j, k}. In these cases [χ] ∉ Σ^1(PΣAut_n).

For example, Σ^1(PΣAut_2) is empty (which we know anyway since PΣAut_2 ≅ F_2) and Σ^1(PΣAut_3) is a 5-sphere with three 1-spheres and one 2-sphere removed, so in particular Σ^1(PΣAut_3) is dense in S(PΣAut_3). The groups ΣAut_n and PΣAut_n are of type F_∞ (this can be seen for example after work of Collins [Col89]), so one can ask what Σ^m(ΣAut_n) and Σ^m(PΣAut_n) are, for any m and n. One thing we know, which we will use later, is that the invariants are all closed under taking antipodes:

Observation 2.3. If [χ] ∈ Σ^m(G) for G = ΣAut_n or PΣAut_n then [−χ] ∈ Σ^m(G).

Proof. The automorphism of F_n taking each x_i to x_i^{-1} induces an automorphism of ΣAut_n and PΣAut_n under which each character χ maps to −χ. The result now follows since Σ^m(G) is invariant under Aut(G).

We can now state our main results.

Definition 2.4 (Positive/negative character). Call χ = ∑_{i≠j} a_{i,j} χ_{i,j} positive if a_{i,j} > 0 for all i, j, and negative if a_{i,j} < 0 for all i, j.

Theorem A. For n ≥ 2, if χ is a positive or negative character of PΣAut_n then [χ] ∈ Σ^{n−2}(PΣAut_n) \ Σ^{n−1}(PΣAut_n).

As a remark, thanks to Observation 2.3 the negative character classes lie in a given Σ^m(PΣAut_n) if and only if the positive ones do, so we only need to prove Theorem A for positive characters.

Theorem B. For n ≥ 2, we have Σ^{n−2}(ΣAut_n) = S(ΣAut_n) = S^0 and Σ^{n−1}(ΣAut_n) = ∅. In particular the commutator subgroup ΣAut′_n is of type F_{n−2} but not F_{n−1}.

The commutator subgroups ΣAut′_n and PΣAut′_n are easy to describe; see for example Lemmas 4 and 5 of [Sav96].
The commutator subgroup PΣAut′_n consists of those automorphisms taking each x_i to w_i^{-1} x_i w_i for some w_i ∈ F′_n. In other words, PΣAut′_n is just the intersection of all the ker(χ_{i,j}). Note that for n ≥ 2 the commutator subgroup PΣAut′_n is not finitely generated, since it surjects onto PΣAut′_2 ≅ F′_2. The commutator subgroup ΣAut′_n consists of those automorphisms taking x_i to w_i^{-1} x_{π(i)} w_i for some even permutation π ∈ S_n (i.e., π ∈ A_n) and satisfying φ(w_1 · · · w_n) = 0, where φ : F_n → Z is the map taking each basis element to 1. As a remark, the abelianization map ΣAut_n → Z splits, for instance by sending 1 ∈ Z to α_{1,2}, so we have ΣAut_n = ΣAut′_n ⋊ Z. In Section 5 we will be able to deduce Theorem B from Theorem A quickly by using the next lemma. If we write BB_n for the kernel of the character ∑_{i≠j} χ_{i,j} of PΣAut_n taking each α_{i,j} to 1, so BB_n is the "Bestvina–Brady-esque" subgroup of PΣAut_n, then we have:

Lemma 2.5. ΣAut′_n = BB_n ⋊ A_n.

Proof. When we restrict the map ΣAut_n → S_n to ΣAut′_n, by the above description we know that the image is A_n. This map splits, and the kernel of this restricted map is the kernel of the original map, which is PΣAut_n, intersected with ΣAut′_n. The above description tells us that this consists of all pure symmetric automorphisms α such that φ(w_{1,α} · · · w_{n,α}) = 0, and from the definition of the χ_{i,j} it is clear that φ(w_{1,α} · · · w_{n,α}) = ∑_{i≠j} χ_{i,j}(α), so we are done.

In particular BB_n has finite index in ΣAut′_n, so they have the same finiteness properties. To prove Theorems A and B we need a complex on which the groups act nicely, and to understand ascending links. We discuss the complex in Section 3 and the ascending links in Section 4.
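Before moving on, the basic properties of the characters χ_{i,j} from Section 2 can be checked numerically. The sketch below is ours, not from the paper: it extracts the conjugating word w_{i,α} from the reduced image (x_i)α and verifies both the dual-basis identity χ_{i,j}(α_{k,ℓ}) = δ and the additivity of Lemma 2.1 on sample products of generators in PΣAut_3.

```python
# Sketch (ours): check that chi_{i,j}(alpha) := phi_j(w_{i,alpha}) is
# dual to the McCool generators and is a homomorphism (Lemma 2.1).
# Words in F_n are tuples of nonzero ints; g > 0 means x_g.

def reduce_word(letters):
    out = []
    for g in letters:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def invert(w):
    return tuple(-g for g in reversed(w))

def alpha(i, j):
    """McCool generator x_i -> x_j^{-1} x_i x_j."""
    def act(w):
        out = []
        for g in w:
            if abs(g) == i:
                img = (-j, i, j)
                out.extend(img if g > 0 else invert(img))
            else:
                out.append(g)
        return reduce_word(out)
    return act

def product(gens):
    """Right action: apply the factors left to right."""
    def act(w):
        for g in gens:
            w = g(w)
        return w
    return act

def conjugator(aut, i):
    """For pure symmetric aut, (x_i)aut reduces to w^{-1} x_i w with
    x_i exactly in the middle; return the tail w."""
    v = aut((i,))
    mid = len(v) // 2
    assert v[mid] == i
    return v[mid + 1:]

def chi(aut, i, j):
    """chi_{i,j}(aut) = phi_j(w_{i,aut}), the exponent sum of x_j."""
    return sum(1 if g == j else -1 if g == -j else 0
               for g in conjugator(aut, i))

n = 3
pairs = [(i, j) for i in range(1, n + 1)
         for j in range(1, n + 1) if i != j]

# Dual basis: chi_{i,j}(alpha_{k,l}) = delta_{(i,j),(k,l)}.
ok_dual = all(chi(alpha(k, l), i, j) == (1 if (i, j) == (k, l) else 0)
              for (i, j) in pairs for (k, l) in pairs)

# Lemma 2.1: chi_{i,j}(a*b) = chi_{i,j}(a) + chi_{i,j}(b).
a = product([alpha(2, 1), alpha(1, 3), alpha(2, 3)])
b = product([alpha(3, 1), alpha(2, 1)])
ab = product([a, b])
ok_hom = all(chi(ab, i, j) == chi(a, i, j) + chi(b, i, j)
             for (i, j) in pairs)
print(ok_dual, ok_hom)  # True True
```

Note that w_{i,α} is only well defined up to left multiplication by powers of x_i, but this does not affect φ_j for j ≠ i, which is all the χ_{i,j} use.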
3. The complex of symmetric marked cactus graphs

In [Col89], Collins found a contractible simplicial complex ΣK_n on which the "Outer" versions of ΣAut_n and PΣAut_n act properly and cocompactly, described by symmetric marked cactus graphs. We will use the obvious analog of this complex for our groups. Thanks to the action being proper and cocompact, and the complex being contractible, it can be used to "reveal" the BNSR-invariants of the groups, as per Definition 1.1. In this section we recall the construction of ΣK_n and set up the character height functions that will then be used in the following sections to prove our main results.

Terminology: By a graph we will always mean a connected finite directed graph with one vertex specified as the basepoint, such that the basepoint has degree at least two and all other vertices have degree at least three. We will use the usual terminology of initial and terminal endpoints of an edge, paths, cycles, reduced paths, simple cycles, subtrees, subforests and spanning trees. Our graphs will always be understood to have rank n, unless otherwise specified. Let R_n be the n-petaled rose, that is the graph with one vertex * (which is necessarily the basepoint) and n edges. Then π_1(R_n) ≅ F_n, and we identify Aut(F_n) with the group of basepoint-preserving self-homotopy equivalences of R_n, modulo homotopy.

Definition 3.1 (Cactus graph, cladode, base, above, projection, before/after, between). A graph Γ is called a cactus graph if every edge is contained in precisely one simple cycle. We will refer to the simple cycles, viewed as subgraphs, as cladodes. For example the petals of the rose R_n are precisely its cladodes. We will assume the orientations of the edges are such that each cladode is a directed cycle, that is, no distinct edges of a cladode share an origin (or terminus). If C is a cladode of a cactus graph Γ with basepoint p, there is a unique vertex b_C of C closest to p, which we call the base of C.
Note that every vertex is the base of at least one cladode. We say that a cladode C is above a cladode D if every path from b_C to p must pass through an edge of D. If C is above D there is a unique vertex proj_D(C) of D closest to b_C, which we will call the projection of C onto D. Given two distinct points x and y in a common cladode C, with x, y ≠ b_C, there is a unique reduced path from x to y in C \ {b_C}; if this path follows the orientation of C we say x is before y, and otherwise we say x is after y. Within C it also makes sense to say that an edge is before or after another edge, or that an edge is before or after a point not in the interior of that edge. We say a point or edge is between two points or edges if it is before one and after the other. See Figure 1 for an example illustrating the many definitions in Definition 3.1.

Figure 1. A cactus graph, with its cladodes numbered for reference.

To illustrate the definitions in Definition 3.1 with some examples, we note: cladodes 3 and 4 have the same base; cladode 7 is above cladodes 2 and 1 but no others; the projection of cladode 8 onto cladode 1 is the vertex that is the base of cladode 3; and the base of cladode 3 is after the base of cladode 2 and before the base of cladode 12, and hence is between them.

Given a graph Γ and a subforest F, we will write Γ/F for the graph obtained by quotienting each connected component of F to a point. The quotient map d : Γ → Γ/F is called a forest collapse or forest blow-down. It is a homotopy equivalence, and a homotopy inverse of a forest blow-down is called a forest blow-up, denoted u : Γ/F → Γ.

Definition 3.2 (Marking). A marking of a basepointed graph Γ is a homotopy equivalence ρ : R_n → Γ from the n-petaled rose to Γ, taking basepoint to basepoint. A marking of the rose itself represents an automorphism of F_n, thanks to our identification of Aut(F_n) with the group of basepoint-preserving self-homotopy equivalences of R_n, modulo homotopy.
If a marking α : R_n → R_n even represents a (pure) symmetric automorphism, then it makes sense to call the marking itself (pure) symmetric. More generally:

Definition 3.3 ((Pure) symmetric marking). A marking ρ : R_n → Γ is called (pure) symmetric if there exists a forest collapse d : Γ → R_n such that d∘ρ : R_n → R_n is (pure) symmetric.

Definition 3.4 (Symmetric marked cactus graph). A symmetric marked cactus graph is a triple (Γ, p, ρ) where Γ is a cactus graph with basepoint p and ρ is a symmetric marking. Two such triples (Γ, p, ρ), (Γ′, p′, ρ′) are considered equivalent if there is a homeomorphism φ : Γ → Γ′ taking p to p′ such that φ∘ρ ≃ ρ′. We will denote equivalence classes by [Γ, p, ρ], and will usually just refer to [Γ, p, ρ] as a symmetric marked cactus graph.

We note that under this equivalence relation, every symmetric marked cactus graph is equivalent to one where the marking is even pure symmetric. This is just because the markings of the rose that permute the petals are all equivalent to the trivial marking. Moreover, these are the only markings equivalent to the trivial marking, so the map α ↦ [R_n, *, α] is in fact a bijection between PΣAut_n and the set of symmetric marked roses.

Definition 3.5 (Partial order). We define a partial order ≤ on the set of symmetric marked cactus graphs as follows. Let [Γ, p, ρ] be a symmetric marked cactus graph and F a subforest of Γ, with d : Γ → Γ/F the forest collapse. Let p_F := d(p) and let ρ_F := d∘ρ. We declare that [Γ, p, ρ] ≤ [Γ/F, p_F, ρ_F]. It is easy to check that the relation ≤ is well defined up to equivalence of triples, and that it is a partial order.

Definition 3.6 (Complex of symmetric marked cactus graphs). The complex of symmetric marked cactus graphs ΣK_n is the geometric realization of the partially ordered set of symmetric marked cactus graphs.
Note that ΣAut_n and PΣAut_n act (on the right) on ΣK_n via [Γ, p, ρ].α := [Γ, p, ρ∘α]: since for any forest collapse d : Γ → Γ/F we have (d∘ρ)∘α = d∘(ρ∘α), i.e., ρ_F ∘ α = (ρ∘α)_F, this action is compatible with the partial order. As noted above, ΣK_n is contractible and the actions on it are proper and cocompact. (Technically Collins considers the "Outer" version where we do not keep track of basepoints, but it is straightforward to get these results also in our basepointed "Auter" version.) In particular, we have the requisite setup of Definition 1.1.

Remark 3.8. One can similarly consider the complex of all marked basepointed graphs, and get the well studied spine of Auter space, which is contractible and on which Aut(F_n) acts properly and cocompactly (see [CV86, HV98]). This is not relevant for our present purposes though, since the abelianization of Aut(F_n) is finite, and hence its character sphere is empty.

The next step is to take a character χ of PΣAut_n and induce a character height function h_χ on ΣK_n. First recall that we equivocate between symmetric markings of roses and elements of PΣAut_n. Hence for 0-simplices in ΣK_n of the form [R_n, *, α], we can just define h_χ([R_n, *, α]) := χ(α). In general we define h_χ([Γ, p, ρ]) to be the maximum of the values χ(α) over the symmetric marked roses [R_n, *, α] with [Γ, p, ρ] ≤ [R_n, *, α], that is, over the blow-downs of [Γ, p, ρ] along spanning trees. Extend this affinely to the simplices of ΣK_n, to get h_χ : ΣK_n → R. For any α ∈ PΣAut_n we then have h_χ([Γ, p, ρ].α) = h_χ([Γ, p, ρ]) + χ(α), so h_χ really is a character height function; this follows simply because χ(β•α) = χ(β) + χ(α) for all α, β ∈ PΣAut_n.

Now we need a "tiebreaker" function f as in Lemma 1.8. As discussed before and after that lemma, any randomly chosen injective f̄ : ΣK_n^{(0)}/G → R could serve to induce a tiebreaker f : ΣK_n → R, but we want to be more clever than this. In particular our tiebreaker will yield tractable ascending links that will actually reveal parts of the BNSR-invariants. The 0-cells in the orbit space ΣK_n/PΣAut_n are homeomorphism classes of cactus graphs, so "number of vertices" is a well defined measurement on these 0-cells. Let f̄ : ΣK_n^{(0)}/G → R be the function taking a graph to the negative of its number of vertices. In particular since we are using the negative, the rose has the largest f̄ value of all cactus graphs.
Let f : ΣK_n → R be the extension (as described before Lemma 1.8) of this vertex-counting function, so f([Γ, p, ρ]) equals the negative of the number of vertices of Γ, and consider the function (h_χ, f) : ΣK_n → R × R. Since ΣK_n is simplicial and adjacent 0-simplices in ΣK_n cannot share a PΣAut_n-orbit (for instance since they necessarily have different f values), the following is immediate from Lemma 1.8:

Corollary 3.11. For any χ, (h_χ, f) is a Morse function on ΣK_n.

It is clear from the definition of h_χ that ΣK_n^{h_χ≥t} is the union of the stars of those [R_n, *, α] with χ(α) ≥ t. It is a common phenomenon when working in Auter space and its relatives to encounter important subcomplexes that are unions of stars of marked roses. Another example arises in [BBM07], where Bestvina, Bux and Margalit use "homology markings" of roses to prove that for n ≥ 3 the kernel of Out(F_n) → GL_n(Z) has cohomological dimension 2n − 4 and is not of type F_{2n−4} (when n ≥ 4 it remains open whether or not this kernel is of type F_{2n−5}).

We record here a useful technical lemma that gives information on how h_χ can differ between "nearby" symmetric marked roses. Let [Γ, p, ρ] be a symmetric marked cactus graph and let T be a spanning tree in Γ. Since T is spanning, collapsing T yields a symmetric marked rose. The marking ρ provides the cladodes of Γ with a numbering from 1 to n; let C_{i,ρ} be the ith cladode. Since T is a spanning tree, it meets C_{i,ρ} at all but one edge; write E_{i,T} for the single-edge subforest of C_{i,ρ} that is not in T. In particular, intuitively, upon collapsing T, E_{i,T} becomes the ith petal of R_n. Note that T is completely determined by the set {E_{1,T}, . . . , E_{n,T}}, namely it consists of all the edges of Γ not in any E_{i,T}.

Lemma 3.12 (Change of spanning tree). Let [Γ, p, ρ] be a symmetric marked cactus graph and let T be a spanning tree in Γ. Suppose U is another spanning tree such that E_{j,T} ≠ E_{j,U} but E_{i,T} = E_{i,U} for all i ≠ j (so U differs from T only in the jth cladode).
Suppose that E_{j,T} is before E_{j,U} (in the language of Definition 3.1). Let ∅ ≠ I ⊆ [n] \ {j} be the set of indices i such that the projection proj_{C_{j,ρ}}(C_{i,ρ}) lies between E_{j,T} and E_{j,U} (so in particular j ∉ I). Then for any χ = ∑_{i≠j} a_{i,j} χ_{i,j} we have

h_χ([Γ/T, p_T, ρ_T]) − h_χ([Γ/U, p_U, ρ_U]) = ∑_{i∈I} a_{i,j}.

Proof. By collapsing the subforest T ∩ U we can assume without loss of generality that T = E_{j,U} and U = E_{j,T} are each a single edge, so Γ has two vertices, the basepoint p and another vertex q. The set I indexes those cladodes whose base is q, so [n] \ I indexes those cladodes whose base is p. Up to the action of PΣAut_n we can assume that Γ/U is the trivially marked rose, so we need to show that h_χ([Γ/T, p_T, ρ_T]) = ∑_{i∈I} a_{i,j}. In fact the procedure of blowing up the trivial rose to get Γ and then blowing down T is a Whitehead move (see [CV86, Section 3.1]) that corresponds to the symmetric Whitehead automorphism α_{I,j}. In other words, viewed as an element of PΣAut_n, we have ρ_T = α_{I,j}. This means that h_χ([Γ/T, p_T, ρ_T]) = χ(α_{I,j}) = ∑_{i∈I} a_{i,j}, as desired.

Ascending links: In the next section we will need to understand ascending links lk↑ v with respect to (h_χ, f), for v = [Γ, p, ρ] a 0-simplex in ΣK_n, so we discuss this a bit here. Since lk↑ v is a full subcomplex of lk v we just need to understand which 0-simplices of lk v lie in lk↑ v. First note that lk v is a join, of the down-link lk_d v, spanned by those 0-simplices of lk v obtained from forest blow-downs of Γ, and its up-link lk_u v, spanned by those 0-simplices of lk v corresponding to forest blow-ups of Γ. The ascending link lk↑ v similarly decomposes as the join of the ascending down-link lk↑_d v and ascending up-link lk↑_u v, which are just defined to be lk↑_d v := lk_d v ∩ lk↑ v and lk↑_u v := lk_u v ∩ lk↑ v.
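To make Lemma 3.12 concrete, here is the smallest possible instance; the graph and the labels are our own illustration, not from the paper.

```latex
% Worked example (ours): n = 2, with Gamma having two vertices, the
% basepoint p and one other vertex q.  Cladode C_1 is the 2-cycle with
% edges e : p -> q and e' : q -> p, and cladode C_2 is a loop at q.
% The two spanning trees are T = {e'} and U = {e}, so
%   E_{1,T} = e  (before)   and   E_{1,U} = e'  (after).
% The projection of C_2 onto C_1 is q, which lies between e and e',
% hence I = {2}.  Lemma 3.12 then gives
\[
  h_\chi([\Gamma/T, p_T, \rho_T]) - h_\chi([\Gamma/U, p_U, \rho_U])
  = \sum_{i \in I} a_{i,1} = a_{2,1}.
\]
% This matches the Whitehead-move description in the proof: if
% Gamma/U is the trivially marked rose, then blowing up Gamma and
% blowing down T realizes the symmetric Whitehead automorphism
\[
  \rho_T = \alpha_{\{2\},1} = \alpha_{2,1}, \qquad
  \chi(\alpha_{2,1}) = a_{2,1}.
\]
```

In particular the sign convention is visible here: collapsing the tree whose omitted edge comes earlier in the cladode yields the larger h_χ value exactly when a_{2,1} > 0.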
Since 0-simplices in lk_d v have larger f value than v (i.e., the graphs have fewer vertices than Γ) and cannot have strictly larger h_χ value, given a subforest F ⊆ Γ we see that [Γ/F, p_F, ρ_F] ∈ lk↑_d v if and only if h_χ([Γ/F, p_F, ρ_F]) ≥ h_χ([Γ, p, ρ]), if and only if h_χ([Γ/F, p_F, ρ_F]) = h_χ([Γ, p, ρ]). Similarly, a 0-simplex of lk_u v lies in lk↑_u v if and only if it has strictly larger h_χ value than v (since it has smaller f value).

4. Topology of ascending links

Throughout this section, χ is a non-trivial character of PΣAut_n with character height function h_χ on ΣK_n, and lk↑[Γ, p, ρ] means the ascending link of the 0-simplex [Γ, p, ρ] with respect to χ. We will analyze the topology of lk↑[Γ, p, ρ] = lk↑_d[Γ, p, ρ] * lk↑_u[Γ, p, ρ] one join factor at a time.

4.1. Ascending down-link.

Definition 4.1 (Complex of forests). The complex of forests F(Γ) for a graph Γ is the geometric realization of the partially ordered set of non-empty subforests of Γ, with partial order given by inclusion.

We will not really need to know the homotopy type of F(Γ) in what follows, but it is easy to compute so we record it here for good measure.

Lemma 4.2. For Γ a cactus graph with V vertices, F(Γ) ≃ S^{V−2}.

Proof. For each cladode C let F_C(Γ) be the complex of subforests of Γ contained in C. Clearly F_C(Γ) ≃ S^{V_C−2}, where V_C is the number of vertices of C. Since F(Γ) is the join of all the F_C(Γ), we get F(Γ) ≃ S^{d−1} for d = ∑_C (V_C − 1). Now, V_C − 1 is the number of non-base vertices of C, and every vertex of Γ except for the basepoint is a non-base vertex of a unique cladode, so ∑_C (V_C − 1) = V − 1.

Definition 4.3 (Ascending subforests). Call a subforest F of Γ ascending (with respect to χ) if [Γ/F, p_F, ρ_F] ∈ lk↑[Γ, p, ρ], and let F↑(Γ, p, ρ) be the subcomplex of F(Γ) spanned by those 0-simplices F such that [Γ/F, p_F, ρ_F] ∈ lk↑[Γ, p, ρ].

Observation 4.4. F↑(Γ, p, ρ) ≅ lk↑_d[Γ, p, ρ].

Proof. The isomorphism is given by F ↦ [Γ/F, p_F, ρ_F].

For certain characters, F↑(Γ, p, ρ) is guaranteed to be contractible. We call these characters decisive:

Definition 4.5 (Decisive). Call χ decisive if for every symmetric marked cactus graph [Γ, p, ρ] with Γ ≠ R_n, there is a unique ascending spanning tree in Γ.

Observation 4.6. Let χ be decisive. Then for any Γ ≠ R_n, F↑(Γ, p, ρ) is contractible.

Proof.
Every ascending forest is contained in an ascending spanning tree, and we are assuming there is a unique ascending spanning tree, so F↑(Γ, p, ρ) is just the star in F↑(Γ, p, ρ) of this unique ascending spanning tree.

Positive characters are decisive:

Proof. Let T be the spanning tree in Γ such that, using the notation from Lemma 3.12, for each cladode C_{j,ρ}, the origin of the edge in E_{j,T} is the base of C_{j,ρ}; see Figure 2 for an example. Note that the edge of E_{j,T} is before all the other edges of C_{j,ρ}. We claim that ρ_T has larger χ value than ρ_U for any other spanning tree U. First we prove this in the case when U differs from T only in one cladode, say C_{j,ρ}. Let I_j be the set of i such that the projection proj_{C_{j,ρ}}(C_{i,ρ}) lies between E_{j,U} and E_{j,T}. Since E_{j,T} is before E_{j,U}, by Lemma 3.12 we get

h_χ([Γ/T, p_T, ρ_T]) − h_χ([Γ/U, p_U, ρ_U]) = ∑_{i∈I_j} a_{i,j} > 0.

Now suppose U differs from T in more than one cladode, say C_{j_1,ρ}, . . . , C_{j_r,ρ}. By changing E_{j_k,T} to E_{j_k,U} one k at a time, we get

h_χ([Γ/T, p_T, ρ_T]) − h_χ([Γ/U, p_U, ρ_U]) = ∑_{k=1}^{r} ∑_{i∈I_{j_k}} a_{i,j_k} > 0.

We conclude that h_χ([Γ/T, p_T, ρ_T]) > h_χ([Γ/U, p_U, ρ_U]), as desired.

By a parallel argument, negative characters are also decisive. Generic characters are decisive as well:

Proposition 4.10. Generic characters are decisive.

Proof. Let T and U be two different spanning trees in Γ, so [Γ, p, ρ] lies in the stars of the symmetric marked roses [Γ/T, p_T, ρ_T] and [Γ/U, p_U, ρ_U]. We claim that for χ generic, these symmetric marked roses have different h_χ values, from which the result will follow. Using the notation from Lemma 3.12, suppose the cladodes in which T and U differ are C_{j_1,ρ}, . . . , C_{j_r,ρ}, so E_{j_k,T} ≠ E_{j_k,U} for 1 ≤ k ≤ r, but E_{j,T} = E_{j,U} for all j ∉ {j_1, . . . , j_r}. Let U = T_0, T_1, . . . , T_r = T be spanning trees such that we obtain T_k from T_{k−1} by replacing E_{j_k,U} with E_{j_k,T}. For each 1 ≤ k ≤ r let I_k be the set of i such that the projection proj_{C_{j_k,ρ}}(C_{i,ρ}) lies between E_{j_k,T} and E_{j_k,U}.
By Lemma 3.12, we know that

h_χ([Γ/T_k, p_{T_k}, ρ_{T_k}]) − h_χ([Γ/T_{k−1}, p_{T_{k−1}}, ρ_{T_{k−1}}]) = ±∑_{i∈I_k} a_{i,j_k}

for each 1 ≤ k ≤ r (with the plus or minus depending on whether E_{j_k,T} is before or after E_{j_k,U}). This implies that

h_χ([Γ/T, p_T, ρ_T]) − h_χ([Γ/U, p_U, ρ_U]) = (±∑_{i∈I_1} a_{i,j_1}) + (±∑_{i∈I_2} a_{i,j_2}) + · · · + (±∑_{i∈I_r} a_{i,j_r}).

Since χ is generic, this cannot be zero.

Remark 4.11. If χ is not decisive, then F↑(Γ, p, ρ) is still somewhat understandable, it just might not be contractible. A forest is ascending if and only if it lies in an ascending spanning tree, so F↑(Γ, p, ρ) is the union of the stars of the ascending spanning trees. Also, a non-empty intersection of some of these stars is again a (contractible) star, namely the star of the forest that is the intersection of the relevant spanning trees. Hence F↑(Γ, p, ρ) is homotopy equivalent to the nerve of its covering by stars of ascending spanning trees. This is isomorphic to the simplicial complex whose 0-simplices are the ascending spanning trees, and where k of them span a (k − 1)-simplex whenever the trees have a non-empty intersection. In theory it should be possible to compute the homotopy type of this complex, but we are not currently interested in the non-decisive characters (since they will be totally intractable when we study the ascending up-link in the next subsection), so we leave further analysis of this for the future.

Since lk↑ v = lk↑_d v * lk↑_u v and lk↑_d[Γ, p, ρ] ≅ F↑(Γ, p, ρ), we get:

Corollary 4.12. If χ is a decisive character of PΣAut_n and v = [Γ, p, ρ] for Γ not a rose, then lk↑ v is contractible.

4.2. Ascending up-link. Thanks to Corollary 4.12, for decisive characters of PΣAut_n the only 0-simplices of ΣK_n that can have non-contractible ascending links are those of the form [R_n, *, α]. These have empty down-link, so the ascending link equals the ascending up-link.
It turns out that the ascending up-link lk↑_u[R_n, *, α] is homotopy equivalent to a particularly nice complex I↑_n(χ), the complex of ascending ideal edges, which we now describe. Let E(*) be the set of half-edges of R_n incident to *. Since we identify π_1(R_n) with F_n, the petals of R_n are naturally identified with the basis S = {x_1, . . . , x_n} of F_n. We will write i for the half-edge in the petal x_i with * as its origin, and ī for the half-edge in x_i with * as its terminus, so E(*) = {1, 1̄, 2, 2̄, . . . , n, n̄}.

Definition 4.13 ((Symmetric) ideal edge). An ideal edge is a subset A ⊆ E(*) with |A| ≥ 2 and |E(*) \ A| ≥ 1. An ideal edge A is symmetric if there is exactly one j ∈ [n] with |A ∩ {j, j̄}| = 1; we call {j, j̄} the split pair of A.

Intuitively, an ideal edge A describes a way of blowing up a new edge at *, with the half-edges in E(*) \ A becoming incident to the new basepoint and the half-edges in A becoming incident to the new non-basepoint vertex; a more rigorous discussion can be found for example in [Jen02]. The conditions in the definition ensure that blowing up a symmetric ideal edge results in a cactus graph. See Figure 3 for an example. The asymmetry between the conditions |A| ≥ 2 and |E(*) \ A| ≥ 1 arises because the basepoint of a cactus graph must have degree at least 2, whereas other vertices must have degree at least 3. In fact every vertex of a cactus graph has even degree, so in practice |A| ≥ 2 is equivalent to |A| ≥ 3 for symmetric ideal edges.

Figure 3. Blowing up the symmetric ideal edge A = {1̄, 2, 2̄} ⊆ E(*), shown for n = 2.

Definition 4.14 (Ascending symmetric ideal edge). Let A be a symmetric ideal edge with split pair {j, j̄}, and let I be the set of i such that {i, ī} ⊆ A. Call A ascending (with respect to χ = ∑_{i≠j} a_{i,j} χ_{i,j}) if either

(i) j ∈ A and ∑_{i∈I} a_{i,j} > 0, or
(ii) j̄ ∈ A and ∑_{i∈I} a_{i,j} < 0.

For example, the symmetric ideal edge in Figure 3 is ascending if and only if a_{2,1} < 0. If χ is positive (respectively negative) then A is ascending if and only if j ∈ A (respectively j̄ ∈ A), for {j, j̄} the split pair of A. If χ is generic then for any set I of pairs {i, ī} and any j ∈ [n] \ I we have ∑_{i∈I} a_{i,j} ≠ 0, so one of A = {j} ∪ I or A = {j̄} ∪ I is an ascending ideal edge.

Definition 4.15 (Compatible). Two ideal edges A and A′ are called compatible if any of A ⊆ A′, A′ ⊆ A or A ∩ A′ = ∅ occur.

Definition 4.16 (Complex of (ascending) symmetric ideal edges).
Let I_n be the simplicial complex whose 0-simplices are the symmetric ideal edges, and where a collection of symmetric ideal edges spans a simplex if and only if they are pairwise compatible. Let I↑_n(χ) be the subcomplex of I_n spanned by the ascending symmetric ideal edges.

It is a classical fact that I_n is homotopy equivalent to lk_u[R_n, *, α] (for any α). More precisely, the barycentric subdivision I′_n is isomorphic to lk_u[R_n, *, α]. It turns out a similar thing happens when restricting to ascending ideal edges:

Lemma 4.17. For any character χ of PΣAut_n and any 0-simplex of the form [R_n, *, α], lk↑_u[R_n, *, α] is homotopy equivalent to I↑_n(χ).

Proof. Since the barycentric subdivision I′_n of I_n is isomorphic to lk_u[R_n, *, α], we have that lk↑_u[R_n, *, α] is isomorphic to a subcomplex I′_n(asc) of I′_n. This is the subcomplex spanned by those 0-simplices in I′_n, i.e., those collections of pairwise compatible symmetric ideal edges, whose corresponding tree blow-up makes h_χ go up. Note that I↑_n(χ)′ is a subcomplex of I′_n(asc), since as soon as one ideal edge in a collection corresponds to an ascending edge blow-up the whole collection corresponds to an ascending tree blow-up. Given a 0-simplex σ = {A_1, . . . , A_k} of I′_n(asc), let φ(σ) := {A_i | A_i ∈ I↑_n(χ)}. We claim that φ(σ) is non-empty, and hence φ : I′_n(asc) → I′_n(asc) is a well defined map whose image is I↑_n(χ)′. Let [Γ, p, ρ] be the result of blowing up the ideal tree given by σ. Let U be the spanning tree in Γ such that [Γ/U, p_U, ρ_U] = [R_n, *, α]. Since the blow-up is ascending, the blow-down reversing it cannot be ascending, so U is not an ascending spanning tree in Γ. Choose an ascending spanning tree T in Γ, so U ≠ T.
Similar to the proof of Proposition 4.10, we can turn U into T by changing one edge at a time, and from Lemma 3.12 we get

h_χ([Γ/T, p_T, ρ_T]) − h_χ([Γ/U, p_U, ρ_U]) = (±∑_{i∈I_1} a_{i,j_1}) + (±∑_{i∈I_2} a_{i,j_2}) + · · · + (±∑_{i∈I_r} a_{i,j_r}),

where the I_k and j_k are as in the proof of Proposition 4.10. Since T is ascending but U is not, this quantity is positive. Hence there exists k such that ±∑_{i∈I_k} a_{i,j_k} > 0 (with the "±" determined by whether E_{j_k,T} is before or after E_{j_k,U}). Write j = j_k for brevity. Now let T′ be the spanning tree (T \ E_{j,U}) ∪ E_{j,T} (keep in mind that E_{j,U} lies in T and not U, and E_{j,T} lies in U and not T), so, roughly, T′ is the result of changing only the part of T in C_{j,ρ} to look like U. Let F := T \ C_{j,ρ} and consider [Γ/F, p_F, ρ_F]. Let Ē_{j,U} and Ē_{j,T} be the images of E_{j,U} and E_{j,T} in Γ/F. The difference between the h_χ values obtained by blowing down Ē_{j,U} versus Ē_{j,T} is the positive value ±∑_{i∈I_k} a_{i,j_k} from before; hence Ē_{j,U} is ascending in Γ/F and Ē_{j,T} is not ascending. Now, the blow-up of [R_n, *, α] resulting in [Γ/F, p_F, ρ_F] corresponds to one of the A_i, and Ē_{j,T} is the new edge blown up. This is an ascending blow-up, since the reverse is a non-ascending blow-down. This shows that at least one of the A_i is indeed ascending, so φ(σ) ≠ ∅. Having shown that φ : I′_n(asc) → I′_n(asc) is well defined, it is easily seen to be a poset retraction (à la [Qui78, Section 1.3]) onto its image I↑_n(χ)′, so we conclude that lk↑_u[R_n, *, α] ≅ I′_n(asc) ≃ I↑_n(χ)′ ≅ I↑_n(χ).

It is clear from Definition 4.14 that for χ positive, the complex I↑_n(χ) is independent of χ. We will write I↑_n(pos) for I↑_n(χ) in this case. In Proposition 4.19 we will determine the connectivity properties of I↑_n(pos). First we need the following useful lemma, which was proved in [WZ16].

Lemma 4.18 (Strong Nerve Lemma). Let Y be a simplicial complex covered by subcomplexes Y_1, . . . , Y_n.
Suppose that whenever an intersection Y j 1 ∩ · · · ∩ Y j k of k of them (1 ≤ k ≤ n) is non-empty, it is (n−k−2)-connected. If the nerve N of the covering is (n−3)-connected then so is Y . If the nerve of the covering is (n − 3)-connected but not (n − 2)-acyclic, then so is Y .

Proof. That Y is (n − 3)-connected follows from the usual Nerve Lemma, e.g., [BLVŽ94, Lemma 1.2], but this usual Nerve Lemma is not enough to show Y is not (n − 2)-acyclic if the nerve is not. In [WZ16, Proposition 1.21] it was shown using spectral sequences that indeed these hypotheses ensure that Y is not (n − 2)-acyclic.

Proof (of Proposition 4.19). We will prove that I ↑ n (pos) is (n − 3)-connected by using induction to prove a more general statement, and then afterwards we will prove that I ↑ n (pos) is not (n − 2)-acyclic by applying Lemma 4.18. Call a subset P ⊆ E( * ) positive if for each 1 ≤ i ≤ n we have that ī ∈ P implies i ∈ P . Define the defect d(P ) to be the number of i ∈ P with ī ∉ P . Define the weight w(P ) of P to be the number of pairs {j, j̄} contained in P , plus one if the defect is non-zero. For example the sets {1, 1̄, 2, 2̄}, {1, 2, 2̄} and {1, 2, 3, 4, 5, 5̄} all have weight two (and defect zero, one and four, respectively). Also note that P = E( * ) itself is positive and has defect zero and weight n. Let I ↑ (P ; pos) be the subcomplex of I ↑ n (pos) supported on those 0-simplices A such that A ⊆ P . We now claim that I ↑ (P ; pos) is (w(P ) − 3)-connected, so I ↑ n (pos) being (n − 3)-connected is a special case of this. We induct on w(P ). For the base case we can use w(P ) = 1, and the result holds vacuously since every set is (−2)-connected. Now let w(P ) ≥ 2. Let D be the set of all i ∈ P with ī ∉ P , so d(P ) = |D|. Within this induction on w(P ) we now additionally begin an induction on d(P ). For the base case we assume D = ∅, i.e., d(P ) = 0. Consider the 0-simplices of I ↑ (P ; pos) of the form P \ {i}, for each i ∈ P ∩ [n]. Call these hubs, and denote P \ {i} by Θ i .
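As an aside, the defect and weight bookkeeping just introduced can be sanity-checked against the examples in the text. The sketch below is an illustration (not from the paper); it represents the barred element ī of E( * ) as the negative integer −i, a convention chosen here purely for convenience:

```python
def defect(P):
    # number of i in P whose barred partner (encoded as -i) is missing from P
    return sum(1 for x in P if x > 0 and -x not in P)

def weight(P):
    # number of pairs {i, i-bar} fully contained in P, plus one if the defect is non-zero
    pairs = sum(1 for x in P if x > 0 and -x in P)
    return pairs + (1 if defect(P) > 0 else 0)

def is_positive(P):
    # "i-bar in P implies i in P" for every i
    return all(-x in P for x in P if x < 0)

# The three weight-two examples from the text: {1, 1-bar, 2, 2-bar}, {1, 2, 2-bar}
# and {1, 2, 3, 4, 5, 5-bar}, with defects zero, one and four respectively.
examples = [{1, -1, 2, -2}, {1, 2, -2}, {1, 2, 3, 4, 5, -5}]
assert [weight(P) for P in examples] == [2, 2, 2]
assert [defect(P) for P in examples] == [0, 1, 4]

# E(*) itself (here for n = 4) is positive, with defect zero and weight n.
E = {i for i in range(1, 5)} | {-i for i in range(1, 5)}
assert is_positive(E) and defect(E) == 0 and weight(E) == 4
```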
Note that a symmetric ideal edge in I ↑ (P ; pos) is compatible with a given hub if and only if it is contained in it (i.e., it cannot properly contain it nor be disjoint from it). Any collection of pairwise compatible symmetric ideal edges in I ↑ (P ; pos) lies in the star of some hub, so I ↑ (P ; pos) is covered by the contractible stars of the Θ i . The intersection of the stars of any k hubs, say Θ i 1 , . . . , Θ i k , is isomorphic to the complex I ↑ (P \ {i 1 , . . . , i k }; pos). For k = 1 this is contractible, being a star, and for k > 1 we have w(P \ {i 1 , . . . , i k }) = w(P ) − k + 1 < w(P ), so by induction on w(P ) we know this is (w(P ) − k − 2)-connected. Finally, the nerve of the covering of I ↑ (P ; pos) by these stars is the boundary of a (w(P ) − 1)-simplex, i.e., a (w(P ) − 2)-sphere, so by the first statement in Lemma 4.18 we conclude that I ↑ (P ; pos) is (w(P ) − 3)-connected.

We can now prove Theorem A.

Proof of Theorem A. By Corollary 4.12 and Proposition 4.19, all the ascending links of 0-simplices in ΣK n are (n − 3)-connected, so Corollary 1.7 says the filtration (ΣK χ≥t n ) t∈R is essentially (n − 3)-connected, and so [χ] ∈ Σ n−2 (PΣAut n ). To prove the negative statement, note that Proposition 4.19 says that there exist ascending links of 0-simplices that are not (n − 2)-acyclic, with arbitrary h χ value. Also, every ascending link has trivial (n − 1)st homology since it is (n − 2)-dimensional. By Corollary 1.7 then, the filtration (ΣK χ≥t n ) t∈R is not essentially (n − 2)-connected, and so [χ] ∉ Σ n−1 (PΣAut n ).

Remark 4.21. Using the natural split epimorphisms PΣAut n → PΣAut m for m < n, we also now can see that if χ = Σ i≠j a i,j χ i,j is a character of PΣAut n induced from this epimorphism by a positive or negative character of PΣAut m (so a i,j is positive for all 1 ≤ i, j ≤ m or negative for all 1 ≤ i, j ≤ m, and is zero if either i or j is greater than m) then [χ] ∈ Σ m−2 (PΣAut n ).
However, we cannot immediately tell whether [χ] ∈ Σ m−1 (PΣAut n ) (which we suspect is the case), since Pettet showed the kernels of PΣAut n → PΣAut m have bad finiteness properties [Pet10]. As an immediate consequence of Theorem A, Citation 1.2 and Observation 2.3, we have the following result, which will provide the crucial step in proving Theorem B in Section 5. Corollary 4.22. For n ≥ 2, if χ is a discrete positive character of PΣAut n , then the kernel ker(χ) is of type F n−2 but not F n−1 . In particular the "Bestvina-Brady-esque" subgroup BB n , i.e., the kernel of the character sending each standard generator α i,j to 1 ∈ Z, is of type F n−2 but not F n−1 . 4.3. The n = 3 case. When n = 3 the 0-simplex links in ΣK 3 are graphs, and using some graph theoretic considerations we can actually prove the analog of Proposition 4.19 for generic characters, which leads to the following: Theorem 4.23. Σ 2 (PΣAut 3 ) = ∅. Proof. Since Σ 2 (PΣAut 3 ) is open and the generic character classes are dense in the character sphere (Observation 4.9), it suffices to prove the analog of Proposition 4.19 for all generic χ. Since I ↑ 3 (χ) is 1-dimensional, i.e., a graph, we need to prove that it is connected but not a tree. First we collect some facts about I 3 . It is a graph with eighteen vertices, namely there are twelve vertices corresponding to the symmetric ideal edges A ⊆ E( * ) = {1, 1, 2, 2, 3, 3} with |A| = 3 (three choices of which {i, i} to split, times two choices of which of i or i to include in A, times two choices of which {j, j} (j = i) to include in A) and six vertices for the symmetric ideal edges with |A| = 5 (six choices of which element of E( * ) to leave out of A). Call the former vertices depots and the latter hubs. There is an edge connecting a depot to a hub whenever the depot is contained in the hub, and there is an edge connecting two depots whenever they are disjoint. Each depot has degree four and each hub has degree six. 
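These facts about I 3 , and Theorem 4.23's goal that I ↑ 3 (χ) is connected but not a tree, can be checked by brute force for a sample generic χ. The sketch below is illustrative only: it writes i as (i, +1) and ī as (i, −1), and, since Definition 4.14's sign condition is truncated in this excerpt, it uses the rule implicit in the hub argument of the proof (a symmetric ideal edge with split pair {i, ī} containing i is taken to be ascending when the sum of a l,i over its full pairs l is positive, with the mirror rule for ī) — that reading is an assumption.

```python
from itertools import product

n = 3
E = [(i, s) for i in range(1, n + 1) for s in (1, -1)]  # (i, 1) = i, (i, -1) = i-bar

# A sample character: distinct powers of two with mixed signs, so no nontrivial
# {-1, 0, 1}-combination vanishes, i.e. chi is generic (Definition 4.8).
a = {(2, 1): -2, (3, 1): -8, (1, 2): 1, (3, 2): -32, (1, 3): 4, (2, 3): 16}
vals = list(a.values())
assert all(sum(e * v for e, v in zip(eps, vals)) != 0
           for eps in product((-1, 0, 1), repeat=len(vals)) if any(eps))

def vertices():
    """Symmetric ideal edges of I_3: twelve depots (|A| = 3), six hubs (|A| = 5)."""
    vs = []
    for i in range(1, n + 1):
        for s in (1, -1):
            for j in (j for j in range(1, n + 1) if j != i):
                vs.append(frozenset({(i, s), (j, 1), (j, -1)}))  # depot
            vs.append(frozenset(set(E) - {(i, -s)}))             # hub
    return vs

def compatible(A, B):
    return A < B or B < A or not (A & B)

def ascending(A):
    (i, s), = [x for x in A if (x[0], -x[1]) not in A]  # split pair's representative
    full = [j for j in range(1, n + 1) if (j, 1) in A and (j, -1) in A]
    return s * sum(a[(l, i)] for l in full) > 0

V = vertices()
depots = [A for A in V if len(A) == 3]
hubs = [A for A in V if len(A) == 5]
assert len(depots) == 12 and len(hubs) == 6
deg = {A: sum(compatible(A, B) for B in V if B != A) for A in V}
assert all(deg[A] == 4 for A in depots) and all(deg[A] == 6 for A in hubs)

# The ascending subgraph: six depots and three hubs, connected, with a cycle.
asc = [A for A in V if ascending(A)]
edges = [(A, B) for k, A in enumerate(asc) for B in asc[k + 1:] if compatible(A, B)]
assert len(asc) == 9 and len(edges) >= len(asc)  # at least as many edges as vertices
comp, todo = {asc[0]}, [asc[0]]
while todo:  # breadth-first search to check connectivity
    A = todo.pop()
    for B in asc:
        if B not in comp and compatible(A, B):
            comp.add(B); todo.append(B)
assert comp == set(asc)  # connected, so the subgraph contains a non-trivial cycle
```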
Now, since χ is generic, for each pair of depots of the form {i, j, j} and {i, j, j}, precisely one of them is ascending, with a similar statement for hubs. Hence I ↑ 3 (χ) is a full subgraph of I 3 spanned by six depots and three hubs. We claim that each ascending hub has degree at least three in I ↑ 3 (χ). Consider the hub {i ′ , j, j, k, k}, where i ′ ∈ {i, i} and i, j, k are distinct. If this is ascending then at least one of the depots {i ′ , j, j} or {i ′ , k, k} must be as well, since if a j,i + a k,i is positive (respectively negative) then at least one of a j,i or a k,i must be as well. The other two edges incident to our hub come from the fact that one of the depots {j, k, k} or {j, k, k} is ascending, as is one of {j, j, k} or {j, j, k}. Having shown that each hub in I ↑ 3 (χ) has degree at least three in I ↑ 3 (χ), this tells us I ↑ 3 (χ) has at least nine edges, since hubs cannot be adjacent. Then since I ↑ 3 (χ) also has nine vertices we conclude that it contains a non-trivial cycle. It remains to prove it is connected. Say the three hubs in I ↑ 3 (χ) are u, v and w, and suppose that u has no adjacent depots in I ↑ 3 (χ) in common with either v or w (since otherwise we are done). Since there are only six depots, this implies that v and w have the same set of adjacent depots in I ↑ 3 (χ). But this is impossible since the intersection of the stars of two different ascending hubs can contain at most two ascending depots. Combining this with Orlandi-Korner's computation of Σ 1 (PΣAut 3 ), we get in particular that Σ 1 (PΣAut 3 ) is dense in S(PΣAut 3 ) = S 5 but Σ 2 (PΣAut 3 ) is already empty. (Note since PΣAut 2 ∼ = F 2 we also know that Σ 1 (PΣAut 2 ) = ∅.) Remark 4.24. Unfortunately the analogous result for arbitrary n, i.e., that Σ n−2 (PΣAut n ) is dense in S(PΣAut n ) but Σ n−1 (PΣAut n ) is empty, cannot be deduced using our methods when n > 3 (though we do suspect it is true). 
One would hope that the analog of Proposition 4.19 always holds for all generic χ, but it does not. For example, when n = 4 one can find a generic character χ such that I ↑ 4 (χ) is not simply connected. One example we found is to take χ with a 1,2 = a 2,1 = a 3,4 = a 4,3 = 3 and all other a i,j = −1 (adjusted slightly by some tiny ε > 0 to be generic). This non-simply connected ascending link also has non-trivial second homology, so this does not necessarily mean that [χ] is not in Σ 2 (PΣAut 4 ) (and we believe that it actually is), it is just inconclusive. In general we tentatively conjecture that Σ n−2 (PΣAut n ) is dense in S(PΣAut n ) and Σ n−1 (PΣAut n ) is empty, but for now our Morse theoretic approach here seems to only be able to handle the positive and negative characters for arbitrary n, and also the generic characters for n = 3.

5. Proof of Theorem B

We can now use our results about PΣAut n to quickly prove Theorem B, about ΣAut n .

Proof of Theorem B. Since BB n is of type F n−2 but not F n−1 by Corollary 4.22, and has finite index in ΣAut ′ n by Lemma 2.5, we know that ΣAut ′ n is of type F n−2 but not F n−1 .

Definition 1.4 (Ascending star/link). Given a Morse function (h, f ) on an affine cell complex Y , define the ascending star st ↑ v of a 0-cell v in Y to be the subcomplex of st v consisting of all faces of those cells c for which the unique 0-face of c where (h, f ) achieves its minimum is v. Define the ascending link lk ↑ v to be the subcomplex of lk v consisting of directions into st ↑ v. Note that lk ↑ v is a full subcomplex of lk v, since h and f are affine on cells.

Corollary 1.6. With the same setup as the Morse Lemma, if additionally for all 0-cells

[Γ, p, ρ].α := [Γ, p, ρ • α], and this extends to an action on ΣK n since for any forest collapse d : Γ

The complex ΣK n is contractible and (n − 1)-dimensional, and the actions of ΣAut n and PΣAut n on ΣK n are proper and cocompact.

Definition 3.9 (The character height function h χ ). Let [Γ, p, ρ] be a symmetric marked cactus graph. Define h χ ([Γ, p, ρ]) := max{χ(α) | [Γ, p, ρ] ∈ st([R n , * , α])}. We need to show that h χ ([Γ, p, ρ].α) = h χ ([Γ, p, ρ]) + χ(α) for all [Γ, p, ρ] ∈ ΣK (0) n and α ∈ PΣAut n . We know that [Γ, p, ρ].α = [Γ, p, ρ • α], and clearly [Γ, p, ρ] ∈ st[R n , * , β] if and only if [Γ, p, ρ • α] ∈ st[R n , * , β • α]

Observation 4.2. For Γ a cactus graph with V vertices, F (Γ) ≃ S V −2 .

Definition 4.3 (Complex of ascending forests). The complex of ascending forests F ↑ (Γ, p, ρ) for a symmetric marked cactus graph [Γ, p, ρ] is the full subcomplex of F (Γ) supported on those 0-simplices

Definition 4.5 (Decisive). Call a character χ of PΣAut n decisive if every [Γ, p, ρ] lies in a unique star of a symmetric marked rose with maximal χ value.

Proposition 4.7 (Positive implies decisive). Positive characters of PΣAut n are decisive.

Figure 2. A cactus graph, with the tree T from the proof of Proposition 4.7 marked in bold.

It turns out "most" characters are decisive, in the following sense:

Definition 4.8 (Generic). Call a character χ = Σ i≠j a i,j χ i,j of PΣAut n generic if for every choice of ε i,j ∈ {−1, 0, 1} we have Σ i≠j ε i,j a i,j = 0 only if ε i,j = 0 for all i, j (said another way, the a i,j have no non-trivial linear dependencies using coefficients from {−1, 0, 1}).

Observation 4.9. The set {[χ] ∈ S(PΣAut n ) | χ is generic} is dense in S(PΣAut n ).

Proof. Given a linear dependence Σ i≠j ε i,j a i,j = 0 with ε i,j ∈ {−1, 0, 1}, the complement of the set of character classes satisfying this dependence is open and dense in S(PΣAut n ). Since there are only finitely many choices for the ε i,j , the set of generic character classes is also (open and) dense in S(PΣAut n ).

Proposition 4.10 (Generic implies decisive). Generic characters of PΣAut n are decisive.

Definition 4.13 ((Symmetric) ideal edges).
A subset A of E( * ) such that |A| ≥ 2 and |E( * ) \ A| ≥ 1 is called an ideal edge. We say an ideal edge A splits {i, ī} if {i, ī} ∩ A and {i, ī} \ A are both non-empty. We call A symmetric if there exists precisely one i ∈ [n] such that A splits {i, ī}. In this case we call {i, ī} the split pair of A.

Figure 3. The symmetric ideal edge {1, 2, 2̄} and the non-symmetric ideal edge {1, 2}, together with the blow-ups they produce. The former yields a cactus graph and the latter does not.

Definition 4.14 (Ascending symmetric ideal edge). Let χ = Σ i≠j a i,j χ i,j be a character of PΣAut n and let A be a symmetric ideal edge. Suppose {j, j̄} is the split pair of A and let I = (A ∩ [n]) \ {j}. We call A ascending (with respect to χ) if either

Proposition 4.19. The complex I ↑ n (pos) is (n − 3)-connected but not (n − 2)-acyclic (and hence so are lk ↑ u [R n , * , α] and lk ↑ [R n , * , α] for any positive character of PΣAut n ).

Citation 1.2 ([BR88, Theorem B and Remark 6.5]). Let G be a group of type F m . Let G ′ ≤ H ≤ G. Then H is of type F m if and only if for every non-trivial character χ of G such that χ(H) = 0, we have [χ] ∈ Σ m (G). For example, if H = ker(χ) for χ a discrete character of G, i.e., one with image Z, then H is of type F m if and only if [±χ] ∈ Σ m (G). Also note that G ′ itself is of type F m if and only if Σ m (G) = S(G).

Other important classical properties of the Σ m (G) are that they are all open subsets of S(G) and that they are invariant under the natural action of Aut(G) on S(G) [BNS87, BR88].

1.2. Morse theory. Bestvina-Brady Morse theory can be a useful tool for computing BNSR-invariants. In this section we give the relevant definitions and results from Morse theory, in the current level of generality needed. Let Y be an affine cell complex (see [BB97, Definition 2.1]). The star st Y

For the definitions of "positive" and "negative" consult Definition 2.4.
In what follows it will be clear that "finite" could be replaced with "well ordered" but for our present purposes we will just assume it is finite.

Actually, PΣAut n is even of "type F", meaning it has a compact classifying space, but we will not need this fact.

I ↑ n (χ) has non-trivial π 1 and H 2 . Hence, we have focused only on positive and negative characters of PΣAut n in Theorem A, but in Subsection 4.3 we will show that generic characters are also tractable at least when n = 3.

Now suppose D ≠ ∅, so d(P ) > 0. Without loss of generality we can write D = {1, . . . , d}. We will build up to I ↑ (P ; pos) from a subcomplex with a known homotopy type, namely the contractible star of the 0-simplex {1} ∪ (P \ D). The 0-simplices of I ↑ (P ; pos) missing from this star are those A containing an element of {2, . . . , d} (if d = 1 there is nothing to do, so assume d ≥ 2), so to obtain I ↑ (P ; pos) from this star we will attach these missing 0-simplices, in some order, along their relative links lk rel A. If we can do this in an order such that the relative links are always (w(P ) − 4)-connected, then we can conclude that I ↑ (P ; pos) is (w(P ) − 3)-connected. The order is as follows: first glue in the A containing 2 in order of decreasing size, then the A containing 3 in order of decreasing size, and so forth. The relative link lk rel A of A decomposes into the join of its relative in-link lk in rel A and relative out-link lk out rel A. The relative in-link of A is defined to be the subcomplex supported on those B in lk rel A such that B ⊆ A. The relative out-link is defined to be the subcomplex supported on those B in lk rel A that satisfy either A ⊆ B or A ∩ B = ∅. These options encompass all the ways a symmetric ideal edge can be compatible with A, and clearly everything in lk in rel A is compatible with everything in lk out rel A, so indeed lk rel A = lk in rel A * lk out rel A. To show that lk rel A is (w(P ) − 4)-connected for every A containing an element of {2,
. . . , d}, we will consider lk in rel A and lk out rel A separately. Let {i A } := A ∩ D, so i A ∈ {2, . . . , d}, and let A ♭ := A \ {i A }. The 0-simplices B in lk in rel A must lie in I ↑ (A ♭ ; pos), since for B to come before A in our order while having smaller cardinality than A, it must not contain i A (so such B are actually already in the star of {1} ∪ (P \ D)). Hence lk in rel A is isomorphic to I ↑ (A ♭ ; pos), and w(A ♭ ) = w(A) − 1 < w(P ), so by induction on w(P ) we know lk in rel A is (w(A ♭ ) − 3)-connected. Next, the 0-simplices B in lk out rel A must be disjoint from {i A + 1, . . . , d} and either properly contain A or be disjoint from A. The map B → B \ A ♭ induces an isomorphism from lk out rel A to I ↑ (P \ (A ♭ ∪ {i A + 1, . . . , d}); pos); the inverse map sends C to itself if i A ∉ C and to C ∪ A ♭ if i A ∈ C. Since w(P \ (A ♭ ∪ {i A + 1, . . . , d})) = w(P ) − w(A ♭ ) < w(P ), by induction on w(P ) we know lk out rel A is (w(P ) − w(A ♭ ) − 3)-connected. We conclude that lk rel A = lk in rel A * lk out rel A is (((w(A ♭ ) − 3) + (w(P ) − w(A ♭ ) − 3)) + 2)-connected, which is to say (w(P ) − 4)-connected, as desired.

This finishes the inductive proof, which in particular shows I ↑ n (pos) is (n − 3)-connected. Now to see that it is not (n − 2)-acyclic, consider the covering of I ↑ n (pos) by the stars of Θ 1 , . . . , Θ n , as above. The intersection of any k of these stars is (n − k − 2)-connected, as was deduced during the inductive proof, and the nerve of the covering is an (n − 2)-sphere, so Lemma 4.18 says I ↑ (P ; pos) is not (n − 2)-acyclic.

A parallel argument shows that I ↑ n (χ) is also (n − 3)-connected but not (n − 2)-acyclic for χ a negative character of PΣAut n .

Remark 4.20.
If χ is neither positive nor negative then I ↑ n (χ) is much more complicated; for example, as discussed in Remark 4.24 below, one can find examples of generic χ for which I ↑ n (χ) is not simply connected.

Also, Observation 2.3 says that for any m either Σ m (ΣAut n ) is all of S 0 or else is empty. The result now follows from Citation 1.2.

References

[BB97] M. Bestvina and N. Brady, Morse theory and finiteness properties of groups, Invent. Math. 129 (1997), no. 3, 445-470.
[BBM07] M. Bestvina, K.-U. Bux, and D. Margalit, Dimension of the Torelli group for Out(F n ), Invent. Math. 170 (2007), no. 1, 1-32.
[BG99] K.-U. Bux and C. Gonzalez, The Bestvina-Brady construction revisited: geometric computation of Σ-invariants for right-angled Artin groups, J. London Math. Soc. (2) 60 (1999), no. 3, 793-801.
[BGK10] R. Bieri, R. Geoghegan, and D. H. Kochloukova, The Sigma invariants of Thompson's group F, Groups Geom. Dyn. 4 (2010), no. 2, 263-273.
[BLVŽ94] A. Björner, L. Lovász, S. T. Vrećica, and R. T. Živaljević, Chessboard complexes and matching complexes, J. London Math. Soc. (2) 49 (1994), no. 1, 25-39.
[BMMM01] N. Brady, J. McCammond, J. Meier, and A. Miller, The pure symmetric automorphisms of a free group form a duality group, J. Algebra 246 (2001), no. 2, 881-896.
[BNS87] R. Bieri, W. D. Neumann, and R. Strebel, A geometric invariant of discrete groups, Invent. Math. 90 (1987), no. 3, 451-477.
[BR88] R. Bieri and B. Renz, Valuations on free resolutions and higher geometric invariants of groups, Comment. Math. Helv. 63 (1988), no. 3, 464-497.
[Bro87] K. S. Brown, Trees, valuations, and the Bieri-Neumann-Strebel invariant, Invent. Math. 90 (1987), no. 3, 479-504.
[Bux04] K.-U. Bux, Finiteness properties of soluble arithmetic groups over global function fields, Geom. Topol. 8 (2004), 611-644 (electronic).
[Col89] D. J. Collins, Cohomological dimension and symmetric automorphisms of a free group, Comment. Math. Helv. 64 (1989), no. 1, 44-61.
[CV86] M. Culler and K. Vogtmann, Moduli of graphs and automorphisms of free groups, Invent. Math. 84 (1986), no. 1, 91-119.
[Dam16] C. Damiani, A journey through loop braid groups, arXiv:1605.02323, 2016.
[HV98] A. Hatcher and K. Vogtmann, Cerf theory for graphs, J. London Math. Soc. (2) 58 (1998), no. 3, 633-655.
[Jen02] C. A. Jensen, Contractibility of fixed point sets of auter space, Topology Appl. 119 (2002), no. 3, 287-304.
[JMM06] C. Jensen, J. McCammond, and J. Meier, The integral cohomology of the group of loops, Geom. Topol. 10 (2006), 759-784.
[KMM15] N. Koban, J. McCammond, and J. Meier, The BNS-invariant for the pure braid groups, Groups Geom. Dyn. 9 (2015), no. 3, 665-682.
[Koc12] D. H. Kochloukova, On the Σ 2 -invariants of the generalised R. Thompson groups of type F, J. Algebra 371 (2012), 430-456.
[KP14] N. Koban and A. Piggott, The Bieri-Neumann-Strebel invariant of the pure symmetric automorphisms of a right-angled Artin group, Illinois J. Math. 58 (2014), no. 1, 27-41.
[McC86] J. McCool, On basis-conjugating automorphisms of free groups, Canad. J. Math. 38 (1986), no. 6, 1525-1529.
[MMV98] J. Meier, H. Meinert, and L. VanWyk, Higher generation subgroup sets and the Σ-invariants of graph groups, Comment. Math. Helv. 73 (1998), no. 1, 22-44.
[OK00] L. A. Orlandi-Korner, The Bieri-Neumann-Strebel invariant for basis-conjugating automorphisms of free groups, Proc. Amer. Math. Soc. 128 (2000), no. 5, 1257-1262.
[Pet10] A. Pettet, Finiteness properties for a subgroup of the pure symmetric automorphism group, C. R. Math. Acad. Sci. Paris 348 (2010), no. 3-4, 127-130.
[Qui78] D. Quillen, Homotopy properties of the poset of nontrivial p-subgroups of a group, Adv. in Math. 28 (1978), no. 2, 101-128.
[Sav96] A. G. Savushkina, On a group of conjugating automorphisms of a free group, Mat. Zametki 60 (1996), no. 1, 92-108, 159.
[WZ16] S. Witzel and M. C. B. Zaremsky, The Basilica Thompson group is not finitely presented, submitted, arXiv:1603.01150, 2016.
[Zar15] M. C. B. Zaremsky, On the Σ-invariants of generalized Thompson groups and Houghton groups, submitted, arXiv:1502.02620, 2015.

Department of Mathematical Sciences, Binghamton University, Binghamton, NY 13902
E-mail address: [email protected]
Web-page Indexing based on the Prioritize Ontology Terms

Sukanta Sinha (Tata Consultancy Services Ltd., Victoria Park Building, Salt Lake, Kolkata 700091, India, and WIDiCoReL Research Lab, Green Tower, C-9/1, Golf Green, Kolkata 700095, India), [email protected]
Rana Dattagupta (Computer Science Dept., Jadavpur University, Kolkata 700032, India), [email protected]
Debajyoti Mukhopadhyay (Information Technology Dept., Maharashtra Institute of Technology, Pune 411038, India, and WIDiCoReL Research Lab, Green Tower, C-9/1, Golf Green, Kolkata 700095, India), [email protected]

arXiv:1311.6243. DOI: 10.1007/978-981-13-3053-7_6.

Keywords: Domain Specific Search, Ontology, Ontology Based Search, Relevance Value, Search Engine, Web-page Indexing.

Abstract. In today's world, globalization has become a basic and popular human trend, and to globalize information people publish their documents on the internet. As a result, the volume of information on the internet has become huge. To handle that huge volume of information, Web searchers use search engines. The Web-page indexing mechanism of a search engine plays a big role in retrieving Web search results quickly from the huge volume of Web resources. Web researchers have introduced various types of Web-page indexing mechanisms to retrieve Web-pages from a Web-page repository. In this paper, we illustrate a new approach to the design and development of Web-page indexing. The proposed Web-page indexing mechanism is applied to domain-specific Web-pages, and we identify the Web-page domain based on an Ontology. In our approach, we first prioritize the Ontology terms that exist in the Web-page content and then apply our own indexing mechanism to index that Web-page. The main advantage of storing an index is to optimize the speed and performance of finding relevant documents in the domain-specific search engine's storage area for a user-given search query.
1 Introduction

In recent years, the growth of the World Wide Web (WWW) has been rising at an alarming rate, and the Web contains a huge amount of multi-domain data [1]. As a result, there is an explosion of information, and Web searchers use search engines to handle that information. Search engines use various parameters to produce better performance; Web-page indexing is one of them. Web researchers have already introduced some efficient Web-page indexing mechanisms such as Back-of-the-book-style Web-page indexes, formally called "Web site A-Z indexes", "Human-produced Web-page index", "Meta search Web-page indexing", "Cache based Web-page indexing", etc. [2]. In this paper, we introduce a new mechanism for Web-page indexing. It is a fully domain-specific Ontological approach, where each Ontology term is treated as a base index and an Ontology index number is assigned based on the term's weight value [3][4]. In our proposed mechanism, we first retrieve the dominating and sub-dominating Ontology terms for a considered Web-page from the domain-specific Web-page repository and then apply the primary and secondary attachment rules.

The paper is organized in the following way. In section 2, we discuss the related work on Web-page indexing. The proposed architecture for domain-specific Web-page indexing, along with all of its components, is given in section 3. The experimental analyses and the conclusion of our paper are given in sections 4 and 5 respectively.

2 Related Works

The main advantage of storing an index is to optimize the speed and performance of finding relevant documents in the search engine storage area for a user-given search criterion. In this section, we discuss the existing Web-page indexing mechanisms and their drawbacks.

2.1 Back-of-the-book-style

Back-of-the-book-style Web-page indexes are formally called "Web site A-Z indexes".
Web site A-Z indexes have several advantages. However, search-engine language is full of homographs and synonyms, and not all the references found will be relevant. For example, a computer-produced index of the 9/11 report showed many references to George Bush, but did not distinguish between "George H. W. Bush" and "George W. Bush" [5].

2.2 Human-produced Web-page Index

A human-produced index has someone check each and every part of the text to find everything relevant to the search term, while a search engine leaves the responsibility for finding the information with the enquirer. This increases the miss-and-hit ratio. The approach is not suitable for the huge volume of Web data [6].

2.3 Meta Search Web-page Indexing

Metadata Web indexing involves assigning keywords or phrases to Web-pages or websites within a meta-tag field, so that the Web-page or website can be retrieved by a search engine that is customized to search the keywords field. This may involve using keywords restricted to a controlled vocabulary list [7].

2.4 Cache based Web-page Indexing

A frequently used search query produces its search result quickly because the result information is stored in cache memory. On the other hand, when an irregular search string is encountered, the search engine cannot produce a faster search result because the information is not available in the cache memory. Irregular search strings always occur because of the huge volume of internet information and users [8][9].

3 Proposed Approach

In our approach, we have proposed a new mechanism for indexing domain-specific Web-pages. Before going forward with the new indexing mechanism, we need to make sure all the inputs are available: a domain-specific Web-page repository, a set of Ontology terms, a Weight table and a Syntable [10]. In one of our earlier works, we created the domain-specific Web-page repository [11][12], and we use that repository as an input to our proposed approach.
3.1 Extraction of Dominating and Sub-Dominating Ontology Terms

In this section, we discuss how to extract the dominating and sub-dominating Ontology terms, illustrated with an example (refer Fig. 1).

Fig. 1. Example of Extracting Dominating and Sub-dominating Ontology Terms

Consider a "Mobile"-domain Web-page. First extract the Web-page content, then apply Definitions 1.1 and 1.2. We find that the Ontology term "Mobile" holds term relevance value 45, which is the maximum, so according to Definition 1.1 the Ontology term "Mobile" becomes the dominating Ontology term. The Ontology terms "price", "color", "battery" and "company" hold term relevance values 31, 27, 18 and 15 respectively, which are greater than those of all other Ontology terms except "Mobile". According to Definition 1.2, the Ontology terms "price", "color", "battery" and "company" become sub-dominating Ontology terms 1, 2, 3 and 4, respectively. If the number of sub-dominating Ontology terms were increased, the number of secondary attachments needed to store them would increase proportionally (refer Rule 1.2), which would increase the indexing memory size. For that reason, we use four sub-dominating Ontology terms as a threshold. In the rare case where multiple Ontology terms hold the same term relevance value, we prioritize the dominating and sub-dominating Ontology terms according to their lower term weight value, i.e., we prefer the term with the higher number of occurrences in the considered Web-page content.

3.2 Proposed Algorithm of Web-page Indexing

The proposed algorithm briefly describes the mechanism of Web-page indexing based on the prioritized Ontology terms for a set of domain-specific Web-pages. A pictorial diagram of the Web-page structures after applying our indexing mechanism is shown in Fig. 2. Each Ontology term maintains two tables.
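The selection step described above can be sketched as follows; `rank_ontology_terms` is a hypothetical helper, and the relevance values are those of the Fig. 1 example.

```python
def rank_ontology_terms(relevance, occurrences=None):
    """Pick the dominating term (Definition 1.1) and up to four
    sub-dominating terms (Definition 1.2) from term relevance values.
    Ties are broken toward the term with more raw occurrences, per the
    paper's tie rule."""
    occurrences = occurrences or {}
    ordered = sorted(relevance,
                     key=lambda t: (relevance[t], occurrences.get(t, 0)),
                     reverse=True)
    return ordered[0], ordered[1:5]

# Fig. 1 example: "Mobile" dominates; the next four become sub-dominating.
relevance = {"Mobile": 45, "price": 31, "color": 27, "battery": 18,
             "company": 15, "screen": 9}
dom, subs = rank_ontology_terms(relevance)
```

Here `dom` is "Mobile" and `subs` is the list ["price", "color", "battery", "company"]; the fifth-ranked term "screen" is dropped by the four-term threshold.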
One table is used for storing primary attachments and the other for storing secondary attachments (refer Rules 1.1 and 1.2). In Fig. 2, (P_1, ..., P_9, ..., P_h, ..., P_k) and (S_1, ..., S_9, ..., S_h, ..., S_k) point to the primary and secondary attachment tables of their corresponding Ontology terms, respectively. Each Web-page has exactly one primary attachment and four secondary attachments. In Fig. 2, (P_ID_1, P_ID_2, ...) represent the Web-page identifiers of the considered domain-specific Web-pages. Solid lines denote primary attachments, which point to the primary attachment table of the dominating Ontology term; dotted lines denote secondary attachments, which point to the secondary attachment tables of the sub-dominating Ontology terms.

3.3 Input User Interface

In our proposed search engine, we let Web searchers customize their search results by selecting all the inputs. We use drop-down lists for selecting the dominating and sub-dominating Ontology terms, so a Web searcher can obtain good search results from our proposed search engine without knowing the domain, because all the Ontology terms are already available in the drop-down lists. After providing all the inputs, i.e., the search tokens, the relevance range and the number of search results, the Web searcher needs to click the "Search" button to get the search results. Fig. 3 shows a part of the user interface of our prototype; "*" denotes mandatory fields. The "Number of Search Results" field lets the Web searcher restrict the number of results produced. For example, say 100 search results are produced for the user-given search tokens and relevance range, but the user wants only 20 of them; the user then puts 20 in the "Number of Search Results" field, and less time is taken to display 20 result links instead of 100. In the user interface, the maximum and minimum relevance values are set dynamically according to practical scenario-based data and queries.

Fig. 3.
A Part of User Interface

3.4 Web-page Retrieval Mechanism Based on the User Input

Web-page retrieval from the search engine's resources is an important role of a Web search engine. We retrieve a resultant Web-page list from our data store based on the user-given dominating and sub-dominating Ontology terms, relevance range, etc. In most existing search engines, the search string is parsed and the Web-pages are then retrieved based on those parsed tokens. In our prototype, we give the user the flexibility not to use a search string at all: the search tokens are selected directly from the drop-down lists (refer Fig. 3). As a result, we reduce the search-string parsing time and the miss-hit ratio due to the user's inadequate domain knowledge. As discussed in Section 3.3, at a time the user can select only one dominating and four sub-dominating Ontology terms. Our prototype uses the formula below to produce a resultant Web-page list for the user-given relevance range:

50% of "x" from the primary attachment list of the dominating Ontology term + 20% of "x" from the secondary attachment list of the first sub-dominating Ontology term + 15% of "x" from the secondary attachment list of the second sub-dominating Ontology term + 10% of "x" from the secondary attachment list of the third sub-dominating Ontology term + 5% of "x" from the secondary attachment list of the fourth sub-dominating Ontology term,

where "x" denotes the "Number of Search Results" in the user interface (refer Fig. 3).

4. Experimental Analyses

In this section, we present our experimental study and discuss how to set up our system. Section 4.1 explains our experimental procedure, Section 4.2 derives our prototype's time complexity for producing the resultant Web-page list, and Section 4.3 shows the experimental results of our system.

4.1 Experiment Procedure

The performance of our system depends on various parameters, and those parameters need to be set up before running our system.
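The 50/20/15/10/5 split in the retrieval formula of Section 3.4 can be sketched as follows. The attachment lists are assumed to be pre-sorted by relevance, and the rounding behavior is an implementation choice not fixed by the paper.

```python
def resultant_pages(x, primary, secondaries):
    """Assemble the resultant Web-page list: 50% of x from the dominating
    term's primary attachment list, then 20/15/10/5% of x from the four
    sub-dominating terms' secondary attachment lists."""
    shares = [0.50, 0.20, 0.15, 0.10, 0.05]
    pools = [primary] + list(secondaries)
    result = []
    for share, pool in zip(shares, pools):
        result.extend(pool[: round(share * x)])
    return result[:x]
```

With x = 20, the sketch takes 10 pages from the primary list and 4, 3, 2 and 1 pages from the four secondary lists, for 20 results in total.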
The considered parameters are the domain relevance limit, the weight value assignment, the Ontology terms, the domain-specific Web-page repository, etc. These parameters are assigned by tuning our system through experiments. We created the domain-specific Web-page repository by taking 50 seed URLs as the input of our domain-specific Web search crawler.

4.2 Time Complexity to Produce the Resultant Web-page List

We consider "k" Ontology terms, kept in sorted order according to their weight values. While finding the primary attachment link of the user-given dominating Ontology term, our prototype requires at most O(log2 k) time using the binary search mechanism (refer Fig. 2). While finding the secondary attachment links of the four other user-given sub-dominating Ontology terms, our prototype requires 4·O(log2 k) time. In the second level, our prototype reaches the Web-pages from the primary and secondary attachments in constant time, because no iteration is required. Finally, our prototype's time complexity becomes [5·O(log2 k) + 5c] ≈ O(log2 k) for retrieving the resultant Web-page list, where "c" is the constant time required to go from a primary or secondary attachment to the Web-pages.

4.3 Experimental Result

It is very difficult to compare our search results with those of existing search engines. In most cases, existing search engines do not hold domain-specific concepts, and a fair comparison requires the two systems to be on the same page, i.e., to use the same resources, environment, system platform and search queries. In the few existing cases where a search engine gives an advanced search option to Web searchers, the options do not match our domains. Nevertheless, we have produced data to measure the performance of our proposed prototype by comparing the performance of the two systems (before and after applying the Web-page indexing mechanism). In Table 1, we give a performance report of our system.
To measure accuracy, we applied our set of search queries multiple times, taking "Number of Search Results" (refer Fig. 3) field values of 10, 20, 30, 40 and 50. In Table 2, we show our system's accuracy measurements.

5. Conclusions

In this paper, we have proposed a prototype of a domain-specific Web search engine. This prototype uses one dominating and four sub-dominating Ontology terms to produce Web search results. All the Web-pages are indexed according to their dominating and sub-dominating Ontology terms. According to our experimental results, the Web-page indexing mechanism produces faster results for the user-selected dominating and sub-dominating Ontology terms. In our prototype, we give the user the flexibility not to use a search string: the search tokens are selected directly from the drop-down lists. As a result, we reduce the search-string parsing time and the miss-hit ratio due to the user's inadequate domain knowledge. This prototype is highly scalable: if we need to increase the number of domains our prototype supports, we only need to include the new domain Ontology and the other details of that Ontology, such as the weight table, syntable, etc. A single domain does not contain a huge number of Ontology terms, so the number of indexes is smaller than in a general search engine. As a result, we can reach the Web-pages quickly as well as reduce the index storage cost.

Algorithm: Web-page Indexing
Input: Domain-specific Web-pages
Output: All the Web-pages indexed
1. Select a Web-page (P) from the domain-specific Web-page repository
2. Extract the Dominating Ontology Term (D)
3. Extract the Sub-Dominating Ontology Terms (SD_i, where 0 < i ≤ 4 and i is an integer)
4. Add the Web-page identifier (P_ID) of P to the primary attachment of D
5. Add the Web-page identifier (P_ID) of P to the secondary attachment of each SD_i, where 0 < i ≤ 4 and i is an integer
6. Repeat steps 1-5 until all the Web-pages are indexed
7.
End

Fig. 2. Web-page structures after applying our indexing mechanism

Definition 1.1: Dominating Ontology Term - the Ontology term which holds the maximum Ontology term relevance value in the considered Web-page.

Definition 1.2: Sub-dominating Ontology Terms - the Ontology terms which hold the successive maximum Ontology term relevance values, other than the dominating Ontology term, in the considered Web-page.

Rule 1.1: Primary Attachment (P1, P2, ...) - the dominating Ontology term of every Web-page is indexed in the primary attachment table of its respective Ontology term.

Rule 1.2: Secondary Attachment (S1, S2, ...) - the sub-dominating Ontology terms of every Web-page are indexed in the secondary attachment tables of their respective Ontology terms.

Definition 2.1: Ontology - a set of domain-related key information, which is kept in an organized way based on its importance.

Definition 2.2: Relevance Value - a numeric value for each Web-page, generated on the basis of the term weight values, term synonyms and the number of occurrences of the Ontology terms existing in that Web-page.

Definition 2.3: Seed URL - a set of base URLs from which the crawler starts to crawl down Web-pages from the Internet.

Definition 2.4: Weight Table - a table with two columns: the first column holds the Ontology terms and the second column holds the weight value of each Ontology term. Ontology term weight values lie between "0" and "1".

Definition 2.5: Syntable - a table with two columns: the first column holds the Ontology terms and the second column holds the synonyms of each Ontology term. If more than one synonym exists for a particular Ontology term, they are kept separated by commas (,).

Definition 2.6: Relevance Limit - a predefined static relevance cut-off value used to recognize whether a Web-page is domain-specific or not.
Definition 2.7: Term Relevance Value - a numeric value for each Ontology term, generated on the basis of the term's weight value, its synonyms and the number of occurrences of that Ontology term in the considered Web-page.

Table 1. Performance Report of Our System

Number of Search Results | Time Taken (in Seconds), Before Indexing | Time Taken (in Seconds), After Indexing | Total Web-pages in Repository
10 | 0.530973451 | 0.392156863 | 5000
20 | 1.085972851 | 0.860215054 | 5000
30 | 1.753246753 | 1.409921671 | 5000
40 | 2.394014963 | 2.008368201 | 5000
50 | 3.018108652 | 2.683363148 | 5000

Table 2. Accuracy of Our System

Number of Search Results | Avg. No. of Relevant Results | Avg. No. of Non-Relevant Results | Total Web-pages in Repository
10 | 8.7 | 1.3 | 5000
20 | 17.2 | 2.8 | 5000
30 | 26.4 | 3.6 | 5000
40 | 34.6 | 5.4 | 5000
50 | 43.6 | 6.4 | 5000

References

1. Willinger, W., Govindan, R., Jamin, S., Paxson, V., Shenker, S.: Scaling phenomena in the Internet. In Proceedings of the National Academy of Sciences, 1999, suppl. 1, pp. 2573-2580
2. Diodato, V.: User preferences for features in back of book indexes. Journal of the American Society for Information Science, 45(7), 1994, pp. 529-536
3. Spyns, P., Meersman, R., Jarrar, M.: Data modelling versus ontology engineering. SIGMOD Record Special Issue 31(4), 2002, pp. 12-17
4. Spyns, P., Tang, Y., Meersman, R.: An ontology engineering methodology for DOGMA. Journal of Applied Ontology 5, 2008
5. Diodato, V., Gandt, G.: Back of book indexes and the characteristics of author and nonauthor indexing: Report of an exploratory study. Journal of the American Society for Information Science, 42(5), 1991, pp. 341-350
6. Anderson, J. D.: Guidelines for Indexes and Related Information Retrieval Devices. NISO Technical Report 2, NISO-TR02-1997. Bethesda, Maryland, NISO Press, 1997
7. Manoj, M., Elizabeth, J.: Information retrieval on Internet using meta-search engines: A review. CSIR, October 2008, pp. 739-746
8. Brodnik, A., Carlsson, S., Degermark, M., Pink, S.: Small forwarding tables for fast routing lookups. In Proc. of ACM SIGCOMM'97, 1997
9. Chao, H. J.: Next Generation Routers. Invited paper, IEEE Proceedings, vol. 90, no. 9, 2002, pp. 1518-1558
10. Gangemi, A., Navigli, R., Velardi, P.: The OntoWordNet Project: Extension and Axiomatization of Conceptual Relations in WordNet. In Proc. of the International Conference on Ontologies, Databases and Applications of Semantics (ODBASE 2003), Catania, Sicily (Italy), 2003, pp. 820-838
11. Mukhopadhyay, D., Biswas, A., Sinha, S.: A New Approach to Design Domain Specific Ontology Based Web Crawler. 10th International Conference on Information Technology, ICIT 2007, Rourkela, India; IEEE Computer Society Press, California, USA; December 17-20, 2007; pp. 289-291
12. Mukhopadhyay, D., Sinha, S.: A New Approach to Design Graph Based Search Engine for Multiple Domains Using Different Ontologies. 11th International Conference on Information Technology, ICIT 2008 Proceedings; Bhubaneswar, India;
DOI: 10.1016/j.physleta.2016.04.039
arXiv: 1508.05226
Localized modulated wave solutions in diffusive glucose-insulin systems

Alain Mvogo (Laboratory of Biophysics, Department of Physics, Faculty of Science, University of Yaounde I, P.O. Box 812, Cameroon; Centre d'Excellence Africain en Technologies de l'Information et de la Communication, University of Yaounde I, Cameroon), Antoine Tambue (The African Institute for Mathematical Sciences (AIMS), Stellenbosch University, 6-8 Melrose Road, 7945 Muizenberg, South Africa; Center for Research in Computational and Applied Mechanics (CERECAM), Department of Mathematics and Applied Mathematics, University of Cape Town, 7701 Rondebosch, South Africa), Germain H. Ben-Bolie (Laboratory of Nuclear Physics, Department of Physics, Faculty of Science, University of Yaounde I, P.O. Box 812, Cameroon; Centre d'Excellence Africain en Technologies de l'Information et de la Communication, University of Yaounde I, Cameroon), Timoléon C. Kofané (Laboratory of Mechanics, Department of Physics, Faculty of Science, University of Yaounde I, P.O. Box 812, Cameroon; Centre d'Excellence Africain en Technologies de l'Information et de la Communication, University of Yaounde I, Cameroon)

arXiv:1508.05226v5 [q-bio.TO], 28 Apr 2016. Preprint submitted to Physics Letters A, May 2, 2016.

Abstract. We investigate intercellular insulin dynamics in an array of diffusively coupled pancreatic islet β-cells. The cells are connected via gap junction coupling, where nearest-neighbor interactions are included. Through the multiple scale expansion in the semi-discrete approximation, we show that the insulin dynamics can be governed by the complex Ginzburg-Landau (CGL) equation. The localized solutions of this equation are reported.
The results suggest, from the biophysical point of view, that the insulin propagates in pancreatic islet β-cells using both temporal and spatial dimensions in the form of localized modulated waves.

Keywords: insulin, islet β-cells, complex Ginzburg-Landau equation, modulated waves.

1. Introduction

Blood glucose levels are controlled by a complex interaction of multiple chemicals and hormones in the body. The metabolism of glucose in the β-cells leads to an increase in the adenosine triphosphate (ATP) concentration, closure of ATP-sensitive K⁺ channels, depolarization of the β-cell membrane and opening of the voltage-dependent Ca²⁺ channels, thereby allowing Ca²⁺ influx [1]. The resultant rise in intracellular Ca²⁺ concentration in the β-cell triggers insulin secretion. Insulin, which is secreted from pancreatic β-cells, is the key hormone regulating glucose levels. Physiological responses generated within a cell can propagate to neighboring cells through intercellular communication, involving the passage of a molecular signal to a bordering cell through a gap junction [2,3]; through extracellular communication, involving the secretion of molecular signals [2] such as hormones, neurotransmitters, etc.; and also through extracellular calcium signaling [4,5]. Oscillations of Ca²⁺, rather than metabolism in the β-cell, are thought to be the direct cause of these oscillations in insulin secretion [6]. Sneyd et al. [7] proposed a dynamical model of such oscillations, which assumes gap-junction diffusion of inositol-1,4,5-trisphosphate between adjacent cells. The diffusion of inositol-1,4,5-trisphosphate between cells then initiates not only Ca²⁺ oscillations but also insulin oscillations in adjacent cells. It is well known that the dynamics of insulin is very relevant because it is related to the onset of pathologies such as diabetes, caused by elevated blood glucose.
Careful diabetes mellitus self-management is essential in avoiding the chronic complications that compromise health, and the disease is characterized by many, often not readily observable, clinical effects [8]. Along the same line, increasing attention has been paid to derangements of the sensitivity of tissues to insulin in diverse pathological conditions like diabetes, obesity and cardiovascular diseases [9,10,11]. Therefore, there is an urgent need for improved diagnostic methods that provide more precise clinical assessments and more sensitive detection of symptoms at an earlier stage of the disease. This may be facilitated by improved mathematical models and tools describing the interrelationship dynamics among the physiological variables implicated in the glucose-insulin system. This assumption motivates the present work, where a diffusive model of coupled pancreatic islet β-cells is investigated. The cells are connected via gap junction coupling, a mechanism used by cells to coordinate and synchronize their information [7]. We then look for a clear analytical solution describing the dynamics of insulin in the system. By means of the multiple scale expansion in the semi-discrete approximation, we use the Liénard form of the diffusive system to obtain the complex Ginzburg-Landau (CGL) equation, which describes the evolution of modulated waves in this system. We obtain an expression for the hormonal wave by using the envelope soliton solution of the CGL equation due to Pereira and Stenflo [12], and Nozaki and Bekki [13]. The solution reveals that the hormonal wave is a localized nonlinear excitation which propagates in the form of a breather-like coherent structure. The rest of the paper is organized as follows: in Section 2, we present the mathematical model. In Section 3, we find an envelope soliton of the model by applying the multiple scale expansion in the semi-discrete approximation. Our work is summarized in Section 4.
2. Mathematical model

We propose in this paper a diffusive model of coupled pancreatic islet β-cells, in which the cells are connected via gap junction coupling. Diffusive cell models with gap junction coupling are quite interesting for describing and characterizing quasi-perfect intercellular communication [7]. For the mathematical modeling, let x_n and y_n be the intracellular concentrations of glucose and insulin in the nth cell, respectively. The equations of the N cells coupled through the gap junction are given by

\dot{x}_n = -a_1 x_n - a_2 x_n y_n + a_3,   (1)

\dot{y}_n = D(y_{n+1} - 2y_n + y_{n-1}) + b_1 x_n - b_2 y_n,   (2)

where n = 1, ..., N. The parameter a_1 is the rate constant representing insulin-independent glucose disappearance, and a_2 is the rate constant representing insulin-dependent glucose disappearance; in other terms, a_2 describes the modulation of the effective kinetic constant of glucose utilization by insulin action. The parameter a_3 is the glucose infusion rate, b_1 is the rate constant representing insulin production due to glucose stimulation, and b_2 is the rate constant representing insulin degradation. The parameter D is the coupling strength of the gap junction. We have considered two-nearest-neighbor coupling in a weak coupling regime. A weak coupling between neighboring cells is a situation that arises in the study of bursting activity in the β-cell islets of the pancreas, which secrete insulin in response to the glucose level [14,15]. The model also predicts that oscillations occur if there is sufficient diffusion (D > 0.1) to create adequate concentration mixing in the reacting layers of the cells [16]. Note that the model is nonlinear, due to the presence of the bilinear term x_n y_n. In Ref. [17], ten healthy volunteers (5 males and 5 females) participated in the study.
As indicated by Gaetano and Arino [17], all subjects had negative family and personal histories for diabetes mellitus and other endocrine diseases, were on no medications, had no current illness and had maintained a constant body weight for the six months preceding the study. In the present paper, we have taken the data of the first two subjects, as listed in Table 1.

Table 1. Parameter values for the first two subjects.
Subjects | a_1 | a_2 | a_3 | b_1 | b_2
1 | 0.0226 | 3.8 × 10^-

Many papers have been published detailing the occurrence of oscillation modes in pancreatic islet β-cells. In the present paper, the system being nonlinear, we are mainly interested in nonlinear solutions that can describe the nonlinear dynamics of insulin. It is convenient to transform the system into wave form. This is achieved by differentiating the second equation and substituting x_n into the obtained second-order ordinary differential equation. These transformations do not fundamentally affect the structure of the system, but allow us to conveniently write Eqs. (1) and (2) in a Liénard form, that is, a second-order differential equation with a small damping term. The governing equation for the insulin concentration then reads

\ddot{y}_n + \Omega_0^2 y_n + (\nu_0 + \nu_1 y_n)\dot{y}_n + \lambda_1 y_n^2 + K = D_0(y_{n+1} - 2y_n + y_{n-1}) + D_1(\dot{y}_{n+1} - 2\dot{y}_n + \dot{y}_{n-1}) + D_2(y_{n+1} - 2y_n + y_{n-1})\dot{y}_n,   (3)

where \Omega_0^2 = a_1 b_2, \nu_0 = a_1 + b_2, \nu_1 = a_2, \lambda_1 = a_2 b_2, D_0 = a_1 D, D_1 = D, D_2 = a_2 D and K = -b_1 a_3. For such equations, perturbation approaches are used to obtain nearly exact solutions. Accordingly, we introduce a new variable \psi_n such that

y_n = \epsilon \psi_n.   (4)

As we are interested in solutions in a weakly dissipative medium, we assume the parameters \nu_0 and D_1 to be perturbed at order \epsilon^2. Keeping the first nonlinear term of the development, Eq.
(3) reads

\ddot{\psi}_n + \Omega_0^2 \psi_n + (\epsilon^2 \nu_0 + \epsilon \nu_1 \psi_n)\dot{\psi}_n + \epsilon \lambda_1 \psi_n^2 = D_0(\psi_{n+1} - 2\psi_n + \psi_{n-1}) + \epsilon^2 D_1(\dot{\psi}_{n+1} - 2\dot{\psi}_n + \dot{\psi}_{n-1}) + \epsilon D_2(\psi_{n+1} - 2\psi_n + \psi_{n-1})\dot{\psi}_n.   (5)

The solutions of this equation are regarded as carrier waves modulated by an envelope signal, called envelope solitons, which appear naturally for most weakly dispersive and nonlinear systems in the small-amplitude limit [18]. In the next section, the multiple scale expansion in the semi-discrete approximation is used to find the envelope soliton solution of Eq. (3). The method has been found to be a powerful tool for solving similar equations [18].

3. Multiple scale expansion in the semi-discrete approximation

Referring to the multiple scale expansion [19,20], we proceed by making a change of variables according to the space and time scales Z_n = \epsilon^n z and T_n = \epsilon^n t, respectively. That is, we look for a solution y(z, t) depending on these new sets of variables as a perturbation series of functions

y(z, t) = \sum_{n=1}^{\infty} \epsilon^n \psi_n(Z_0, Z_1, Z_2, ..., T_0, T_1, T_2, ...),   (6)

where Z_n and T_n are treated as independent variables.

3.1 Equation of motion of the amplitude

The semi-discrete approximation is a perturbation technique in which the carrier waves are kept discrete while the amplitude is treated in the continuum limit. For this, we look for a modulated wave solution of the form

\psi_n = F_{1,n} e^{i\theta_n} + F^*_{1,n} e^{-i\theta_n} + \epsilon[F_{0,n} + (F_{2,n} e^{2i\theta_n} + F^*_{2,n} e^{-2i\theta_n})] + O(\epsilon^2),   (7)

where \theta_n = qn - \omega t, q is the wave vector and \omega is the frequency. The procedure consists of replacing the form of the solution, Eq. (7), and its derivatives in the different terms of Eq. (5). We then group terms of the same power of \epsilon, which leads to a system of equations; each of those equations corresponds to one order of approximation for specific harmonics. Substituting Eq. (7) in Eq.
(5) gives

(\ddot{F}_{1,n} - 2i\omega\dot{F}_{1,n} - \omega^2 F_{1,n})e^{i\theta_n} + \epsilon\ddot{F}_{0,n} + \epsilon(\ddot{F}_{2,n} - 4i\omega\dot{F}_{2,n} - 4\omega^2 F_{2,n})e^{2i\theta_n} + \Omega_0^2[F_{1,n}e^{i\theta_n} + \epsilon F_{0,n} + \epsilon F_{2,n}e^{2i\theta_n}] + \epsilon^2\nu_0(\dot{F}_{1,n} - i\omega F_{1,n})e^{i\theta_n} + \epsilon\nu_1[F_{1,n}(\dot{F}_{1,n} - i\omega F_{1,n})e^{2i\theta_n} + \epsilon F_{1,n}\dot{F}_{0,n}e^{i\theta_n} + \epsilon F_{0,n}(\dot{F}_{1,n} - i\omega F_{1,n})e^{i\theta_n}] + \epsilon\lambda_1(F_{1,n}^2 e^{2i\theta_n} + 2F_{1,n}F^*_{1,n}) + 2\epsilon^2\lambda_1(F_{1,n}F_{0,n} + F^*_{1,n}F_{2,n})e^{i\theta_n} = D_0(F_{1,n+1}e^{iqa} + F_{1,n-1}e^{-iqa} - 2F_{1,n})e^{i\theta_n} + \epsilon D_0(F_{0,n+1} + F_{0,n-1} - 2F_{0,n}) + \epsilon D_0(F_{2,n+1}e^{2iqa} + F_{2,n-1}e^{-2iqa} - 2F_{2,n})e^{2i\theta_n} + \epsilon^2 D_1[\dot{F}_{1,n+1}e^{iqa} + \dot{F}_{1,n-1}e^{-iqa} - 2\dot{F}_{1,n} - i\omega(F_{1,n+1}e^{iqa} + F_{1,n-1}e^{-iqa} - 2F_{1,n})]e^{i\theta_n} + \epsilon D_2 F_{1,n}(F_{1,n+1}e^{iqa} + F_{1,n-1}e^{-iqa} - 2F_{1,n})e^{2i\theta_n} + \epsilon^2 D_2 F_{1,n}(F_{0,n+1} + F_{0,n-1} - 2F_{0,n})e^{i\theta_n} + \epsilon^2 D_2 F_{0,n}(F_{1,n+1}e^{iqa} + F_{1,n-1}e^{-iqa} - 2F_{1,n})e^{i\theta_n}.   (8)

Since the envelope function varies slowly in space and time, we use the continuum approximation for F in a multiple scale expansion such that

F_{n\pm 1} = F \pm \epsilon \frac{\partial F}{\partial Z_1} \pm \epsilon^2 \frac{\partial F}{\partial Z_2} + \frac{\epsilon^2}{2}\frac{\partial^2 F}{\partial Z_1^2} + O(\epsilon^3),   (9)

and

\frac{\partial F_n}{\partial t} = \epsilon \frac{\partial F}{\partial T_1} + \epsilon^2 \frac{\partial F}{\partial T_2} + O(\epsilon^3).   (10)

Equating the dc, first- and second-harmonic terms, we get respectively

F_0 = -\frac{2\lambda_1}{\Omega_0^2}|F_1|^2,   (11)

F_2 = \frac{\lambda_1 - i\omega\nu_1 - 4D_2\sin^2(q/2)}{3\Omega_0^2 + 16D_0\sin^4(q/2)} F_1^2,   (12)

\frac{\partial^2 F_1}{\partial T_1^2} - 2i\omega\frac{\partial F_1}{\partial T_2} = i\omega\nu_0 F_1 + (i\omega\nu_1 - 2\lambda_1)\left[-\frac{2\lambda_1}{\Omega_0^2} + \frac{\lambda_1 - i\omega\nu_1 - 4D_2\sin^2(q/2)}{3\Omega_0^2 + 16D_0\sin^4(q/2)}\right]|F_1|^2 F_1 + 2iD_0\sin(q)\frac{\partial F_1}{\partial Z_2} + D_0\cos(q)\frac{\partial^2 F_1}{\partial Z_1^2} + 4i\omega D_1\sin^2(q/2) F_1 + \frac{8\lambda_1 D_2}{\Omega_0^2}|F_1|^2 F_1.   (13)

In the above calculation, we have used the dispersion relation for the carrier wave,

\omega^2 = \Omega_0^2 + 4D_0\sin^2(q/2),   (14)

obtained by linearizing Eq. (8). As we observe in Fig. 1, the corresponding linear spectrum for the first two subjects of Ref. [17] depends on the system parameters: for the parameter values related to subject 2, the spectrum is higher than the linear spectrum given by the parameter values related to subject 1.
Using the new scales ξ n = Z n − V g T n and τ n = T n , with velocity V g = D 0 sin(q) ω ,(15) we finally obtain the complex Ginzburg-Landau equation i ∂F 1 ∂τ 2 + P ∂ 2 F 1 ∂ξ 2 1 + (Q 1 + iQ 2 )|F 1 | 2 F 1 + iγF 1 = 0,(16) where P = ω 2 D 0 cos(q) − D 2 0 sin 2 (q) 4ω 3 , Q 1 = 1 ω 2λ 1 D 2 sin 2 ( q 2 ) + λ 2 1 Ω 2 0 + ω 2 ν 2 1 − 3λ 2 1 + 8λ 2 1 sin 2 ( q 2 ) 12Ω 2 0 + 64D 0 sin 4 ( q 2 ) , Q 2 = −λ 1 ν 1 2Ω 2 0 + 3λ 1 ν 1 − 4D 2 ν 1 sin 2 ( q 2 ) 12Ω 2 0 + 64D 0 sin 4 ( q 2 ) , γ = ν 0 4 + D 1 sin 2 ( q 2 ).(17) It is well known that the complex Ginzburg-Landau equation has as modulational instability criterion for the plane waves P 1 Q 1 + P 2 Q 2 > 0 (P 1 and P 2 are the real and the imaginary parts of the dispersion coefficient, respectively), for which the plane waves in the system are unstable. This relation is known as the Lange and Newell criterion [22]. However, in this work the imaginary part of the dispersion coefficient is equal to zero; the Lange and Newell criterion then reduces to P 1 Q 1 > 0, known as the Benjamin-Feir instability criterion [23]. According to this instability criterion, for P Q 1 > 0 plane waves in the system are unstable, while for P Q 1 < 0 they are stable. Since the manner in which hormonal waves propagate in the system does not depend on the stability criterion, one can expect to find in the system spatiotemporal modulated wave solutions for any carrier wave whose wave vector is in the positive range of P Q 1 . Nonlinear solution of the equation of motion The solutions of nonlinear partial differential equations constitute a crucial factor in the progress of nonlinear dynamics and are a key to the understanding of various biological phenomena. Many analytical investigations have been carried out to find the envelope soliton solutions of these equations, which are localized waves with particle-like behavior, i.e., preserving their forms in space or in time or both in space and time, resulting in spatial, temporal or spatiotemporal solitons, respectively [24].
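To make the stability discussion concrete, here is a small sketch (ours, not from the paper; the helper names are illustrative) that evaluates the dispersion coefficient P and the dissipation coefficient γ of Eq. (17) and applies the Benjamin-Feir test P Q 1 > 0. The nonlinearity coefficients Q 1 , Q 2 are left as inputs, since they also depend on ν 1 , λ 1 , D 2 , which are not fixed numerically here.

```python
import math

def P_coeff(q, D0, Omega0_sq):
    """Dispersion coefficient P of Eq. (17)."""
    w = math.sqrt(Omega0_sq + 4.0 * D0 * math.sin(q / 2.0) ** 2)  # dispersion relation (14)
    return (w ** 2 * D0 * math.cos(q) - D0 ** 2 * math.sin(q) ** 2) / (4.0 * w ** 3)

def gamma_coeff(q, nu0, D1):
    """Dissipation coefficient gamma of Eq. (17)."""
    return nu0 / 4.0 + D1 * math.sin(q / 2.0) ** 2

def benjamin_feir_unstable(P, Q1):
    """Plane waves are modulationally unstable when P*Q1 > 0 (real dispersion coefficient)."""
    return P * Q1 > 0

# For q -> 0, P tends to D0/(4*Omega0) > 0; at q = pi/2 the cos(q) term vanishes
# and P becomes negative, so the sign of P*Q1 can change across the Brillouin zone.
Omega0_sq = 0.0226 * 0.0437  # subject 1: Omega0^2 = a1*b2
print(P_coeff(1e-6, 0.12, Omega0_sq), P_coeff(math.pi / 2, 0.12, Omega0_sq))
```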
We assume that the envelope soliton solution of Eq. (16) has the form of the one proposed by Pereira and Stenflo [12], and Nozaki and Bekki [13] F 1 (ξ 1 , τ 2 ) = Ae φ 1 + e (φ+φ * ) (1+iµ) .(18) The real part F 1r and the imaginary part F 1i of F 1 (ξ 1 , τ 2 ) are given respectively by F 1r (ξ 1 , τ 2 ) = A e −φ + cos(2µφ)e φ 2(cosh(2φ) + cos(2µφ)) and F 1i (ξ 1 , τ 2 ) = −A sin(2µφ)e φ 2(cosh(2φ) + cos(2µφ)) , where φ = qξ 1 − ωτ 2 , µ = −β ± √(2 + β 2 ) and β = − 3Q 1 2Q 2 . Using the solution of F 1 given by Eq. (19) and from Eq. (4), we have y = 2ǫ(F 1r cos θ − F 1i sin θ) + ǫ 2 [F 0 + 2(F 2r cos 2θ − F 2i sin 2θ)] + O(ǫ 3 ),(20) where F 2r and F 2i are the real and imaginary parts of F 2 , respectively, such that F 2r = c 1 (F 2 1r − F 2 1i ) + 2c 2 F 1r F 1i and F 2i = c 2 (F 2 1i − F 2 1r ) + 2c 1 F 1r F 1i ,(21) with c 1 = λ 1 − 4D 2 sin 2 ( q 2 ) 3Ω 2 0 + 16D 0 sin 4 ( q 2 ) and c 2 = ων 1 3Ω 2 0 + 16D 0 sin 4 ( q 2 ) . Inserting Eqs. (19) and (21) into Eq. (20), we obtain for the insulin dynamics the following solution y n (t) = ǫA cos(θ n − 2αφ n )e φn + cos θ n e −φn (cosh 2φ n + cos 2αφ n ) + ǫA 2 − λ 1 Ω 2 0 (cosh 2φ n + cos 2αφ n ) + ǫ 2 A 2 (c 1 cos 2θ n + c 2 sin 2θ n ) × 2 cos 2αφ n + cos 4αφ n e 2φn + e −2φn (cosh 2φ n + cos 2αφ n ) + ǫA 2 (c 1 sin 2θ n − c 2 cos 2θ n ) × 2 sin 2αφ n + sin 4αφ n e 2φn (cosh 2φ n + cos 2αφ n ) , where φ n = ǫq(n − V g t) − ωǫ 2 t. In Fig. 3, we have represented the evolution of the solution at different times according to the parameter values related to the first two healthy subjects of Ref. [17]. As we observe in this figure, the solution is indeed a localized modulated solution, and it propagates in a structurally stable manner. As interestingly remarked in the present work, the modulated solution involved in the system appears in the form of a breather-like coherent structure, and it propagates with the same dynamics for the different parameter values related to the two healthy subjects.
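As a consistency check on the ansatz parameters (a sketch of ours, not from the paper): from µ = −β ± √(2 + β 2 ), both branches of µ satisfy the quadratic µ 2 + 2βµ − 2 = 0 for any β = −3Q 1 /(2Q 2 ), which is easy to verify numerically.

```python
import math

def mu_branches(Q1, Q2):
    """Chirp parameter of the Pereira-Stenflo / Nozaki-Bekki ansatz (18):
    beta = -3*Q1/(2*Q2), mu = -beta +/- sqrt(2 + beta^2)."""
    beta = -3.0 * Q1 / (2.0 * Q2)
    root = math.sqrt(2.0 + beta ** 2)
    return beta, (-beta + root, -beta - root)

# Illustrative (made-up) values of Q1, Q2; both branches solve mu^2 + 2*beta*mu - 2 = 0.
beta, (mu_plus, mu_minus) = mu_branches(0.3, -0.05)
for mu in (mu_plus, mu_minus):
    print(mu, mu ** 2 + 2.0 * beta * mu - 2.0)
```

Note that the two branches always have opposite signs, since their product equals −2.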
This assumption can lead to the conclusion that the insulin propagates in pancreatic islet β-cells using localized modulated solitonic waves. Let us recall that the localized modulated oscillations obtained in this work are involved in many other biophysical systems. Under certain conditions, they can move and transport energy along the system [25,26,27]. As recently demonstrated in [25], breathing modes are also responsible for energy sharing between α-polypeptide coupled chains. Also, localized oscillations can be precursors of the bubbles that appear in the thermal denaturation of DNA, and they have been shown to describe the breaking of the hydrogen bonds linking two bases [26]. It has also been shown that these localized oscillations can move along microtubule systems [27]. Breathers should then be understood as a triggering signal for the motor proteins to start moving, as also found in this work. In Fig. 4, we have increased the value of ǫ from ǫ = 0.077 to ǫ = 0.099, with the same parameter values used in Fig. 3. This figure clearly reveals the influence of a small perturbation on the dynamics of the hormonal wave. One can easily see that for both subjects the solutions still remain breather excitations. However, these breathers are now represented by modulated solitons, so that the envelopes cover fewer oscillations of the carrier wave than those observed in Fig. 3. It is also observed that the amplitude of the wave has increased. Conclusion We have studied the intercellular insulin dynamics in an array of diffusively coupled pancreatic islet β-cells. The model was formulated by a system of discrete ordinary differential equations where the cells are connected via gap junction coupling. Motivated by the often-unobservable clinical effects due to some pathological diseases [8], the work was devoted to deriving a clear analytical solution describing the insulin dynamics in pancreatic islet β-cells.
Applying a powerful perturbation technique, we have found that the complex Ginzburg-Landau equation is the equation which describes the insulin dynamics. It has been revealed that the solution of the hormonal wave is indeed a localized modulated solitonic wave called a breather. In another regard, the breather has been revealed as mechanically important in other biophysical systems such as collagen [25], DNA [26], and microtubule [27] systems. The correlation with the present work may indicate an important role of breathers and other nonlinear excitations in the dynamics of pancreatic islet β-cells. In a forthcoming work, we intend to investigate long-range effects, since intercellular waves also travel between non-contacting cells [29,30], indicating long-range interactions in the system. ... the internal variable y(t) representing the insulin action and the variable x(t) representing, in the first equation, the glucose concentration. The parameter values chosen are those from the literature for healthy volunteers undergoing the intravenous glucose tolerance test. In a clinical experiment conducted and reported ... During the last three decades, the CGL equation and its modified versions have drawn tremendous attention. These equations describe a variety of physical phenomena in optical waveguides and fibers, plasmas, Bose-Einstein condensation, phase transitions, open flow motions, biomolecular dynamics, spatially extended non-equilibrium systems, etc. [21]. In the present research work, the CGL equation describes the evolution of modulated hormonal waves in a diffusively coupled pancreatic islet β-cells model. This result suggests that the insulin propagates within the islet β-cells using both time and space domains in order to regulate the glucose level.
In another regard, oscillations of insulin secretion, which are likely caused by intrinsic β-cell mechanisms, generate a spatiotemporal dynamics of insulin between cells as modified by exogenous signals such as hormonal and neuronal input. We have represented in Fig. 2 the variations of the constants P , Q 1 , Q 2 , γ and of the product P Q 1 with respect to the wave vector q. The parameter values are those in Table 1. It is observed for both subjects that the coefficients are positive for small values of the wave vector q. The nonlinearity coefficients have very small values. It is also observed that, except for the dissipative coefficient γ, all the other coefficients decrease with increasing wave vector.

Figure 1: (Color online) The dispersion relation of the hormonal wave. Parameter values are: D = 0.12 and Ω 2 0 = a 1 b 2 . Subject 1: a 1 = 0.0226, b 2 = 0.0437. Subject 2: a 1 = 0.0509, b 2 = 0.2062.

Figure 2: (Color online) Variations of coefficients (a) P , (b) Q 1 , (c) Q 2 , (d) the product P Q 1 as a function of the wave vector q of the carrier wave. The parameter values are given in Table 1. Subject 1: a 1 = 0.0226, a 2 = 3.8 × 10 −8 , b 1 = 0.0022, b 2 = 0.0437. Subject 2: a 1 = 0.0509, a 2 = 1.29 × 10 −7 , b 1 = 0.0096, b 2 = 0.2062. With D = 0.12.

Figure 3: (Color online) The solution y n as a function of the position at different times. The parameter values are: Subject 1: D = 0.12, q = 0.035, ǫ = 0.077, a 1 = 0.0226, b 1 = 0.0022, a 2 = 3.8 × 10 −8 , b 2 = 0.0437. Subject 2: D = 0.12, q = 0.2, ǫ = 0.077, a 1 = 0.0509, a 2 = 1.29 × 10 −7 , ...

Figure 4: (Color online) Effects of small perturbation on the hormonal modulated wave. ǫ = 0.099 and t = 5. The parameter values are the same as in Fig. 3.

Table 1: Parameter values [17].

[1] F.M. Ashcroft, J. Clin. Invest. 115, 2047 (2005).
[2] B.E. Isackson, W.H. Evans, S. Boitano, Am. J. Physiol. 280, L221 (2001).
[3] Y. Osipchuk, M. Cahulan, Nature 359, 241 (1992).
[4] A.M. Hofer, S. Curci, M.A. Doble, E.M. Brown, D.I. Soybel, Nat. Cell Bio. 2, 392 (2000).
[5] M.E. Gracheva, J.D. Gunton, J. Theor. Biol. 221, 513 (2003).
[6] P. Gilon, M.A. Ravier, J.C. Jonas, J.C. Henquin, Diabetes 51, S144 (2002).
[7] J. Sneyd, M. Wilkins, A. Stahonja, M. Sanderson, Biophys. Chem. 72, 101 (1998).
[8] The Diabetes Control and Complications Trial Research, New England J. Med. 329, 977 (1993).
[9] A. De Gaetano, G. Mingrone, M. Castagneto, P.A. Tataranni, A.V. Greco, Am. J. Physiol. 271, E93 (1996).
[10] R.A. Defronzo, E. Ferrannini, Diabetes Care 14, 173 (1991).
[11] S. Frontoni, L. Ohman, J.R. Haywood, R.A. Defronzo, L. Rossetti, Am. J. Physiol. 262, E191 (1992).
[12] N.R. Pereira, L. Stenflo, Phys. Fluids 20, 1733 (1977).
[13] K. Nozaki, N. Bekki, J. Phys. Soc. Jpn. 53, 1581 (1984).
[14] S. Raghavachari, J.A. Glazier, Phys. Rev. Lett. 82, 2991 (1999).
[15] M. Perez-Armandriz, M.C. Roy, D.C. Spray, M.V.L. Bennet, Biophys. J. 59, 76 (1991).
[16] J.P. Keener, Bull. Math. Bio. 63, 625 (2001).
[17] A. De Gaetano, O. Arimo, J. Math. Biol. 40, 136 (2000).
[18] M. Remoissenet, Phys. Rev. B 33, 2386 (1986).
[19] D.J. Kaup, A.C. Newell, Phys. Rev. B 18, 5162 (1978).
[20] A.C. Newell, J. Math. Phys. 19, 1126 (1978).
[21] A. Hasegawa, Plasma Instabilities and Nonlinear Effects (Springer-Verlag, Berlin, 1975).
[22] C.G. Lange, A.C. Newell, SIAM J. Appl. Math. 27, 441 (1974).
[23] T.B. Benjamin, J.E. Feir, J. Fluid Mech. 27, 417 (1967).
[24] S. Shwetanshumala, Progress In Electromagnetics Research Letters 3, 17 (2008).
[25] A. Mvogo, G.H. Ben-Bolie, T.C. Kofane, Chaos 25, 063115 (2015).
[26] T. Dauxois, M. Peyrard, A.R. Bishop, Phys. Rev. E 47, R44 (1993).
[27] S. Zdravkovi, A.N. Bugay, G.F. Aru, A. Maluckov, Chaos 24, 023139 (2014).
[28] G.D. Mitsis, M.G. Markakis, V.Z. Marmarelis, IEEE Trans. Biomed. Eng. 56, 2347 (2009).
[29] A. Mvogo, A. Tambue, G.H. Ben-Bolie, T.C. Kofane, Commun. Nonlinear Sci. Numer. Simulat. 39, 396 (2016).
[30] W.D. Kepseu, P. Woafo, Phys. Rev. E 78, 011922 (2008).
[]
[ "EXP FUNCTION FOR EDWARDS CURVES OVER LOCAL FIELDS", "EXP FUNCTION FOR EDWARDS CURVES OVER LOCAL FIELDS" ]
[ "Giuseppe Filippone \nDepartment of Mathematics and Computer Science\nUniversity of Palermo\nVia Archirafi 3490123PalermoItaly\n" ]
[ "Department of Mathematics and Computer Science\nUniversity of Palermo\nVia Archirafi 3490123PalermoItaly" ]
[]
We extend the exponential map Exp for complex elliptic curves in short Weierstrass form to Edwards curves over local fields. Subsequently, we compute the map Exp for Edwards curves over the local field of p-adic numbers.
10.3934/amc.2023012
[ "https://export.arxiv.org/pdf/2303.09985v1.pdf" ]
257,622,784
2303.09985
6f413ea3463c54c7e7b68491e187680241b672ba
EXP FUNCTION FOR EDWARDS CURVES OVER LOCAL FIELDS 17 Mar 2023 Giuseppe Filippone Department of Mathematics and Computer Science, University of Palermo, Via Archirafi 34, 90123 Palermo, Italy We extend the exponential map Exp for complex elliptic curves in short Weierstrass form to Edwards curves over local fields. Subsequently, we compute the map Exp for Edwards curves over the local field of p-adic numbers. Introduction The literature on elliptic curves and their applications in cryptography is well consolidated. Recently, curves such as Montgomery elliptic curves and Edwards curves (in particular in their twisted version) have gained great popularity for their cryptographic applications. Edwards curves were first introduced in 2007 by H. Edwards [8]. These curves are already the subject of many papers in cryptography [5-7, 12, 15]. Compared to the classic elliptic curves in Weierstrass form, they are more efficient for cryptographic use and for the (single or multiple) digital signature. An application of Edwards curves to Goppa codes is shown in [9]. Since the Weierstrass elliptic functions fulfill the identity (℘ ′ (z)/2) 2 = ℘(z) 3 − (g 2 /4)℘(z) − g 3 /4, where g 2 , g 3 ∈ C are constants, the function Exp : z → ℘(z), 1 2 ℘ ′ (z) maps an element z belonging to the complex torus C/Λ, where Λ is the period lattice of ℘, to a point belonging to the corresponding elliptic curve in short Weierstrass form of the complex projective plane, defined by the equation y 2 = x 3 − (g 2 /4)x − g 3 /4. Moreover, it is such that Exp(z 1 + z 2 ) = Exp(z 1 ) * Exp(z 2 ) (see e.g. §VI and §IX in [22]), where the operation * is given by the chord-and-tangent law on the points of the elliptic curve. In this paper, we extend the above exponential map to Edwards curves over local fields, and we give a particular specialization of this map over the local field Q p of p-adic numbers.
We are motivated by authoritative literature on the matter of lifting, summarized in [21] where the author gives a survey connecting the lifting to the discrete logarithm problem over elliptic curves in Weierstrass form. Although cryptosystems over infinite fields have received little attention in the past, in [25] the authors gave a cryptosystem based on quotient groups of an elliptic curve in Weierstrass form over the p-adic number field, able to encrypt messages with variable lengths. This led to public-key cryptosystems with hierarchy management [26], which look interesting for their possible applications. More recently, similar topics have been investigated in [24], where the authors consider twisted Edwards curves over local fields and introduce a cryptosystem based on quotient groups of twisted Edwards curves over local fields. For these reasons, although it is possible to extend the above map to other forms of elliptic curves (such as Legendre form, Jacobi form, Hessian form, Huff form), in this work we will focus only on the Edwards form. To the best of our knowledge, there are no other papers in which this study was already addressed. In section 2, we describe Edwards curves and their relationship with elliptic curves in Weierstrass form. In section 3, we extend the map Exp for elliptic curves in Weierstrass form over C to Edwards curves over local fields. Finally, in section 4, we exhibit the map Exp for Edwards curves when the local field taken into account is the field Q p of p-adic numbers. Prerequisites and notations The goal of this paper is to compute the map Exp for the Edwards curves over local fields. For a general introduction to local fields, we address the reader to a classic book, e.g. [20]. 
Here we summarize some results on Edwards curves, which will be used later and give explicitly a reduction (theorem 2.1) to canonical forms of divisors on an Edwards curve and an explicit equivalence, under particular conditions, between a class of Edwards curves and a class of elliptic curves in Weierstrass form (see theorem 2.3). Definition 2.1 (Edwards curves). A (non-smooth) algebraic curve over a field K which, with respect to a suitable coordinate system, has the equationx 2 +ŷ 2 = 1 + dx 2ŷ2 , where d ∈ K is such that d(d − 1) = 0, is called an Edwards curve E. Recall that, over a field K of characteristic different from 2, a (smooth) elliptic curve (possessing at least a K-rational point) can be represented in a suitable coordinate system by the Weierstrass equation y 2 = x 3 + a ′ x 2 + b ′ x, having one point at infinity Ω = [Z : X : Y ] = [0 : 0 : 1] on the y-axis. Hence, from here on, unless otherwise specified, we will consider an elliptic curve in Weierstrass form defined by the latter equation. In the following, we provide a brief introduction to the group law for Edwards curves, which was first considered in [5] (cf. also [3,8]). Formally speaking, one has to take into account the group of divisor classes Let κ be the unique hyperbola (containing O ′ = (0, −1), 2Ω 1 and 2Ω 2 ), passing through P and Q, which intersects the curve E in a further point R = (x R , y R ). Let l R : Y − y R Z = 0 be the line passing through R and parallel to the x-axis (thus l R pass through S = (−x R , y R ) as well). One has that div κ (Y − y R Z) · X = P + Q − S − O, hence (P − O) + (Q − O) ≡ (S − O). The above group law can be summarized into the following addition and doubling formulas, where for all (not necessarily distinct) points P = (x P , y P ) and Q = (x Q , y Q ), the sum divisor S − O ≡ (P − O) + (Q − O) is such that: S = x P y Q + x Q y P 1 + dx P x Q y P y Q , y P y Q − x P x Q 1 − dx P x Q y P y Q . Remark 2.2 (cf. [5]). 
Note that (P − O) + (Q − O) ≡ O − O if and only if P = (x P , y P ) and Q = (x Q , y Q ) are such that y P = y Q and x P = −x Q , that is, Q is the symmetric point, with respect to the y-axis, to the point P . Remark 2.3. Note that if the parameter d is a non-square, then the denominators in the addition and doubling formulas cannot vanish [5] and the affine points of the curve give in turn a subgroup of the whole group of divisor classes (cf. corollary 2.2). In terms of the group of divisor classes, one finds either of the following reduced divisors in each divisor class. Theorem 2.1 (Jacobian of Edwards curves). Let E be an Edwards curve, and let J (E) be the Jacobian of E. Every divisor D ∈ J (E) has one of the following canonical forms: 1. D ≡ P − O; 2. D ≡ (P − O) + (Ω 1 − O); 3. D ≡ (P − O) + (Ω 2 − O); 4. D ≡ (P − O) + (Ω 1 − Ω 2 ), where P ∈ E is an affine point. In particular, the divisors equivalent to P − O form a subgroup J 0 (E) (corollary 2.2) of index 4 in J (E), and 2Ω 1 ≡ O ′ + O and 2Ω 2 ≡ H ′ + H, where O = (0, 1), O ′ = (0, −1), H = (1, 0), and H ′ = (−1, 0). Proof. Let D = D 1 + D 2 be a divisor of E, where D 1 is such that every point in its support is an affine point, and D 2 = t 1 Ω 1 + t 2 Ω 2 with t 1 , t 2 ∈ Z. We show that every even multiple of Ω 1 and Ω 2 is equivalent to a multiple of O ′ + O and H + H ′ , respectively. Indeed, we have that: div X Z = O ′ + O − 2Ω 1 , div Y Z = H ′ + H − 2Ω 2 , thus O ′ + O ≡ 2Ω 1 and H ′ + H ≡ 2Ω 2 . As a consequence, we can reduce D to one of the canonical forms shown in the claim, by exploiting the above rule, the group law for E and the following remark: if t 1 and t 2 are both odd, first we reduce D to one of these two forms ( P −O)+(Ω 1 −Ω 2 ) or (P −O)+(Ω 2 −Ω 1 ), because D is a zero degree divisor, but the latter is equivalent to the former because Ω 1 − Ω 2 ≡ (Ω 2 − Ω 1 ) + (O ′ + O) − (H ′ + H). 
Finally, since 2(Ω 1 − O) = 2Ω 1 − 2O ≡ (O ′ + O) − 2O = O ′ − O ∈ J 0 (E), 2(Ω 2 − O) = 2Ω 2 − 2O ≡ H ′ + H − 2O ≡ O − O ∈ J 0 (E), the quotient group J (E) J 0 (E) is isomorphic to Z 2Z ⊕ Z 2Z . Corollary 2.2. The subset J 0 (E) of zero degree divisors whose support contains only affine points is a subgroup of J (E). Proof. Let (P − O), (Q − O) ∈ J 0 (E) be two divisors, where P, Q ∈ E(K). By remark 2.2 one has that −(Q − O) ∈ J 0 (E), and by remark 2.3 one has that (P − O) − (Q − O) ∈ J 0 (E). Remark 2.4. Given an elliptic curve in Weierstrass form W, it is usual to identify the non-zero divisor P − Ω with the point P of W(K), and the zero divisor Ω − Ω with Ω. Similarly, one can denote the non-zero divisor P − O of J 0 (E) with the affine point P of E(K), and the zero divisor O − O with O. Hence, one may refer to either the Jacobian or the group of K-rational points of these curves, indifferently. In the following, we describe under which conditions one has an equivalence between Edwards curves E and elliptic curves in Weierstrass form W. Definition 2.2 (cf. [5,9]). Let E be an Edwards curve defined, over a field K of characteristic different from 2, by the equationx 2 +ŷ 2 = 1 + dx 2ŷ2 , where d(d − 1) = 0. Let 0 = x 1 ∈ K be such that x 1 and (1 − d) are both non-square or square in K, and let y 1 ∈ K such that y 2 1 = 4x 3 1 1−d . Putting a ′ = 2x 1 1+d 1−d and b ′ = x 2 1 , one considers the elliptic curve in Weierstrass form W = W d,x1 , defined over K by the equation y 2 = x 3 + a ′ x 2 + b ′ x, and one denotes by α and β the two following rational maps: α : E(K) −→ W(K) (2.1a) (x,ŷ) −→ (x, y) = x 1 1 +ŷ 1 −ŷ , y 1 (1 +ŷ) x(1 −ŷ) , α −1 = β : W(K) −→ E(K) (2.1b) (x, y) −→ (x,ŷ) = y 1 x x 1 y , x − x 1 x + x 1 , which make W and E birationally equivalent. 
Moreover, one extends the definition of α and β by putting α((0, 1)) = Ω, β(Ω) = (0, 1), α((0, −1)) = (0, 0) and β((0, 0)) = (0, −1); and (possibly) β((t 1 , 0)) = β((t 2 , 0)) = Ω 1 , β((−x 1 , ±s 1 )) = Ω 2 , where (t 1 , 0), (t 2 , 0), (−x 1 , ±s 1 ) ∈ W(K), with t 1 , t 2 = 0 (see [9] for further comments). In the following, we stress the meaning of taking d a non-square in a field K. Theorem 2.3 (Isomorphism between J (W) and J 0 (E)). Let both E and W as in definition 2.2. If d is not a square in the field K, then there is an isomorphism over K between the group J (W) and the subgroup J 0 (E) defined in theorem 2.1. Proof. By using the parameters x 1 , y 1 , a ′ , and b ′ as defined in definition 2.2, we prove that, if d is a non-square, then the rational map β in (2.1b) defines a biregular map between the elliptic curve in Weierstrass form W of equation y 2 = x 3 +a ′ x 2 +b ′ x and the subset of E consisting of its affine points. We are left with proving that there is no point in W with abscissa −x 1 and that (0, 0) is the only point in W(K) with ordinate y = 0. The former assertion follows from the fact that, by intersecting the line x = −x 1 and the curve W one has that −x 3 1 + a ′ x 2 1 − b ′ x 1 is equal to dy 2 1 , which is a non- square in K because d is a non-square and, therefore, there is no point in W(K) with abscissa −x 1 . The latter assertion follows from the fact that the intersection between the line y = 0 and the curve W has no roots in K except x = 0. More precisely, since ∆(x 2 + a ′ x + b ′ ) = d 4x1 1−d 2 is not a square in K because d is a non-square, then (0, 0) is the only point in W(K) with ordinate y = 0. Since the map β in (2.1b) transforms a line through P ∈ W(K) and Q ∈ W(K) onto the hyperbola through β(P ), β(Q), O ′ , 2Ω 1 and 2Ω 2 , and maps vertical lines onto horizontal lines, then β induces a group homomorphism of the corresponding divisor classes groups. 
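The addition law of section 2 and the map β of (2.1b) are easy to exercise over a small finite field. The toy sketch below (our illustration, not from the paper) takes p = 13 and d = 2: d is a non-square mod 13, x 1 = 1 and 1 − d = 12 are both squares, and y 1 = 3 satisfies y 2 1 = 4x 3 1 /(1 − d) mod 13, giving a ′ = 7 and b ′ = 1.

```python
p, d = 13, 2          # d = 2 is a non-square mod 13, so denominators below never vanish
x1, y1 = 1, 3         # y1^2 = 4*x1^3/(1-d) mod p (indeed 9 = 4*12 mod 13)
a_w = 2 * x1 * (1 + d) * pow(1 - d, -1, p) % p   # a' = 2*x1*(1+d)/(1-d) = 7
b_w = x1 * x1 % p                                # b' = x1^2 = 1

def edwards_add(P, Q):
    """Addition law for x^2 + y^2 = 1 + d*x^2*y^2 with neutral element O = (0, 1)."""
    (xp, yp), (xq, yq) = P, Q
    t = d * xp * xq * yp * yq % p
    return ((xp * yq + xq * yp) * pow(1 + t, -1, p) % p,
            (yp * yq - xp * xq) * pow(1 - t, -1, p) % p)

def on_edwards(P):
    x, y = P
    return (x * x + y * y - 1 - d * x * x * y * y) % p == 0

def beta(x, y):
    """beta: W -> E of (2.1b), for affine points with y != 0 and x != -x1."""
    return (y1 * x * pow(x1 * y, -1, p) % p,
            (x - x1) * pow(x + x1, -1, p) % p)

# Every affine point of W: y^2 = x^3 + a'*x^2 + b'*x with y != 0 maps onto E,
# as theorem 2.3 predicts (no point of W has x = -x1 when d is a non-square).
images = [beta(x, y) for x in range(p) for y in range(1, p)
          if (y * y - x ** 3 - a_w * x * x - b_w * x) % p == 0]
assert all(on_edwards(Q) for Q in images)

H = (1, 0)                   # the point H of theorem 2.1
assert on_edwards(H)
print(edwards_add(H, H))     # 2H = O' = (0, -1), i.e. (0, 12) mod 13
O2 = edwards_add(H, H)
print(edwards_add(O2, O2))   # 4H = O = (0, 1): H has order 4
```

Three-argument `pow` with exponent −1 (Python 3.8+) computes the modular inverse, which exists here precisely because the denominators never vanish for non-square d (remark 2.3).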
From here on, we confine ourselves to the case in theorem 2.3, where one can simply identify the elements P − O of J 0 (E) with the point P , so that O is the neutral element of the group. The map Exp for Edwards curves over local fields In this section, we use the already known results on elliptic curves in Weierstrass form, and we extend these results to Edwards curves. Recall that, as we said in the previous section, we have confined ourselves to the case where there is a birational equivalence between an elliptic curve W in Weierstrass form and an Edwards curve E such that the Jacobian J (W) of W is isomorphic to J 0 (E) (theorem 2.3), that is, the subgroup of divisors of E whose reduced form is P − O, where P is an affine point and O = (0, 1) is taken as the neutral element of the group. Let K be a local field, O K its ring of integers, m K its prime ideal, and k = O K /m K its residue field. We will take the image J 0 k (E) under reduction modulo m K of the group J 0 K (E), then we will investigate under what assumptions one has that J 0 K (E) ∼ = J 0 k (E) ⊕ m K . First, we remark that for elliptic curves W in short Weierstrass form, defined by the equation y 2 = x 3 + ax + b, whose reduction modulo m K is non-singular, the following sequence: (3.1) 0 −→ m K Exp W − −−− → J K W Mod W − −−− → J k W −→ 0 is exact [14] (see also [22, ch. §VII]), thus Im Exp W = Ker Mod W , Exp W is a monomorphism, Mod W is an epimorphism, and one has that J k W ∼ = J K W Ker Mod W = J K W Im Exp W . The map Mod W is nothing else than a simple reduction modulo m K of the coordinates of the points P = [Z : X : Y ] in W(K) which, up to a multiplication times a suitable t ∈ O K , have integral entries Z, X, Y ∈ O K : Mod W : J K W → J k W P = [Z : X : Y ] → [Z (mod m K ) : X (mod m K ) : Y (mod m K )] . Note that Mod W is trivially surjective for Hensel's lemma (see proof in [22, sec. §VII.2.1]). 
Furthermore, the function Exp W is defined as follows: Exp W : m K −→ J K W z −→ 1 : ℘(z) : 1 2 ℘ ′ (z) . Remark 3.1. Note that the Weierstrass ℘-function (and its derivative) can be expressed [1, 2] through a Laurent series in a neighborhood of zero, and one has that ℘(z) = 1 z 2 + ∞ k=2 c k z 2k−2 , ℘ ′ (z) = − 2 z 3 + ∞ k=2 (2k − 2)c k z 2k−3 , where c 2 = g2 20 , c 4 = g3 28 , c k = (2k+1)(k−3) k−2 m=2 c m c k−m , and g 2 , g 3 ∈ C are the parameters of the elliptic curve over C in short Weierstrass form defined by the equation ℘ ′ (z) 2 2 = ℘ 3 (z) − g2 4 ℘(z) − g3 4 . Moreover, one may generalize these results over a local field K taking into account a neighborhood of zero in order to have a convergence for the series expansion of ℘ and ℘ ′ . In this case, one has that g 2 , g 3 and z belong to the local field K. Since z = 0 is the only element of m K mapped to Ω, the homomorphism Exp W is into; thus, for any z in a neighborhood of zero, one can define (3.2) Exp −1 W := −2 ℘(z) ℘ ′ (z) (see §IV and §VII [22] for further details) such that Exp −1 W Exp W (z) = z, whose first terms in a Taylor series are z + g 2 10 z 5 + 3g 3 28 z 7 + g 2 2 120 z 9 + 23g 2 g 3 1540 z 11 + O(z 13 ). Thus, taking E such that its reduction modulo m K is non-singular, we have the following theorem. Theorem 3.1 (The map Exp for Edwards curves). Let K be a local field, O K is its ring of integers, and m K is the prime ideal of O K . If E is an Edwards curve as in theorem 2.3, that is, with d ∈ K a non-square, then the following map: Exp E : m K −→ J 0 K (E) z −→ 2 3 y 1 (3℘(z) − a ′ ) x 1 ℘ ′ (z) , 3℘(z) − a ′ − 3x 1 3℘(z) − a ′ + 3x 1 , where x 1 , y 1 , and a ′ are as in definition 2.2, is an exponential map for E, that is, Exp E (z 1 + z 2 ) = Exp E (z 1 ) + Exp E (z 2 ). Proof. 
Recall that, in definition 2.2, we have a birational equivalence between the Edwards curve E and the elliptic curve in Weierstrass form W of equation y 2 = x 3 + a ′ x 2 + b ′ x, whereas the above map Exp W is defined for elliptic curves in short Weierstrass form W of equation y 2 = x 3 + ax + b. However, we can apply the transformation χ : (x, y) → x − a ′ 3 , y which, through the change of variablesx = x − a ′ 3 ,ȳ = y, changes the Weierstrass form y 2 = x 3 + a ′ x 2 + b ′ x onto the short Weierstrass formȳ 2 =x 3 + ax + b, that is, for any P = (x,ȳ) ∈ W(K) such thatȳ 2 =x 3 + ax + b, we have that χ(P ) = P ′ ∈ W(K). As χ(P ) = P ′ belongs to W(K), we can now compute β(P ′ ), where β in (2.1b), in order to get a point belonging to E(K). In particular, if P = Exp W (z) for some z ∈ m K , then β(χ(P )) = β χ Exp W (z) = β χ 1 : ℘(z) : 1 2 ℘ ′ (z) = = β 1 : ℘(z) − a ′ 3 : 1 2 ℘ ′ (z) = = 2 3 y 1 (3℘(z) − a ′ ) x 1 ℘ ′ (z) , 3℘(z) − a ′ − 3x 1 3℘(z) − a ′ + 3x 1 . Thus, the map Exp E for Edwards curves over the local field K is defined as Exp E = β • χ • Exp W , that is, Exp E : m K −→ J K (E) z −→ Exp E (z) = 2 3 y 1 (3℘(z) − a ′ ) x 1 ℘ ′ (z) , 3℘(z) − a ′ − 3x 1 3℘(z) − a ′ + 3x 1 . Note that χ(Ω) = Ω = [0 : 0 : 1] as the projective map χ maps [Z : X : Y ] onto the point Z : X − a ′ 3 Z : Y , and thus we have that β χ Exp W (0) = β(χ(Ω)) = β(Ω) = O. Finally, we are left to prove that the map Exp E = β • χ • Exp W is a oneto-one homomorphism of groups. On the one hand, the maps Exp W , β, and χ are one-to-one. Indeed, the map β here is bijective as d is not a square, and χ −1 : (x,ȳ) → x + a ′ 3 , y . On the other hand, Exp E is a homomorphism because Exp W and β (see theorem 2.3) are homomorphisms, and χ is a translation, thus one has that: Exp E (z 1 + z 2 ) = β • χ • Exp W (z 1 + z 2 ) = = β • χ • Exp W (z 1 ) + Exp W (z 2 ) = = β • χ • Exp W (z 1 ) + β • χ • Exp W (z 2 ) = = Exp E (z 1 ) + Exp E (z 2 ). Remark 3.2. 
As χ transforms the curve W into the curve W, one has that χ(P 1 + P 2 ) = χ(P 1 ) + χ(P 2 ), where the left term uses the addition formula for W, and the right term uses the addition formula for W. Corollary 3.2. The following is a short exact sequence: (3.3) 0 −→ m K Exp E −−− → J 0 K (E) ModE − −−− → J 0 k (E) −→ 0. Proof. The proof follows from the fact that, from theorem 3.1, Exp E is a monomorphism and Mod E is an epimorphism. Moreover, since, for any z ∈ m K , Exp E (z) = (O(z 3 ), 1 + O(z 3 )), then Mod E (Exp E (z)) = (0, 1), and Im(Exp E ) ⊆ Ker(Mod E ). Finally, together with Exp W , which is invertible by (3.2), the map Exp E = β•χ•Exp W is invertible for z ∈ m K , that is, one can write any point P in Ker(Mod E ) as P = Exp E (z), for some z ∈ m K , thus Ker(Mod E ) ⊆ Im(Exp E ). In the following, we stress the meaning of choosing a curve whose cardinality differs from the cardinality of its ground field. Definition 3.1. Let W be an elliptic curve such that card(J k (W)) = card(k), where k is a finite field. The curve W is an anomalous curve. Non-anomalous curves are subject to attacks by means, for instance, of pairing mappings, that is, efficiently computable, bilinear, and non-degenerate maps e : G 1 × G 2 → G 3 , where typically G 1 and G 2 are cyclic subgroups (such as the Weil pairing, see e.g. §III.8 in [22], and the Ate pairing, see [11]) or quotient groups (such as the Tate pairing, see e.g. [10], and the Eta pairing, see [4]) of the Jacobian of the curve, while G 3 is a subgroup of the multiplicative group of the ground field because the pairing carries the logarithm of an element in G 1 to the logarithm of an element in G 3 (see e.g. [17]). Anomalous curves are safe with respect to these attacks since all the above pairings are defined if and only if the cardinalities of G 1 and G 2 divide q k − 1, where q k is the cardinality of the ground field. 
On the other hand, however, anomalous curves are also subject to attacks, since it is possible to map the Jacobian of such curves to the additive group of the finite field k (see [13, 16, 18, 19, 22, 23]). We refer the reader to §XI.6 in [22] for a simple polynomial-time algorithm able to solve the ECDLP for an anomalous curve.

Proof (of Theorem 3.3). As k = O_K/m_K is finite and W is not anomalous, for any integer h ≥ 1 the sequence

0 → H → J_H(W) → J_k(W) → 0,  where H = m_K/(ϖ_K^h O_K)

and ϖ_K is the uniformizer of K, splits by the Schur–Zassenhaus theorem and therefore defines a section σ_W^h : J_k(W) → J_H(W) which is a homomorphism. Taking the inverse limit σ_W = lim_{h→∞} σ_W^h, we obtain a section σ_W : J_k(W) → J_K(W) which is a homomorphism; hence the sequence splits.

Corollary 3.4. If k = O_K/m_K is finite, W is not an anomalous curve, and E is the Edwards curve birationally equivalent to W, then J⁰_K(E) is isomorphic to J⁰_k(E) ⊕ m_K.

Proof. Since we confined ourselves to the case of Theorem 2.3, we have J⁰_K(E) ≅ J_K(W) and J⁰_k(E) ≅ J_k(W), and the claim follows from Theorem 3.3.

Note that the above exact sequence does not split over K if the elliptic curve in Weierstrass form considered in Theorem 2.3 is an anomalous curve, as we show in the next example.

The map Exp for Edwards curves over Q_p

The goal of this section is to compute the map Exp for Edwards curves E over the local field Q_p of p-adic numbers. In particular, we study the field Q_p through the inverse limit ℤ_p = lim← ℤ/p^kℤ, that is, we compute ℤ/pℤ, ℤ/p²ℤ, ..., ℤ/p^kℤ, approaching the ring ℤ_p of p-adic integers as k → ∞. The field Q_p is a non-Archimedean local field of characteristic zero; thus, putting K = Q_p, its ring of integers O_K is the ring ℤ_p of p-adic integers, its prime ideal m_K is pℤ_p (whose uniformizer ϖ_K equals p), and its residue field O_K/m_K is ℤ/pℤ = GF(p).
Recall that, in Remark 3.1, we gave the Laurent series expansions of the Weierstrass ℘-function and of its derivative ℘′ for a complex number z. As we now focus on the field Q_p, one has to take into account the convergence radius of these series over Q_p. In the context of the field of p-adic numbers, a convergence neighborhood of zero is given by the multiples of p, that is, by the z with p | z. In this neighborhood the series always converge, since c_k z^{2k−2} ≡ 0 (mod p^h) and (2k − 2)c_k z^{2k−3} ≡ 0 (mod p^h) for a suitable positive integer h.

If J⁰_k(E) denotes the image of the subgroup J⁰(E) modulo p^k, then, applying the above changes and observations, the results of Section 3 can be expressed as follows:

J⁰_k(E) = J⁰₁(E) ⊕ Im(Exp_E).

Indeed, the map Exp_E over Q_p can be expressed, through an inverse limit with k increasing, as

Exp_E : pℤ_p/p^kℤ_p → E(ℤ/p^kℤ),
z = ph ↦ ( 2y₁(3℘(z) − a′) / (3x₁℘′(z)) , (3℘(z) − a′ − 3x₁) / (3℘(z) − a′ + 3x₁) ),

where h = 1, 2, ..., p^{k−1}, since ℤ_p = lim← ℤ/p^kℤ. Moreover, through the natural isomorphism

pℤ_p/p^kℤ_p → ℤ/p^{k−1}ℤ, ph ↦ h,  h = 1, 2, ..., p^{k−1},

between Im(Exp_E) and ℤ/p^{k−1}ℤ, we have that J⁰_k(E) = J⁰₁(E) ⊕ ℤ/p^{k−1}ℤ.

It is necessary to make some further adjustments to the map Exp for elliptic curves in Weierstrass form. In particular, as we approximate Q_p with ℤ/p^kℤ for k → ∞, the map Exp for elliptic curves in short Weierstrass form should be rewritten as

Exp_W : pℤ_p/p^kℤ_p → W(ℤ/p^kℤ), z ↦ [tz³ : tz³℘(z) : tz³ · ½℘′(z)],

where t is the least common multiple of the denominators of the series expansions of ℘ and ℘′ (see Remark 3.1). In particular, the multiplication by the factor tz³ makes all the coordinates integral, and therefore avoids modular inversions when a denominator is a multiple of p as we move from the field ℤ/pℤ to the ring ℤ/p^kℤ. For elliptic curves in short Weierstrass form, since Im(Exp_W) = Ker(Mod_W), a point P belongs to Im(Exp_W) if and only if Mod_W(P) = Ω.
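The decomposition J⁰_k(E) = J⁰₁(E) ⊕ ℤ/p^{k−1}ℤ can be illustrated by brute-force point counting. The following sketch uses example values of our own choosing (p = 7 and d = 3, a non-square modulo 7, so every solution of the Edwards equation is a smooth affine point): the number of solutions over ℤ/p²ℤ comes out as exactly p times the number over ℤ/pℤ, matching the factor p^{k−1} for k = 2.

```python
# Illustration (not from the paper): for the Edwards curve
# x^2 + y^2 = 1 + d*x^2*y^2 with d a non-square mod p, every affine point
# mod p is smooth, so by Hensel lifting each one lifts to exactly p points
# mod p^2.  Hence the point count over Z/p^2 Z is p times the count over Z/pZ.

def affine_count(d, m):
    """Count solutions of x^2 + y^2 = 1 + d*x^2*y^2 in (Z/mZ)^2."""
    return sum(1 for x in range(m) for y in range(m)
               if (x * x + y * y - 1 - d * x * x * y * y) % m == 0)

p, d = 7, 3                    # d = 3 is a non-square modulo 7
n1 = affine_count(d, p)        # points of E over Z/pZ
n2 = affine_count(d, p * p)    # points of E over Z/p^2 Z
print(n1, n2)
```

The ratio n2/n1 = p is the cardinality p^{k−1} of Im(Exp_E) for k = 2.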
Therefore, the points belonging to Im(Exp_W) have the form P = [ph₁ : ph₂ : h₃] with p ∤ h₃. However, since Ω, the point at infinity of W, is mapped through β onto the neutral point O ∈ E(ℤ/p^kℤ), all the points belonging to Im(Exp_E) are equivalent, modulo p, to O, and therefore they are all affine points. So, by counting all the affine points (x, y) ∈ E(ℤ/p^kℤ), one may check that the number of these points equals card(J⁰₁(E)) · p^{k−1}, where p^{k−1} is the cardinality of Im(Exp_E). Thus, the map Exp allows us to speed up the addition operation by splitting the original group J⁰_k(E) into pairs (P, c), where P ∈ E(ℤ/pℤ) and c ∈ ℤ/p^{k−1}ℤ.

Remark 4.3. Note that the addition formula for the Weierstrass form cannot be applied when the points, reduced modulo p, give the point at infinity: in that case the sum of two points over ℤ/p^kℤ may return a point that does not exist, such as [0 : 0 : 0]. On the contrary, every point in E(ℤ/p^kℤ) is affine, so the addition formula for an Edwards curve always returns the proper result.

Conclusions

Edwards curves are a recent (2007) mathematical tool used in cryptographic and digital-signature applications because of their efficient (and secure) group-law operations. Until now, these curves have been studied in order to find other ways to employ them and to further speed up their applications, while preserving their security. In this paper, we extended the map Exp for elliptic curves in short Weierstrass form, defined over ℂ by the equation y² = x³ + ax + b, to the Edwards curves E, defined over local fields by the equation x̂² + ŷ² = 1 + dx̂²ŷ², with d a non-square, by using the birational equivalence between the Weierstrass form W of equation y² = x³ + a′x² + b′x and E.
Up to the representation of the elements of J⁰_K(E) as pairs (P, c), where P ∈ E(GF(p)) and c ∈ ℤ/p^{k−1}ℤ, this map provides a tool able to speed up the group-law operations for Edwards curves over local fields, and in particular over the field Q_p of p-adic numbers, by splitting the whole subgroup J⁰_k(E), that is, the image of the subgroup J⁰(E) over ℤ/p^kℤ, into such pairs (P, c). This also motivates the study of the map able to correctly define the pair (P, c), and of a group law for summing two such elements, that is, (P₁, c₁) + (P₂, c₂).

Remark 2.1. Note that, unlike those in Weierstrass form, curves in Edwards form E have two points at infinity, that is, Ω₁ = [Ẑ : X̂ : Ŷ] = [0 : 1 : 0] on the x-axis and Ω₂ = [Ẑ : X̂ : Ŷ] = [0 : 0 : 1] on the y-axis, which are ordinary singular points for E.

…(E), modulo the subgroup of principal divisors on E. In particular, one wants to sum the two divisors (P − O) and (Q − O), where P, Q ∈ E(K) are two affine points of E, and O = (0, 1) ∈ E(K) is taken as the base point, in light of Theorem 2.1.

Theorem 3.3. If k = O_K/m_K is finite, and W is not an anomalous curve, then J_K(W) is isomorphic to the direct sum of J_k(W) and m_K.

Example 1. Let W be the elliptic curve in Weierstrass form defined by the equation y² = x³ + 4x + 7 over k = GF(53), whose Jacobian can be readily verified to have 53 elements. Hence, J_k(W) is isomorphic to the cyclic group C₅₃. However, J_{ℤ/53²ℤ}(W) is not isomorphic to C₅₃ ⊕ C₅₃, as the point P = (3, 130) ∈ W(ℤ/53²ℤ) is such that 53(P − Ω) = [0 : 53 : 1603] − Ω ≠ Ω − Ω.

Remark 4.1. Note that in this case we consider, as the domain of Exp_E, the quotient pℤ_p/p^kℤ_p since, modulo p^k, Exp_E(ph) = Exp_E(p(h + p^{k−1})), with h ∈ ℤ.

Remark 4.2. Note that, as there is a natural isomorphism between Im(Exp_E) and ℤ/p^{k−1}ℤ through the map pℤ_p/p^kℤ_p → ℤ/p^{k−1}ℤ, ph ↦ h, with h = 1, 2, ..., p^{k−1}, we have that J⁰_k(E) = J⁰₁(E) ⊕ ℤ/p^{k−1}ℤ.
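The point count behind Example 1 can be checked naively. This is an illustrative script, not part of the paper; the expected total of 53 elements (affine points plus the point at infinity) is the example's own claim.

```python
# Naive point count for Example 1: y^2 = x^3 + 4x + 7 over GF(53).
# According to the example, the Jacobian has exactly 53 elements, i.e. the
# curve is anomalous: card(J_k(W)) = card(k).

p = 53
count = 1                                   # the point at infinity Omega
for x in range(p):
    rhs = (x**3 + 4 * x + 7) % p
    count += sum(1 for y in range(p) if (y * y) % p == rhs)
print("card(J_k(W)) =", count)
```

A count equal to p means the trace of Frobenius is 1, which is precisely the anomalous condition of Definition 3.1.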
References

[1] M. Abramowitz and I. A. Stegun. "Weierstrass Elliptic and Related Functions". In: Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. New York: Dover Publications, 1972. Chap. 18, pp. 627-671.
[2] T. M. Apostol. Modular Functions and Dirichlet Series in Number Theory. 2nd ed. Graduate Texts in Mathematics 41. New York: Springer-Verlag, 1996. Chap. 1.6-1.11, pp. 9-14.
[3] C. Arène et al. "Faster computation of the Tate pairing". In: Journal of Number Theory 131.5 (2011), pp. 842-857.
[4] P. S. L. M. Barreto et al. "Efficient pairing computation on supersingular Abelian varieties". In: Designs, Codes and Cryptography 42.3 (2007), pp. 239-271.
[5] D. J. Bernstein and T. Lange. "Faster Addition and Doubling on Elliptic Curves". In: Advances in Cryptology - ASIACRYPT 2007. Berlin, Heidelberg: Springer, 2007, pp. 29-50.
[6] D. J. Bernstein and T. Lange. "Inverted Edwards Coordinates". In: Applied Algebra, Algebraic Algorithms and Error-Correcting Codes. Berlin, Heidelberg: Springer, 2007, pp. 20-27.
[7] D. J. Bernstein et al. "Optimizing Double-Base Elliptic-Curve Single-Scalar Multiplication". In: Progress in Cryptology - INDOCRYPT 2007. Berlin, Heidelberg: Springer, 2007, pp. 167-182.
[8] H. Edwards. "A normal form for elliptic curves". In: Bulletin of the American Mathematical Society 44 (2007), pp. 393-423.
[9] G. Filippone. Goppa codes over Edwards curves. In progress. url: https://drive.google.com/file/d/120dQ2xZ-Oz0XmCBQn3rSvIGMg6K4dBmK/view?usp=share_link
[10] G. Frey, M. Müller, and H.-G. Rück. "The Tate pairing and the discrete logarithm applied to elliptic curve cryptosystems". In: IEEE Transactions on Information Theory 45.5 (1999), pp. 1717-1719.
[11] F. Hess, N. P. Smart, and F. Vercauteren. "The Eta Pairing Revisited". In: IEEE Transactions on Information Theory 52.10 (2006), pp. 4595-4602.
[12] H. Hisil et al. "Faster Group Operations on Elliptic Curves". In: Proceedings of the Seventh Australasian Conference on Information Security (AISC '09), vol. 98. Wellington: Australian Computer Society, 2009, pp. 7-20.
[13] M. Jacobson, A. Menezes, and A. Stein. "Solving elliptic curve discrete logarithm problems using Weil descent". In: Journal of the Ramanujan Mathematical Society 16.3 (2001), pp. 231-260.
[14] M. Kosters and R. Pannekoek. "On the structure of elliptic curves over finite extensions of Q_p with additive reduction". arXiv:1703.07888 (2017).
[15] T. Lange. "Edwards Curves". In: Encyclopedia of Cryptography and Security. Boston, MA: Springer US, 2011, pp. 380-382.
[16] F. Leprévost et al. "Generating anomalous elliptic curves". In: Information Processing Letters 93.5 (2005), pp. 225-230.
[17] A. J. Menezes, T. Okamoto, and S. A. Vanstone. "Reducing elliptic curve logarithms to logarithms in a finite field". In: IEEE Transactions on Information Theory 39.5 (1993), pp. 1639-1646.
[18] T. Satoh and K. Araki. Fermat Quotients and the Polynomial Time Discrete Log Algorithm for Anomalous Elliptic Curves. 1998.
[19] I. A. Semaev. "Evaluation of Discrete Logarithms in a Group of p-Torsion Points of an Elliptic Curve in Characteristic p". In: Mathematics of Computation 67.221 (1998), pp. 353-356.
[20] J.-P. Serre. Local Fields. Graduate Texts in Mathematics 67. Translated from the French by M. J. Greenberg. New York: Springer-Verlag, 1979.
[21] J. H. Silverman. "Lifting and Elliptic Curve Discrete Logarithms". In: Selected Areas in Cryptography. Berlin, Heidelberg: Springer, 2009, pp. 82-102.
[22] J. H. Silverman. The Arithmetic of Elliptic Curves. 2nd ed. Graduate Texts in Mathematics 106. New York: Springer-Verlag, 2009.
[23] N. P. Smart. "The Discrete Logarithm Problem on Elliptic Curves of Trace One". In: Journal of Cryptology 12.3 (1999), pp. 193-196.
[24] C. Tang, M. Xu, and Y. Qi. "Cryptography on twisted Edwards curves over local fields". In: Science China Information Sciences 58.1 (2015), pp. 1-15.
[25] M. Xu et al. "Cryptography on elliptic curves over p-adic number fields". In: Science in China Series F: Information Sciences 51.3 (2008), pp. 258-272.
[26] Z. H. Yue and M. Z. Xu. "Hierarchical Management Scheme by Local Fields". In: Acta Mathematica Sinica 27.1 (2010), pp. 155-168.
Dialogue-to-Video Retrieval

Chenyang Lyu, Manh-Duy Nguyen, Van-Tu Ninh, Liting Zhou, Cathal Gurrin, Jennifer Foster
School of Computing, Dublin City University, Dublin, Ireland

arXiv:2303.16761 · doi: 10.48550/arXiv.2303.16761

Abstract. Recent years have witnessed an increasing amount of dialogue/conversation on the web, especially on social media. This inspires the development of dialogue-based retrieval, in which retrieving videos based on dialogue is of increasing interest for recommendation systems. Different from other video retrieval tasks, dialogue-to-video retrieval uses structured queries in the form of user-generated dialogue as the search descriptor. We present a novel dialogue-to-video retrieval system incorporating structured conversational information. Experiments conducted on the AVSD dataset show that our proposed approach using plain-text queries improves over the previous counterpart model by 15.8% on R@1. Furthermore, our approach using dialogue as a query improves retrieval performance by 4.2%, 6.2% and 8.6% on R@1, R@5 and R@10, and outperforms the state-of-the-art model by 0.7%, 3.6% and 6.0% on R@1, R@5 and R@10, respectively.
Keywords: dialog-based retrieval · dialogue search query · conversational information

Introduction

The aim of a video retrieval system is to find the best matching videos according to queries provided by the users [26,25,20,8,5]. Video retrieval has significant practical value, as the vast volume of videos on the web has triggered the need for efficient and effective video search systems.
The first three authors contributed equally.
arXiv:2303.16761v1 [cs.IR] 23 Mar 2023

In this paper, we focus on improving the performance of video retrieval systems by combining textual descriptions of the target video with interactive dialogues between users discussing its content. Previous work on video retrieval applied CNN-based architectures [18,16,12] combined with an RNN [3] to handle visual features and their time-series information [30,32,2]. Meanwhile, another RNN model was employed to embed a textual description into the same vector space as the video, so that their similarity could be computed in order to perform retrieval [26,32,2]. Due to the huge impact of the transformer architecture [29] in both text and image modalities,
[24] which applied a CNN-based encoder and an LSTM [14] to embed data from each modality and to generate questions and answers, Madusa et al's system, ViReD [23], applied Video2Sum [28] to convert a video into a textual summary which can be used with the initial query to get the generated dialogue with the help of a BART model [19]. In this paper, we focus on a less-studied aspect of video retrieval: dialogue-tovideo retrieval where the search query is a user-generated dialogue that contains structured information from each turn of the dialogue. The need for dialogue-tovideo retrieval derives from the increasing amount of online conversations on social media, which inspires the development of effective dialogue-to-video retrieval systems for many purposes, especially recommendation systems [1,11,33]. Different from general text-to-video retrieval, dialogue-to-video uses user-generated dialogues as the search query to retrieve videos. The dialogue contains user discussion about a certain video, which provides dramatically different information than a plain-text query. This is because during the interaction between users in the dialogue, a discussion similar to the following could happen "A: The main character of that movie was involved in a horrible car accident when he was 13. B: No, I think you mean another character.". Such discussion contains subtle information about the video of interest and thus cannot be treated as a plain-text query. Therefore, to incorporate the conversational information from dialogues, we propose a novel dialogue-to-video retrieval approach. In our proposed model, we sequentially encode each turn of the dialogue to obtain a dialogue-aware query representation with the purpose of retaining the dialogue information. Then we calculate the similarity between this dialogue-aware query representation and individual frames in the video in order to obtain a weighted video representation. 
Finally, we use the video representation to compute an overall similarity score with the dialogue-aware query. To validate the effectiveness of our approach, we conduct dialogue-to-video experiments on a benchmark dataset AVSD [1]. Experimental results show that our approach achieves significant improvements over previous state-of-the-art models including FiT and ViReD [24,4,23]. In this section, we describe how our dialogue-to-video retrieval system works. Our retrieval system consists of two major components: 1) a temporal-aware video encoder responsible for encoding the image frames in video with temporal information. 2) a dialogue-query encoder responsible for encoding the dialogue query with conversational information. As shown in Figure 1 In the video encoder, we encode each frame f i to its visual representation f h i . Then we incorporate temporal information to the corresponding frame representation and feed them into a stacked Multi-Head-Attention module, yielding temporal frame representation f h i . In the dialogue-query encoder, we sequentially encode D by letting d h i = Text-Encoder(d h i−1 , d i ) in order to produce a dialogue-history-aware dialogue representation. We then obtain the final dialogue-query representation by fusing all d h i : D h = g(d h 1 , ......, d h m ) where g represents our fusion function. After obtaining D h , we use it to calculate similarities with each frame f h i , which are then used to obtain a video representation V h based on the weighted summation of all f h i . Finally, we obtain the dialogue-tovideo similarity score using the dot-product between D h and V h . 
Temporal-aware Video Encoder

Our temporal-aware video encoder, built on the Vision Transformer [7], first encodes each frame f_i into its visual representation:

f_i^h = Image-Encoder(f_i).  (1)

Then we inject the positional information of the corresponding frame in the video into the frame representation and feed the result to the Multi-Head-Attention module:

[f̄_1^h, ..., f̄_n^h] = Multi-Head-Attention([f_1^p, ..., f_n^p]),  (2)

where f_i^p is the frame representation with positional information, f_i^p = ψ(f_i^h, p_i), and p_i is the corresponding positional embedding. In practice, we add absolute positional embedding vectors to the frame representations as in BERT [6]: f_i^p = f_i^h + p_i. Finally, we obtain the temporal-aware video representation V^h = {f̄_1^h, ..., f̄_n^h}.

Dialogue-query Encoder

The dialogue-query encoder is responsible for encoding the dialogue query D = {d_1, d_2, ..., d_m}:

d_i^h = Text-Encoder(d_{i−1}^h, d_i),  (3)

where Text-Encoder is a Transformer-based encoder model [29,6,27] in our experiments. Then we fuse all d_i^h to obtain a dialogue-level representation D^h for the dialogue query:

D^h = g(d_1^h, ..., d_m^h).  (4)

Interaction between Video and Dialogue-query

To calculate the similarity score between each V and D, we first compute the similarity scores between the dialogue query D^h and each frame f̄_i^h. Then we obtain a weighted summation of all frames f̄_i^h as the video representation V^h:

V^h = Σ_{i=1}^{n} c_i f̄_i^h,  (5)

c_i = e^{φ(D^h, f̄_i^h)} / Σ_{j=1}^{n} e^{φ(D^h, f̄_j^h)}.  (6)

The final similarity score is obtained by the dot product between D^h and V^h: s = D^h (V^h)^T.

Training Objective

We perform in-batch contrastive learning [15,10].
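The in-batch objective can be sketched as follows. This is a NumPy illustration with assumed shapes and variable names of our own; the symmetric log-softmax construction is the standard InfoNCE form, not necessarily the paper's exact code.

```python
# In-batch contrastive learning sketch: S[i, j] = D_i . V_j.  Dialogue->video
# treats row i as a classification over the N videos with target i;
# video->dialogue does the same over columns; the two losses are averaged.

import numpy as np

def log_softmax(S, axis):
    S = S - S.max(axis=axis, keepdims=True)     # numerical stability
    return S - np.log(np.exp(S).sum(axis=axis, keepdims=True))

def contrastive_loss(D, V):
    """D, V: (N, dim) batches of dialogue and video embeddings."""
    S = D @ V.T                                        # similarity matrix
    l_d2v = -np.mean(np.diag(log_softmax(S, axis=1)))  # dialogue -> video
    l_v2d = -np.mean(np.diag(log_softmax(S, axis=0)))  # video -> dialogue
    return (l_d2v + l_v2d) / 2

# Well-separated matched pairs drive the loss toward zero:
loss_easy = contrastive_loss(10 * np.eye(4), 10 * np.eye(4))
print("loss on perfectly matched batch:", loss_easy)
```

On a batch where each dialogue embedding is far more similar to its own video than to any other, both directional losses are close to zero; for mismatched embeddings the loss is strictly positive.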
For a batch of N video-dialogue pairs {(V_1, D_1), ..., (V_N, D_N)}, the dialogue-to-video and video-to-dialogue matching losses are:

L_d2v = −(1/N) Σ_{i=1}^{N} log [ e^{D_i^h (V_i^h)^T} / Σ_{j=1}^{N} e^{D_i^h (V_j^h)^T} ],  (7)

L_v2d = −(1/N) Σ_{i=1}^{N} log [ e^{D_i^h (V_i^h)^T} / Σ_{j=1}^{N} e^{D_j^h (V_i^h)^T} ].  (8)

The overall loss to be minimized during the training process is L = (L_d2v + L_v2d)/2.

Experiments

Dataset

We conduct our experiments on the popular video-dialogue dataset AVSD [1].¹ In AVSD, each video is associated with a 10-round dialogue discussing the content of the corresponding video. We follow the dataset split of AVSD in [1,24]: 7,985 videos for training, 863 videos for validation and 1,000 videos for testing.

Training setup

Our implementation is based on CLIP [27] from Huggingface [31]; CLIP is used to initialize our Image-Encoder and Text-Encoder. For performance and efficiency considerations, we employ ViT-B/16 [27] as our image encoder.² We train our system with a learning rate of 1 × 10⁻⁵ for 10 epochs, with a batch size of 16 and a maximum gradient norm of 1. The optimizer we use is AdamW [21], for which ε is set to 1 × 10⁻⁸. We perform early stopping when the performance on the validation set degrades. We employ R@K, Median Rank and Mean Rank as evaluation metrics [1]. Our code is made publicly available.³

Results

We present our experimental results on the test set of AVSD [1] in Table 1, where we also show the results of recent baseline models, including: 1) LSTM [24], an LSTM-based interactive video retrieval model; 2) FiT [4], a Transformer-based
The results in Table 1 show that our proposed approach, D2V, achieves superior performance compared to previous models. First, D2V+Script, with plain-text video caption input, outperforms its counterpart FiT by a large margin (a 15.8 R@1 improvement) and even obtains significant improvements (by 10.6 R@1) over FiT using dialogue as input, which shows the effectiveness of our proposed model architecture. Second, D2V+Dialogue significantly outperforms D2V+Script and D2V+Summary, by 3.2 R@1 and 2.2 R@1 respectively, which demonstrates the benefit of incorporating dialogue as a search query. The results in Table 1 thus show that the dialogue does indeed contain important information about the video content, and demonstrate the plausibility of using dialogue as a search query.

Effect of Dialogue Rounds

We investigate the effect of the number of dialogue rounds on retrieval performance. The results on the validation set of AVSD are shown in Figure 2, where we use a varying number of dialogue rounds (from 1 to 10) when retrieving videos. We observe a consistent improvement with an increasing number of rounds: with more rounds of dialogue, we obtain better retrieval performance. The improvement brought by additional rounds is especially significant in the early stage (when using 1 round of dialogue versus 3 rounds).

Conclusion

In this paper, we proposed a novel dialogue-to-video retrieval model which incorporates conversational information from dialogue-based queries. Experimental results on the AVSD benchmark dataset show that our approach with a plain-text query outperforms previous state-of-the-art models. Moreover, our model using dialogue as a search query yields further improvements in retrieval performance, demonstrating the importance of utilising dialogue information.

Fig. 1. The architecture of our proposed approach. The model receives video-query pairs and produces similarity scores; each video consists of n frames, V = {f_1, ..., f_n}, and each dialogue query is composed of m turns of conversation, D = {d_1, ..., d_m}.

Fig. 2. Effect of dialogue rounds (x-axis: number of dialogue rounds).

Table 1. Experimental results on the AVSD dataset.

Model                 R@1    R@5    R@10   MedRank   MeanRank
LSTM [24]             4.2    13.5   22.1   N/A       119
FiT [4]               5.6    18.4   27.5   25        95.4
FiT + Dialogue [4]    10.8   28.9   40     18        58.7
ViReD [23]            24.9   49.0   60.8   6.0       30.3
D2V + Script          21.4   45.9   57.5   9.0       39.8
D2V + Summary         23.4   48.5   59.1   6.0       33.5
D2V + Dialogue        25.6   52.1   65.1   5.0       28.9

¹ https://video-dialog.com
² https://openai.com/blog/clip/
³ https://github.com/lyuchenyang/Dialogue-to-Video-Retrieval
⁴ We concatenate all the rounds of dialogue as plain text to serve as the search query.

Acknowledgements

This work was funded by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (18/CRT/6183). We thank the reviewers for their helpful comments.

References

[1] Alamri, H., Cartillier, V., Das, A., Wang, J., Cherian, A., Essa, I., Batra, D., Marks, T.K., Hori, C., Anderson, P., Lee, S., Parikh, D.: Audio-visual scene-aware dialog. In: CVPR (2019)
[2] Anne Hendricks, L., Wang, O., Shechtman, E., Sivic, J., Darrell, T., Russell, B.: Localizing moments in video with natural language. In: ICCV (2017)
[3] Bahdanau, D., Cho, K.H., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: ICLR (2015)
[4] Bain, M., Nagrani, A., Varol, G., Zisserman, A.: Frozen in time: A joint video and image encoder for end-to-end retrieval. In: ICCV (2021)
[5] Cheng, X., Lin, H., Wu, X., Yang, F., Shen, D.: Improving video-text retrieval by multi-stream corpus alignment and dual softmax loss. arXiv:2109.04290 (2021)
[6] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT (2019)
[7] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2020)
[8] Dzabraev, M., Kalashnikov, M., Komkov, S., Petiushko, A.: MDMMT: Multidomain multimodal transformer for video retrieval. In: CVPR (2021)
[9] Gabeur, V., Sun, C., Alahari, K., Schmid, C.: Multi-modal transformer for video retrieval. In: ECCV (2020)
[10] Gao, T., Yao, X., Chen, D.: SimCSE: Simple contrastive learning of sentence embeddings. In: EMNLP (2021)
[11] He, F., Wang, Q., Feng, Z., Jiang, W., Lü, Y., Zhu, Y., Tan, X.: Improving video retrieval by adaptive margin. In: SIGIR (2021)
[12] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
[13] Hezel, N., Schall, K., Jung, K., Barthel, K.U.: Efficient search and browsing of large-scale video collections with vibro. In: MMM (2022)
[14] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735-1780 (1997)
[15] Karpukhin, V., Oguz, B., Min, S., Lewis, P., Wu, L., Edunov, S., Chen, D., Yih, W.t.: Dense passage retrieval for open-domain question answering. In: EMNLP (2020)
[16] Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Communications of the ACM 60(6), 84-90 (2017)
[17] Le, T.K., Ninh, V.T., Tran, M.K., Healy, G., Gurrin, C., Tran, M.T.: AVSeeker: an active video retrieval engine at VBS2022. In: MMM (2022)
[18] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278-2324 (1998)
[19] Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L.: BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: ACL (2020)
[20] Liu, Y., Albanie, S., Nagrani, A., Zisserman, A.: Use what you have: Video retrieval using representations from collaborative experts. arXiv:1907.13487 (2019)
[21] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019)
[22] Luo, H., Ji, L., Zhong, M., Chen, Y., Lei, W., Duan, N., Li, T.: CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning. Neurocomputing (2022)
[23] Madasu, A., Oliva, J., Bertasius, G.: Learning to retrieve videos by asking questions. arXiv:2205.05739 (2022)
[24] Maeoki, S., Uehara, K., Harada, T.: Interactive video retrieval with dialog. In: CVPR Workshops (2020)
[25] Miech, A., Laptev, I., Sivic, J.: Learning a text-video embedding from incomplete and heterogeneous data. arXiv:1804.02516 (2018)
[26] Mithun, N.C., Li, J., Metze, F., Roy-Chowdhury, A.K.: Learning joint embedding with multimodal cues for cross-modal video-text retrieval. In: ICMR (2018)
[27] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021)
[28] Song, Y., Chen, S., Jin, Q.: Towards diverse paragraph captioning for untrimmed videos. In: CVPR (2021)
[29] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NIPS (2017)
[30] Venugopalan, S., Rohrbach, M., Donahue, J., Mooney, R., Darrell, T., Saenko, K.: Sequence to sequence - video to text. In: ICCV (2015)
[31] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., et al.: Transformers: State-of-the-art natural language processing. In: EMNLP: System Demonstrations (2020)
[32] Yang, X., Zhang, T., Xu, C.: Text2Video: An end-to-end learning framework for expressing text with videos. IEEE Transactions on Multimedia 20(9), 2360-2370 (2018)
[33] Zheng, Y., Chen, G., Liu, X., Sun, J.: MMChat: Multi-modal chat dataset on social media. In: LREC (2022)
[ "https://github.com/lyuchenyang/Dialogue-to-Video-Retrieval" ]
[ "Massless fermionic bound states and the gauge/gravity correspondence", "Massless fermionic bound states and the gauge/gravity correspondence" ]
[ "Riccardo Argurio \nPhysique Théorique et Mathématique and International Solvay Institutes\nUniversité Libre de Bruxelles\nC.P. 2311050BruxellesBelgium\n", "Gabriele Ferretti \nInstitute of Fundamental Physics\nChalmers University of Technology\n412 96GöteborgSweden\n", "Christoffer Petersson \nInstitute of Fundamental Physics\nChalmers University of Technology\n412 96GöteborgSweden\n" ]
[ "Physique Théorique et Mathématique and International Solvay Institutes\nUniversité Libre de Bruxelles\nC.P. 2311050BruxellesBelgium", "Institute of Fundamental Physics\nChalmers University of Technology\n412 96GöteborgSweden", "Institute of Fundamental Physics\nChalmers University of Technology\n412 96GöteborgSweden" ]
[]
We study the equations of motion of fermions in type IIB supergravity in the context of the gauge/gravity correspondence. The main motivation is the search for normalizable fermionic zero modes in such backgrounds, to be interpreted as composite massless fermions in the dual theory. We specialize to backgrounds characterized by a constant dilaton and a self-dual three-form. In the specific case of the Klebanov-Strassler solution we construct explicitly the fermionic superpartner of the Goldstone mode associated with the broken baryonic symmetry. The fermionic equations could also be used to search for goldstinos in theories that break supersymmetry dynamically.
10.1088/1126-6708/2006/03/043
[ "https://export.arxiv.org/pdf/hep-th/0601180v3.pdf" ]
10,311,940
hep-th/0601180
1ade49b61ddbd2b68c519693923cead4500b0590
Massless fermionic bound states and the gauge/gravity correspondence

21 Feb 2006

Riccardo Argurio (Physique Théorique et Mathématique and International Solvay Institutes, Université Libre de Bruxelles, C.P. 231, 1050 Bruxelles, Belgium), Gabriele Ferretti and Christoffer Petersson (Institute of Fundamental Physics, Chalmers University of Technology, 412 96 Göteborg, Sweden)

We study the equations of motion of fermions in type IIB supergravity in the context of the gauge/gravity correspondence. The main motivation is the search for normalizable fermionic zero modes in such backgrounds, to be interpreted as composite massless fermions in the dual theory. We specialize to backgrounds characterized by a constant dilaton and a self-dual three-form. In the specific case of the Klebanov-Strassler solution we construct explicitly the fermionic superpartner of the Goldstone mode associated with the broken baryonic symmetry. The fermionic equations could also be used to search for goldstinos in theories that break supersymmetry dynamically.

Introduction and summary

One of the latest surges of interest in the context of the gauge/gravity correspondence (for reviews close to the topics of this work, see [1,2,3]) has been the possibility that some backgrounds might provide the supergravity realization of dynamical supersymmetry (SUSY) breaking. This possibility was first considered for the quiver theories described in [4,5,6]. These theories were constructed as a non-conformal deformation of the conformal theories [7,8,9] dual to the new Sasaki-Einstein manifolds $Y^{p,q}$ [10,11]. Unfortunately, in spite of being chiral theories, these theories do not display true dynamical supersymmetry breaking with a stable ground state, but rather a runaway behavior [5,12], very much like super QCD with $0 < N_f < N_c$ [13].
Still, the possibility of the existence of gravity solutions dual to dynamically broken SUSY has not been ruled out. (Some work on deformations for these theories can be found in [14,15,16,17,18]. For related earlier work see [19].) One of the consequences of spontaneous SUSY breaking (dynamical [20] or tree level [21,22]) is the existence of a fermionic Goldstone mode g, the "goldstino" [23]. Such a mode can arise as a massless bound state of microscopic degrees of freedom in a confining theory and has the distinguishing property of coupling to the supercurrent J without derivative terms; in obvious notation,

$$\langle 0 | J^\mu_\alpha | g_\beta \rangle = f\, \gamma^\mu_{\alpha\beta}, \qquad (1)$$

where $f \neq 0$ is the goldstino coupling. In the context of the gauge/gravity correspondence, such a particle must be described by a normalizable zero mode in the bulk coupling directly to the gravitino $\Psi_\mu$, and can be studied by looking at the bulk fermionic equations of motion. Even in theories that do not break SUSY, the study of the bulk fermionic equations and the search for normalizable zero modes is still of interest. Obviously, with unbroken SUSY, the bosonic and fermionic spectra must match, and one does not obtain additional information from the latter. However, particularly in the case of zero modes, some information may be easier to obtain in the fermionic case, since the fermionic equations are easily linearized and index theorems may be available. In some cases, such as the cascading theory of Klebanov and Strassler (KS) [24], massless fermionic modes (the "axino")¹ must exist as superpartners of the Goldstone boson associated with the breaking of the baryonic U(1) symmetry [25,26,27], and their explicit construction strengthens the correspondence. More generically, N = 1 SUSY implies the presence of massless fermionic superpartners of the scalar fields parameterizing the quantum moduli space of vacua, when there is one (some of these scalars can be seen as Goldstone bosons of broken global continuous symmetries).

The "axino" does not obey (1) (since SUSY is unbroken in this case), and its explicit form helps elucidate precisely how (1) should be interpreted in the bulk. The solution that we find in section 4 has the property that it does not give a source for the supercovariant field strength of the gravitino; more specifically,

$$\Gamma^{MNP} \hat{D}_N \Psi_P = 0 \quad \text{on shell}. \qquad (2)$$

(The notation is discussed in section 2.) We propose that the signature of spontaneous SUSY breaking is the existence of a normalizable zero mode for which (2) is not satisfied. Another way to distinguish a generic massless fermion from the goldstino is by looking at how they transform under the global symmetries of the problem.² For instance, the bosonic zero modes found in [26] are odd under the $\mathbb{Z}_2$ symmetry exchanging the two $S^2$ spheres of the deformed conifold, and the same symmetry should act non-trivially on their fermionic superpartner. On the other hand, a true goldstino should be invariant under such symmetries. We will discuss the details for the KS solution in the conclusions, after having presented the explicit solution.

The purpose of this paper is twofold: on the one hand, we wish to begin addressing the general issues above for a class of KS-like backgrounds (consisting of a constant dilaton and a self-dual three-form) and, on the other, we test these techniques in the true KS model [24] and construct explicitly the fermionic zero mode. Eventually, one will have to consider more complicated backgrounds with more general fluxes, but we feel that the class we are considering in this paper is a good starting point to sharpen one's tools, and it includes at least the important example of [24]. Various aspects of flux compactifications that might be relevant in this context are reviewed in [28].

¹ ...use the word within quotes for sake of brevity and to make connection with the previous literature.
² We thank I. Klebanov for pointing out this possibility to us in the context of the KS solution.

Perhaps the most interesting quality of the general equations we discuss is that the existence of a zero mode hinges on the existence of a solution to the massless Dirac equation on a (six-dimensional) Ricci-flat manifold (see section 3.4). In the compact case, the existence of such a solution implies the existence of a covariantly constant spinor, and thus of a Kähler structure, by the standard arguments of integration by parts. In the non-compact case, however, the boundary terms cannot be neglected and, because of the presence of the warp factor, there is a possibility of having a normalizable zero mode without necessarily implying a Kähler structure. We shall discuss this possibility in the conclusions, after having presented the dependence of the equations on the warp factor. Another possibility would be to leave the Kähler structure untouched but change the three-form appropriately.

The paper is organized as follows. In section 2 we begin by reviewing the fermionic equations of motion of type IIB supergravity obtained in [29] (see also [30,31,32,33]). In section 3 we specialize to the above-mentioned class of backgrounds and show how the equations for the zero modes can be reduced to a set of Dirac and Rarita-Schwinger equations on the internal manifold, starting precisely with the Dirac equation discussed above. In section 4 we turn to an application of the equations just derived and use them to construct the "axino" for the true KS solution. This zero mode is the fermionic partner of the Goldstone mode associated to the breaking of the baryonic U(1) symmetry; it is not to be thought of as a goldstino, and in fact condition (2) is satisfied. We briefly summarize our findings in section 5, where we also present a more detailed discussion of the $\mathbb{Z}_2$ symmetry transformations of the KS solution and comment on the issue related to the Kähler structure mentioned above.
Some useful formulas, like the explicit expression for the spin connection on the deformed conifold, are collected in the appendix. The fermionic equations of motion of type IIB supergravity In this section we review the fermionic equations of motion of type IIB supergravity obtained in [29]. This allows us to make some comments on the conventions and notation used. We will set the Newton constant to one, κ = 1, for convenience. (It can always be reinstated by dimensional analysis.) We will only work to first order in the fermionic fields. In order to follow the more recent literature, we will use, contrary to [29], a "mostly plus" metric. This can be most easily accomplished by letting g M N → −g M N , Γ M → iΓ M and so on, and implies a few sign changes that are easily implemented. The Γ-matrices are all real in the Majorana representation. Our convention for the ǫ-tensor is that it includes the appropriate determinant of the metric and thus transforms as a true tensor, not as a density. Also, when evaluated with flat indices it is purely numerical and we have the sign convention ǫ 0...9 = −ǫ 0...9 = 1. Finally, the five-form F 5 is self-dual ( * 10 F 5 = F 5 ) in the sense F M 1 M 2 M 3 M 4 M 5 = 1 5! ǫ M 1 M 2 M 3 M 4 M 5 M 6 M 7 M 8 M 9 M 10 F M 6 M 7 M 8 M 9 M 10(3) We also define, with flat indices, Γ χ10 = Γ 0 . . . Γ 9 , and the chiralities of the dilatino λ and gravitino Ψ M are: Γ χ10 λ = −λ, Γ χ10 Ψ M = +Ψ M , reversed from the conventions in [29]. 
The dilatino and gravitino equations of motion are, respectively [29]: Γ M D M λ = i 240 Γ M N P QR F M N P QR λ + 1 24 Γ M Γ N P Q G N P Q Ψ M +Γ M Γ R P R Ψ * M ,(4) and Γ M N P D N Ψ P = − 1 48 Γ N RL Γ M G * N RL λ − i 480 Γ M N P Γ QRLST F QRLST Γ N Ψ P + 1 96 Γ M N P (Γ LSR N G LSR − 9Γ LS G N LS )Ψ * P + 1 2 Γ R Γ M P R λ * .(5) Note that the Ψ terms on the RHS of (4) and (5) are not written out explicitly in [29] but they are certainly present for supercovariance as can be seen by taking the SUSY variations, which in our notation read: δλ = Γ M P M ε * + 1 24 Γ M N P G M N P ε,(6) and δΨ M = D M ε+ i 480 Γ N P QRS F N P QRS Γ M ε− 1 96 (Γ M N P Q G N P Q −9Γ N P G M N P )ε * .(7) We have chosen to write out explicitly all the fermionic terms to avoid confusion but one could just as well introduce a supercovariant derivative D N in terms of which eq. (5) becomes simply Γ M N P D N Ψ P = − 1 48 Γ N RL Γ M G * N RL λ + 1 2 Γ R Γ M P R λ * .(8) The RHS of (8) acts as a source for the supercovariant field strength of the gravitino. The ordinary covariant derivatives are by definition: D M λ = ∂ M + 1 4 ω AB M Γ AB − 3 2 iQ M λ (9) D M Ψ R = ∂ M + 1 4 ω AB M Γ AB − 1 2 iQ M Ψ R − Γ L M R Ψ L ,(10) where Q M is the auxiliary U(1) field introduced in [29] and ω AB M the usual spin connection. Notice that the contribution of the Christoffel symbol Γ L M R drops out in the kinetic term for the gravitino but, without it, the derivative is no longer covariant. The fermionic equations of motion in a KSlike ansatz We now specialize the equations reviewed in the previous section to a generic KS-like background precisely defined as follows. Bosonic ansatz Let us review the ansatz step by step in order to distinguish between the basic assumptions and their consequences. We start from the 4+6 split of the geometry. 
The ten dimensional metric is split into a four-dimensional warped Minkowski space described by the coordinates x µ and a six-dimensional internal space described by the coordinates y i : ds 2 10 = e − 1 2 u(y) dx µ dx µ + e 1 2 u(y) dŝ 2(11) where e u(y) is the warp factor and dŝ 2 = g ij (y)dy i dy j is the internal metric which is assumed to describe a smooth non-compact manifold. To avoid confusion we stress that the six-dimensional indices i, j . . . will always be raised/lowered with the metric g ij and all powers of the warp factor written explicitly. Also, with a slight abuse of notation, the covariant derivative D i will denote the true covariant derivative on the internal manifold, and thus it is shifted (by a term containing the warp factor) with respect to the one used in section 2. A subtlety that arises when commuting it through a Γ-matrix is discussed in appendix A. By Poincaré invariance in the Minkowski space, all other fields can depend only on the y i coordinates. Furthermore, the complex 3-form G 3 must be living purely in the six-dimensional internal space. The basic assumption that we make is to take the 3-form to be imaginary self-dual in the six-dimensional internal space: * 6 G 3 = iG 3 ,(12) that is 1 6 ǫ ijklmn G lmn = iG ijk ,(13) where the ǫ-tensor is defined with respect to the internal metric g ij . In particular, for flat indices we have ǫ 4...9 = ǫ 4...9 = 1. The assumption (12) leads to many simplifications. First of all, we can consider a background where the type IIB dilaton and RR scalar can be held constant, thus allowing us to set P M = Q M = 0.(14) We can think of this condition as a kind of extremality condition, since the equations of motion for the dilaton and axion are sourceless for our ansatz. The Bianchi identities further impose that the 3-form is closed: dG 3 = 0,(15) and self-duality thus requires it to be harmonic. 
To preserve 4d Poincaré symmetry, the self-dual 5-form must be taken as:

$$F_5 = \mathcal F_5 + *_{10}\mathcal F_5, \qquad \mathcal F_5 = F_1\wedge dx^0\wedge dx^1\wedge dx^2\wedge dx^3. \qquad (16)$$

The equations of motion of the 5-form are:

$$dF_5 = \frac{1}{8}\,iG_3\wedge G_3^*. \qquad (17)$$

Since G_3 lives purely in the 6-manifold, the EOM above implies that $d\mathcal F_5 = 0$ and thus F_1 = dZ, with Z = Z(y) a real function. The EOM for the 3-form is:

$$d*_{10}G_3 = 4i\,F_5\wedge G_3. \qquad (18)$$

Taking into account the self-duality of G_3, we have:

$$*_{10}G_3 = ie^{-u}\,G_3\wedge dx^0\wedge dx^1\wedge dx^2\wedge dx^3. \qquad (19)$$

Thus the EOM for G_3 implies that $d(4Z - e^{-u})\wedge G_3 = 0$ over the 6-manifold which, due to the self-duality of G_3, implies that $Z = \frac{1}{4}e^{-u}$ up to an additive constant that we set to zero. Thus:

$$\mathcal F_5 = \frac{1}{4}\,de^{-u}\wedge dx^0\wedge dx^1\wedge dx^2\wedge dx^3. \qquad (20)$$

Note that the sign of the 5-form is directly related to the sign in the self-duality equation for G_3.

The Einstein equations for the metric, given the above source fields, yield for the internal part just the condition that the six-dimensional metric is Ricci-flat, R_{ij} = 0. For the four-dimensional part, they yield an equation for the warp factor that can also be consistently derived from the EOM of the 5-form (17) with indices along the 6-manifold, and which is entirely determined by the data on the six-dimensional manifold:

$$-\nabla^2_6\, e^{u} = \frac{1}{12}\,G^*_{lmn}G^{lmn}. \qquad (21)$$

Of course, the above equations do not imply SUSY. As is well known [34], SUSY requires in addition the internal space to be Kähler and the three-form to be (2,1) and primitive. The KS background obeys these conditions and is thus supersymmetric. However, we will not make this assumption in our derivation, except in section 4, where we shall specialize to the KS background.

Without (or even with) SUSY, one might wonder whether it makes sense to impose the self-duality condition on the 3-form. Relaxing this condition would imply a much more generic, but also much more complicated, set-up.
Though such a generalization should ultimately be carried out, we feel that the above set-up is first of all a good training ground, but it might also be of relevance in situations in which SUSY is present asymptotically, and the 3-form could well preserve its self-duality everywhere.

Fermionic ansatz

Now it is time to introduce a 4+6 split for the spinors and the Γ-matrices. We split the Γ-matrices as follows:

$$\Gamma^\mu = e^{\frac{u}{4}}\,\gamma^\mu\otimes 1, \qquad \Gamma^i = e^{-\frac{u}{4}}\,\gamma_{\chi4}\otimes\gamma^i, \qquad \text{with } \gamma_{\chi4} = i\gamma^0\cdots\gamma^3. \qquad (22)$$

The warp factors have been written explicitly, so that the four- and six-dimensional γ-matrices obey

$$\{\gamma^\mu,\gamma^\nu\} = 2\eta^{\mu\nu} \qquad \text{and} \qquad \{\gamma^i,\gamma^j\} = 2g^{ij}. \qquad (23)$$

We are in a Majorana representation where all γ^µ are real and all γ^i imaginary. Similar equations, with the warp factors reversed, hold for Γ_µ and Γ_i. We also define, with flat indices,

$$\gamma_{\chi6} = -i\gamma^4\cdots\gamma^9, \qquad (24)$$

which is such that Γ_{χ10} = γ_{χ4} ⊗ γ_{χ6}.

We consider one of the two linearly independent constant Weyl spinors in four dimensions, ε_+, of positive four-dimensional chirality, together with its complex conjugate ε_− = ε*_+ of negative four-dimensional chirality. We make the most general ansatz suited to the search for zero-momentum massless modes with four-dimensional spin 1/2:

$$\lambda = \epsilon_+\otimes\lambda_- + \epsilon_-\otimes\lambda_+$$
$$\Psi_\mu = \Gamma_\mu\,(\epsilon_+\otimes\chi_- + \epsilon_-\otimes\chi_+)$$
$$\Psi_i = e^{\frac{u}{4}}\,(\epsilon_+\otimes\psi_{+i} + \epsilon_-\otimes\psi_{-i}). \qquad (25)$$

The ± signs denote the four- and six-dimensional chiralities and, in the case of λ, we use the same symbol for the six-dimensional spinor as for the ten-dimensional one, since no confusion can arise. The warp factor in the last line of (25) has been introduced for convenience. Notice that, apart from ε_±, the other spinors are not the complex conjugates of each other, since the ten-dimensional fermions are not Majorana.

Fermionic equations of motion, preliminaries

It is now straightforward to insert (25) and the bosonic ansatz into the equations of motion (4), (5) and to collect the terms proportional to ε_+ and those proportional to ε_−.
We obtain equations that contain only data from the six-dimensional manifold. Namely, the dilatino equation (4) gives rise to the following two equations:

$$\gamma^i D_i\lambda_- + \frac{3}{8}\,\gamma^i\partial_i u\,\lambda_- = \frac{1}{4}\,e^{-\frac{u}{2}}\,\gamma^{jk}G_{ijk}\,\psi_+^{\ i} \qquad (26)$$

and

$$\gamma^i D_i\lambda_+ - \frac{1}{8}\,\gamma^i\partial_i u\,\lambda_+ = -\frac{1}{6}\,e^{-\frac{u}{2}}\,\gamma^{ijk}G_{ijk}\,\chi_+. \qquad (27)$$

Similarly, the component along x^µ of (5) gives rise to:

$$\gamma^{ij}D_i\psi_{+j} - \frac{1}{2}\,\partial^i u\,\psi_{+i} + \frac{3}{8}\,\gamma^{ij}\partial_i u\,\psi_{+j} + 3\gamma^i D_i\chi_- - \frac{3}{8}\,\gamma^i\partial_i u\,\chi_- = \frac{1}{48}\,e^{-\frac{u}{2}}\,\gamma^{ijk}G^*_{ijk}\,\lambda_- + \frac{1}{8}\,e^{-\frac{u}{2}}\,G_{nij}\gamma^{ij}\,\psi_-^{*\,n} \qquad (28)$$

and

$$\gamma^{ij}D_i\psi_{-j} + \frac{3}{8}\,\gamma^{ij}\partial_i u\,\psi_{-j} - 3\gamma^i D_i\chi_+ - \frac{9}{8}\,\gamma^i\partial_i u\,\chi_+ = \frac{1}{8}\,e^{-\frac{u}{2}}\,\gamma^{ijk}G_{ijk}\,\chi^*_- - \frac{1}{8}\,e^{-\frac{u}{2}}\,G_{nij}\gamma^{ij}\,\psi_+^{*\,n}. \qquad (29)$$

Finally, the component along y^i of (5) yields:

$$\gamma^{pij}D_i\psi_{+j} - \frac{1}{8}\,\gamma^{pij}\partial_i u\,\psi_{+j} + 4\gamma^{pi}D_i\chi_- - \frac{1}{2}\,\gamma^{pi}\partial_i u\,\chi_- = \frac{1}{2}\,e^{-\frac{u}{2}}\,G^p{}_{ij}\gamma^{ij}\,\chi^*_+ \qquad (30)$$

and

$$\gamma^{pij}D_i\psi_{-j} + \frac{3}{8}\,\gamma^{pij}\partial_i u\,\psi_{-j} - 4\gamma^{pi}D_i\chi_+ + \frac{1}{2}\,\gamma^{pi}\partial_i u\,\chi_+ - 2\,\partial^p u\,\chi_+ = \frac{1}{8}\,e^{-\frac{u}{2}}\,G^{*\,p}{}_{ij}\gamma^{ij}\,\lambda_+ + \frac{1}{2}\,e^{-\frac{u}{2}}\,G^p{}_{ij}\gamma^{ij}\,\chi^*_- - \frac{1}{2}\,e^{-\frac{u}{2}}\,G^p{}_{ij}\gamma^i\,\psi_+^{*\,j}. \qquad (31)$$

Before making any further manipulations, it is advisable to check which of the six-dimensional fermions can or cannot be gauged away in this particular bosonic background. Therefore we give the SUSY variations (6) and (7) the same treatment we gave the equations of motion. For the SUSY variation

$$\varepsilon = \epsilon_+\otimes\varepsilon_+ + \epsilon_-\otimes\varepsilon_- \qquad (32)$$

(where, again, ε_+ and ε_− are independent), we get:

$$\delta\lambda_+ = 0$$
$$\delta\lambda_- = \frac{1}{24}\,e^{-\frac{3u}{4}}\,G_{ijk}\gamma^{ijk}\,\varepsilon_+$$
$$\delta\chi_+ = 0$$
$$\delta\chi_- = -\frac{1}{4}\,e^{-\frac{u}{4}}\,\gamma^i\partial_i u\,\varepsilon_+ - \frac{1}{96}\,e^{-\frac{3u}{4}}\,G_{ijk}\gamma^{ijk}\,\varepsilon^*_- \qquad (33)$$
$$e^{\frac{u}{4}}\,\delta\psi_{+i} = D_i\varepsilon_+ + \frac{1}{4}\,\gamma_i{}^j\partial_j u\,\varepsilon_+ - \frac{1}{8}\,\partial_i u\,\varepsilon_+ + \frac{1}{16}\,e^{-\frac{u}{2}}\,G_{ijk}\gamma^{jk}\,\varepsilon^*_-$$
$$e^{\frac{u}{4}}\,\delta\psi_{-i} = D_i\varepsilon_- + \frac{1}{8}\,\partial_i u\,\varepsilon_- + \frac{1}{8}\,e^{-\frac{u}{2}}\,G_{ijk}\gamma^{jk}\,\varepsilon^*_+$$

The usual gauge choice Γ^M ψ_M = 0 can easily be seen to correspond to 4χ_− + γ^i ψ_{+i} = 0 and 4χ_+ − γ^i ψ_{−i} = 0, but we will choose a more convenient one in the following.
Disentangling the fermionic equations of motion

We now rewrite all the fermionic equations as a system that can be solved step by step, in principle by inverting the Dirac operator on the six-dimensional transverse manifold.

First of all, we subtract from (29) the contraction with γ_p of (31), to obtain the massless Dirac equation discussed in the introduction:

$$\gamma^i D_i\tilde\chi_+ = 0, \qquad \text{where } \tilde\chi_+ = e^{-\frac{5}{8}u}\,\chi_+. \qquad (34)$$

In order to rewrite (30), we define

$$\tilde\psi_{+i} = e^{-\frac{u}{8}}\,(\psi_{+i} + \gamma_i\chi_-). \qquad (35)$$

If we then choose the gauge $\gamma^i\tilde\psi_{+i} = 0$, we obtain the following simple equation:

$$\gamma^j D_j\tilde\psi_{+i} = \frac{1}{2}\,G_{ijk}\gamma^{jk}\,\tilde\chi^*_+. \qquad (36)$$

Note that, contracting with γ^i, we obtain the condition (which was used to obtain the previous equation):

$$D^i\tilde\psi_{+i} = 0. \qquad (37)$$

Now we turn to (26)-(27). We perform the rescalings:

$$\tilde\lambda_+ = e^{-\frac{u}{8}}\,\lambda_+, \qquad \tilde\lambda_- = e^{\frac{3}{8}u}\,\lambda_-. \qquad (38)$$

Then the equations simply read:

$$\gamma^i D_i\tilde\lambda_+ = -\frac{1}{6}\,G_{ijk}\gamma^{ijk}\,\tilde\chi_+, \qquad (39)$$

$$\gamma^i D_i\tilde\lambda_- = \frac{1}{4}\,G_{ijk}\gamma^{jk}\,\tilde\psi_+^{\ i}. \qquad (40)$$

Turning to (31), we define:

$$\tilde\psi_{-i} = e^{\frac{3}{8}u}\,\psi_{-i}. \qquad (41)$$

Then, imposing the gauge $\gamma^i\tilde\psi_{-i} = 0$, we obtain the equation:

$$\gamma^j D_j\tilde\psi_{-i} = -4e^u D_i\tilde\chi_+ - \gamma_i{}^j\partial_j e^u\,\tilde\chi_+ - \partial_i e^u\,\tilde\chi_+ + \frac{1}{8}\,G^*_{ijk}\gamma^{jk}\,\tilde\lambda_+ - \frac{1}{2}\,G_{ijk}\gamma^j\,\tilde\psi_+^{*\,k}. \qquad (42)$$

For the sake of completeness, the contraction with γ^i gives:

$$D^i\tilde\psi_{-i} = -3\,\gamma^i\partial_i e^u\,\tilde\chi_+. \qquad (43)$$

We are left with (28). After we perform an additional rescaling:

$$\tilde\chi_- = e^{\frac{7}{8}u}\,\chi_-, \qquad (44)$$

the equation becomes:

$$\gamma^i D_i\tilde\chi_- = -\frac{1}{2}\,\partial^i e^u\,\tilde\psi_{+i} - \frac{1}{96}\,G^*_{ijk}\gamma^{ijk}\,\tilde\lambda_- - \frac{1}{16}\,G_{ijk}\gamma^{ij}\,\tilde\psi_-^{*\,k}. \qquad (45)$$

Note that the SUSY variation of the gauge-fixing conditions is simply given by:

$$\gamma^i\,\delta\tilde\psi_{+i} = \gamma^i D_i\tilde\varepsilon_+, \qquad \tilde\varepsilon_+ = e^{\frac{3}{8}u}\,\varepsilon_+, \qquad (46)$$

$$\gamma^i\,\delta\tilde\psi_{-i} = \gamma^i D_i\tilde\varepsilon_-, \qquad \tilde\varepsilon_- = e^{-\frac{u}{8}}\,\varepsilon_-. \qquad (47)$$

Finding an explicit fermionic solution in the KS background

In this section, we apply the equations derived above to study the problem of finding a fermionic massless zero mode in the supersymmetric KS background [24].
The existence of such a mode is needed in order to form a SUSY multiplet together with the two bosonic massless modes (sometimes referred to as the "axion" and the "saxion", with a slight abuse of language) which have been derived in [26, 27]³. The "axion" is actually the Goldstone boson associated with the breaking of the baryonic symmetry [25, 26, 27]. Finding the "axino" completes the holographic description of the massless multiplet present in the low-energy effective description of the boundary theory, and is thus a nice check of the gauge/gravity correspondence.

We first review the KS background [24]. This is just a specific case of the generic ansatz discussed in section 3.1, and thus we only need to know that the internal space is the deformed conifold [40] (see also [41, 42, 43]), whose sechsbein are, up to an overall rescaling:

$$e^1 = A(\tau)\,(-\sin\theta_1\,d\phi_1 - \cos\psi\sin\theta_2\,d\phi_2 + \sin\psi\,d\theta_2)$$
$$e^2 = A(\tau)\,(d\theta_1 - \sin\psi\sin\theta_2\,d\phi_2 - \cos\psi\,d\theta_2)$$
$$e^3 = B(\tau)\,(-\sin\theta_1\,d\phi_1 + \cos\psi\sin\theta_2\,d\phi_2 - \sin\psi\,d\theta_2)$$
$$e^4 = B(\tau)\,(d\theta_1 + \sin\psi\sin\theta_2\,d\phi_2 + \cos\psi\,d\theta_2) \qquad (48)$$
$$e^5 = C(\tau)\,(d\psi + \cos\theta_1\,d\phi_1 + \cos\theta_2\,d\phi_2)$$
$$e^6 = C(\tau)\,d\tau$$

where, in terms of the function

$$K(\tau) = \frac{(\sinh\tau\cosh\tau - \tau)^{1/3}}{\sinh\tau} \qquad (49)$$

defined in [24], we have:

$$A^2(\tau) = \frac{1}{4}\,K(\tau)\,(\cosh\tau - 1), \qquad B^2(\tau) = \frac{1}{4}\,K(\tau)\,(\cosh\tau + 1), \qquad C^2(\tau) = \frac{1}{3K^2(\tau)}. \qquad (50)$$

For the sake of completeness, we give the spin connection in appendix A.

We can then write the equations for a covariantly constant spinor on the deformed conifold, D_i η = 0. With our coordinate choice (48), they imply that the spinor must be constant and has to obey the conditions (with flat indices):

$$(\gamma^{12} + \gamma^{34})\,\eta = (\gamma^{16} - \gamma^{45})\,\eta = 0. \qquad (51)$$

We will choose η to have positive chirality, γ_{χ6} η = η, and denote its complex conjugate (of negative chirality) by η*. Then η satisfies three conditions:

$$(\gamma^1 + i\gamma^4)\,\eta = 0, \qquad (\gamma^3 + i\gamma^2)\,\eta = 0, \qquad (\gamma^6 - i\gamma^5)\,\eta = 0. \qquad (52)$$

The above formulae allow one to read off the complex structure in the flat indices.
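As a quick sanity check on (49)-(50), the following short script (a sketch; the helper names are ours, not from the paper) verifies numerically the small-τ limit K(0⁺) = (2/3)^{1/3}, the identity B² − A² = K/2 implied by (50), and the equality B²(0) = C²(0), which reflects the finite, round S³ at the tip of the deformed conifold:

```python
import math

# Deformed-conifold data of eqs. (49)-(50); function names are ours.
def K(t):
    return (math.sinh(t) * math.cosh(t) - t) ** (1.0 / 3.0) / math.sinh(t)

def A2(t): return 0.25 * K(t) * (math.cosh(t) - 1.0)
def B2(t): return 0.25 * K(t) * (math.cosh(t) + 1.0)
def C2(t): return 1.0 / (3.0 * K(t) ** 2)

# K(0+) = (2/3)^(1/3), since sinh(t) cosh(t) - t ~ (2/3) t^3 for small t
lim_err = abs(K(1e-4) - (2.0 / 3.0) ** (1.0 / 3.0))

# B^2 - A^2 = K/2 follows directly from (50)
id_err = max(abs(B2(t) - A2(t) - 0.5 * K(t)) for t in (0.3, 1.0, 2.5, 7.0))

# At the tip, the 2-sphere supported by A shrinks while B and C stay finite
# and equal: B^2(0) = C^2(0) = (2/3)^(1/3)/2
tip_err = abs(B2(1e-4) - C2(1e-4))

print(lim_err, id_err, tip_err)
```

All three quantities come out negligibly small, confirming that the shrinking S² is supported by A while B and C remain finite and equal at τ = 0.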
Denoting complex indices in boldface, we take the following holomorphic sechsbein:

$$e^{\mathbf 1} = e^1 + ie^4, \qquad e^{\mathbf 2} = e^3 + ie^2, \qquad e^{\mathbf 3} = e^6 - ie^5, \qquad (53)$$

and the antiholomorphic sechsbein are obtained by complex conjugation. The unconventional combinations above are forced upon us by the labeling of the sechsbein (48), which is the one commonly used in the literature. With these normalizations, the flat metric is $\eta_{\mathbf 1\bar{\mathbf 1}} = 2$ and $\eta^{\mathbf 1\bar{\mathbf 1}} = 1/2$. With (53), the conditions (52) simply become:

$$\gamma_{\mathbf a}\,\eta = 0, \qquad \mathbf a = \mathbf 1, \mathbf 2, \mathbf 3. \qquad (54)$$

Using the covariantly constant spinor η, we can now start solving the fermionic equations in the KS background. Eq. (34) can be trivially solved by setting:

$$\tilde\chi_+ = \eta. \qquad (55)$$

Notice that the expression for $\chi_+ = e^{\frac{5}{8}u}\,\eta$ is then normalizable in the sense of [44] (see also [45, 46, 47]; the case of the Rarita-Schwinger field is discussed in [48, 49, 50, 51]). We already notice from the asymptotic behavior of the solution above that the mode we have found should be dual to an operator of dimension Δ = 5/2, which is the right dimension for the fermion in the "axion-saxion" chiral multiplet, which has dimension Δ = 2; see [26].

Moving on to (36), we need, first of all, an expression for the three-form G_3. This is given in [24] and has the form (in the flat basis (53)):

$$G_3 = \sqrt{3}\,M\left[\frac{\tau\cosh\tau - \sinh\tau}{\sinh^3\tau}\;e^{\mathbf 1}\wedge e^{\mathbf 2}\wedge e^{\bar{\mathbf 3}} + \frac{\sinh\tau\cosh\tau - \tau}{2\sinh^3\tau}\,\bigl(e^{\bar{\mathbf 1}}\wedge e^{\mathbf 2}\wedge e^{\mathbf 3} - e^{\mathbf 1}\wedge e^{\bar{\mathbf 2}}\wedge e^{\mathbf 3}\bigr)\right], \qquad (56)$$

where M is the number of fractional branes in the KS set-up. With respect to the complex structure (53), G_3 is indeed a (2,1) primitive form, so that $G_{ijk}\gamma^{ijk}\eta = 0$.
It can also easily be checked that the RHS of (36) has only antiholomorphic indices, and thus it is consistent⁴ to take, in the flat basis:

$$\tilde\psi_{+\mathbf a} = 0. \qquad (57)$$

Making now the ansatz that $\tilde\psi_{+i}$ depends explicitly only on τ, one can show (by requiring the θ_1 dependence of (36) to cancel out algebraically) that the most general form for the remaining components is:

$$\tilde\psi_{+\bar{\mathbf 1}} = z\,\gamma_{\mathbf 1}\eta^* + v\,\gamma_{\mathbf 2}\eta^*$$
$$\tilde\psi_{+\bar{\mathbf 2}} = v\,\gamma_{\mathbf 1}\eta^* + z\,\gamma_{\mathbf 2}\eta^* \qquad (58)$$
$$\tilde\psi_{+\bar{\mathbf 3}} = -2z\,\gamma_{\mathbf 3}\eta^*$$

The terms proportional to z(τ) are solutions of a homogeneous equation, whereas v(τ) couples to the source. The remaining conditions are all solved by the functions

$$z(\tau) = \frac{c}{\sinh\tau\cosh\tau - \tau}, \qquad v(\tau) = -M\,\frac{\tau\cosh\tau - \sinh\tau}{K\sinh^2\tau}. \qquad (59)$$

Requiring $\tilde\psi_{+i}$ to be regular at the origin sets c = 0.⁵ Normalizability can be checked using the boundary terms discussed in [48, 49, 50, 51].

⁴ Indeed, the spin connection is such that the covariant derivative does not couple holomorphic and antiholomorphic indices (with respect to the flat basis (53)).

⁵ We can actually write the solution above in closed form as $\tilde\psi_{+i} = -2iB_{ij}\gamma^j\eta^*$, where B_2 is the 2-form potential of the imaginary part of G_3, also given in [24]. Note that B_2 is a (1,1) primitive form which satisfies $d*_6 B_2 = 0$, conditions which are necessary for consistency with (36).

Armed with the explicit solutions of (34) and (36), we can easily solve the dilatino equations (39) and (40). The source for (39) is identically zero, and normalizability forces us to take

$$\tilde\lambda_+ = 0. \qquad (60)$$

The source for (40) turns out to be proportional to $\gamma_{\mathbf 3}\eta^*$ times an overall function of τ, allowing for the ansatz $\tilde\lambda_- = f(\tau)\,\eta^*$. Inserting this into (40), we find $f = 4e^u$, implying the normalizability of

$$\lambda_- = 4\,e^{\frac{5}{8}u}\,\eta^* = 4\,\chi^*_+. \qquad (61)$$

Notice that any dependence on λ_− disappears in the remaining equations. It is now time to look at (42). It will be useful to have the explicit expression for the warp factor.
With the normalizations (48) and (56), we have, from (21):

$$e^{u(\tau)} = 2M^2\int_\tau^\infty d\tau'\,\frac{K(\tau')\,(\tau'\coth\tau' - 1)}{\sinh\tau'}. \qquad (62)$$

In this case the source term has both holomorphic and antiholomorphic indices, and the resulting set of equations cannot be solved in terms of elementary functions. Still, it is possible to completely characterize the solution and its asymptotic behavior in terms of the warp factor. Making the ansatz that $\tilde\psi_{-i}$ depends explicitly only on τ and imposing the gauge condition, one can write $\tilde\psi_{-i}$ in terms of three unknown functions:

$$\tilde\psi_{-\mathbf 1} = r(\tau)\,\gamma_{\bar{\mathbf 1}}\eta, \qquad \tilde\psi_{-\mathbf 2} = r(\tau)\,\gamma_{\bar{\mathbf 2}}\eta, \qquad \tilde\psi_{-\mathbf 3} = -2\,r(\tau)\,\gamma_{\bar{\mathbf 3}}\eta,$$
$$\tilde\psi_{-\bar{\mathbf 1}} = s(\tau)\,\gamma_{\bar{\mathbf 1}}\eta, \qquad \tilde\psi_{-\bar{\mathbf 2}} = -s(\tau)\,\gamma_{\bar{\mathbf 2}}\eta, \qquad \tilde\psi_{-\bar{\mathbf 3}} = t(\tau)\,\gamma_{\bar{\mathbf 3}}\eta. \qquad (63)$$

One could also add to (63) a solution of the homogeneous equation, similar to the z(τ) dependence of (58), which decouples from the system and should in any case be set to zero by imposing regularity at the origin and normalizability. Inserting (63) into (42) yields three linear first-order O.D.E.s in the three unknown functions r, s, t, which can be further simplified into two decoupled O.D.E.s (one of first order and one of second order) for r and s:

$$r'(\tau) + \frac{2\sinh^2\tau}{\sinh\tau\cosh\tau - \tau}\,r(\tau) = \frac{1}{2}\,\partial_\tau e^{u(\tau)} \qquad (64)$$

and

$$s''(\tau) + 4\coth\tau\,s'(\tau) + 3\,s(\tau) = -\frac{1}{2}\,\frac{\partial_\tau e^{u(\tau)}}{\sinh\tau}, \qquad (65)$$

where we have used (62). Furthermore, t is expressed in terms of s:

$$t(\tau) = \bigl(s(\tau)\sinh\tau\bigr)'. \qquad (66)$$

The reason why the equations for s(τ) and r(τ) decouple is that one can solve separately the equations for the holomorphic and antiholomorphic components.
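As a concrete check, the script below (a sketch: M is set to 1 and all variable names are ours) reconstructs e^{u} of (62) by trapezoidal quadrature and verifies numerically that the integral expressions $r(\tau) = \frac{1}{2}e^{u} - (\sinh\tau\cosh\tau - \tau)^{-1}\int_0^\tau e^{u}\sinh^2\tau'\,d\tau'$ and $s(\tau) = -(2\sinh^3\tau)^{-1}\int_0^\tau e^{u}\sinh^2\tau'\,d\tau'$ given in (67) indeed satisfy the ODEs (64) and (65), whose common RHS ingredient is $\partial_\tau e^u = -2M^2 K(\tau)(\tau\coth\tau - 1)/\sinh\tau$:

```python
import math

def K(t):
    return (math.sinh(t) * math.cosh(t) - t) ** (1.0 / 3.0) / math.sinh(t)

h, tmax = 0.002, 40.0
n = int(tmax / h)
ts = [h * (i + 1) for i in range(n)]

# g = K (t coth t - 1)/sinh t, so that d(e^u)/dt = -2 g  (M = 1)
g = [K(t) * (t / math.tanh(t) - 1.0) / math.sinh(t) for t in ts]

# e^u(t_i) = 2 * int_{t_i}^{tmax} g dt, accumulated from the far end (eq. (62))
eu = [0.0] * n
for i in range(n - 2, -1, -1):
    eu[i] = eu[i + 1] + (g[i] + g[i + 1]) * h  # trapezoid times overall factor 2

# F(t_i) = int_0^{t_i} e^u sinh^2 dt, accumulated from the origin
F = [0.0] * n
for i in range(1, n):
    F[i] = F[i - 1] + 0.5 * h * (eu[i - 1] * math.sinh(ts[i - 1]) ** 2
                                 + eu[i] * math.sinh(ts[i]) ** 2)

P = [math.sinh(t) * math.cosh(t) - t for t in ts]
r = [0.5 * eu[i] - F[i] / P[i] for i in range(n)]
s = [-F[i] / (2.0 * math.sinh(ts[i]) ** 3) for i in range(n)]

# residuals of the ODEs (64) and (65) on 0.5 <= tau <= 5,
# derivatives approximated by central finite differences
res_r = res_s = 0.0
for i in range(1, n - 1):
    t = ts[i]
    if 0.5 <= t <= 5.0:
        rp = (r[i + 1] - r[i - 1]) / (2.0 * h)
        sp = (s[i + 1] - s[i - 1]) / (2.0 * h)
        spp = (s[i + 1] - 2.0 * s[i] + s[i - 1]) / h ** 2
        res_r = max(res_r, abs(rp + 2.0 * math.sinh(t) ** 2 / P[i] * r[i] + g[i]))
        res_s = max(res_s, abs(spp + 4.0 / math.tanh(t) * sp + 3.0 * s[i]
                               - g[i] / math.sinh(t)))
print("ODE residuals:", res_r, res_s)
```

Both residuals come out at the level of the O(h²) discretization error, and both r and s are negative for τ > 0, consistent with the closed-form expressions.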
After some manipulations, it turns out that they can both be solved in terms of simple integrals, much like the warp factor:

$$r(\tau) = \frac{1}{2}\,e^{u(\tau)} - \frac{1}{\sinh\tau\cosh\tau - \tau}\int_0^\tau d\tau'\,e^{u(\tau')}\sinh^2\tau' = -M^2\,\frac{1}{\sinh\tau\cosh\tau - \tau}\int_0^\tau d\tau'\,K^4(\tau')\,\sinh\tau'\,(\tau'\cosh\tau' - \sinh\tau'),$$

$$s(\tau) = -\frac{1}{2\sinh^3\tau}\int_0^\tau d\tau'\,e^{u(\tau')}\sinh^2\tau' = \frac{1}{4}\,K^3(\tau)\,\bigl(2r(\tau) - e^{u(\tau)}\bigr), \qquad (67)$$

where all the integration constants have been fixed so that the solution is regular at the origin and normalizable. Note that s(τ) is asymptotically subleading, so that the components of the solution depending explicitly on it vanish faster as the boundary is approached.

At last, we conclude this derivation by finding the expression for $\tilde\chi_-$ from (45). Inserting the expression for $\tilde\psi^*_{-k}$ into the RHS, we find that the source points along $\gamma_{\mathbf 1}\gamma_{\mathbf 2}\gamma_{\mathbf 3}\eta^*$. Perhaps the most convenient way to write the source is in terms of the function s(τ) in (67) and its derivative:

$$\gamma^i D_i\tilde\chi_- = \frac{\sqrt{3}\,M}{4\sinh\tau}\,\bigl(\tau\,s(\tau) + (\tau\coth\tau - 1)\,s'(\tau)\bigr)\,\gamma_{\mathbf 1}\gamma_{\mathbf 2}\gamma_{\mathbf 3}\eta^*. \qquad (68)$$

(Notice that $\gamma_{\mathbf 1}\gamma_{\mathbf 2}\gamma_{\mathbf 3}\eta^* \propto \eta$.) This suggests the ansatz

$$\tilde\chi_- = w(\tau)\,\gamma_{\mathbf 1}\gamma_{\mathbf 2}\,\eta^*, \qquad (69)$$

and in fact (68) yields:

$$w(\tau) = \frac{M}{2}\,\frac{\tau\coth\tau - 1}{K(\tau)\sinh\tau}\,s(\tau), \qquad (70)$$

where, once again, the integration constant has been fixed by the requirement that the solution be regular at the origin. One can check that χ_− is also normalizable. This completes the finding of the zero-momentum massless fermionic mode.

Having found the explicit solution, it is very easy to check that condition (2) is obeyed (in the sense that the RHS of (8) vanishes), due to the simple expression for the dilatino.

Conclusions

In this paper we began a systematic study of the fermionic equations of motion of IIB supergravity in the context of the gauge/gravity correspondence, with emphasis on the search for bulk zero modes dual to massless fermions in the gauge theory.
We stressed that, among all such fermions, the one associated with SUSY breaking (if it occurs) should be singled out by looking at the gravitino fluctuation, most likely through the contribution to its supercovariant field strength. Other fermionic massless modes, such as the KS "axino", do not contribute to this quantity. It is interesting to note that the vanishing of the RHS of (8), at least for our ansatz, is closely related to the conditions for SUSY preservation, namely $G_{ijk}\gamma^{ijk}\eta = 0$. This quantity no longer vanishes even for the mildest way of breaking SUSY, i.e. the presence of a (0,3) piece in G_3.

Another way to distinguish a generic massless fermion from the goldstino is by looking at how it transforms under the global symmetries of the problem. For instance, the bosonic zero modes found in [26, 27] are odd under the Z_2 symmetry exchanging the two S² spheres of the deformed conifold, and the same symmetry should act non-trivially on its fermionic superpartner. Let us briefly recall the origin of this symmetry. The exchange of the two spheres is implemented by the exchange of the pairs of coordinates (θ_1, φ_1) and (θ_2, φ_2) in the solution. Trivially, the sechsbein e⁵ and e⁶ in (48) are invariant under the exchange, while the remaining four can be arranged into combinations (71) that are even and combinations (72) that are odd. Hence, of the bosonic fields, the (constant) dilaton, the metric and the five-form are even, while the three-form G_3 is odd. Let us also recall that the bosonic zero mode a(x) constructed in [26] enters as $\delta G_3 = *_4 da + \ldots$, and thus it must be odd under the symmetry so as to preserve the overall parity of G_3.

Let us now look at the fermionic solution presented in section 4. The transformation properties of the fields ψ_{+i} and ψ_{−i} are somewhat complicated by the fact that they carry an internal index, but we do not really need them for the argument; it is quite enough to look at χ_± and λ_±. The claim is that χ_+ and λ_− are even while χ_− is odd (λ_+ is zero).
To see this, notice that the covariantly constant spinors η and η* are both even, because the six-dimensional chirality is unchanged by the symmetry (one is exchanging two pairs of indices). Thus χ_+ and λ_−, given in (55) and (61), are even.⁶ On the other hand, expanding the solution (69) for χ_− in terms of the gamma matrices in the real basis, one gets a spinor proportional to

$$\bigl(\gamma^1\gamma^3 + \gamma^2\gamma^4 + i(\gamma^1\gamma^2 - \gamma^3\gamma^4)\bigr)\,\eta^*, \qquad (73)$$

which transforms as the combinations of sechsbein in (72) and is thus odd. To compensate for that in the expression for Ψ_µ, we must let the zero mode transform as $\epsilon_\pm \to -\gamma_{\chi4}\,\epsilon_\pm$, thus showing that it transforms non-trivially under the Z_2 symmetry.

The second point briefly mentioned in the introduction that we would like to discuss is the possibility that the presence of the warp factor might allow for normalizable zero modes without requiring a Kähler structure, and thus SUSY. There are two different types of boundary conditions that should be considered. Let us begin with the standard one. Assume that one is looking at an internal manifold whose metric is asymptotically that of a cone:

$$d\hat s^2 \approx dr^2 + r^2\,d\Sigma^2, \qquad (74)$$

for some five-dimensional Einstein (but not necessarily Sasaki) manifold with metric dΣ². The existence of a covariantly constant spinor would of course imply a Kähler structure for the manifold, but the original condition (the Dirac equation (34)) necessary for the zero mode is weaker on a non-compact manifold. The two conditions are equivalent, by the standard argument of integration by parts, only for spinors $\tilde\chi_+$ vanishing at infinity faster than r^{−2}. (The covariantly constant spinor η is an exception, because it is completely independent of r.) Would the existence of a spinor solving the Dirac equation (34) but decaying more slowly than r^{−2} still allow for a massless mode on the boundary? For this we must look at the other boundary condition, inferred from the AdS/CFT correspondence in [44].
Namely, we must ensure that, for instance,

$$\Bigl[\sqrt{G}\,\bar{\tilde\chi}_+\tilde\chi_+\Bigr]_{\rm bdry} < \infty, \qquad (75)$$

where G is the determinant of the induced metric at the boundary. This is one of the conditions that has been used throughout section 4 to check for normalizability. Inserting the appropriate powers of the warp factor, $e^u \approx r^{-4}$, one sees that (75) requires $\tilde\chi_+$ to scale like r^α with α < 1/2. Thus, the possibility of having a normalizable zero mode without a Kähler structure is left open.

Whether gravity duals of theories with a stable non-supersymmetric vacuum exist is still an open and interesting question. It seems that one would need the background to be dual to a chiral gauge theory with no classical flat directions (see the arguments and caveats in [52]). The cascading theories considered until now are not of this kind, and we do not expect a smooth gravity dual for a theory with no stable vacuum. Presumably, one will have to turn to a more general ansatz, but we hope that an analysis similar to the one performed in this paper can be helpful in this endeavor.

where in defining the functions f_1, f_2, f_3 we have taken into account the following identities:

$$f_1(\tau) = \frac{A'(\tau)}{A(\tau)C(\tau)} = \frac{-A^2(\tau) + B^2(\tau) + C^2(\tau)}{4A(\tau)B(\tau)C(\tau)}$$
$$f_2(\tau) = \frac{B'(\tau)}{B(\tau)C(\tau)} = \frac{A^2(\tau) - B^2(\tau) + C^2(\tau)}{4A(\tau)B(\tau)C(\tau)} \qquad (77)$$
$$f_3(\tau) = \frac{C'(\tau)}{2C^2(\tau)} = \frac{A^2(\tau) + B^2(\tau) - C^2(\tau)}{4A(\tau)B(\tau)C(\tau)}$$

Other useful formulas are those involving the self-dual forms G_3 and F_5:

$$\gamma^{ijk}G_{ijk} = \gamma^{ijk}G_{ijk}\,\frac{1+\gamma_{\chi6}}{2}, \qquad \gamma^m\gamma^{ijk}G_{ijk} = 6\,\gamma^{ij}G^m{}_{ij}\,\frac{1+\gamma_{\chi6}}{2}, \qquad \gamma^{ijk}\gamma^m G_{ijk} = 6\,\gamma^{ij}G^m{}_{ij}\,\frac{1-\gamma_{\chi6}}{2}, \qquad (78)$$

and similarly for the complex conjugates (recall that the γ^i and γ_{χ6} are all imaginary, in order for the ten-dimensional matrices to be real). The terms containing the five-form, on the other hand, can be simplified with the help of:

$$\frac{i}{240}\,\Gamma^{MNPQR}F_{MNPQR} = \frac{1}{4}\,\partial_m u\,\Gamma^m\,\Gamma_{\chi6}\,\frac{1-\Gamma_{\chi10}}{2}, \qquad (79)$$

where $\Gamma_{\chi10} = \Gamma^0\cdots\Gamma^9$ as in the text and $\Gamma_{\chi6} = -i\Gamma^4\cdots\Gamma^9 = 1\otimes\gamma_{\chi6}$, all with flat indices.
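Since the identities (77) are purely algebraic consequences of the definitions (49)-(50), they can be verified directly; the sketch below (helper names are ours) checks all three by central finite differences:

```python
import math

# A, B, C of eq. (50) and the two sides of each identity in eq. (77).
def K(t):
    return (math.sinh(t) * math.cosh(t) - t) ** (1.0 / 3.0) / math.sinh(t)

def A(t): return math.sqrt(0.25 * K(t) * (math.cosh(t) - 1.0))
def B(t): return math.sqrt(0.25 * K(t) * (math.cosh(t) + 1.0))
def C(t): return 1.0 / (math.sqrt(3.0) * K(t))

def d(f, t, h=1e-5):  # central finite-difference derivative
    return (f(t + h) - f(t - h)) / (2.0 * h)

max_err = 0.0
for t in (0.5, 1.0, 2.0, 4.0):
    a, b, c = A(t), B(t), C(t)
    den = 4.0 * a * b * c
    max_err = max(max_err,
                  abs(d(A, t) / (a * c) - (-a * a + b * b + c * c) / den),
                  abs(d(B, t) / (b * c) - (a * a - b * b + c * c) / den),
                  abs(d(C, t) / (2.0 * c * c) - (a * a + b * b - c * c) / den))
print("max deviation:", max_err)
```

The deviation stays at the level of the finite-difference truncation error, confirming that f_1, f_2, f_3 can be computed either from the derivatives or from the algebraic combinations of A², B², C².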
(See also e.g. [53, 54].) Lastly, recall that, since D_i represents the covariant derivative on the internal manifold, it no longer commutes with Γ^µ and we have instead:

$$D_i\,\Gamma^\mu = \Gamma^\mu\,D_i - \frac{1}{4}\,\partial_i u\,\Gamma^\mu. \qquad (80)$$

The sechsbein e⁵ and e⁶ in (48) are invariant under the exchange, but the remaining four transform in a complicated way. One can construct, however, various combinations that transform in a simple way:

$$(e^1)^2 + (e^2)^2 \quad\text{and}\quad (e^3)^2 + (e^4)^2 \qquad (71)$$

that are even under the exchange, and

$$e^1\wedge e^2, \qquad e^3\wedge e^4 \qquad\text{and}\qquad e^1\wedge e^3 + e^2\wedge e^4 \qquad (72)$$

that are odd.

³ Strictly speaking, one should refrain from calling such a multiplet "axionic", since it is not related to an anomalous symmetry, like the QCD axion. Still, we will, in a few places,

After the bosonic modes were given, the full baryonic branch was constructed in [35], using the techniques of [36], and shown to agree with the gauge theory analysis in [37]. Another deformation, which breaks SUSY explicitly, was considered in [38, 39].

⁶ The quantities with a tilde only differ by powers of the warp factor and thus have the same transformation properties.

Appendix A

We collect some useful formulas that have been used extensively in the derivation of the equations in the text. Let us begin with the spin connection. In the flat basis, the non-zero components of the spin connection ω_{ab,c} = −ω_{ba,c} are:

$$\omega_{12,1} = \omega_{34,1} = \frac{\cos\theta_1}{2A(\tau)\sin\theta_1}, \qquad \omega_{16,1} = -\omega_{45,1} = \omega_{26,2} = \omega_{35,2} = f_1(\tau),$$
$$\omega_{12,3} = \omega_{34,3} = \frac{\cos\theta_1}{2B(\tau)\sin\theta_1}, \qquad \omega_{36,3} = -\omega_{25,3} = \omega_{46,4} = \omega_{15,4} = f_2(\tau), \qquad (76)$$

Acknowledgments

The research of R.A. is partially supported by IISN - Belgium (convention 4.4505.86), by the "Interuniversity Attraction Poles Programme - Belgian Science Policy" and by the European Commission FP6 programme MRTN-CT-2004-005104, in which he is associated to V.U.Brussel. The research of G.F. is supported by the Swedish Research Council (Vetenskapsrådet) contracts 622-2003-1124 and 621-2002-3884. Partial support from the EU Superstring Theory Network, project number MRTN-CT-2004-512194, is also gratefully acknowledged.

It is a pleasure to thank M. Bertolini, F.
Bigazzi and A. L. Cotrone for many discussions at the beginning of this project. In particular, M. Bertolini has been extremely helpful in elucidating many aspects of the gauge/gravity correspondence to us while this work was being carried out. We are very grateful to I. Klebanov for reading a draft of the manuscript and making suggestions on how to improve it. We also benefited from conversations with S. Kuperstein, D. Martelli, L. Martucci and A. Zaffaroni, and from email exchanges with I. Pesando and D. Tsimpis. R.A. is a Research Associate of the Fonds National de la Recherche Scientifique (Belgium). The research of R.A. is partially supported by IISN - Belgium (convention 4.4505.86).

References

[1] O. Aharony, arXiv:hep-th/0212193.
[2] M. Bertolini, Int. J. Mod. Phys. A 18 (2003) 5647 [arXiv:hep-th/0303160].
[3] M. J. Strassler, arXiv:hep-th/0505153.
[4] D. Berenstein, C. P. Herzog, P. Ouyang and S. Pinansky, JHEP 0509 (2005) 084 [arXiv:hep-th/0505029].
[5] S. Franco, A. Hanany, F. Saad and A. M. Uranga, arXiv:hep-th/0505040.
[6] M. Bertolini, F. Bigazzi and A. L. Cotrone, Phys. Rev. D 72 (2005) 061902 [arXiv:hep-th/0505055].
[7] D. Martelli and J. Sparks, Commun. Math. Phys. 262 (2006) 51 [arXiv:hep-th/0411238].
[8] M. Bertolini, F. Bigazzi and A. L. Cotrone, JHEP 0412 (2004) 024 [arXiv:hep-th/0411249].
[9] S. Benvenuti, S. Franco, A. Hanany, D. Martelli and J. Sparks, JHEP 0506 (2005) 064 [arXiv:hep-th/0411264].
[10] J. P. Gauntlett, D. Martelli, J. Sparks and D. Waldram, Adv. Theor. Math. Phys. 8 (2004) 711 [arXiv:hep-th/0403002].
[11] J. P. Gauntlett, D. Martelli, J. F. Sparks and D. Waldram, arXiv:hep-th/0403038.
[12] K. Intriligator and N. Seiberg, arXiv:hep-th/0512347.
[13] I. Affleck, M. Dine and N. Seiberg, Nucl. Phys. B 241 (1984) 493.
[14] C. P. Herzog, Q. J. Ejaz and I. R. Klebanov, JHEP 0502 (2005) 009 [arXiv:hep-th/0412193].
[15] B. A. Burrington, J. T. Liu, M. Mahato and L. A. Pando Zayas, JHEP 0507 (2005) 019 [arXiv:hep-th/0504155].
[16] S. S. Pal, Phys. Lett. B 614 (2005) 201 [arXiv:hep-th/0501012].
[17] K. Sfetsos and D. Zoakos, Phys. Lett. B 625 (2005) 135 [arXiv:hep-th/0507169].
[18] M. Berg, M. Haack and W. Muck, Nucl. Phys. B 736 (2006) 82 [arXiv:hep-th/0507285].
[19] D. N. Page and C. N. Pope, Class. Quant. Grav. 4 (1987) 213.
[20] E. Witten, Nucl. Phys. B 188 (1981) 513.
[21] P. Fayet and J. Iliopoulos, Phys. Lett. B 51 (1974) 461.
[22] L. O'Raifeartaigh, Nucl. Phys. B 96 (1975) 331.
[23] A. Salam and J. A. Strathdee, Phys. Lett. B 49 (1974) 465.
[24] I. R. Klebanov and M. J. Strassler, JHEP 0008 (2000) 052 [arXiv:hep-th/0007191].
[25] O. Aharony, JHEP 0103 (2001) 012 [arXiv:hep-th/0101013].
[26] S. S. Gubser, C. P. Herzog and I. R. Klebanov, JHEP 0409 (2004) 036 [arXiv:hep-th/0405282].
[27] S. S. Gubser, C. P. Herzog and I. R. Klebanov, Comptes Rendus Physique 5 (2004) 1031 [arXiv:hep-th/0409186].
[28] M. Grana, Phys. Rept. 423 (2006) 91 [arXiv:hep-th/0509003].
[29] J. H. Schwarz, Nucl. Phys. B 226 (1983) 269.
[30] J. H. Schwarz and P. C. West, Phys. Lett. B 126 (1983) 301.
[31] P. S. Howe, K. S. Stelle and P. K. Townsend, Nucl. Phys. B 236 (1984) 125.
[32] A. R. Kavalov and R. L. Mkrtchian, Sov. J. Nucl. Phys. 46 (1987) 728 [Yad. Fiz. 46 (1987) 1246].
[33] L. Castellani and I. Pesando, Int. J. Mod. Phys. A 8 (1993) 1125.
[34] M. Grana and J. Polchinski, Phys. Rev. D 63 (2001) 026001 [arXiv:hep-th/0009211].
[35] A. Butti, M. Grana, R. Minasian, M. Petrini and A. Zaffaroni, JHEP 0503 (2005) 069 [arXiv:hep-th/0412187].
[36] M. Grana, R. Minasian, M. Petrini and A. Tomasiello, JHEP 0408 (2004) 046 [arXiv:hep-th/0406137].
[37] A. Dymarsky, I. R. Klebanov and N. Seiberg, arXiv:hep-th/0511254.
[38] S. Kuperstein and J. Sonnenschein, JHEP 0402 (2004) 015 [arXiv:hep-th/0309011].
[39] M. Schvellinger, JHEP 0409 (2004) 057 [arXiv:hep-th/0407152].
[40] P. Candelas and X. C. de la Ossa, Nucl. Phys. B 342 (1990) 246.
[41] R. Minasian and D. Tsimpis, Nucl. Phys. B 572 (2000) 499 [arXiv:hep-th/9911042].
[42] K. Ohta and T. Yokono, JHEP 0002 (2000) 023 [arXiv:hep-th/9912266].
[43] D. Arean, D. E. Crooks and A. V. Ramallo, JHEP 0411 (2004) 035 [arXiv:hep-th/0408210].
[44] M. Henningson and K. Sfetsos, Phys. Lett. B 431 (1998) 63 [arXiv:hep-th/9803251].
[45] W. Muck and K. S. Viswanathan, Phys. Rev. D 58 (1998) 106006 [arXiv:hep-th/9805145].
[46] G. E. Arutyunov and S. A. Frolov, Nucl. Phys. B 544 (1999) 576 [arXiv:hep-th/9806216].
[47] M. Henneaux, arXiv:hep-th/9902137.
[48] S. Corley, Phys. Rev. D 59 (1999) 086003 [arXiv:hep-th/9808184].
[49] A. Volovich, JHEP 9809 (1998) 022 [arXiv:hep-th/9809009].
[50] R. C. Rashkov, Mod. Phys. Lett. A 14 (1999) 1783 [arXiv:hep-th/9904098].
[51] P. Matlock and K. S. Viswanathan, Phys. Rev. D 61 (2000) 026002 [arXiv:hep-th/9906077].
[52] I. Affleck, M. Dine and N. Seiberg, Nucl. Phys. B 256 (1985) 557.
[53] A. Kehagias, Phys. Lett. B 435 (1998) 337 [arXiv:hep-th/9805131].
[54] S. S. Gubser, arXiv:hep-th/0010010.
[]
[ "When can Regression-Adjusted Control Variates Help? Rare Events, Sobolev Embedding and Minimax Optimality", "When can Regression-Adjusted Control Variates Help? Rare Events, Sobolev Embedding and Minimax Optimality" ]
[]
[]
[]
This paper studies the use of a machine learning-based estimator as a control variate for mitigating the variance of Monte Carlo sampling. Specifically, we seek to uncover the key factors that influence the efficiency of control variates in reducing variance. We examine a prototype estimation problem that involves simulating the moments of a Sobolev function based on observations obtained from (random) quadrature nodes. Firstly, we establish an information-theoretic lower bound for the problem. We then study a specific quadrature rule that employs a nonparametric regression-adjusted control variate to reduce the variance of the Monte Carlo simulation. We demonstrate that this kind of quadrature rule can improve the Monte Carlo rate and achieve the minimax optimal rate under a sufficient smoothness assumption. Due to the Sobolev Embedding Theorem, the sufficient smoothness assumption eliminates the existence of rare and extreme events. Finally, we show that, in the presence of rare and extreme events, a truncated version of the Monte Carlo algorithm can achieve the minimax optimal rate while the control variate cannot improve the convergence rate.

Huszár, Ferenc, and David Duvenaud. 2012. Optimally-weighted herding is Bayesian quadrature. arXiv preprint arXiv:1204.1664.
Jiao, Jiantao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. 2015. Minimax estimation of functionals of discrete distributions. IEEE Transactions on Information Theory 61 (5): 2835-2885.
Kanagawa, Motonobu, and Philipp Hennig. 2019. Convergence guarantees for adaptive Bayesian quadrature methods. Advances in Neural Information Processing Systems 32.
Kanagawa, Motonobu, Bharath K Sriperumbudur, and Kenji Fukumizu. 2016. Convergence guarantees for kernel-based quadrature rules in misspecified settings. Advances in Neural Information Processing Systems 29.
Karvonen, Toni, and Simo Sarkka. 2018. Fully symmetric kernel quadrature. SIAM Journal on Scientific Computing 40 (2): A697-A720.
Krieg, David, Erich Novak, and Mathias Sonnleitner. 2022. Recovery of Sobolev functions restricted to iid sampling. Mathematics of Computation 91 (338): 2715-2738.
Krieg, David, and Mathias Sonnleitner. 2020. Random points are optimal for the approximation of Sobolev functions. arXiv preprint arXiv:2009.11275.
Krishnamurthy, Akshay, Kirthevasan Kandasamy, Barnabas Poczos, and Larry Wasserman. 2014. Nonparametric estimation of Renyi divergence and friends. In International Conference on Machine Learning, 919-927. PMLR.
Lacoste-Julien, Simon, Fredrik Lindsten, and Francis Bach. 2015. Sequential kernel herding: Frank-Wolfe optimization for particle filtering. In Artificial Intelligence and Statistics, 544-552. PMLR.
Lepski, Oleg, Arkady Nemirovski, and Vladimir Spokoiny. 1999. On estimation of the Lr norm of a regression function. Probability Theory and Related Fields 113:221-253.
Lin, Lin. 2017. Randomized estimation of spectral densities of large matrices made accurate. Numerische Mathematik 136:183-213.
Liu, Hanzhong, and Yuehan Yang. 2020. Regression-adjusted average treatment effect estimates in stratified randomized experiments. Biometrika 107 (4): 935-948.
null
[ "https://export.arxiv.org/pdf/2305.16527v1.pdf" ]
258,947,083
2305.16527
03258e89f90722961ccdd85431b8da200be6c564
RESEARCH NOTE. Keywords: Monte Carlo; Sobolev Embedding; Rare Events; Minimax Optimality; Control Variate.
Introduction

In this paper, we consider a nonparametric quadrature rule on (random) quadrature points based on a regression-adjusted control variate (Asmussen and Glynn 2007; Davidson and MacKinnon 1992; Oates and Girolami 2016; Hickernell, Lemieux, and Owen 2005). To construct the quadrature rule, we partition our available data into two halves. The first half is used to construct a nonparametric estimator, which is then utilized as a control variate to reduce the variance of the Monte Carlo algorithm implemented over the second half of our data. Traditional and well-known results (Asmussen and Glynn 2007, Chapter 5.2) show that the optimal linear control variate can be obtained via Ordinary Least Squares regression. In this paper, we investigate a similar idea for constructing a quadrature rule (Oates and Girolami 2016; Assaraf and Caffarel 1999; Mira, Solgi, and Imparato 2013; Oates, Girolami, and Chopin 2017; Oates et al. 2019; South et al. 2018; Holzmüller and Bach 2023), which uses a nonparametric machine learning-based estimator as a regression-adjusted control variate. We aim to answer the following two questions: Is using optimal nonparametric machine learning algorithms to construct control variates an optimal way to improve Monte Carlo methods? What are the factors that determine the effectiveness of the control variate?

Figure 1. According to the Sobolev Embedding Theorem (Adams and Fournier 2003), the Sobolev space $W^{s,p}$ can be embedded in $L^{p^*}$, where $\frac{1}{p^*} = \frac{1}{p} - \frac{s}{d}$. When $s$ is large enough, as shown in (a), the smoothness assumption can rule out the existence of rare and extreme events. When $s$ is not sufficiently large, specifically $s < \frac{2dq - dp}{2pq}$, there may exist a peak (i.e., a rare and extreme event, shown in (b)) that makes the Monte Carlo simulation hard. Under such circumstances, the function's $2q$-th moment is unbounded.
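To make the two-stage construction concrete, here is a minimal numerical sketch (the integrand, the piecewise-constant regressor, and all constants are illustrative assumptions, not the paper's setup): the first half of the sample fits a crude estimate f_hat whose integral is known in closed form, and the second half runs Monte Carlo on the residual f - f_hat.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # illustrative smooth integrand on [0, 1]; its true integral is 1/3
    return np.sin(2 * np.pi * x) + x ** 2

n = 4000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)  # the two halves of the data

# Stage 1: fit a crude piecewise-constant regression estimate f_hat on the first half.
bins = np.linspace(0.0, 1.0, 21)
idx1 = np.clip(np.digitize(x1, bins) - 1, 0, 19)
cell_means = np.array([f(x1[idx1 == j]).mean() for j in range(20)])

def f_hat(z):
    return cell_means[np.clip(np.digitize(z, bins) - 1, 0, 19)]

int_f_hat = cell_means.mean()  # exact integral of the piecewise-constant f_hat

# Stage 2: Monte Carlo on the residual f - f_hat, using the independent second half.
plain = f(x2)                                # plain Monte Carlo samples of f
adjusted = int_f_hat + (f(x2) - f_hat(x2))   # control-variate-adjusted samples

print(plain.var(), adjusted.var())  # the adjusted samples have much smaller variance
```

The adjusted samples estimate the same integral, but their variance is driven by the residual f - f_hat rather than by f itself, which is exactly the mechanism the paper quantifies.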
To understand the two questions, we consider a basic but fundamental prototype problem of estimating moments of a Sobolev function from its values observed on (random) quadrature nodes, which has a wide range of applications in Bayesian inference, the study of complex systems, computational physics, and financial risk management (Asmussen and Glynn 2007). Specifically, we estimate the q-th moment Ω f (x) q dx of f based on values f (x 1 ), · · · , f (x n ) observed on n (random) quadrature nodes x 1 , · · · , x n ∈ Ω for a function f in the Sobolev space W s,p (Ω), where Ω ⊂ R d . The parameter q here is introduced to characterize the rare events' extremeness for estimation. To verify the effectiveness of the non-parametric regression adjusted quadrature rule, we first study the statistical limit of the problem by providing a minimax information-theoretic lower bound of magnitude n max{( 1 p -s d )q-1,-s d -1 2 } . We also provide matching upper bounds for different levels of function smoothness. Under the sufficient smoothness assumption that s > d (2q-p) 2pq , we find that the non-parametric regression adjusted control variatef can improve the rate of classical Monte Carlo algorithm and help us attain a minimax optimal upper bound. In (7) below, we bound variance Ω (f q -f q ) 2 of the Monte Carlo target by the sum of the semi-parametric influence part Ω f 2q-2 (f -f ) 2 and the propagated estimation error Ω (f -f ) 2q . Although the optimal algorithm in this regime remains the same, we need to consider three different cases to derive an upper bound on the semi-parametric influence part, which is the main contribution of our proof. We propose a new proof technique that embeds the square of the influence function (qf q-1 ) 2 and estimation error (f -f ) 2 in appropriate spaces via the Sobolev Embedding Theorem (Adams and Fournier 2003). The two norms used for evaluating (f q-1 ) 2 and (f -f ) 2 should be dual norms of each other. 
Also, we should select the norm for evaluating (f -f ) 2 in a way that it's easy to estimate f under the selected norm, which helps us control the error induced by (f -f ) 2 . A detailed explanation of how to select the proper norms in different cases via the Sobolev Embedding Theorem is exhibited in Figure 2. In the first regime when s > d p , we can directly embed f in L ∞ (Ω) and attain a final convergence rate of magnitude n -s d -1 2 . For the second regime when d(2q-p) p(2q-2) < s < d p , the smoothness parameter s is not large enough to ensure that f ∈ L ∞ (Ω). Thus, we evaluate the estimation error (f -f ) 2 under the L p 2 norm and embed the square of the influence function (qf q-1 ) 2 in the dual space of L p 2 (Ω). Here the validity of such embedding is ensured by the lower bound d(2q-p) p(2q-2) on s. Moreover, the semi-parametric influence part is still dominant in the second regime, so the final convergence rate is the same as that of the first case. In the third regime, when d(2q-p) 2pq < s < d (2q-p) p (2q-2) , the semi-parametric influence no longer dominates and the final converge rate transits from n -s d -1 2 to n q( 1 p -s d )-1 . When the sufficient smoothness assumption breaks, i.e. s < d(2q-p) 2pq , according to the Sobolev Embedding Theorem (Adams and Fournier 2003), the Sobolev space W s,p is embedded in L dp d-sp and dp d-sp < 2q. This indicates that rare and extreme events might be present, and they are not even guaranteed to have bounded L 2q norm, which makes the Monte Carlo estimate of the q-th moment have infinite variance. Under this scenario, we consider a truncated version of the Monte Carlo algorithm, which can be proved to attain the minimax optimal rate of magnitude n q( 1 p -s d )-1 . In contrast, the usage of regression-adjusted control variates does not improve the convergence rate under this scenario. 
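The decomposition and duality step described above can be restated compactly (notation as in the surrounding text; constants suppressed):

```latex
% (i) pointwise decomposition of the Monte Carlo variance term:
(f^q - \hat f^q)^2 \;\lesssim\;
  \underbrace{f^{2q-2}\,(f - \hat f)^2}_{\text{semi-parametric influence}}
  \;+\;
  \underbrace{(f - \hat f)^{2q}}_{\text{propagated estimation error}}

% (ii) Hölder duality for the influence term, with 1/r' + 1/r^* = 1:
\int_\Omega f^{2q-2}(f - \hat f)^2 \, dx
  \;\le\; \bigl\| f^{2q-2} \bigr\|_{L^{r'}(\Omega)}
          \bigl\| (f - \hat f)^2 \bigr\|_{L^{r^*}(\Omega)}

% (iii) the Sobolev Embedding Theorem,
% W^{s,p}(\Omega) \hookrightarrow L^{p^*}(\Omega) with 1/p^* = 1/p - s/d,
% is what controls the first factor in each smoothness regime.
```

The choice of the dual pair $(r', r^*)$ is what changes from regime to regime.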
Our results reveal how the existence of rare events will change answers to the questions raised at the beginning of the section. We also use the estimation of a linear functional as an example to investigate the algorithm's adaptivity to the noise level. In this paper, we provide minimax lower bounds for estimating the integral of a fixed function with a general assumption on the noise level. Specifically, we consider all estimators that have access to observations {x i , f (x i ) + ϵ i } n i=1 of some function f that is s-Hölder smooth, where x i i.i.d ∼ Uniform([0, 1] d ) and ϵ i i.i.d ∼ n -γ N (0, 1) for some γ > 0. Based on the method of two fuzzy hypotheses, we present a lower bound of magnitude n max{-1 2 -γ,-1 2 -s d } , which exhibits a smooth transition from the Monte Carlo rate to the Quasi-Monte Carlo rate. At the same time, our information-theoretic lower bound also matches the upper bound built for quadrature rules taking use of non-parametric regression-adjusted control variates. Related Work Regression-adjusted Control Variate Regression-adjusted control variates have shown both theoretical and empirical improvements in a broad range of applications, including the construction of confidence intervals (Angelopoulos et al. 2023;Romano, Patterson, and Candes 2019), randomized trace-estimation, (Meyer et al. 2021;Sobczyk and Luisier 2022;Lin 2017), dimension reduction (Sobczyk and Luisier 2022), causal inference (Liu and Yang 2020), estimation of the normalizing factor (Holzmüller and Bach 2023) and gradient estimation (Shi et al. 2022;Liu et al. 2017). It is also used as a technique used for proving the approximation bounds on two-layer neural networks in the Barron space (Siegel and Xu 2022). In connection to the related literature to our work, we mention (Oates and Girolami 2016;Oates, Girolami, and Chopin 2017;Oates et al. 2019;Holzmüller and Bach 2023), which also study the use of nonparametric control variate estimator. 
However, the theoretical analysis in (Oates and Girolami 2016; Oates, Girolami, and Chopin 2017) does not provide a specific convergence rate in the Reproducing Kernel Hilbert Space, which requires a high level of smoothness of the underlying function. In contrast to prior work, our research examines the effectiveness of a nonparametric regression-adjusted control variate in boosting convergence rates under various degrees of smoothness assumptions and identifies the key factor that determines the effectiveness of these control variates.

Quadrature Rule

Figure 2. We summarize the minimax optimal rates and the corresponding optimal algorithms with respect to the function smoothness here. When the function is smooth enough, regression-adjusted control variates can improve the Monte Carlo rate. However, when there exist rare and extreme events that are hard to simulate, truncating the Monte Carlo estimate directly yields a minimax optimal algorithm. The transition point for algorithm selection is $s = \frac{d(2q-p)}{2pq}$, while the transition point for the optimal convergence rate is $s = \frac{d(2q-p)}{p(2q-2)}$. To build the optimal convergence guarantee for any algorithm that utilizes a regression-adjusted control variate $\hat f$, we need to embed the square of the influence function $(qf^{q-1})^2$ in an appropriate space via the Sobolev Embedding Theorem and evaluate the estimation error $(\hat f - f)^2$ under the dual norm of the norm associated with the chosen space, which allows us to achieve optimal semi-parametric efficiency. Our selections of the metrics in the different regimes are shown in this figure.
Gautier, Bardenet, and Valko 2019), Nyström approximation (Hayakawa, Oberhauser, andLyons 2021, 2023), kernel herding (Chen, Welling, and Smola 2012;Lacoste-Julien, Lindsten, and Bach 2015;Huszár and Duvenaud 2012) and kernel thinning (Chen et al. 2018;Dwivedi andMackey 2021b, 2021a). Nevertheless, the quadrature points chosen in these studies all have the ability to reconstruct the function's information, which results in a suboptimal rate for estimating the moments. Functional Estimation There are also lines of research that investigated the optimal rates of estimating both linear (Oates et al. 2019;Novak 2006;Traub et al. 1994;Novak and Wozniakowski 2008;Novak and Woźniakowski 2008;Bakhvalov 2015;Hinrichs et al. 2014;Novak 2016;Hinrichs et al. 2020;Hinrichs et al. 2022;Krieg and Sonnleitner 2020;Krieg, Novak, and Sonnleitner 2022) and nonlinear (Birgé and Massart 1995;Donoho and Nussbaum 1990;Donoho 1988;Donoho andLiu 1991a, 1991b;Robins et al. 2008;Jiao et al. 2015;Krishnamurthy et al. 2014;Mathé 1991;Heinrich 2009aHeinrich , 2009bHan, Jiao, and Mukherjee 2020;Lepski, Nemirovski, and Spokoiny 1999;Heinrich 2018) functionals, such as integrals and the L q norm. However, as far as the authors know, previous works on this topic have assumed sufficient smoothness, which rules out the existence of rare and extreme events that are hard to simulate. Additionally, existing proof techniques are only applicable in scenarios where there is either no noise or a constant level of noise present. We have developed a novel and unified proof technique that leverages the method of two fuzzy hypotheses, which allows us to account for not only rare and extreme events but also different levels of noise. Contribution • We determine all the regimes when a quadrature rule utilizing a nonparametric estimator as a control variate to reduce the Monte Carlo estimate's variance can boost the convergence rate of estimating the moments of a Sobolev function. 
Under the sufficient smoothness assumption, which rules out the existence of rare and extreme events due to the Sobolev Embedding Theorem, the regression-adjusted control variate improves the convergence rate and achieves the minimax optimal rate. The major technical difficulty in building the convergence guarantee in this regime is determining the right evaluation metric for function estimation. In our work, we bring a new proof technique that selects such a metric by embedding the influence function into an appropriate space via the Sobolev Embedding Theorem and evaluating the function estimation error in the corresponding dual norm to achieve optimal semi-parametric efficiency. The selection of the metric is shown in Figure 2.
• Without the sufficient smoothness assumption, however, there may exist rare and extreme events that are hard to simulate. In this circumstance, we discover that a truncated version of the Monte Carlo method is minimax optimal, while the regression-adjusted control variate cannot improve the convergence rate. As far as the authors know, our paper is the first work that considers this problem beyond the sufficient smoothness regime.
• To study how the regression-adjusted control variate adapts to the noise level, we examine linear functionals, i.e. the definite integral. We prove that this method is minimax optimal regardless of the level of noise present in the observed data.

Notations

Let $\|\cdot\|$ be the standard Euclidean norm and $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$ for any fixed $d \in \mathbb{N}$. For any $s > 0$, the Hölder norm of a function $f$ is defined as
$$\|f\|_{C^s(\Omega)} := \max_{|k| \le \lfloor s \rfloor}\, \sup_{x \in \Omega} |D^k f(x)| \;+\; \max_{|k| = \lfloor s \rfloor}\, \sup_{x \ne y} \frac{|D^k f(x) - D^k f(y)|}{\|x - y\|^{s - \lfloor s \rfloor}}. \quad (1)$$
The corresponding Hölder space is defined as $C^s(\Omega) := \{ f \in C(\Omega) : \|f\|_{C^s(\Omega)} < \infty \}$. When $s = 0$, the two norms $\|\cdot\|_{C^0(\Omega)}$ and $\|\cdot\|_{L^\infty(\Omega)}$ are equivalent and $C^0(\Omega) = L^\infty(\Omega)$. Let $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$ be the set of all non-negative integers.
For any s ∈ N 0 and 1 ≤ p ≤ ∞, we define the Sobolev space W s,p (Ω) by W s,p (Ω) := f ∈ L p (Ω) : D α f ∈ L p (Ω), ∀ α ∈ N d 0 satisfying |α| ≤ s .(2) Let (c) + denote max{c, 0} for any c ∈ R. Fix any two non-negative sequences {a n } ∞ n=1 and {b n } ∞ n=1 . We write a n ≲ b n , or a n = O(b n ), to denote that a n ≤ Cb n for some constant C independent of n. Similarly, we write a n ≳ b n , or a n = ω(b n ), to denote that a n ≥ cb n for some constant c independent of n. We use a n = Θ(b n ) to denote that a n = O(b n ) and a n = ω(b n ). Information-Theoretic Lower Bound on Moment Estimation Problem Setup To understand how the non-parametric regression-adjusted control variate improves the Monte Carlo estimator's convergence rate, we consider a prototype problem that estimates a function's q-th moment. For any fixed q ∈ N and f ∈ W s,p (Ω), we want to estimate the q-th moment I q f := Ω f q (x)dx with n random quadrature points {x i } n i=1 ⊂ Ω. On each quadrature point x i (i = 1, · · · , n), we can observe the function value y i := f (x i ). In this section, we study the information-theoretic limit for the problem above via the method of two fuzzy hypotheses (Tsybakov 2004). We have the following information-theoretic lower bound on the class H f ,q n that contains all estimatorsĤ q : Ω n × R n → R of the q-th moment I q f . E {x i } n i=1 ,{y i } n i=1 Ĥ q {x i } n i=1 , {y i } n i=1 -I q f ≳ n max -q s d -1 p -1,-1 2 -s d . (3) Proof Sketch Here we give a sketch for our proof of Theorem 1. Our proof is based on the method of two fuzzy hypotheses, which is a generalization of the traditional Le Cam's two-point method. In fact, each hypothesis in the generalized method is constructed via a prior distribution. In order to attain a lower bound of magnitude ∆ via the method of two fuzzy hypotheses, one needs to pick two prior distributions µ 0 , µ 1 on the Sobolev space W s,p (Ω) such that the following two conditions hold. 
Firstly, the estimators I q f differ by ∆ with constant probability under the two priors. Secondly, the TV distance between the two corresponding distributions P 0 and P 1 of data generated by µ 0 and µ 1 is of constant magnitude. In order to prove the two lower bounds given in (3), we pick two different pairs of prior distributions as follows: Below we set m = Θ(n 1 d ) and divide the domain Ω into m d small cubes Ω 1 , Ω 2 , · · · , Ω m d , each of which has side length m -1 . For any p ∈ (0, 1), we use v p , w p to denote the discrete random variables satisfying P(v p = 0) = P(w p = -1) = p and P(v p = 1) = P(w p = 1) = 1p. (I) For the first lower bound in (3), we construct some bump function g ∈ W s,p (Ω) satisfying supp(g) ⊆ Ω 1 and I q g = Ω 1 g(x)dx = Θ(m q(-s+ d p )-d ). Now let's take some sufficiently small constant ϵ ∈ (0, 1) and pick µ 0 , µ 1 to be discrete measures supported on the two finite sets v 1+ϵ 2 g and v 1-ϵ 2 g . On the one hand, the difference between the q-th moments under µ 0 and µ 1 can be lower 2 ) with constant probability. On the other hand, note that KL(P 0 ||P 1 ) can be bounded by the KL divergence between two multivariate discrete distributions (w (0) j 1 , · · · , w (0) jn ) and (w (1) j 1 , · · · , w (1) jn ), where {w (0) j i } n i=1 and {w (1) j i } n i=1 are independent and identical copies of w 1+κ 2 and w 1-κ 2 respectively. Hence, KL(P 0 ||P 1 ) is of constant magnitude. Combining the two cases above gives us the minimax lower bound in (3). We defer a complete proof of Theorem 1 to Appendix Appendix 2.2. Minimax Optimal Estimators for Moment Estimation This section is devoted to constructing minimax optimal estimators of the q-th moment. We show that under the sufficient smoothness assumption, a regression-adjusted control variate is essential for building minimax optimal estimators. 
However, when the given function is not sufficiently smooth, we demonstrate that a truncated version of the Monte Carlo algorithm is minimax optimal, and control variates cannot give any improvement. Sufficient Smoothness Regime: Non-parametric Regression-Adjusted Control Variate This subsection is devoted to building a minimax optimal estimator of the q-th moment under the assumption that s d > 1 p -1 2q , which guarantees that functions in the space W s,p are sufficiently smooth. From the Sobolev Embedding theorem, we know that the sufficient smoothness assumption implies W s,p (Ω) ⊂ L p * (Ω) ⊂ L 2q (Ω), where 1 p * = 1 p -s d . Given any function f ∈ W s,p (Ω) along with n uniformly sampled quadrature points {x i } n i=1 and corresponding observations {y i = f (x i )} n i=1 of f , the key idea behind the construction of our estimatorĤ q C is to build a nonparametric estimationf of f based on a sub-dataset and usef as a control variate for Monte Carlo simulation. Consequently, it takes three steps to compute the numerical estimation of I q f for any estimatorĤ q C : Ω n × R n → R. The first step is to divide the observed data into two subsets S 1 : = {(x i , y i )} n 2 i=1 , S 2 := {(x i , y i )} n i= n 2 +1 of equal size and use a machine learning algorithm to compute a nonparametric estimationf 1: n 2 of f based on S 1 . Without loss of generality, we may assume that the number of data points is even. Secondly, we treatf 1: n 2 as a control variate and compute the q-th moment I q f . Using the other dataset H q C {x i } n i=1 , {y i } n i=1 := Ωf q 1: n 2 (x)dx + 2 n n i= n 2 +1 y q i -f q 1: n 2 (x i ) .(4) We assume that our function estimationf is obtained from an n 2 -oracle K n 2 : Ω n 2 × R n 2 → W s,p (Ω) satisfying Assumption 3.1. 
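As a concrete, minimal instance of the estimator in (4), the sketch below fits a piecewise-constant regressor on S1 (a stand-in assumption for the oracle of Assumption 3.1), integrates its q-th power exactly, and corrects with Monte Carlo over S2; the integrand, q, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 3

def f(x):
    # illustrative integrand; its true third moment is 1 + 3*(0.5**2)*0.5 = 1.375
    return 1.0 + 0.5 * np.sin(2 * np.pi * x)

n = 4000
xs = rng.uniform(size=n)
ys = f(xs)
S1x, S1y = xs[: n // 2], ys[: n // 2]  # S1: fit the control variate
S2x, S2y = xs[n // 2:], ys[n // 2:]    # S2: Monte Carlo correction

bins = np.linspace(0.0, 1.0, 21)
idx = np.clip(np.digitize(S1x, bins) - 1, 0, 19)
cell_means = np.array([S1y[idx == j].mean() for j in range(20)])

def f_hat(z):
    return cell_means[np.clip(np.digitize(z, bins) - 1, 0, 19)]

# H_C = integral of f_hat**q  +  mean over S2 of ( y**q - f_hat(x)**q ), cf. (4)
int_f_hat_q = np.mean(cell_means ** q)  # exact, since f_hat**q is piecewise constant
H_C = int_f_hat_q + np.mean(S2y ** q - f_hat(S2x) ** q)

print(H_C)  # close to the true third moment, 1.375
```

The second-stage average only has to absorb the residual f^q - f_hat^q, which is the variance reduction the proof of Theorem 2 quantifies.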
For example, there are lines of research (Krieg and Sonnleitner 2020; Krieg, Novak, and Sonnleitner 2022;Mathé 1991;Heinrich 2009aHeinrich , 2009b considering how the moving least squares method (Wendland 2001(Wendland , 2004 can achieve the convergence rate in (5). Assumption 3.1 (Optimal Function Estimator as an Oracle). Given any function f ∈ W s,p (Ω) and n ∈ N, let {x i } n i=1 be n data points sampled independently and identically from the uniform distribution on Ω. Assume that there exists an oracle K n : Ω n × R n → W s,p (Ω) that estimates f based on the n points {x i } n i=1 along with the n observed function values {f (x i )} n i=1 and satisfies the following bound for any r satisfying 1 r ∈ ( d-sp pd , 1]: E {x i } n i=1 ||K n ({x i } n i=1 , {f (x i )} n i=1 ) -f || r L r (Ω) 1 r ≲ n -s d +( 1 p -1 r )+ .(5) Based on the oracle above, we can obtain the following upper bound that matches the informationtheoretic lower bound in Theorem 1. Theorem 2 (Upper Bound on Moment Estimation with Sufficient Smoothness). Assume that p > 2, q < p < 2q and s > 2dq-dp 2pq . Let {x i } n i=1 be n quadrature points independently and identically sampled from the uniform distribution on Ω and {y i := f (x i )} n i=1 be the corresponding n observations of f ∈ W s,p (Ω). Then the estimatorĤ q C constructed in (4) above satisfies E {x i } n i=1 ,{y i } n i=1 Ĥ q C {x i } n i=1 , {y i } n i=1 -I q f ≲ n max{-q( s d -1 p )-1,-s d -1 2 } ,(6) Proof Sketch Given a non-parametric estimatorf of the function f , we may bound the variance of the Monte Carlo process by (f q -f q ) 2 and further upper bound it by the sum of the following two terms: |f q -f q | 2 ≲ |f q-1 (f -f )| 2 semi-parametric influnce + |(f -f ) q | 2 estimation error propagation .(7) The first term above represents the semi-parametric influence part of the problem, as qf q-1 is the influence function for the estimation of the q-th moment f q . 
The second term characterizes how function estimation affects functional estimation. If we consider the special case of estimating the mean instead of a general q-th moment, i.e, q = 1, the semi-parametric influence term will disappear. Consequently, the convergence rate won't transit from n -1 2 -s d to n -q( s d -1 p )-1 in the special case. Although the algorithm remains unchanged in the sufficient smooth regime, we need to consider three separate cases to obtain an upper bound on the integral of the semi-parametric influence term |f q-1 (f -f )| 2 in (7). An illustration of the three cases is given in Figure 2. From Hölder's inequality, we know that Ω f 2q-2 (x)(f (x) -f (x)) 2 dx can be upper bounded by ||f 2q-2 || L r ′ (Ω) ||(f -f ) 2 || L r * (Ω) , where || · || L r ′ (Ω) and |||| L r * (Ω) are dual norms. Therefore, the main difficulty here is to embed the function f in different spaces via the Sobolev Embedding Theorem under different assumptions on the smoothness parameter s. When the function is smooth enough, i.e. s > d p , we embed the function f in L ∞ (Ω) and evaluate the estimation error f -f under the L 2 norm. Then our assumption on the oracle (5) gives us an upper bound of magnitude n -2s d on ||f -f || 2 L 2 (Ω) , which helps us further upper bound the semi-parametric influence part Ω f 2q-2 (x)(f (x) -f (x)) 2 dx by n -2s d up to constants. When d(2q-p) p(2q-2) < s < d p , we embed the function f in L 2pq-2p p-2 (Ω) ⊆ L pd d-sp (Ω) and evaluate the estimation error f -f under the L p norm. Applying our assumption on the oracle (5) again implies that the semi-parametric influence part Ω f 2q-2 (x)(f (x) -f (x)) 2 dx can be upper bounded by n -2s d up to constants. When d(2q-p) 2pq < s < d(2q-p) p(2q-2) , we embed the function f in L p * and evaluate the error of the oracle in L 2p * p * +2-2q , where 1 p * = 1 p -s d . 
Similarly, we can use (5) to upper bound the semi-parametric influence part x∈Ω f 2q-2 (x)(f (x) -f (x)) 2 dx by n 2q( 1 p -s d )-1 . The upper bound on the propagated estimation error x∈Ω (f (x) -f (x)) 2q dx in (7) can be derived by evaluating the error of the oracle under the L 2q norm. i.e, by picking r = 2q in (5) above, which yields an upper bound of magnitude n 2q( 1 p -s d )-1 . The obtained upper bounds on the semi-parametric influence part and the propagated estimation error above provide us with a clear view of the upper bound on the variance of f q -f q , which is the random variable we aim to simulate via Monte-Carlo in the second stage. Using the standard Monte-Carlo algorithm to simulate the expectation of f q -f q then gives us an extra n -1 2 factor for the convergence rate, which helps us attain the final upper bounds given in (6). A complete proof of Theorem 2 is given in Appendix Appendix 3.1. Beyond the Sufficient Smoothness Regime: Truncated Monte Carlo In this subsection, we study the case when the sufficient smoothness assumption breaks, i.e. s d < 1 p -1 2q . According to the Sobolev Embedding theorem, we have that W p s is embedded in L dp d-sp . Since 1 p -s d > 1 2q implies dp d-sp < 2q, the underlying function f is not guaranteed to have bounded L 2q norm, which indicates the existence of rare and extreme events. Consequently, the Monte Carlo estimate of f 's q-th moment must have infinite variance, which makes it hard to simulate. Here we present a truncated version of the Monte Carlo algorithm that can achieve the minimax optimal convergence rate. For any fixed parameter M > 0, our estimator is designed as follows: H q M {x i } n i=1 , {y i } n i=1 := 1 n n i=1 max min{y i , M}, -M q .(8) In Theorem 3, we provide the convergence rate of the estimator (8) E {x i } n i=1 ,{y i } n i=1 Ĥ q M {x i } n i=1 , {y i } n i=1 -I q f ≲ n -q( s d -1 p )-1 . (9) Proof Sketch The error can be decomposed into bias and variance parts. 
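The truncated estimator in (8) is essentially a one-liner; the sketch below applies it to an illustrative rare-event integrand (the integrand and truncation level are assumptions for illustration, with the truncation level only loosely mirroring the M = Θ(n^{1/p*}) prescription).

```python
import numpy as np

def truncated_moment(ys, q, M):
    # H_M^q = (1/n) * sum_i clip(y_i, -M, M)**q, cf. (8)
    return np.mean(np.clip(ys, -M, M) ** q)

rng = np.random.default_rng(2)
n, q = 200_000, 2

# f(x) = x**(-1/4): its 2nd moment int_0^1 x**(-1/2) dx = 2 is finite, but
# int f**(2q) = int x**(-1) dx diverges, so the plain Monte Carlo estimate
# of the 2nd moment has infinite variance (a "rare and extreme event").
xs = rng.uniform(size=n)
ys = xs ** (-0.25)

M = n ** 0.25  # truncation level mirroring M = Theta(n**(1/p*)) with p* ~ 4
est = truncated_moment(ys, q, M)
print(est)     # close to the true second moment, 2
```

Truncation bounds the summands, trading a small, controllable bias for finite variance, which is the balance the proof sketch below works out.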
The bias part is caused by the truncation in our algorithm, which is controlled by the parameter M and can be bounded by ∫_{x : |f(x)| > M} |f(x)|^q dx. According to the Sobolev Embedding Theorem, W^{s,p}(Ω) can be embedded in the space L^{p*}, where 1/p* = 1/p - s/d. As |f(x)| > M implies |f(x)|^q ≤ M^{q-p*} |f(x)|^{p*}, the bias is at most M^{q-p*} ‖f‖^{p*}_{L^{p*}(Ω)} ≲ M^{q-p*}, while the variance of the clipped Monte Carlo average is at most M^{2q-p*} ‖f‖^{p*}_{L^{p*}(Ω)} / n, i.e. a standard deviation of order M^{q-p*/2} n^{-1/2}. By selecting M = Θ(n^{1/p*}) = Θ(n^{1/p - s/d}), which balances the two terms, we obtain the final convergence rate n^{-q(s/d - 1/p) - 1}. A complete proof of Theorem 3 is given in Appendix 3.2.

Remark 1. (Heinrich 2018) has shown that the convergence rate of the optimal non-parametric regression-based estimator is n^{-s/d + 1/p - 1/q}, which is slower than the convergence rate of the truncated Monte Carlo estimator shown above.

Adapting to the Noise Level: a Case Study for the Linear Functional

In this section, we study how the regression-adjusted control variate adapts to different noise levels. Here we consider the linear functional, i.e. estimating a function's definite integral via low-noise observations at random points.

Problem Setup

We consider estimating I_f = ∫_Ω f(x) dx, the integral of f over Ω, for a fixed function f ∈ C^s(Ω) with uniformly sampled quadrature points {x_i}^n_{i=1} ⊂ Ω. On each quadrature point x_i (i = 1, ..., n), we have a noisy observation y_i := f(x_i) + ϵ_i. Here the ϵ_i's are independent and identically distributed Gaussian noises sampled from N(0, n^{-2γ}), where γ ∈ [0, ∞].

Information-Theoretic Lower Bound on Mean Estimation

In this subsection, we present a minimax lower bound (Theorem 4) for all estimators Ĥ : Ω^n × R^n → R of the integral I_f of a function f ∈ C^s(Ω) when one can only access noisy observations.

Theorem 4. Suppose we use n quadrature points {x_i}^n_{i=1} and noisy observations {y_i = f(x_i) + ϵ_i}^n_{i=1} to estimate the integral of f, where {x_i}^n_{i=1} and {ϵ_i}^n_{i=1} are independently and identically sampled from the uniform distribution on Ω and the normal distribution N(0, n^{-2γ}), respectively.
Assuming that γ ∈ [0, ∞] and s > 0, we have

inf_{Ĥ ∈ H^f_n} sup_{f ∈ C^s(Ω)} E_{{x_i}^n_{i=1}, {y_i}^n_{i=1}} |Ĥ({x_i}^n_{i=1}, {y_i}^n_{i=1}) - I_f| ≳ n^{max{-1/2 - γ, -1/2 - s/d}}.    (10)

Remark 2. Functional estimation is a well-studied problem in the literature of nonparametric statistics. However, current information-theoretic lower bounds for functional estimation (Birgé and Massart 1995; Donoho and Nussbaum 1990; Donoho 1988; Robins et al. 2008; Jiao et al. 2015; Krishnamurthy et al. 2014; Tsybakov 2004) assume a constant level of noise on the observed function values. One essential idea for proving these lower bounds is to leverage the existence of the observational noise, which makes it possible to upper bound the amount of information required to distinguish between two reduced hypotheses. In contrast, we provide a minimax lower bound that is applicable to noise at any level by constructing two priors with overlapping support and assigning distinct probabilities to the corresponding Bernoulli random variables, which separates the two hypotheses. A comprehensive proof of Theorem 4 is given in Appendix 4.2.

Optimal Nonparametric Regression-Adjusted Quadrature Rule

In the discussion below, we use the nearest-neighbor method as an example. For any k ∈ {1, 2, ..., n/2}, the k-nearest-neighbor estimator f̂_{k-NN} of f is given by f̂_{k-NN}(z) := (1/k) Σ^k_{j=1} y_{i_j(z)}, where {x_{i_j(z)}}^{n/2}_{j=1} is a permutation of the quadrature points {x_i}^{n/2}_{i=1} such that ‖x_{i_1(z)} - z‖ ≤ ‖x_{i_2(z)} - z‖ ≤ ... ≤ ‖x_{i_{n/2}(z)} - z‖ holds for any z ∈ Ω. Moreover, we use T_{k,z} := {x_{i_j(z)}}^k_{j=1} to denote the collection of the k nearest neighbors of z among {x_i}^{n/2}_{i=1} for any z ∈ Ω. For any 1 ≤ i ≤ n/2, we take D_i ⊂ Ω to be the region formed by all the points whose k nearest neighbors contain x_i, i.e., D_i := {z ∈ Ω : x_i ∈ T_{k,z}}.
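These ingredients — a k-NN fit on the first half of the sample whose integral replaces the plain Monte Carlo average, plus a residual correction on the second half — can be sketched in code. The following is a hypothetical 1-D illustration on Ω = [0, 1] that approximates ∫_Ω f̂_{k-NN} with a midpoint grid rather than the exact cell volumes V(D_i); all function and variable names are illustrative:

```python
import numpy as np

def knn_adjusted_integral(x, y, k, n_grid=2000):
    """Regression-adjusted quadrature sketch: fit k-NN on the first half,
    integrate the fit (grid approximation of ∫ f̂ dx), then add the
    Monte Carlo average of the residuals y_i - f̂(x_i) on the second half."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    half = len(x) // 2
    x_fit, y_fit = x[:half], y[:half]
    x_mc, y_mc = x[half:], y[half:]

    def f_hat(z):
        # k-NN prediction: average the y-values of the k nearest fit points.
        dist = np.abs(z[:, None] - x_fit[None, :])
        idx = np.argsort(dist, axis=1)[:, :k]
        return y_fit[idx].mean(axis=1)

    grid = (np.arange(n_grid) + 0.5) / n_grid        # midpoints of [0, 1]
    integral_fhat = f_hat(grid).mean()               # ≈ ∫_Ω f̂_{k-NN}(x) dx
    correction = np.mean(y_mc - f_hat(x_mc))         # residual Monte Carlo term
    return integral_fhat + correction

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, size=n)
y = np.sin(2 * np.pi * x) + 2.0 + rng.normal(0.0, 0.01, size=n)
est = knn_adjusted_integral(x, y, k=20)              # true integral is 2
```

Conditioned on the fit, the estimator is unbiased for I_f, so only the variance of the residuals y_i - f̂(x_i) remains — which is exactly the mechanism behind the rate improvement at low noise.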
Our estimator Ĥ_{k-NN} can be formally represented as

Ĥ_{k-NN}({x_i}^n_{i=1}, {y_i}^n_{i=1}) = Σ^{n/2}_{i=1} (V(D_i)/k) y_i + (2/n) Σ^n_{i=n/2+1} y_i - (2/n) Σ^n_{i=n/2+1} (1/k) Σ^{n/2}_{j=1} 1{x_i ∈ D_j} y_j,

where the first term equals ∫_Ω f̂_{k-NN}(x) dx and the last two terms together equal (2/n) Σ^n_{i=n/2+1} (y_i - f̂_{k-NN}(x_i)). In the following theorem, we present an upper bound on the expected risk of the estimator Ĥ_{k-NN}:

Theorem 5 (Matching Upper Bound for Integral Estimation). Let {x_i}^n_{i=1} be n quadrature points independently and identically sampled from the uniform distribution on Ω and {y_i := f(x_i) + ϵ_i}^n_{i=1} be the corresponding n noisy observations of f ∈ C^s(Ω), where the {ϵ_i}^n_{i=1} are independently and identically sampled from the normal distribution N(0, n^{-2γ}). Assuming that γ ∈ [0, ∞] and s ∈ (0, 1), there exists k ∈ N such that the estimator Ĥ_{k-NN} constructed above satisfies

E_{{x_i}^n_{i=1}, {y_i}^n_{i=1}} |Ĥ_{k-NN}({x_i}^n_{i=1}, {y_i}^n_{i=1}) - I_f| ≲ n^{max{-1/2 - γ, -1/2 - s/d}}.    (11)

Remark 3. Our upper bound in Theorem 5 matches our minimax lower bound in Theorem 4, which indicates that the regression-adjusted quadrature rule associated with the nearest-neighbor estimator is minimax optimal. When the noise level is high (γ < s/d), the control variate helps to improve the rate from n^{-1/2} (the Monte Carlo rate) to n^{-1/2 - γ} by eliminating all the effects of simulating the smooth function. When the noise level is low (γ > s/d), we show that our estimator Ĥ_{k-NN} achieves the optimal rate of quadrature rules (Novak 2016). We defer a complete proof of Theorem 5 to Appendix 4.3.

Discussion and Conclusion

In this paper, we have investigated whether a non-parametric regression-adjusted control variate can improve the rate of estimating functionals, and whether it is minimax optimal. Using the Sobolev Embedding Theorem, we discover that the existence of rare and extreme events changes the answer to this question.
We show that when rare and extreme events are present, using a non-parametric machine learning algorithm as a control variate does not help, and truncated Monte Carlo is minimax optimal. Investigating how to apply importance sampling in this scenario may be of future interest. The study of how regression-adjusted control variates adapt to the noise level for non-linear functionals (Han, Jiao, and Mukherjee 2020; Lepski, Nemirovski, and Spokoiny 1999) is also left as future work. Another interesting direction is to analyze how to use information about the data distribution (Oates and Girolami 2016; Oates, Girolami, and Chopin 2017).

The appendix is organized as follows:
• In Appendix 1, we list some notations and standard lemmas used in our proofs.
• Appendix 2 contains a comprehensive proof of the information-theoretic lower bound on the estimation of q-th moments, which is established in Theorem 1.
• In Appendix 3, we provide a detailed proof of Theorems 2 and 3, which give us the minimax optimal upper bound on estimating q-th moments.
• Appendix 4 consists of our proofs of the information-theoretic lower bounds and minimax optimal upper bounds on integral estimation and function estimation, which are stated in Theorems 4 and 5.

Appendix 1. Preliminaries and Basic Tools

Appendix 1.1 Preliminaries

This subsection is devoted to presenting some basic notation used in our proofs. For any fixed convex function f : R_+ → R satisfying f(1) = 0, we use D_f(·‖·) to denote the corresponding f-divergence, i.e., D_f(P‖Q) = ∫_Y f(dP/dQ) dQ for any two probability distributions P and Q over some fixed space Y. In particular, when f(x) = (1/2)|x - 1|, D_f(·‖·) is the total variation (TV) distance TV(·‖·). When f(x) = x log x, D_f(·‖·) coincides with the Kullback-Leibler (KL) divergence KL(·‖·).
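For the two-point priors used repeatedly in the lower-bound proofs, both divergences reduce to closed-form Bernoulli expressions. A small sketch (the helper names are hypothetical):

```python
import math

def tv_bernoulli(p, q):
    """TV distance between Bernoulli(p) and Bernoulli(q): f(x) = |x - 1|/2."""
    return 0.5 * (abs(p - q) + abs((1 - p) - (1 - q)))

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q): f(x) = x log x."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# With p = (1 + eps)/2 and q = (1 - eps)/2 this recovers the expression
# eps * log((1 + eps)/(1 - eps)) that appears in the moment-estimation proof.
eps = 0.5
kl = kl_bernoulli((1 + eps) / 2, (1 - eps) / 2)
```

Pinsker's inequality TV ≤ √(KL/2), invoked several times in the proofs below, can be checked directly on these closed forms.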
Moreover, for any a ∈ R, we use δ_a(·) to denote the Dirac delta distribution at the point a, i.e., ∫^∞_{-∞} f(x) δ_a(x) dx = f(a) for any function f : R → R.

Appendix 1.2 Basic Lemmas

In this subsection, we list some basic lemmas that serve as essential tools in our proofs.

Lemma 1 (Sobolev Embedding Theorem (Adams and Fournier 2003)). For some fixed dimension d ∈ N, we have that:
(I) For any s, t ∈ N_0 and p, q ∈ R satisfying s > t, p < d and 1 ≤ p < q ≤ ∞, we have W^{s,p}(R^d) ⊆ W^{t,q}(R^d) when the relation 1/p - s/d = 1/q - t/d holds. In the special case t = 0, we have W^{s,p}(R^d) ⊆ L^q(R^d) for any s ∈ N and p, q ∈ R satisfying 1 ≤ p < q ≤ ∞ and 1/p - s/d ≤ 1/q.
(II) For any α ∈ (0, 1), let β = d/(1 - α) ∈ (d, ∞]. Then we have C^1(R^d) ∩ W^{1,β}(R^d) ⊆ C^α(R^d).

Lemma 2 (Hölder's Inequality). For any fixed domain Ω and p, q ∈ [1, ∞] satisfying 1/p + 1/q = 1, we have ‖fg‖_{L^1(Ω)} ≤ ‖f‖_{L^p(Ω)} ‖g‖_{L^q(Ω)} for any f ∈ L^p(Ω), g ∈ L^q(Ω).

Lemma 3 (Hoeffding's Inequality). Let X_1, X_2, ..., X_n be independent random variables satisfying X_i ∈ [a_i, b_i] for any 1 ≤ i ≤ n. Then for any t > 0, the sum S_n := Σ^n_{i=1} X_i of these n random variables satisfies the following inequalities:

P(S_n ≥ E[S_n] + t) ≤ exp(-2t^2 / Σ^n_{i=1} (b_i - a_i)^2),
P(S_n ≤ E[S_n] - t) ≤ exp(-2t^2 / Σ^n_{i=1} (b_i - a_i)^2).    (12)

Lemma 4 (Data Processing Inequality). Consider a Markov chain X → Z, where X and Z are random variables taking values in the measurable spaces (X, µ) and (Z, ν), respectively. Let K be the transition kernel of the Markov chain, i.e., for any x ∈ X, the probability distribution of Z conditioned on X = x is given by K(·, x). For any two fixed distributions P, Q over X with probability density functions p, q, we use K_P(·) and K_Q(·) to denote the corresponding marginal distributions, i.e., K_P(·) := ∫_X K(·, x) p(x) dµ(x) and K_Q(·) := ∫_X K(·, x) q(x) dµ(x). Then D_f(K_P‖K_Q) ≤ D_f(P‖Q) holds for any f-divergence D_f(·‖·).
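Hoeffding's bound (Lemma 3) is easy to check empirically; the sketch below compares the simulated upper-tail probability of a sum of uniform [0, 1] variables against exp(-2t²/n) (all parameter values here are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t = 200, 5000, 12.0

# `trials` independent realizations of S_n = X_1 + ... + X_n, X_i ~ U[0, 1].
S = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)
empirical = float(np.mean(S >= n * 0.5 + t))   # estimate of P(S_n >= E[S_n] + t)
bound = float(np.exp(-2 * t**2 / n))           # Hoeffding: (b_i - a_i)^2 = 1
```

As expected, the empirical tail probability sits well below the exponential bound, which is loose for sums that are already nearly Gaussian.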
Appendix 2. Proof of Lower Bounds in Section 2

Appendix 2.1 A Key Lemma for Building Minimax Optimal Lower Bounds

In this subsection, we first present the method of two fuzzy hypotheses, which is the most essential tool for establishing all the minimax optimal lower bounds in our paper, before giving our complete proof of Theorem 1.

Lemma 5 (Method of Two Fuzzy Hypotheses). For any fixed θ ∈ Θ, assume that our observation X is distributed as P_θ. Let F̂ be an arbitrary estimator of F(θ) based on X. Let µ_0, µ_1 be two prior measures on Θ. Assume that there exist constants c ∈ R, ∆ ∈ (0, ∞) and β_0, β_1 ∈ [0, 1) such that:

µ_0(θ ∈ Θ : F(θ) ≤ c - ∆) ≥ 1 - β_0,    µ_1(θ ∈ Θ : F(θ) ≥ c + ∆) ≥ 1 - β_1.    (13)

For j ∈ {0, 1}, we use P_j(·) := ∫ P_θ(·) µ_j(dθ) to denote the marginal distribution P_j associated with the prior distribution µ_j. Then we have the following lower bound:

inf_{F̂} sup_{θ ∈ Θ} P_θ(|F̂ - F(θ)| ≥ ∆) ≥ (1 - TV(P_0‖P_1) - β_0 - β_1) / 2.    (14)

Appendix 2.2 Proof of Theorem 1 (Information-Theoretic Lower Bound on Moment Estimation)

In this subsection, we give a detailed proof of the two minimax lower bounds established in Theorem 1 via the method of two fuzzy hypotheses (Lemma 5). We start off by introducing some preliminary tools used in our proof. Consider the function K_0 defined as follows:

K_0(x) := Π^d_{i=1} exp(-1/(1 - x_i^2)) 1(|x_i| ≤ 1),    ∀ x = (x_1, x_2, ..., x_d) ∈ R^d.    (15)

Moreover, we pick the function K satisfying

K(x) := K_0(2x),    ∀ x ∈ R^d.    (16)

From our construction of K and K_0 above, we have that K_0 is in C^∞(R^d) and compactly supported on [-1, 1]^d, so K is compactly supported on [-1/2, 1/2]^d. Furthermore, we set m = (200n)^{1/d} and divide the domain Ω into m^d small cubes Ω_1, Ω_2, ..., Ω_{m^d}, each of which has side length m^{-1}. For any 1 ≤ j ≤ m^d, we use c_j to denote the center of the cube Ω_j. As in the proof sketch of Theorem 1, below we again use w_p to denote the discrete random variable satisfying P(w_p = -1) = p and P(w_p = 1) = 1 - p for any p ∈ (0, 1).
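The bump function K_0 in (15) and its rescaling K in (16) are straightforward to evaluate directly; the sketch below checks the value at the center and the compact supports (coordinates passed as a plain list, d arbitrary):

```python
import math

def K0(x):
    """Bump function (15): C^infty, positive on (-1, 1)^d, zero outside."""
    v = 1.0
    for xi in x:
        if abs(xi) >= 1.0:
            # The indicator kills the factor; the limit as |x_i| -> 1 is 0.
            return 0.0
        v *= math.exp(-1.0 / (1.0 - xi * xi))
    return v

def K(x):
    """Rescaled bump K(x) = K_0(2x) of (16), supported on [-1/2, 1/2]^d."""
    return K0([2.0 * xi for xi in x])
```

Each lower-bound construction below plants translated and rescaled copies of K on the cubes Ω_j, so their supports are disjoint and their Sobolev norms factor cube by cube.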
Furthermore, we use ⃗x := (x_1, x_2, ..., x_n) and ⃗y := (y_1, y_2, ..., y_n) to denote the two n-dimensional vectors formed by the quadrature points and the observed function values, respectively. After introducing all the preliminaries above, let us present the essential parts of our proof. Given that our lower bound in Theorem 1 consists of two terms, the proof is also divided into two parts:

(Case I) For the first lower bound in (3), consider the two functions g_0 and g_1 defined as follows:

g_0(x) ≡ 0 (∀ x ∈ Ω);    g_1(x) = m^{-s + d/p} K(m(x - c_1)) for x ∈ Ω_1, and g_1(x) = 0 otherwise.    (17)

Clearly we have g_0 ∈ W^{s,p}(Ω) and I_q g_0 = 0. Now let us verify that g_1 ∈ W^{s,p}(Ω) for any m. Note that the following bound holds for any t ∈ N^d_0 satisfying |t| ≤ s:

‖D^t g_1‖_{L^p(Ω)} = ( ∫_{Ω_1} |m^{-s + d/p} m^{|t|} (D^t K)(m(x - c_1))|^p dx )^{1/p} = m^{|t| - s + d/p} ( ∫_{[-1/2,1/2]^d} |(D^t K)(y)|^p m^{-d} dy )^{1/p} = m^{|t| - s} ‖D^t K‖_{L^p([-1/2,1/2]^d)} ≲ 1.

This implies g_1 ∈ W^{s,p}(Ω) for any m, as desired. Moreover, computing the q-th moment of g_1 yields

I_q g_1 = ∫_Ω g_1^q(x) dx = ∫_{Ω_1} (m^{-s + d/p} K(m(x - c_1)))^q dx = m^{-q(s - d/p)} ∫_{[-1/2,1/2]^d} (K(y))^q m^{-d} dy = m^{-q(s - d/p) - d} ‖K‖^q_{L^q([-1/2,1/2]^d)}.    (18)

Now let us take ϵ = 1/2 and pick two discrete measures µ_0, µ_1 supported on the finite set {g_0, g_1} ⊂ W^{s,p}(Ω) as below:

µ_0({g_0}) = (1 + ϵ)/2, µ_0({g_1}) = (1 - ϵ)/2;    µ_1({g_0}) = (1 - ϵ)/2, µ_1({g_1}) = (1 + ϵ)/2.    (19)

On the one hand, by taking c = ∆ = (1/2) I_q g_1 and β_0 = β_1 = (1 - ϵ)/2, we may use (19) to deduce that

µ_0(f ∈ W^{s,p}(Ω) : I_q f ≤ c - ∆) = µ_0(I_q f ≤ 0) ≥ (1 + ϵ)/2 = 1 - β_0,
µ_1(f ∈ W^{s,p}(Ω) : I_q f ≥ c + ∆) = µ_1(I_q f ≥ I_q g_1) ≥ (1 + ϵ)/2 = 1 - β_1.    (20)

Hence, we have that (13) holds true.
On the other hand, recall that the quadrature points {x_1, ..., x_n} are independent and identically distributed samples from the uniform distribution on Ω, which enables us to write the marginal distributions in an explicit form as follows:

P_0(⃗x, ⃗y) = [ ((1+ϵ)/2) Π_{i: x_i ∈ Ω_1} δ_0(y_i) + ((1-ϵ)/2) Π_{i: x_i ∈ Ω_1} δ_{g_1(x_i)}(y_i) ] · Π^{m^d}_{j=2} Π_{i: x_i ∈ Ω_j} δ_0(y_i),
P_1(⃗x, ⃗y) = [ ((1-ϵ)/2) Π_{i: x_i ∈ Ω_1} δ_0(y_i) + ((1+ϵ)/2) Π_{i: x_i ∈ Ω_1} δ_{g_1(x_i)}(y_i) ] · Π^{m^d}_{j=2} Π_{i: x_i ∈ Ω_j} δ_0(y_i).    (21)

In particular, we have P_0 = P_1 when the set {i : x_i ∈ Ω_1} is empty. Combining this fact with (21) above allows us to compute the KL divergence between P_0 and P_1: integrating out all coordinates y_i with x_i ∉ Ω_1 reduces the integrand to the two-point mixture supported on Ω_1, so

KL(P_0‖P_1) = ∫_Ω ··· ∫_Ω ∫^∞_{-∞} ··· ∫^∞_{-∞} log(P_0(⃗x, ⃗y)/P_1(⃗x, ⃗y)) P_0(⃗x, ⃗y) dy_1 ··· dy_n dx_1 ··· dx_n
= [ log((1+ϵ)/(1-ϵ)) · (1+ϵ)/2 + log((1-ϵ)/(1+ϵ)) · (1-ϵ)/2 ] P({i : x_i ∈ Ω_1} ≠ ∅)
= ϵ log((1+ϵ)/(1-ϵ)) P({i : x_i ∈ Ω_1} ≠ ∅).    (22)

Moreover, since the probability that {i : x_i ∈ Ω_1} = ∅ equals ((m^d - 1)/m^d)^n = ((m^d - 1)/m^d)^{m^d/200}, we have

P({i : x_i ∈ Ω_1} ≠ ∅) = 1 - (1 - 1/m^d)^{m^d/200} ≤ 1 - [ (1/e)(1 - 1/m^d) ]^{1/200} ≤ 1 - (2e)^{-1/200}.    (23)

Now we may combine (22), (23) and Pinsker's inequality to upper bound the TV distance between P_0 and P_1 as below:

TV(P_0‖P_1) ≤ √(KL(P_0‖P_1)/2) ≤ √( ((1 - (2e)^{-1/200})/2) · ϵ log((1+ϵ)/(1-ϵ)) ) ≤ √(3ϵ^2/100) = (√3/10) ϵ.    (24)

Finally, by substituting (18), (24), ∆ = (1/2) I_q g_1 and β_0 = β_1 = (1 - ϵ)/2 = 1/4 into (14) and applying Markov's
inequality, we obtain the final lower bound

inf_{Ĥ^q ∈ H^{f,q}_n} sup_{f ∈ W^{s,p}(Ω)} E_{{x_i}^n_{i=1}, {y_i}^n_{i=1}} |Ĥ^q({x_i}^n_{i=1}, {y_i}^n_{i=1}) - I_q f|
≥ ∆ · inf_{Ĥ^q ∈ H^{f,q}_n} sup_{f ∈ W^{s,p}(Ω)} P_{{x_i}^n_{i=1}, {y_i}^n_{i=1}}( |Ĥ^q({x_i}^n_{i=1}, {y_i}^n_{i=1}) - I_q f| ≥ ∆ )
≥ (1/2) I_q g_1 · (1 - TV(P_0‖P_1) - β_0 - β_1)/2 ≥ (1/4)(1 - √3/10) ϵ I_q g_1 = (1/8)(1 - √3/10)(200n)^{-(q/d)(s - d/p) - 1} ‖K‖^q_{L^q([-1/2,1/2]^d)} ≳ n^{-q(s/d - 1/p) - 1},    (25)

which is exactly the first term on the RHS of (3).

(Case II) Now let us proceed to prove the second lower bound in (3). For any 1 ≤ j ≤ m^d, consider first the function f_j defined as follows:

f_j(x) = m^{-s} K(m(x - c_j)) for x ∈ Ω_j, and f_j(x) = 0 otherwise,    (26)

which satisfies supp(f_j) ⊆ Ω_j, f_j ∈ C^∞(Ω) and f_j(x) ≥ 0 (∀ x ∈ Ω). We further pick two constants α, M satisfying α := ‖K‖_{L^∞([-1/2,1/2]^d)} and M = 3α. Now consider the following finite set of 2^{m^d} functions:

S := { M + Σ^{m^d}_{j=1} η_j f_j : η_j ∈ {±1}, ∀ 1 ≤ j ≤ m^d }.    (27)

We first verify that any element of S lies in W^{s,p}(Ω) for any m. Note that for any η_j ∈ {±1} (1 ≤ j ≤ m^d) and any t ∈ N^d_0 satisfying |t| ≤ s, we have

‖D^t(M + Σ^{m^d}_{j=1} η_j f_j)‖^p_{L^p(Ω)} ≤ (M + ‖Σ^{m^d}_{j=1} η_j D^t f_j‖_{L^p(Ω)})^p ≤ 2^p (M^p + ‖Σ^{m^d}_{j=1} η_j D^t f_j‖^p_{L^p(Ω)}) ≲ M^p + Σ^{m^d}_{j=1} ‖D^t f_j‖^p_{L^p(Ω_j)} = M^p + Σ^{m^d}_{j=1} ∫_{Ω_j} |m^{-s+|t|} (D^t K)(m(x - c_j))|^p dx = M^p + m^{(|t|-s)p} Σ^{m^d}_{j=1} ∫_{[-1/2,1/2]^d} |(D^t K)(y)|^p m^{-d} dy ≤ M^p + ‖D^t K‖^p_{L^p([-1/2,1/2]^d)} ≲ 1.

For some κ ∈ (0, 1) to be specified later, we take {w^{(0)}_j}^{m^d}_{j=1} and {w^{(1)}_j}^{m^d}_{j=1} to be independent and identical copies of w_{(1+κ)/2} and w_{(1-κ)/2}, respectively. Then we define µ_0, µ_1 to be two discrete measures supported on the finite set S such that the following condition holds for any η_j ∈ {±1} (1 ≤ j ≤ m^d):

µ_k(M + Σ^{m^d}_{j=1} η_j f_j) = Π^{m^d}_{j=1} P(w^{(k)}_j = η_j),    k ∈ {0, 1}.    (28)

In order to determine the separation distance ∆ between the two priors µ_0 and µ_1, we need to define the two quantities A := ∫_{Ω_j} (M + f_j(x))^q dx and B := ∫_{Ω_j} (M - f_j(x))^q dx, which both remain the same for any 1 ≤ j ≤ m^d.
Now consider deriving a lower bound on the quantity ∆ ′ := A -B > 0. Note that for any fixed j ∈ {1, 2, · · · , m d }, we have M > 2α ≥ 2m -s ||K|| L ∞ ([-1 2 , 1 2 ] d ) = 2||f j || L ∞ (Ω j ) , which implies M + y > 1 2 M > 0 for any y ∈ [-||f j || L ∞ (Ω j ) , ||f j || L ∞ (Ω j ) ] . This helps us obtain the following lower bound on ∆ ′ : ∆ ′ = Ω j (M + f j (x)) q dx - Ω j (M -f j (x)) q dx = Ω j f j (x) -f j (x) q(M + y) q-1 dy dx ≥ Ω j f j (x) -f j (x) q( 1 2 M) q-1 dy dx = q 2 q-1 M q-1 Ω j 2f j (x) dx ≳ Ω j f j (x)dx = Ω j m -s K(m(x -c j ))dx = m -s [-1 2 , 1 2 ] d K(y) 1 m d dy = m -s-d ||K|| L 1 ([-1 2 , 1 2 ] d ) .(29) Moreover, let us pick λ = 1 2 and apply Hoeffding's Inequality (Lemma 3) to the bounded random variables {w (0) j } m d j=1 and {w (1) j } m d j=1 to deduce that P m d j=1 w (0) j ≥ -(1 -λ)m d κ ≤ exp - 2(λm d κ) 2 4m d = exp - 1 2 λ 2 κ 2 m d , P m d j=1 w (1) j ≤ (1 -λ)m d κ ≤ exp - 2(λm d κ) 2 4m d = exp - 1 2 λ 2 κ 2 m d .(30) By taking c := m d 2 (A + B), ∆ := (1λ)κm d (A -B) = (1λ)κm d ∆ ′ and β 0 = β 1 = exp -1 2 λ 2 κ 2 m d , we may combine (29) and (30) justified above to get that µ 0 (f ∈ W s,p (Ω) : I q f ≤ c -∆) = P m d j=1 I q M+w (0) j f j ≤ 1 -(1 -λ)κ 2 m d A + 1 + (1 -λ)κ 2 m d B ≥ P m d j=1 w (0) j ≤ -(1 -λ)m d κ = 1 -P m d j=1 w (0) j ≥ -(1 -λ)m d κ ≥ 1 -exp - 1 2 λ 2 κ 2 m d = 1 -β 0 , µ 1 (f ∈ W s,p (Ω) : I q f ≥ c + ∆) = P m d j=1 I q M+w (1) j f j ≥ 1 + (1 -λ)κ 2 m d A + 1 -(1 -λ)κ 2 m d B ≥ P m d j=1 w (1) j ≥ (1 -λ)m d κ = 1 -P m d j=1 w (0) j ≤ (1 -λ)m d κ ≥ 1 -exp - 1 2 λ 2 κ 2 m d = 1 -β 1 ,(31) which indicates that (13) holds true. Now let's consider bounding the KL divergence between the two marginal distributions P 0 , P 1 associated with µ 0 , µ 1 , respectively. 
Using the fact that {x 1 , · · · , x n } are identical and independent samples from the uniform distribution on Ω again allows us to write the marginal distributions in an explicit form as follows: P 0 (⃗ x,⃗ y) = m d j=1 1 + κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1 -κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) , P 1 (⃗ x,⃗ y) = m d j=1 1 -κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1 + κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) .(32) Furthermore, for any n quadrature points {x i } n i=1 , we use J n to denote the set of all indices j satisfying that Ω j contains at least one of the points in {x i } n i=1 , i.e, J n := J n (x 1 , · · · , x n ) = j : 1 ≤ j ≤ m d and Ω j ∩ {x 1 , · · · , x n } ̸ = ∅(33) Given that m d = 200n > n, we have |J n | ≤ n for any n quadrature points {x i } n i=1 . Using this upper bound on |J n | allows us to bound the KL divergence between P 0 and P 1 in the following way: KL(P 0 ||P 1 ) = Ω · · · Ω ∞ -∞ · · · ∞ -∞ log P 0 (⃗ x,⃗ y) P 1 (⃗ x,⃗ y) P 0 (⃗ x,⃗ y)dy 1 · · · dy n dx 1 · · · dx n = Ω · · · Ω ∞ -∞ · · · ∞ -∞ log m d j=1 1+κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1-κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) 1-κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1+κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) · m d j=1 1 + κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1 -κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) n i=1 dy i n i=1 dx i = Ω · · · Ω ∞ -∞ · · · ∞ -∞ log j∈Jn 1+κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1-κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) 1-κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1+κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) · j∈Jn 1 + κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1 -κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) n i=1 dy i n i=1 dx i = Ω · · · Ω j∈Jn ∞ -∞ · · · ∞ -∞ log 1+κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1-κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) 1-κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1+κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) · 1 + κ 2 i:x i ∈Ω j δ M-f j (x i ) (y i ) + 1 -κ 2 i:x i ∈Ω j δ M+f j (x i ) (y i ) i:x i ∈Ω j dy i n i=1 dx i = Ω · · · Ω |J n | log 1 + κ 1 -κ 1 + κ 2 + log 1 -κ 
/(1+κ)) · (1-κ)/2 ] Π^n_{i=1} dx_i ≤ nκ log((1+κ)/(1-κ)).    (34)

Now we may combine (34) and Pinsker's inequality to upper bound the TV distance between P_0 and P_1 as below:

TV(P_0‖P_1) ≤ √(KL(P_0‖P_1)/2) ≤ √((nκ/2) log((1+κ)/(1-κ))) ≤ √(3n/2) κ = 1/3,    (35)

where the last equality follows from the choice κ = √(2/(27n)). Finally, by substituting (29), (35), ∆ = (1 - λ)κ m^d ∆' and β_0 = β_1 = exp(-λ^2 κ^2 m^d / 2) = exp(-50/27) < 1/6 into (14) and applying Markov's inequality, we obtain the final lower bound

inf_{Ĥ^q ∈ H^{f,q}_n} sup_{f ∈ W^{s,p}(Ω)} E_{{x_i}^n_{i=1}, {y_i}^n_{i=1}} |Ĥ^q({x_i}^n_{i=1}, {y_i}^n_{i=1}) - I_q f|
≥ ∆ · inf_{Ĥ^q ∈ H^{f,q}_n} sup_{f ∈ W^{s,p}(Ω)} P_{{x_i}^n_{i=1}, {y_i}^n_{i=1}}( |Ĥ^q({x_i}^n_{i=1}, {y_i}^n_{i=1}) - I_q f| ≥ ∆ )
≥ (1 - λ)κ m^d ∆' · (1 - TV(P_0‖P_1) - β_0 - β_1)/2 ≥ (1/2) · √(2/(27n)) · (200n) · ∆'/6 ≳ √n ∆' ≳ √n (200n)^{-(s+d)/d} ‖K‖_{L^1([-1/2,1/2]^d)} ≳ n^{-s/d - 1/2},    (36)

which is exactly the second term on the RHS of (3). Combining the two lower bounds proved in (25) and (36) concludes our proof of Theorem 1.

Appendix 3. Proof of Upper Bounds in Section 3

Appendix 3.1 Proof of Theorem 2 (Regression-Adjusted Control Variate)

In this subsection, we present a detailed proof of Theorem 2. With the first half of the quadrature points {x_i}^{n/2}_{i=1} and observed function values {y_i}^{n/2}_{i=1} as inputs, we pick the regression-adjusted control variate f̂_{1:n/2} to be the estimator returned by the oracle K_{n/2} specified in Assumption 3.1.
Moreover, we use the following expression to denote the variance of the functionf q 1: n 2 (x)f q (x) with respect to the uniform distribution on Ω: Var(f q 1: n 2 -f q ) := Ω (f q (x) -f q 1: n 2 (x)) 2 dx - Ω (f q (x) -f q 1: n 2 (x))dx 2 .(37) By plugging in the expression ofĤ q C , I q f and using the fact that {x i } n i=1 are identical and independent copies of the uniform random variable over Ω, we have E {x i } n i=1 ,{y i } n i=1 Ĥ q C {x i } n i=1 , {y i } n i=1 -I q f 2 = E {x i } n i=1 Ωf q 1: n 2 (x)dx + 2 n n i= n 2 +1 f q (x i ) -f q 1: n 2 (x i ) - Ω f q (x)dx 2 = E {x i } n 2 i=1 E {x i } n i= n 2 +1 1 n 2 n i= n 2 +1 f q (x i ) -f q 1: n 2 (x i ) - Ω (f q (x) -f q 1: n 2 (x))dx 2 = E {x i } n 2 i=1 4 n 2 n i= n 2 +1 E x i f q (x i ) -f q 1: n 2 (x i ) - Ω (f q (x) -f q 1: n 2 (x))dx 2 = E {x i } n 2 i=1 4 n 2 n i= n 2 +1 Var(f q 1: n 2 -f q ) = 2 n E {x i } n 2 i=1 Var(f q 1: n 2 -f q ) .(38) From the identity above, we know that it suffices to upper bound the term E {x i } n 2 i=1 Var(f E {x i } n 2 i=1 Var(f q 1: n 2 -f q ) = E {x i } n 2 i=1 Ω (f q (x) -f q 1: n 2 (x)) 2 dx - Ω (f q (x) -f q 1: n 2 (x))dx 2 ≤ E {x i } n 2 i=1 Ω f q (x) -f q 1: n 2 (x) 2 dx = E {x i } n 2 i=1 Ω (f (x) + g 1: n 2 (x)) q -f q (x) 2 dx = E {x i } n 2 i=1 Ω g 1: n 2 (x) 0 q(f (x) + y) q-1 dy 2 dx ≤ E {x i } n 2 i=1 Ω g 1: n 2 (x) 0 1dy g 1: n 2 (x) 0 q 2 (|f (x) + y| 2 ) q-1 dy dx ≲ E {x i } n 2 i=1 Ω |g 1: n 2 (x)| · |g 1: n 2 (x)| max |f 2q-2 (x)|, |g 2q-2 1: n 2 (x)| dx ≲ E {x i } n 2 i=1 Ω |g 2q 1: n 2 (x)|dx + E {x i } n 2 i=1 Ω |g 2 1: n 2 (x)f 2q-2 (x)|dx .(39) Now let's proceed to bound from above the two expected integrals in the last line of (39). 
For the first expected integral, since s > (2dq - dp)/(2pq) implies 1/(2q) > (d - sp)/(pd), we may apply (5) in Assumption 3.1 to deduce that

E_{{x_i}^{n/2}_{i=1}} ∫_Ω |g_{1:n/2}(x)|^{2q} dx = E_{{x_i}^{n/2}_{i=1}} ‖f̂_{1:n/2} - f‖^{2q}_{L^{2q}(Ω)} ≲ ((n/2)^{-s/d + (1/p - 1/(2q))_+})^{2q} ≲ n^{2q(-s/d + 1/p - 1/(2q))} = n^{2q(1/p - s/d) - 1},    (40)

where the last equality above follows from the given assumption that p < 2q. Now let us proceed to bound from above the second expected integral in (39). Here we define p* = (max{1/p - s/d, 0})^{-1}, i.e., p* = pd/(d - sp) when s < d/p and p* = ∞ otherwise. From the Sobolev Embedding Theorem (Lemma 1), we have W^{s,p}(Ω) ⊆ L^{p*}(Ω). Based on the value of the smoothness parameter s, we have three separate cases as below:

(Case I) When s ∈ (d/p, ∞), we have p* = ∞ and f ∈ W^{s,p}(Ω) ⊂ L^∞(Ω). Since f̂_{1:n/2} and f are both in the Sobolev space W^{s,p}(Ω) ⊆ L^∞(Ω), we may further deduce that g_{1:n/2} = f̂_{1:n/2} - f ∈ W^{s,p}(Ω) ⊆ L^∞(Ω) ⊆ L^2(Ω). By picking r = 2 in (5) of Assumption 3.1, we may use the facts that p > 2 and f ∈ L^∞(Ω) to deduce that

E_{{x_i}^{n/2}_{i=1}} ∫_Ω |g^2_{1:n/2}(x) f^{2q-2}(x)| dx ≲ E_{{x_i}^{n/2}_{i=1}} ∫_Ω |g^2_{1:n/2}(x)| dx = E_{{x_i}^{n/2}_{i=1}} ‖f̂_{1:n/2} - f‖^2_{L^2(Ω)} ≲ (n^{-s/d + (1/p - 1/2)_+})^2 = n^{-2s/d},    (41)

which is our final upper bound on the second expected integral in (39) under the assumption that s ∈ (d/p, ∞).

(Case II) When s ∈ (d(2q-p)/(p(2q-2)), d/p), we have p* = pd/(d - sp) > p(2q-2)/(p-2), which implies f ∈ W^{s,p}(Ω) ⊆ L^{p*}(Ω) ⊆ L^{p(2q-2)/(p-2)}(Ω) ⊆ L^p(Ω). Given that p/(p-2) > 1, we can further deduce that f^{2q-2} ∈ L^{p/(p-2)}(Ω). Moreover, since f̂_{1:n/2} ∈ W^{s,p}(Ω) ⊆ L^p(Ω), we have g_{1:n/2} = f̂_{1:n/2} - f ∈ L^p(Ω). Given that p > 2, we can further deduce that g^2_{1:n/2} ∈ L^{p/2}(Ω).
Then we may apply Hölder's inequality (Lemma 2) to g 2 1: n 2 ∈ L p 2 (Ω) and f 2q-2 ∈ L p p-2 (Ω) to obtain that E {x i } n 2 i=1 Ω |g 2 1: n 2 (x)f 2q-2 (x)|dx = E {x i } n 2 i=1 ||g 2 1: n 2 f 2q-2 || L 1 (Ω) ≤ E {x i } n 2 i=1 |g 2 1: n 2 | L p 2 (Ω) |f 2q-2 | L p p-2 (Ω) ≤ |f | 2q-2 L p(2q-2) p-2 (Ω) E {x i } n 2 i=1 |g 1: n 2 | 2 L p (Ω) .(42) Note that the function h(t) = t 2 p is concave and 1 p ∈ ( d-sp pd , 1] when p > 2. Hence, applying Jensen's inequality and picking r = p in (5) of Assumption 3.1 further allows us to upper bound the last term in (42) as follows: E {x i } n 2 i=1 Ω |g 2 1: n 2 (x)f 2q-2 (x)|dx = E {x i } n 2 i=1 ||g 2 1: n 2 f 2q-2 || L 1 (Ω) ≤ E {x i } n 2 i=1 |g 2 1: n 2 | L p * p * +2-2q (Ω) |f 2q-2 | L p * 2q-2 (Ω) ≤ |f | 2q-2 L p * (Ω) E {x i } n 2 i=1 |g 1: n 2 | 2 L 2p * p * +2-2q (Ω) .(45) Note that the function ω(t) = t p * +2-2q p * is concave since q ≥ 1. Moreover, using the given assumption d(2q-p) p(2q-2) ) we get that pd d-sp > 2q, which further yields s ∈ ( d(2q-p) 2pq ,p * + 2 -2q 2p * = pd d-sp + 2 -2q 2 pd d-sp > 2 2 pd d-sp = d -sp pd , i.e, (p * +2-2q) 2p * ∈ ( d-sp pd , 1 ]. Hence, we may apply Jensen's inequality and (5) in Assumption 3.1 to upper-bound the last term in (45) as follows: Combining the upper bounds derived in (40), (41), (44) and (47) E {x i } n 2 i=1 Var(f q 1: n 2 -f q ) ≲ n 2q( 1 p -s d )-1 + max{n -2s d , n 2q( 1 p -s d )-1 }.(48) Finally, substituting (48) into 38) derived at the beginning gives us the final upper bound: E {x i } n i=1 ,{y i } n i=1 Ĥ q C {x i } n i=1 , {y i } n i=1 -I q f ≤ E {x i } n i=1 ,{y i } n i=1 Ĥ q C {x i } n i=1 , {y i } n i=1 -I q f 2 = 2 n E {x i } n 2 i=1 Var(f q 1: n 2 -f q ) ≲ n -1 2 n 2q( 1 p -s d )-1 + max{n -2s d , n 2q( 1 p -s d )-1 } ≲ max{n -s d -1 2 , n -q( s d -1 p )-1 }.(49) This concludes our proof of Theorem 2. Appendix 3.2 Proof of Theorem 3 (Truncated Monte Carlo) In this subsection, we provide a complete proof of Theorem 3. 
For any fixed parameter M > 0, we may divide Ω into the following two regions: Ω + M := {x ∈ Ω : |f (x)| ≥ M}, Ω - M := {x ∈ Ω : |f (x)| < M},(50) where Ω + M ∩ Ω -M = ∅ and Ω + M ∪ Ω -M = Ω. Let f M (x) := max min{f (x), M}, -M (∀ x ∈ Ω) denote a truncated version of the given function f , where M is the threshold. Also, we use the following expression to denote the expectation of the q-th power of the truncated function f M with respect to the uniform distribution on Ω: E(f q M (x)) = Ω max min{f (x), M}, -M q dx = Ω + M M q dx + Ω - M f (x) q dx,(51) where the last identity in (51) above follows from our definition of the two regions defined in (50). In a similar way, we can define the variance of the function f q M as below: Var(f q M (x)) = E(f 2q M (x)) -E(f q M (x)) 2 = Ω max min{f (x), M}, -M 2q dx - Ω max min{f (x), M}, -M q dx 2 .(52) Furthermore, as {x i } n i=1 are identical and independent samples of the uniform distribution on Ω, we have that for any 1 ≤ i ≤ n, the following identity holds E {x i } n i=1 ,{y i } n i=1 Ĥ q M {x i } n i=1 , {y i } n i=1 = E {x i } n i=1 ,{y i } n i=1 1 n n i=1 max min{y i , M}, -M q = E x i max min{f (x i ), M}, -M q = E x i [f q M (x i )] = E(f q M (x)).(53) Now we may use (53) and the bias-variance decomposition to derive an upper bound on the squared expected risk of the estimatorĤ q M as follows: E {x i } n i=1 ,{y i } n i=1 Ĥ q M {x i } n i=1 , {y i } n i=1 -I q f 2 = E {x i } n i=1 ,{y i } n i=1 Ĥ q M {x i } n i=1 , {y i } n i=1 -E(f q M (x)) + E(f q M (x)) -I q f 2 ≤ 2E {x i } n i=1 ,{y i } n i=1 1 n n i=1 max min{y i , M}, -M q -E(f q M (x)) 2 + 2E {x i } n i=1 ,E z,{x i } n i=1 ||z -x i (z) k || 2 ≲ ( k n ) 2 d .(59) function K defined in (15) and (16) above, which satisfies supp(K) ⊆ [-1 2 , 1 2 ] d and K ∈ C ∞ ([-1 2 , 1 2 ] d ). 
In an analogous way, for any 1 ≤ j ≤ m d , we associate each cube Ω j with a bump function f j defined as follows: f j (x) = m -s K(m(x -c j )) (x ∈ Ω j ), 0 (otherwise),(65) where supp(f j ) ⊆ Ω j , f j ∈ C ∞ (Ω) and f j (x) ≥ 0 (∀ x ∈ Ω). Then let's consider the following finite set of 2 m d functions: S := m d j=1 η j f j : η j ∈ {±1}, ∀ 1 ≤ j ≤ m d .(66) We will first verify that S ⊆ C s (Ω). Fix any element f * = m d j=1 η j f j ∈ S. On the one hand, from our construction of the f j 's given in (65) above, we have max |t|≤⌊s⌋ ||D t f * || L ∞ (Ω) = max |t|≤⌊s⌋ m -s+|t| ||D t K|| L ∞ ([-1 2 , 1 2 ] d ) ≤ max |t|≤⌊s⌋ ||D t K|| L ∞ ([-1 2 , 1 2 ] d ) .(67) On the other hand, for any 1 ≤ i ̸ = j ≤ m d , we consider the function ψ i f i + ψ j f j , where the scalars ψ j , ψ j ∈ {0, ±1}. Now let's may pick β := d 1-{s} , where {s} = s -⌊s⌋ ∈ (0, 1) denotes the fractional part of s. Given that f j ∈ C ∞ (Ω), we may upper bound the Sobolev norm || · || W 1,β of the function D t (ψ i f i + ψ j f j ) for any t ∈ N d 0 satisfying |t| = ⌊s⌋ as follows: |D t (ψ i f i + ψ j f j ) | β W 1,β (Ω) = |ψ i | β |D t f i | β W 1,β (Ω i ) + |ψ j | β |D t f j | β W 1,β (Ω j ) ≤ |D t f i | β L β (Ω i ) + d r=1 | ∂ ∂x r D t f i | β L β (Ω i ) + |D ⌊s⌋ f j | β L β (Ω j ) + d r=1 | ∂ ∂x r D t f j | β L β (Ω j ) = l∈{i,j} Ω l m -s+|t| D t K(m(x -c l )) β dx + l∈{i,j} d r=1 Ω l m -s+|t|+1 ∂ ∂x r D t K(m(x -c l )) β dx.(68) From our choice of β and assumption on the bump function K, we may further upper bound the Sobolev norm |D t (ψ i f i + ψ j f j ) | W 1,β (Ω) as below: |D t (ψ i f i + ψ j f j ) | β W 1,β (Ω) ≤ l∈{i,j} m -β{s} [-1 2 , 1 2 ] d D t K(y) β 1 m d dy + l∈{i,j} dm β(1-{s}) sup |t ′ |≤⌊s⌋+1 [-1 2 , 1 2 ] d D t ′ K(y) β 1 m d dy ≤ 2m -β{s}-d |D t K | β L β ([-1 2 , 1 2 ] d ) + 2dm β(1-{s})-d · sup |t ′ |≤⌊s⌋+1 |D t ′ K | β L β ([-1 2 , 1 2 ] d ) ≲ |D t K | β L β ([-1 2 , 1 2 ] d ) + sup |t ′ |≤⌊s⌋+1 |D t ′ K | β L β ([-1 2 , 1 2 ] d ) ,(69) where the last inequality above 
follows from our choice of β. From (69) and the second part of the Sobolev Embedding Theorem (Lemma 1), we can deduce that D t (ψ i f i + ψ j f j ) ∈ C 1 (Ω) ∩ W 1, d 1-{s} (Ω) ⊆ C {s} (Ω) and the following inequality holds: |D t (ψ i f i + ψ j f j ) | C {s} (Ω) ≲ |D t (ψ i f i + ψ j f j ) | W 1,β (Ω) ≲ sup |t ′ |=⌊s⌋ |D t ′ K | β L β ([-1 2 , 1 2 ] d ) + sup |t ′ |=⌊s⌋+1 |D t ′ K | β L β ([-1 2 , 1 2 ] d ) 1 β ,(70) Furthermore, combining (70) with our construction of the f j 's given in (65) |D t f * (x) -D t f * (y)| ||x -y|| s-⌊s⌋ ≤ max 1≤i̸ =j≤k ψ i ,ψ j ∈{0,±1} max |t|=⌊s⌋ sup x̸ =y∈Ω |D t (ψ i f i + ψ j f j )(x) -D t (ψ i f i + ψ j f j )(x)| ||x -y|| {s} ≤ max 1≤i̸ =j≤k,|t|=⌊s⌋ ψ i ,ψ j ∈{0,±1} |D t (ψ i f i + ψ j f j ) | C {s} (Ω) ≲ sup |t ′ |=⌊s⌋ |D t ′ K | β L β ([-1 2 , 1 2 ] d ) + sup |t ′ |=⌊s⌋+1 |D t ′ K | β L β ([-1 2 , 1 2 ] d ) 1 β(71) Finally, adding the two inequalities (67) and (71) gives us that for any f * ∈ S, we have ||f * || C s (Ω) = max |t|≤⌊s⌋ ||D t f * || L ∞ (Ω) + max |t|=⌊s⌋ sup x,y∈Ω,x̸ =y |D t f * (x) -D t f * (y)| ||x -y|| s-⌊s⌋ ≲ max |t|≤⌊s⌋ ||D t K|| L ∞ ([-1 2 , 1 2 ] d ) + sup |t ′ |=⌊s⌋ |D t ′ K | β L β ([-1 2 , 1 2 ] d ) + sup |t ′ |=⌊s⌋+1 |D t ′ K | β L β ([-1 2 , 1 2 ] d ) 1 β ≲ 1.(72) From the arbitrariness of f * , we can then deduce that S ⊆ C s (Ω), as desired. For any p ∈ (0, 1), below we again use w p to denote the discrete random variable satisfying P(w p = -1) = p and P(w p = 1) = 1p. (1) j } m d j=1 to be independent and identical copies of w 1+κ 2 and w 1-κ 2 respectively. Then we define µ 0 , µ 1 to be two discrete measures supported on the finite set S such that the following condition holds for any η j ∈ {±1} (1 ≤ j ≤ m d ): µ k m d j=1 η j f j = m d j=1 P(w (k) j = η j ), k ∈ {0, 1}.(73) Then we proceed to determine the separation distance ∆ between the two priors µ 0 and µ 1 . 
Similar to what we did in the proof of Theorem 1, we need to first define the following quantity C := Ω j f j (x)dx, which remains the same for any 1 ≤ j ≤ m d . Moreover, applying (65) helps us evaluate the quantity C directly as follows C = Ω j f j (x)dx = Ω j m -s K(m(x -c j ))dx =j ≥ -(1 -λ)m d κ ≤ exp - 2(λm d κ) 2 4m d = exp - 1 2 λ 2 κ 2 m d , which indicates that (13) holds true. Now let's consider bounding the KL divergence between the two marginal distributions P 0 , P 1 associated with µ 0 , µ 1 , respectively. Applying the fact that {x 1 , · · · , x n } and {ϵ 1 , · · · , ϵ n } are identical and independent samples from the uniform distribution on Ω and the normal distribution N (0, n -2γ ) allows us to write the marginal distributions in an explicit form as follows: P 0 (⃗ x,⃗ y) = m d j=1 1 -κ 2 i:x i ∈Ω j 1 √ 2πn -γ e - (y i -f j (x i )) 2 2n -2γ + 1 + κ 2 i:x i ∈Ω j 1 √ 2πn -γ e - (y i +f j (x i )) 2 2n -2γ , P 1 (⃗ x,⃗ y) = m d j=1 1 + κ 2 i:x i ∈Ω j 1 √ 2πn -γ e - (y i -f j (x i )) 2 2n -2γ + 1 -κ 2 i:x i ∈Ω j 1 √ 2πn -γ e - (y i +f j (x i )) 2 2n -2γ .(77) Furthermore, for any n fixed quadrature points ⃗ x = (x 1 , x 2 , · · · , x n ), we use P k (· | ⃗ x) to denote the marginal distribution of the observed function values ⃗ y = (y 1 , y 2 , · · · , y n ) conditioned on ⃗ x for k ∈ {0, 1}. Since {x i } n i=1 are identically and independently sampled from the uniform distribution on Ω, we have that the two probability densities P k (⃗ x,⃗ y) and P k (⃗ y | ⃗ x) have the same mathematical expression for any k ∈ {0, 1}. Then we may further rewrite the KL divergence between the two marginal distributions P 0 , P 1 as follows: KL(P 0 ||P 1 ) = Ω · · · Ω ∞ -∞ · · · ∞ -∞ log P 0 (⃗ x,⃗ y) P 1 (⃗ x,⃗ y) P 0 (⃗ x,⃗ y)dy 1 · · · dy n dx 1 · · · dx n = Ω · · · Ω ∞ -∞ · · · ∞ -∞ log P 0 (⃗ y | ⃗ x) P 1 (⃗ y | ⃗ x) P 0 (⃗ y | ⃗ x)dy 1 · · · dy n dx 1 · · · dx n = Ω · · · Ω KL P 0 (· | ⃗ x)||P 1 (· | ⃗ x) dx 1 · · · dx n . 
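The tail bound just invoked is an instance of Hoeffding's inequality for the bounded variables w_j^{(0)} ∈ {±1} with mean −κ: exceeding −(1−λ)m^d κ, i.e. deviating by λm^d κ above the mean, has probability at most exp(−λ²κ²m^d/2). Since Σ_j w_j^{(0)} = N − 2X with X ~ Binomial(N, (1+κ)/2), the exact tail can be enumerated and compared with the bound (N = 40, λ = 1/2, κ = 0.3 are illustrative values of our choosing):

```python
from math import comb, exp, floor

def exact_tail(N, kappa, lam):
    # P( sum_j w_j >= -(1 - lam) * N * kappa ), where sum_j w_j = N - 2X, X ~ Bin(N, (1+kappa)/2)
    p = (1 + kappa) / 2
    # sum >= -(1-lam)*N*kappa  <=>  X <= (N + (1-lam)*N*kappa) / 2
    threshold = floor((N + (1 - lam) * N * kappa) / 2)
    return sum(comb(N, x) * p ** x * (1 - p) ** (N - x) for x in range(threshold + 1))

N, kappa, lam = 40, 0.3, 0.5
bound = exp(-0.5 * lam ** 2 * kappa ** 2 * N)   # the Hoeffding bound exp(-lam^2 kappa^2 m^d / 2)
assert exact_tail(N, kappa, lam) <= bound
```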
It now remains to upper bound the KL divergence between the two conditional distributions P 0 (· | ⃗ x) and P 1 (· | ⃗ x) for any fixed ⃗ x = (x 1 , · · · , x n ). In order to derive such an upper bound, we need to introduce the following notations first. For any n quadrature points {x i } n i=1 , we use J n to denote the set of all indices j satisfying that Ω j contains at least one of the points in {x i } n i=1 , i.e, J n := J n (⃗ x) = j : 1 ≤ j ≤ m d and Ω j ∩ {x 1 , · · · , x n } ̸ = ∅ . Moreover, we use ⃗ ω Furthermore, for any fixed quadrature points ⃗ x = (x 1 , · · · , x n ) and weights ⃗ ω Jn := {ω j : j ∈ J n } ⊆ Combining the expressions in (77), (80) and (81) allows us to rewrite the two conditional distributions P k (· | ⃗ x) as below: P k (⃗ y |⃗ x) = P k (⃗ x,⃗ y) = {±1} |Jn| G(⃗ x, ⃗ ω Jn )p (k) Jn ( ⃗ ω Jn )d ⃗ ω Jn(82) where k ∈ {0, 1}. Applying the data processing inequality (Lemma 4) to (82) above then enables us to derive the following upper bound on KL P 0 (· | ⃗ x)||P 1 (· | ⃗ x) for any n fixed quadrature points ⃗ x = (x 1 , · · · , x n ): KL P 0 (· | ⃗ x)||P 1 (· | ⃗ x) ≤ KL p (0) Jn || p (1) Jn = |J n | log 1 + κ 1 -κ 1 + κ 2 + log 1 -κ 1 + κ 1 -κ 2 ≤ nκ log 1 + κ 1 -κ(83) where the equality in (83) (1) j } m d j=1 are independent and identical copies of w 1+κ 2 and w 1-κ 2 respectively. The last inequality of (83) above, however, is deduced from the fact that m d = 200n > n, which implies |J n | ≤ n for any n quadrature points {x i } n i=1 . Substituting (83) into (78) and applying Pinkser's inequality yields the final upper bound on the TV distance between P 0 and P 1 : TV(P 0 ||P 1 ) ≤ 1 2 KL(P 0 ||P 1 ) ≤ Ω · · · Ω nκ 2 log 1 + κ 1κ dx 1 · · · dx n = nκ 2 log 1 + κ 1κ ≤ 3n 2 κ = 1 3 . 
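The per-index KL term in (83) has the closed form KL(w_{(1+κ)/2} || w_{(1−κ)/2}) = κ log((1+κ)/(1−κ)), which behaves like 2κ² for small κ; hence choosing κ = Θ(n^{−1/2}) keeps the total bound nκ log((1+κ)/(1−κ)), and with it the Pinsker bound on the TV distance, at a constant level. A numeric check (the constant c = 0.3 is arbitrary):

```python
from math import log, sqrt

def kl_bern_pm(kappa):
    # KL between the +/-1 variables with P(-1) = (1+kappa)/2 and P(-1) = (1-kappa)/2
    p, q = (1 + kappa) / 2, (1 - kappa) / 2
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

# agrees with the closed form kappa * log((1+kappa)/(1-kappa)) appearing in (83)
for kappa in (0.01, 0.1, 0.3):
    assert abs(kl_bern_pm(kappa) - kappa * log((1 + kappa) / (1 - kappa))) < 1e-12

def tv_bound(n, kappa):
    # Pinsker: TV <= sqrt(KL/2), with KL <= n * kl_bern_pm(kappa) from the data processing step
    return sqrt(0.5 * n * kl_bern_pm(kappa))

c = 0.3
b1, b2 = tv_bound(100, c / sqrt(100)), tv_bound(10_000, c / sqrt(10_000))
assert abs(b1 - b2) < 1e-2      # kappa ~ n^{-1/2} makes the bound essentially n-independent
assert b1 < 0.5                 # small enough to leave a constant success probability
```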
Finally, by substituting (74) E {x i } n i=1 ,{y i } n i=1 Ĥ {x i } n i=1 , {y i } n i=1 -I f ≥ ∆ inf H∈H f n sup f ∈C s (Ω) P {x i } n i=1 ,{y i } n i=1 Ĥ {x i } n i=1 , {y i } n i=1 -I f ≥ ∆ ≥ (1 -λ)κm d C 1 -TV(P 0 ||P 1 ) -β 0 -β 1 2 ≥ 1 2 √ 2 3 √ 3n · (200n) · C 6 ≳ √ nC ≳ √ n(200n) -s+d d ||K|| L 1 ([-1 2 , 1 2 ] d ) ≳ n -s d -1 2 ,(85) which is exactly the second term in the RHS of (10). Combining the two lower bounds proved in (64) and (85) concludes our proof of Theorem 4 d )-1 ) with constant probability. On the other hand, KL(P 0 ||P 1 ) can be upper bounded by the KL divergence between v is of constant magnitude.(II) For the second lower bound in (3), we set M > 0 to be some sufficiently large constant and κ = Θ( 1 √ n ). For any 1 ≤ j ≤ m d , we construct bump functions f j ∈ W s,p (Ω) satisfying supp(f j ) ⊆ Ω j and I k f j= Ω j f j (x)dx = Θ(m -ks-d ) for any 1 ≤ j ≤ m d and 1 ≤ k ≤ s. Now let's pick µ 0 , µ 1 to be discrete measures supported on the two finite sets M + , applying Hoeffding's inequality yields that the q-th moments under µ 0 and µ 1 differ by Θ(n -s d -1 Theorem 4 ( 4Lower Bound for Integral Estimation). Let H f n denote the class of all the estimators that use n quadrature points {x i } n i=1 and noisy observations {y Lemma 5 ( 5Method of Two Fuzzy Hypotheses: Theorem 2.15 (i), (Tsybakov 2004)). Let F :ˆ→ R be some continuous functional defined on the measurable space (ˆ, U) and taking values in (R, B(R)), where B(R) denotes the Borel σ-algebra on R. Suppose that each parameter θ ∈ˆis associated with a distribution P θ , which together form a collection {P θ : θ ∈ˆ} of distributions. Jn to denote |J n |-dimensional vector formed by the random variables {ω (k) j : j ∈ J n } and p(k) Jn (·) to denote the probability density function of ⃗ ω (k) Jn , where k ∈ {0, 1}. 
From our assumption on the distribution of the = β 1 f 1, (84), ∆ = (1λ)κm d C and β 0 ∈C s (Ω) There is a long literature on building quadrature rules in the Reproducing Kernel Hilbert Space, including Bayes-Hermite quadrature (O'Hagan 1991; Kanagawa, Sriperumbudur, and Fukumizu 2016; Bach 2017; Karvonen and Sarkka 2018; Kanagawa and Hennig 2019), determinantal point processes (Belhadji, Bardenet, and Chainais 2019; Belhadji 2021; Bardenet and Hardy 2020;Smoothness s Truncate Monte Carlo Regression-adjusted Control Variate Also, let 1 = 1{·} denote the indicator function, i.e, for any event A we have 1{A} = 1 if A is true and 1{A} = 0 otherwise. For any region R ⊆ Ω, we use V(R) := Ω 1{x ∈ R}dx to denote the volume of R. Let C(Ω) denote the space of all continuous functions f : Ω → R and ⌊·⌋ be the roundingfunction. For any s > 0 and f ∈ C(Ω), we define the Hölder norm || · || C s (Ω) by ||f || C s (Ω) := max |k|≤⌊s⌋ ||D k f || L ∞ (Ω) + max |k|=⌊s⌋ sup x,y∈Ω,x̸ =y Theorem 1 (Lower Bound on Estimating the Moment). When p > 2 and q < p < 2q, let Hf n denote the class of all the estimators that use n quadrature points {x i } n i=1 and observed function values {y i = f (x i )} n i=1 to estimate the q-th moment of f , where {x i } n i=1 are independently and identically sampled from the uniform distribution on Ω. Then we have inf H q ∈H f ,q n sup f ∈W s,p (Ω) by choosing the truncation parameter M in an optimal way. Theorem 3 (Upper Bound on Moment Estimation without Sufficient Smoothness). Assuming that p > 2, q < p < 2q and s < Let {x i } n i=1 be n quadrature points independently and identically sampled from the uniform distribution on Ω and {y i := f (x i )} n i=1 be the corresponding n observations of f ∈ W s,p (Ω). Then we have that the estimatorĤ2dq-dp 2pq , we pick M = Θ(n 1 p -s d ). q M constructed in (8) above satisfies the bias can be upper bounded by M q-p * . Similarly, the variance is controlled by M and can be upper bounded by M q-p * 2 . 
Combining the bias and variance bound, we can bound the final error as M to achieve both better computational trackability and convergence rate (Oates et al. 2019). Yiping Lu is supported by the Stanford Interdisciplinary Graduate Fellowship (SIGF). Jose Blanchet is supported in part by the Air Force Office of Scientific Research under award number FA9550-20-1-0397. Lexing Ying is supported is supported by National Science Foundation under award DMS-2208163. Hinrichs, Aicke, David Krieg, Erich Novak, Joscha Prochno, and Mario Ullrich. 2020. On the power of random information. Tsybakov, Alexandre B. 2004. Introduction to nonparametric estimation, 2009. URL https://doi. org/10.1007/b13794. Revised and extended from the 9 (10). Wendland, Holger. 2001. Local polynomial reproduction and moving least squares approximation. IMA Journal of Numerical Analysis 21 (1): 285-300.Acknowledgement Multivariate Algorithms and information-based complexity 27:43-64. Hinrichs, Aicke, David Krieg, Erich Novak, and Jan Vybıéral. 2022. Lower bounds for integration and recovery in L 2 . Journal of Complexity 72:101662. Hinrichs, Aicke, Erich Novak, Mario Ullrich, and H Woźniakowski. 2014. The curse of dimensionality for numerical integration of smooth functions. Mathematics of Computation 83 (290): 2853-2863. Holzmüller, David, and Francis Bach. 2023. Convergence rates for non-log-concave sampling and log-partition estimation. arXiv preprint arXiv:2303.03237. Oates, Chris, and Mark Girolami. 2016. Control functionals for quasi-monte carlo integration. In Artificial intelligence and statistics, 56-65. PMLR. Oates, Chris J, Jon Cockayne, François-Xavier Briol, and Mark Girolami. 2019. Convergence rates for a class of estimators based on stein's method. Bernoulli 25 (2): 1141-1159. Oates, Chris J, Mark Girolami, and Nicolas Chopin. 2017. Control functionals for monte carlo integration. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 695-718. 
Robins, James, Lingling Li, Eric Tchetgen, Aad van der Vaart, et al. 2008. Higher order influence functions and minimax estimation of nonlinear functionals. Probability and statistics: essays in honor of David A. Freedman 2:335-421. Romano, Yaniv, Evan Patterson, and Emmanuel Candes. 2019. Conformalized quantile regression. Advances in neural information processing systems 32. Shi, Jiaxin, Yuhao Zhou, Jessica Hwang, Michalis Titsias, and Lester Mackey. 2022. Gradient estimation with discrete stein operators. Advances in Neural Information Processing Systems 35:25829-25841. Siegel, Jonathan W, and Jinchao Xu. 2022. High-order approximation rates for shallow neural networks with cosine and relu k activation functions. Applied and Computational Harmonic Analysis 58:1-26. Sobczyk, Aleksandros, and Mathieu Luisier. 2022. Approximate euclidean lengths and distances beyond johnson-lindenstrauss. arXiv preprint arXiv:2205.12307. South, Leah F, CJ Oates, A Mira, and C Drovandi. 2018. Regularised zero-variance control variates. arXiv preprint arXiv:1811.05073. Traub, Joseph F, GW Wasilkowski, H Wozniakowski, and Erich Novak. 1994. Information-based complexity. SIAM Review 36 (3): 514-514. finally allows us to upper bound the expected variance E{x i } n 2 i=1 Var(f q 1: n 2 -f q ) as below: {y i } n to denote the k-th nearest neighbor of z among {x i } n i=1 . When z is also uniformly distributed over the domain Ω, we have the following upper bound on the expected distance between z and xi (z) k : above gives us thatmax |t|=⌊s⌋ sup x,y∈Ω,x̸ =y Moreover, by picking λ = 1 2 , we may apply Hoeffding's Inequality (Lemma 3) to the bounded random variables {wm -s [-1 2 , 1 2 ] d K(y) 1 m d dy = m -s-d ||K|| L 1 ([-1 2 , 1 2 ] d ) . (74) (0) j } m d j=1 and {w (1) j } m d j=1 to deduce that P m d j=1 w (0) © Working Paper 2021. 
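The nearest-neighbor distance bound quoted above (Lemma 6: the expected distance from a uniform query z to its k-th nearest neighbor among n uniform points scales like (k/n)^{1/d}) is easy to probe by simulation. The sketch below is our own (d = 2; the sample sizes, seed, and tolerance window are arbitrary choices): quadrupling n should roughly halve the mean 1-NN distance, consistent with the n^{−1/2} scaling in two dimensions.

```python
import math, random

def mean_knn_dist(n, k, d, rnd, trials=200):
    # mean distance from a uniform query z to its k-th nearest neighbor among n uniform points in [0,1]^d
    total = 0.0
    for _ in range(trials):
        pts = [tuple(rnd.random() for _ in range(d)) for _ in range(n)]
        z = tuple(rnd.random() for _ in range(d))
        total += sorted(math.dist(z, p) for p in pts)[k - 1]
    return total / trials

rnd = random.Random(0)
m1 = mean_knn_dist(400, 1, 2, rnd)
m2 = mean_knn_dist(1600, 1, 2, rnd)
assert 1.5 < m1 / m2 < 2.7   # ratio ~ 2 expected from the (k/n)^{1/d} scaling with d = 2
```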
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. 2q-2 (Ω), which yields the following upper bound: (x)f 2q-2 (x)|dx ≲ n 2q( 1 p -s d )-1 .(47) -γ,-1 2 -s d } ,(96)which concludes our proof of Theorem 5. Substituting(43)d(2q-p)p(2q-2) ), we have that s < d p , which indicates that p * = pd d-sp satisfies 2q < p * < p(2q-2) p-2 . Given that p * > 2q > 2q -2 and f ∈ W s,p (Ω) ⊆ L p * (Ω), we can deduce that f 2q-2 ∈ L p * 2q-2 (Ω). Furthermore, note that p * > 2q implies 2p * p * +2-2q < p * and p * < p(2q-2) p-2 implies 2p * p * +2-2q > p. Sincef 1: n 2 and f are both in the Sobolev space W s,p (Ω) ⊆ L p * (Ω), we may further deduce that g 1: n 2 =f 1: n 2 f ∈ W s,p (Ω) ⊆ L p * (Ω) ⊆ L 2p * p * +2-2q (Ω). Given that q ≥ 1 ⇒ p * p * +2-2q ≥ 1, we have g 2 1: n 2 ∈ L p * p * +2-2q (Ω). Then we may apply Hölder's inequality (Lemma 2) to g 2In order to simplify the last expression in (46), let's recall the fact that p * ∈ (2q,2p * . Then we may simplify the power term in the last expression of (46) as follows:where the first and the second term in the last line of (54) above denotes the variance and the bias part, respectively. Again, we define p * = (max{ 1 p -s d , 0}) -1 , i.e, p * = pd d-sp when s < d p and p * = ∞ otherwise. Under the assumption that s < 2dq-dp 2pq < d p , we have p * = pd d-sp ∈ (p, 2q). Moreover, from Sobolev Embedding Theorem (Lemma 1), we have that f ∈ W s,p (Ω) ⊆ L p * (Ω).On the one hand, since p < 2q, we can deduce that |f (x)| 2q ≤ M 2q-p * |f (x)| p * for any x ∈ Ω -M and M 2q ≤ M 2q-p * |f (x)| p * for any x ∈ Ω + M , which helps us upper bound the variance part as below:where the last step of (55) above follows from the fact that f ∈ W s,p (Ω) ⊆ L p * (Ω). 
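The variance bound (55) rests on two elementary pointwise inequalities: |f(x)|^{2q} ≤ M^{2q−p*}|f(x)|^{p*} on Ω⁻_M = {|f| ≤ M} and M^{2q} ≤ M^{2q−p*}|f(x)|^{p*} on Ω⁺_M = {|f| > M}, both valid because p* < 2q. A brute-force check over a grid of values (the exponents below are arbitrary, subject only to p* < 2q):

```python
# pointwise truncation inequalities used to bound the variance part in (55)
q, p_star, M = 3.0, 4.5, 2.0   # any p* < 2q works; these values are purely illustrative
assert p_star < 2 * q

for v in [x / 100 for x in range(1, 1001)]:   # v plays the role of |f(x)|, ranging over (0, 10]
    if v <= M:
        # on {|f| <= M}:  v^{2q} <= M^{2q-p*} v^{p*}   since  v^{2q-p*} <= M^{2q-p*}
        assert v ** (2 * q) <= M ** (2 * q - p_star) * v ** p_star + 1e-12
    else:
        # on {|f| > M}:   M^{2q} <= M^{2q-p*} v^{p*}   since  M^{p*} <= v^{p*}
        assert M ** (2 * q) <= M ** (2 * q - p_star) * v ** p_star + 1e-12
```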
On the other hand, using the fact thatwe may upper-bound the bias part as follows:where the last step above again follows from the fact that f ∈ W s,p (Ω) ⊆ L p * (Ω). By substituting(55)and(56)into(54), we obtain thatwhich finishes our proof of Theorem 3.Appendix 4. Proof of Minimax Lower and Upper Bounds in Section 4This section is organized as follows. The first subsection consists of one important lemma used in our proof. In the second subsection, we provide complete proof for the minimax optimal lower bound on the estimation of integrals under any level of noise. In the third subsection, a complete proof for the upper bound on the estimation of integrals is given.Appendix 4.1 A Key Lemma for Establishing the Upper Bound on Integral EstimationLemma 6 (Bound on the Expected k-Nearest Neighbor Distance: Theorem 2.4,(Biau and Devroye 2015)). Assume that x 1 , x 2 , · · · , x n are independent and identical samples from the uniform distribution on the domain Ω = [0, 1] d . For any k ∈ {1, 2, · · · , n} and z ∈ Ω, we use x i (z) kAppendix 4.2 Proof of Theorem 4 (Lower Bound on Integral Estimation)Here we present a comprehensive proof of the two lower bounds given in Theorem 4 above by applying the method of two fuzzy hypotheses (Lemma 5). Below we again use ⃗ x := (x 1 , x 2 , · · · , x n ) and ⃗ y := (y 1 , y 2 , · · · , y n ) to denote the two n-dimensional vectors formed by the quadrature points and observed function values. Since our lower bound in Theorem 4 consists of two terms, we need to prove the two bounds in the following two separate cases: (Case I) For the first lower bound in(10), let's consider two constant functions g 0 and g 1 defined as follows:Clearly we have g 0 , g 1 ∈ C s (Ω). Then let's take µ k to be a Dirac delta measure supported on the set. By picking c = ∆ = 1 2 I g 1 = 1 2 n -γ-1 2 and β 0 = β 1 = 0, we then obtain that µ 0 (f ∈ W s,p (Ω) :which indicates that (13) holds true. 
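For this two-point construction the marginal laws are product Gaussians, so their KL divergence is available in closed form. The extracted text garbles the definitions in (61), but the subsequent computation fixes the per-coordinate mean gap at δ = n^{−γ−1/2} and the noise level at σ = n^{−γ}; plugging these in gives KL = nδ²/(2σ²) = 1/2 and a Pinsker TV bound of 1/2, independently of n and γ. A direct check:

```python
from math import sqrt

def kl_product_gaussians(n, delta, sigma):
    # KL( N(mu0, sigma^2 I_n) || N(mu1, sigma^2 I_n) ) when the means differ by delta in each coordinate
    return n * delta ** 2 / (2 * sigma ** 2)

for n in (10, 1000, 10 ** 6):
    for gamma in (0.0, 0.3, 1.0):
        delta, sigma = n ** (-gamma - 0.5), n ** (-gamma)   # mean gap and noise level of the construction
        kl = kl_product_gaussians(n, delta, sigma)
        assert abs(kl - 0.5) < 1e-9               # equals 1/2 for every n and gamma
        assert abs(sqrt(kl / 2) - 0.5) < 1e-9     # Pinsker: TV <= sqrt(KL/2) = 1/2
```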
Now let's consider bounding the KL divergence between the two marginal distributions P 0 , P 1 associated with µ 0 , µ 1 , respectively. Given that the quadrature points {x i } n i=1 and the observational noises {ϵ i } n i=1 are independent and identical samples from the uniform distribution on Ω and the normal distribution N (0, n -2γ ), we can write the marginal distributions in an explicit form as follows:From(62)we can see that P 0 and P 1 are two n-dimensional normal distributions having the same covariance matrix but different mean vectors. Computing the KL divergence between them and applying Pinsker's inequality then give us that TV(P 0 ||P 1 ) ≤ 1 2 KL(P 0 ||P 1 ) = n(n -γ-1 2 ) 2 4n -2γ = 1 2 .Substituting(63), ∆ = 1 2 I g 1 = 1 2 n -γ-1 2 and β 0 = β 1 = 0 into(14)and applying Markov's inequality yield the final lower boundwhich is exactly the first term in the RHS of (10). (1)By taking c := 0, ∆ := (1λ)κm d C and β 0 = β 1 = exp -1 2 λ 2 κ 2 m d , we may use (75) justified above to get thatAppendix 4.3 Proof of Theorem 5 (Upper Bound on Integral Estimation)Before proving the upper bound on integral estimation, we need to derive an upper bound on the expected error of the k-nearest neighbor estimatorf k-NN , which is built based on the first half of the given dataset {(x i , y i )} n i=1 , with respect to the L 2 norm. From our construction off k-NN given in Section 4.2, we have that for any fixed n 2 quadrature points {x i } n 2 i=1 , z ∈ Ω and k ∈ {1, 2, · · · , n 2 }, the expected value off k-NN (z) with respect to the observational noises {ϵ i } n 2 i=1 is given bywhere {x. Now let's consider using the bias-variance decomposition to upper bound the error ||f k-NN (z)f (z)|| 2 L 2 (Ω) . Based on the expected value computed in (86) above, we may decompose the functionf k-NNf as a sum of the bias part and the variance part as follows:where the function B corresponds to the bias part and the function V corresponds to the variance part. 
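The estimator f̂_{k-NN} built in (86) simply averages the k observed values at the design points nearest to z. A minimal pure-Python version (the interface and test function below are our own, not the paper's) illustrating the split in (86)-(88): with no noise and k = 1 the estimator interpolates the sample (zero variance part), while averaging k independent noises at a fixed point has variance σ²/k, which is the behavior entering (90).

```python
import math, random

def f_knn(z, xs, ys, k):
    # average the y-values of the k design points nearest to z, as in the definition of \hat f_{k-NN}
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - z))
    return sum(ys[i] for i in order[:k]) / k

rnd = random.Random(1)
target = lambda x: math.sin(2 * math.pi * x)        # a test function of our choosing
xs = [rnd.random() for _ in range(50)]
ys = [target(x) for x in xs]                        # noiseless observations

# with k = 1 and no noise, the estimator interpolates the sample exactly
assert all(abs(f_knn(x, xs, ys, 1) - y) < 1e-12 for x, y in zip(xs, ys))

# the variance part: the average of k iid noises has variance sigma^2 / k
sigma, k, reps = 0.5, 10, 4000
noise_means = [sum(rnd.gauss(0, sigma) for _ in range(k)) / k for _ in range(reps)]
emp_var = sum(v * v for v in noise_means) / reps
assert abs(emp_var - sigma ** 2 / k) < 5e-3
```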
Using the decompositionf k-NNf = B + V allows us to upper bound the expected error of f k-NN with respect to the L 2 norm as below:where z above is uniformly distributed over the domain Ω and independent of x i for any 1 ≤ i ≤ n 2 . On the one hand, using the expression of the variance part V derived in (88) above and the fact thatare independent and identical distributed noises, we may compute the first term in (89) above as follows:On the other hand, since s ∈ (0, 1) and the given function f is s-Hölder smooth, we have that the inequality |f (x)f (y)| ≲ ||x -y|| s holds true for any x, y ∈ Ω. Combining this inequality with the expression of the bias part B derived in (88) above helps us upper bound the second term in (89) as below:The second least inequality follows from the fact that ω(t) := t s is a concave function when s ∈ (0, 1), while the last inequality is obtained by plugging in (59) given in Lemma 6. Substituting(90)and (91) into (89) then yields that for any k ∈ {1, 2, · · · , n 2 }, the expected error off k-NN with respect to the L 2 norm can be upper bounded as follows:Furthermore, from our construction of the integral estimatorĤ k-NN given in Section 4.2, we may upper bound the expectation of the estimatorĤ k-NN 's squared error via the expected error off k-NNBased on the magnitude of the noises, we have the following two cases for the final upper bound:When γ ∈ [0, s d ), the optimal k is determined by balancing the two terms n -2γ k and k n 2s d in(93), which yields n -2γ k = k n 2s d ⇒ k = Θ(n 2(s-γd) d+2s ). The corresponding upper bound is given by 1 n n -2γ k + k n 2s d + n -2γ-1 ≲ 1 n n -2γ-2(s-γd) d+2s + n -1-2γ = n -2s(1+2γ) 2s+d -1 + n -2γ-1 ≲ max{n -2s(1+2γ) 2s+d -1 , n -2γ-1 } = n -2γ-1 .When γ ∈ [ s d , ∞], we note that k ∈ {1, 2, · · · , n 2 } must be of at least constant level. Therefore, the optimal k is determined by balancing the two terms n -2γ-1 k and n -2γ-1 , which yields that k = Θ(1) is of constant level. 
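The balancing step in the first case can be verified with exact rational arithmetic: k = n^{2(s−γd)/(d+2s)} equalizes the exponents of n^{−2γ}/k and (k/n)^{2s/d}, and dividing the balanced term by n reproduces the exponent −2s(1+2γ)/(2s+d) − 1 quoted above.

```python
from fractions import Fraction as F

def check_balance(s, gamma, d):
    s, gamma, d = F(s), F(gamma), F(d)
    a = 2 * (s - gamma * d) / (d + 2 * s)        # exponent of the optimal k = n^a
    e1 = -2 * gamma - a                          # exponent of n^{-2 gamma} / k
    e2 = (a - 1) * 2 * s / d                     # exponent of (k/n)^{2s/d}, since k/n = n^{a-1}
    assert e1 == e2                              # the two terms balance exactly
    # after multiplying by 1/n, the rate matches -2s(1+2gamma)/(2s+d) - 1 from (94)
    assert e1 - 1 == -2 * s * (1 + 2 * gamma) / (2 * s + d) - 1

# a few admissible (s, gamma, d) triples with gamma < s/d
for s, gamma, d in [("3/2", "1/4", 2), ("1/2", 0, 1), ("5/2", "1/3", 3)]:
    check_balance(s, gamma, d)
```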
The corresponding upper bound is given by (1/n)(n^{-2γ}/k + (k/n)^{2s/d}) + n^{-2γ-1} ≲ n^{-2s/d-1} + n^{-2γ-1} ≲ max{n^{-2s/d-1}, n^{-2γ-1}} = n^{-2s/d-1}. Finally, substituting (94) and (95) into (93) gives us the final upper bound stated in (96).
Adams, Robert A, and John JF Fournier. 2003. Sobolev spaces. Elsevier.
Angelopoulos, Anastasios N, Stephen Bates, Clara Fannjiang, Michael I Jordan, and Tijana Zrnic. 2023. Prediction-powered inference. arXiv preprint arXiv:2301.09633.
Asmussen, Søren, and Peter W Glynn. 2007. Stochastic simulation: algorithms and analysis. Vol. 57. Springer.
Assaraf, Roland, and Michel Caffarel. 1999. Zero-variance principle for Monte Carlo algorithms. Physical Review Letters 83 (23): 4682.
Bach, Francis. 2017. On the equivalence between kernel quadrature rules and random feature expansions. The Journal of Machine Learning Research 18 (1): 714-751.
Bakhvalov, Nikolai Sergeevich. 2015. On the approximate calculation of multiple integrals. Journal of Complexity 31 (4): 502-516.
Bardenet, Rémi, and Adrien Hardy. 2020. Monte Carlo with determinantal point processes. Annals of Applied Probability.
Belhadji, Ayoub. 2021. An analysis of Ermakov-Zolotukhin quadrature using kernels. Advances in Neural Information Processing Systems 34:27278-27289.
Belhadji, Ayoub, Rémi Bardenet, and Pierre Chainais. 2019. Kernel quadrature with DPPs. Advances in Neural Information Processing Systems 32.
Biau, Gérard, and Luc Devroye. 2015. Lectures on the nearest neighbor method. Vol. 246. Springer.
Birgé, Lucien, and Pascal Massart. 1995. Estimation of integral functionals of a density. The Annals of Statistics 23 (1): 11-29.
Chen, Wilson Ye, Lester Mackey, Jackson Gorham, François-Xavier Briol, and Chris Oates. 2018. Stein points. In International Conference on Machine Learning, 844-853. PMLR.
Chen, Yutian, Max Welling, and Alex Smola. 2012. Super-samples from kernel herding. arXiv preprint arXiv:1203.3472.
Davidson, Russell, and James G MacKinnon. 1992. Regression-based methods for using control variates in Monte Carlo experiments. Journal of Econometrics 54 (1-3): 203-222.
Donoho, David L. 1988. One-sided inference about functionals of a density. The Annals of Statistics, 1390-1420.
Donoho, David L, and Richard C Liu. 1991a. Geometrizing rates of convergence, II. The Annals of Statistics, 633-667.
Donoho, David L, and Richard C Liu. 1991b. Geometrizing rates of convergence, III. The Annals of Statistics, 668-701.
Donoho, David L, and Michael Nussbaum. 1990. Minimax quadratic estimation of a quadratic functional. Journal of Complexity 6 (3): 290-323.
Dwivedi, Raaz, and Lester Mackey. 2021a. Generalized kernel thinning. arXiv preprint arXiv:2110.01593.
Dwivedi, Raaz, and Lester Mackey. 2021b. Kernel thinning. arXiv preprint arXiv:2105.05842.
Gautier, Guillaume, Rémi Bardenet, and Michal Valko. 2019. On two ways to use determinantal point processes for Monte Carlo integration. Advances in Neural Information Processing Systems 32.
Han, Yanjun, Jiantao Jiao, and Rajarshi Mukherjee. 2020. On estimation of L_r-norms in Gaussian white noise models. Probability Theory and Related Fields 177 (3-4): 1243-1294.
Han, Yanjun, Jiantao Jiao, Tsachy Weissman, and Yihong Wu. 2020. Optimal rates of entropy estimation over Lipschitz balls. The Annals of Statistics 48 (6): 3228-3250.
Hayakawa, Satoshi, Harald Oberhauser, and Terry Lyons. 2021. Positively weighted kernel quadrature via subsampling. arXiv preprint arXiv:2107.09597.
Hayakawa, Satoshi, Harald Oberhauser, and Terry Lyons. 2023. Sampling-based Nyström approximation and kernel quadrature. arXiv preprint arXiv:2301.09517.
Heinrich, Stefan. 2009a. Randomized approximation of Sobolev embeddings, II. Journal of Complexity 25 (5): 455-472.
Heinrich, Stefan. 2009b. Randomized approximation of Sobolev embeddings, III. Journal of Complexity 25 (5): 473-507.
Heinrich, Stefan. 2018. On the complexity of computing the L_q norm. Journal of Complexity 49:1-26.
Hickernell, Fred J, Christiane Lemieux, and Art B Owen. 2005. Control variates for quasi-Monte Carlo. Statistical Science 20 (1): 1-31.
[]
[ "Sphaleron rate from a modified Backus-Gilbert inversion method", "Sphaleron rate from a modified Backus-Gilbert inversion method" ]
[ "Claudio Bonanno ", "Francesco D &apos; Angelo ", "Massimo D&apos;elia ", "Lorenzo Maio ", "Manuel Naviglio ", "\nInstituto de Física Teórica UAM-CSIC\nc/ Nicolás Cabrera 13-15\n", "\nDipartimento di Fisica dell'Università di Pisa & INFN Sezione di Pisa\nUniversidad Autónoma de Madrid\nLargo Pontecorvo 3, IE-28049, 56127Cantoblanco, Madrid, PisaSpain, Italy\n" ]
[ "Instituto de Física Teórica UAM-CSIC\nc/ Nicolás Cabrera 13-15", "Dipartimento di Fisica dell'Università di Pisa & INFN Sezione di Pisa\nUniversidad Autónoma de Madrid\nLargo Pontecorvo 3, IE-28049, 56127Cantoblanco, Madrid, PisaSpain, Italy" ]
[]
We compute the sphaleron rate in quenched QCD for a temperature T ≃ 1.24 Tc from the inversion of the Euclidean lattice time correlator of the topological charge density. We explore and compare two different strategies: one follows a new approach proposed in this study and consists in extracting the rate from finite lattice spacing correlators, and then in taking the continuum limit at fixed smoothing radius followed by a zero-smoothing extrapolation; the other follows the traditional approach of extracting the rate after performing such double extrapolation directly on the correlator. In both cases the rate is obtained from a recently-proposed modification of the standard Backus-Gilbert procedure. The two strategies lead to compatible estimates within errors, which are then compared to previous results in the literature at the same or similar temperatures; the new strategy permits to obtain improved results, in terms of statistical and systematic uncertainties.
null
[ "https://export.arxiv.org/pdf/2305.17120v1.pdf" ]
258,947,100
2305.17120
76c32ef104757938afbeace42d34fa390f834eaf
Sphaleron rate from a modified Backus-Gilbert inversion method 26 May 2023 (Dated: May 29, 2023) Claudio Bonanno Francesco D &apos; Angelo Massimo D&apos;elia Lorenzo Maio Manuel Naviglio Instituto de Física Teórica UAM-CSIC c/ Nicolás Cabrera 13-15 Dipartimento di Fisica dell'Università di Pisa & INFN Sezione di Pisa Universidad Autónoma de Madrid Largo Pontecorvo 3, IE-28049, 56127Cantoblanco, Madrid, PisaSpain, Italy Sphaleron rate from a modified Backus-Gilbert inversion method 26 May 2023 (Dated: May 29, 2023)numbers: 1238Aw1115Ha1238Gc1238Mh We compute the sphaleron rate in quenched QCD for a temperature T ≃ 1.24 Tc from the inversion of the Euclidean lattice time correlator of the topological charge density. We explore and compare two different strategies: one follows a new approach proposed in this study and consists in extracting the rate from finite lattice spacing correlators, and then in taking the continuum limit at fixed smoothing radius followed by a zero-smoothing extrapolation; the other follows the traditional approach of extracting the rate after performing such double extrapolation directly on the correlator. In both cases the rate is obtained from a recently-proposed modification of the standard Backus-Gilbert procedure. The two strategies lead to compatible estimates within errors, which are then compared to previous results in the literature at the same or similar temperatures; the new strategy permits to obtain improved results, in terms of statistical and systematic uncertainties. INTRODUCTION The study of real-time topological transitions in finite temperature QCD, the so-called sphaleron transitions, has recently attracted much attention from the theoretical community due to its connection to several intriguing phenomenological aspects of the Standard Model, and beyond. 
In particular, an extremely interesting role is played by the sphaleron rate

Γ_Sphal = lim_{V_s→∞, t_M→∞} (1/(V_s t_M)) ⟨[ ∫_0^{t_M} dt'_M ∫_{V_s} d³x q(t'_M, x) ]²⟩ = ∫ dt_M d³x ⟨q(t_M, x) q(0, 0)⟩, (1)

where t_M is the real Minkowski time and

q(x) = (1/(32π²)) ε_{μνρσ} Tr{G_{μν}(x) G_{ρσ}(x)} (2)

is the QCD topological charge density, expressed in terms of the gluon field strength G_{μν} ≡ ∂_μ A_ν − ∂_ν A_μ + i[A_μ, A_ν]. For example, a non-vanishing sphaleron rate drives local fluctuations in the difference between the left and right axial quark numbers N_L − N_R, since q(x) is coupled to the divergence of the axial quark current J^μ_5 = ψ̄ γ^μ γ^5 ψ due to the anomalous breaking of U(1)_A. When imbalances in the axial quark number due to sphaleron transitions are created in the presence of strong background magnetic fields, such as those generated for short times during heavy-ion collisions, they lead to the so-called Chiral Magnetic Effect [1][2][3][4], which is one of the most intriguing predictions for the quark-gluon plasma. Another example of the importance of Γ_Sphal comes instead from Beyond Standard Model phenomenology. Indeed, the sphaleron rate has recently been recognized as an essential input for the computation of the rate of thermal axion production in the early Universe via axion-pion scattering [5]. Because of this prominent phenomenological role, the computation of the QCD sphaleron rate at finite temperature has been tackled in recent years in the literature, although so far only in the quenched case [6][7][8][9] (i.e., the quarkless pure SU(3) gauge theory). Due to the non-perturbative nature of sphaleron dynamics, which is driven by topological excitations, numerical Monte Carlo (MC) simulations on the lattice are a natural tool to compute Γ_Sphal.

* Electronic address: [email protected] † Electronic address: [email protected] ‡ Electronic address: [email protected] § Electronic address: [email protected] ¶ Electronic address: [email protected]
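The inversion problem at the heart of this program can be made concrete with a short sketch. The correlator G(t) is tied to ρ(ω) through the kernel K(t, ω) = cosh[ω(1/(2T) − t)]/sinh[ω/(2T)] of Eq. (5) below, and the standard Backus-Gilbert method estimates a smeared ρ(ω̄) as a linear combination Σ_i g_i(ω̄) G(t_i), with coefficients chosen to localize the resolution function around ω̄ under a unit-area constraint. What follows is an illustrative implementation of the plain (unmodified) method; the grids, temperature, Tikhonov parameter, and frequency cutoff are our own choices, not those of Ref. [12].

```python
import numpy as np

T = 1.0                                     # temperature in lattice units (illustrative)
ts = np.linspace(0.1, 0.5, 9) / T           # Euclidean times in (0, 1/(2T)]
omegas = np.linspace(1e-3, 20.0, 2000)      # frequency grid and cutoff (our choice)
dw = omegas[1] - omegas[0]

def kernel(t, w):
    # kernel of Eq. (5), up to the overall sign and 1/pi normalization
    return np.cosh(w * (0.5 / T - t)) / np.sinh(0.5 * w / T)

Kmat = np.array([kernel(t, omegas) for t in ts])     # shape (n_t, n_omega)

def bg_coeffs(wbar, lam=1e-3):
    # minimize the spread  sum_ij g_i g_j \int dw K_i K_j (w - wbar)^2
    # subject to the unit-area constraint  sum_i g_i \int K_i dw = 1;
    # lam is a small Tikhonov term stabilizing the inversion
    W = (Kmat * (omegas - wbar) ** 2) @ Kmat.T * dw + lam * np.eye(len(ts))
    r = Kmat.sum(axis=1) * dw
    Winv_r = np.linalg.solve(W, r)
    return Winv_r / (r @ Winv_r)

g = bg_coeffs(0.0)                          # coefficients targeting omega-bar = 0
resolution = g @ Kmat                       # smearing function Delta(omega, 0)
assert abs(resolution.sum() * dw - 1.0) < 1e-6   # unit normalization by construction

# forward map on a mock spectral density: the BG estimate equals the smeared density
rho = omegas * np.exp(-omegas)              # mock rho(omega), an arbitrary smooth choice
G = Kmat @ rho * dw
est, smeared = g @ G, (resolution * rho).sum() * dw
assert abs(est - smeared) <= 1e-6 * max(1.0, abs(est), abs(smeared))
```

The modification of Ref. [12] changes how the smearing functional is chosen; the sketch above only shows the mechanics that both variants share.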
Since the latter are based on the Euclidean formulation of QCD, the real-time definition of Γ_Sphal in Eq. (1) cannot be used directly to compute this quantity numerically. However, using the Kubo formula, one can express the rate in terms of the spectral density ρ(ω) in the zero-frequency limit (here T is the temperature):

Γ_Sphal = 2T lim_{ω→0} ρ(ω)/ω. (3)

The quantity ρ(ω) is related to the Euclidean topological charge density time-correlator (with t the imaginary Euclidean time)

G(t) ≡ ∫ d³x ⟨q(t, x) q(0, 0)⟩ (4)

via the following integral relation [10]:

G(t) = −∫_0^∞ (dω/π) ρ(ω) cosh(ω/(2T) − ωt) / sinh(ω/(2T)). (5)

It is clear that, to extract Γ_Sphal from lattice simulations, the main difficulty is the inversion of Eq. (5). To this end, we adopt the Backus-Gilbert method [11], which allows one to reconstruct the spectral density ρ(ω) numerically as a linear combination of the values of the correlator G(t) determined on the lattice, with coefficients obtained from the minimization of a suitable functional. More precisely, we rely on the recent modification of the standard Backus-Gilbert method introduced in Ref. [12]. Another aspect that has to be treated with some care is the lattice determination of the topological charge density correlator. As a matter of fact, because of UV noise, it is customary to determine topological quantities on smoothened configurations, obtained by applying a smoothing algorithm. After smoothing, UV fluctuations are suppressed up to a scale known as the smoothing radius, which is proportional to the square root of the amount of smoothing performed. However, since smoothing modifies short-distance fluctuations, computing G(t) through Eq. (4) from determinations of q(x) obtained on smoothened gauge fields unavoidably distorts the behavior of the correlator at small times. A possible strategy to overcome this issue, adopted in Refs.
[6,8], is to perform a double extrapolation of the correlator: first, one performs a continuum extrapolation of the lattice correlator at fixed smoothing radius; then, one extrapolates the continuum determinations of G(t) towards the zero-smoothing-radius limit. This approach, however, has the drawback of working only for sufficiently large Euclidean times t. Indeed, the range of smoothing radii that can be considered for the zero-smoothing extrapolation is bounded from below (a minimum amount of smoothing is necessary to ensure that the topological background of the configuration is correctly identified) and from above (the smoothing radius needs to be smaller than the time distance t between the correlated sources). Therefore, this range closes for smaller values of t. While this fact does not completely obstruct the extraction of Γ_Sphal with the Backus-Gilbert method (since the rate is related to the zero-frequency behavior of ρ(ω)/ω, which is dominated by the behavior of G(t) at larger times), it makes the reconstruction of the spectral density noisier, and reliable results for Γ_Sphal harder to obtain. In this work, instead, we propose a different approach, namely, to move the double extrapolation onto the rate itself. In practice, we determine the rate from the correlators obtained at finite lattice spacing and smoothing radius, and then perform the double extrapolation outlined above directly on Γ_Sphal. The goal of our work is to compare these two methods in view of an application to the more computationally demanding case of full QCD. Therefore, we focus on one value of the temperature, namely T ≃ 1.24 T_c ≃ 357 MeV, and we perform our study in quenched QCD, where our results can also be compared with other independent determinations in the literature. This paper is organized as follows: in Sec.
2 we explain our numerical setup in detail, focusing on the computation of the correlator and on the inversion method used to extract the rate; in Sec. 3 we present our numerical results for the rate; in Sec. 4 we draw our conclusions and discuss future perspectives.

NUMERICAL SETUP

A. Lattice action

We discretize the Euclidean pure-SU(3) gauge action S_{\rm YM} = (1/4g^2) \int d^4x \, {\rm Tr}\{G_{\mu\nu}(x) G_{\mu\nu}(x)\} on an N_s^3 \times N_t lattice with lattice spacing a using the standard Wilson lattice gauge action

S_W = -\frac{\beta}{3} \sum_{n, \mu > \nu} \Re \, {\rm Tr}\left[\Pi_{\mu\nu}(n)\right],   (6)

where \beta = 6/g^2 is the bare inverse gauge coupling and \Pi_{\mu\nu}(n) \equiv U_\mu(n) U_\nu(n+\hat{\mu}) U_\mu^\dagger(n+\hat{\nu}) U_\nu^\dagger(n) is the plaquette. We performed simulations for 4 values of β, corresponding to 4 values of the lattice spacing a, keeping the spatial volume (a N_s)^3 ≃ (1.66 fm)^3, the aspect ratio N_s/N_t = 3 and the temperature T = (a N_t)^{-1} ≃ 357 MeV ≃ 1.24 T_c fixed for each gauge ensemble. All simulation parameters are summarized in Tab. I. Scale setting was done according to the determination of a(β)/r_0 of Ref. [13], using the reference value r_0 ≃ 0.472 fm for the Sommer parameter [14]; for the critical temperature, the reference value T_c ≃ 287 MeV was used [15, 16]. The total statistics collected is expressed in thousands (k) in Tab. I, and measures were collected every 20 MC updating steps. Gauge configurations were updated with a combination of heat-bath (HB) and over-relaxation (OR) sweeps applied to the SU(2) subgroups of SU(3). In particular, our single MC updating step consisted of 1 lattice sweep of HB followed by 4 lattice sweeps of OR. The measure of the topological charge density correlator was performed every 20 MC steps, and the total statistics employed to compute G(t) is reported in Tab. I.

B. Lattice topological charge density correlator and smoothing

We discretized the continuum topological charge density in Eq. (2) using the standard clover definition, which is the simplest lattice discretization with definite parity:

q_L(n) = -\frac{1}{2^9 \pi^2} \sum_{\mu\nu\rho\sigma = \pm 1}^{\pm 4} \varepsilon_{\mu\nu\rho\sigma} \, {\rm Tr}\left\{\Pi_{\mu\nu}(n) \, \Pi_{\rho\sigma}(n)\right\},   (7)

where it is understood that \varepsilon_{(-\mu)\nu\rho\sigma} = -\varepsilon_{\mu\nu\rho\sigma}.
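The signed-index convention entering the clover sum of Eq. (7) can be made concrete with a small sketch (our own illustration, not code from the paper): a Levi-Civita symbol extended to indices ±1, ..., ±4, with one sign flip per negative index.

```python
import itertools

def levi_civita(i, j, k, l):
    # standard 4-index epsilon for values 1..4 (0 if any index repeats)
    idx = [i, j, k, l]
    if len(set(idx)) < 4:
        return 0
    sign = 1
    for m in range(4):              # count inversions -> permutation parity
        for n in range(m + 1, 4):
            if idx[m] > idx[n]:
                sign = -sign
    return sign

def eps_signed(i, j, k, l):
    # convention of Eq. (7): eps_{(-mu) nu rho sigma} = -eps_{mu nu rho sigma}
    s = 1
    for x in (i, j, k, l):
        if x < 0:
            s = -s
    return s * levi_civita(abs(i), abs(j), abs(k), abs(l))

assert eps_signed(1, 2, 3, 4) == 1
assert eps_signed(-1, 2, 3, 4) == -1      # the stated sign rule

# the sum over mu,nu,rho,sigma = +-1..+-4 in Eq. (7) then has 4! * 2^4 terms
indices = [m for m in range(-4, 5) if m != 0]
nonzero = sum(eps_signed(*c) != 0 for c in itertools.product(indices, repeat=4))
print(nonzero)  # 384 = 4! * 2^4
```

This makes explicit why the overall normalization of the clover charge carries the factor 1/2^9 = 1/(32 · 2^4): the signed sum overcounts each plaquette orientation.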
To obtain the correlator in dimensionless physical units, we measured the time profile Q_L(n_t) of the lattice topological charge Q_L,

Q_L(n_t) = \sum_{\vec{n}} q_L(n_t, \vec{n}), \qquad Q_L = \sum_{n_t} Q_L(n_t),   (8)

and computed

\frac{G_L(tT)}{T^5} = \frac{N_t^5}{N_s^3} \left\langle Q_L(n_{t,1}) \, Q_L(n_{t,2}) \right\rangle,   (9)

where the physical time separation between the sources is given by

tT = \begin{cases} |n_{t,1} - n_{t,2}|/N_t, & |n_{t,1} - n_{t,2}| \le N_t/2, \\ 1 - |n_{t,1} - n_{t,2}|/N_t, & |n_{t,1} - n_{t,2}| > N_t/2. \end{cases}   (10)

Note that it is sufficient to compute the correlator up to tT = 0.5, as G_L(tT) = G_L(1 - tT). The topological charge profiles entering Eq. (9) are computed after smoothing, in order to ensure that we consider only correlations of fluctuations of physical origin. Indeed, the lattice topological charge Q_L in Eq. (8) renormalizes multiplicatively as follows [21, 22]:

Q_L = Z_Q(\beta) \, Q,   (11)

where Q is the continuum integer-valued topological charge. Moreover, the two-point function of the lattice topological charge density contains short-distance UV artefacts, leading for instance to the appearance of additive renormalizations in higher-order cumulants of the topological charge distribution [23, 24], which become dominant in the continuum limit, overcoming the physical signal. Since such effects are related to fluctuations on the scale of the UV cut-off, which are damped by smoothing, computing the lattice topological charge density correlator on smoothened configurations removes such renormalizations, ensuring that one correctly considers only correlations of physical relevance. Several smoothing algorithms have been adopted in the literature, such as cooling [25-31], stout smearing [32, 33] or the gradient flow [34, 35]. All choices give consistent results when properly matched to each other [31, 36, 37]. In this work we choose cooling for its simplicity and numerical cheapness. One cooling step consists in a sweep of the lattice where we align each link U_\mu(n) to its local staple.
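The construction of Eqs. (8)-(10) can be sketched as follows. This illustration averages over all translationally equivalent source pairs (a common variance-reduction choice; the paper does not spell out its estimator beyond Eq. (9)), the periodicity of Eq. (10) is handled implicitly by the cyclic shift, and the input profiles are fake random data rather than measured charges.

```python
import numpy as np

def correlator_from_profiles(profiles, Ns):
    # profiles: array (n_conf, Nt) of time profiles Q_L(n_t), Eq. (8)
    n_conf, Nt = profiles.shape
    G = np.zeros(Nt // 2 + 1)            # tT = 0, 1/Nt, ..., 1/2 suffices
    for sep in range(Nt // 2 + 1):
        # cyclic shift pairs each n_t with n_t + sep (periodic boundary),
        # which realizes the separation mapping of Eq. (10)
        prod = profiles * np.roll(profiles, -sep, axis=1)
        G[sep] = prod.mean()             # average over sources and configs
    # Eq. (9): dimensionless normalization G_L(tT)/T^5 = Nt^5/Ns^3 <Q Q>
    return (Nt ** 5 / Ns ** 3) * G

rng = np.random.default_rng(0)
fake = rng.normal(size=(100, 12))        # fake profiles, illustration only
G = correlator_from_profiles(fake, Ns=36)
print(G.shape)                           # (7,): one value per separation
```

For uncorrelated fake data only the sep = 0 entry is sizeably non-zero; on real smoothened configurations the entries at separations beyond the smoothing radius would be negative.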
Iterating the cooling steps drives the Wilson action (6) closer to a local minimum, thus damping UV fluctuations while leaving the global topological content of the field configuration unaltered. We recall that, while in the continuum G(t) < 0 for every t > 0 because of reflection positivity [22, 38-43], on the lattice this property is violated for smaller time separations, because the sources entering the lattice correlator are smoothed. As a matter of fact, the lattice correlator G_L is negative only when the time separation between the sources is larger than the smoothing radius; otherwise, it is positive. Of course, after the double extrapolation (i.e., continuum limit followed by zero-smoothing limit), the negativity of the correlator is recovered.

C. Inversion Method

Once the correlation function G(t) is computed, Eq. (5) has to be inverted to extract the spectral function ρ(ω) and then compute the sphaleron rate using Eq. (3). Let us rewrite Eq. (5) as

G(t) = -\int_0^\infty \frac{d\omega}{\pi} \frac{\rho(\omega)}{f(\omega)} K'_t(\omega),   (12)

where f(ω) is an arbitrary function, and where we redefined the basis function as

K'_t(\omega) \equiv f(\omega) \, \frac{\cosh[\omega/(2T) - \omega t]}{\sinh[\omega/(2T)]}.   (13)

In the case of Backus-Gilbert techniques, one constructs the estimator \bar{\rho}(\bar{\omega}) of the spectral function as

\bar{\rho}(\bar{\omega}) = -\pi f(\bar{\omega}) \sum_{t=0}^{1/T} g_t(\bar{\omega}) \, G(t),   (14)

where the g_t are unknown coefficients to be determined. The advantage of this formulation is that we can set f(ω) = ω and \bar{\omega} = 0, so that we are able to estimate directly from the correlator the ratio ρ(ω)/ω in the limit ω → 0:

\left.\frac{\bar{\rho}(\bar{\omega})}{\bar{\omega}}\right|_{\bar{\omega} = 0} = -\pi \sum_{t=0}^{1/T} g_t(0) \, G(t).

This is, apart from an overall factor, the sphaleron rate according to the Kubo formula (3). Combining Eqs. (12) and (14), one obtains the following relation between the estimator \bar{\rho}(\bar{\omega}) and the physical spectral function ρ(ω):

\frac{\bar{\rho}(\bar{\omega})}{\bar{\omega}} = \int_0^\infty d\omega \, \Delta(\omega, \bar{\omega}) \, \frac{\rho(\omega)}{\omega},   (15)

where

\Delta(\omega, \bar{\omega}) = \sum_{t=0}^{1/T} g_t(\bar{\omega}) \, K'_t(\omega)   (16)

is the so-called resolution function. From Eq.
(15) it follows that, assuming a resolution function normalized to 1, if \Delta(\omega, \bar{\omega}) has a sharp peak around \bar{\omega} as a function of ω, then \bar{\rho} is a good approximation of the actual spectral function ρ. This is particularly evident in the limit in which \Delta(\omega, \bar{\omega}) tends to a Dirac delta function \delta(\omega - \bar{\omega}): in this case the relation \bar{\rho}(\bar{\omega}) = \rho(\bar{\omega}) holds exactly. Clearly, in a real calculation the resolution function will have a peak of finite width around \bar{\omega}. Thus, the estimator \bar{\rho}(\bar{\omega}) will actually be an average of the spectral function over such a region around \bar{\omega}. This means that the larger the width of the resolution function, the less faithfully we are able to reconstruct the actual spectral density ρ from \bar{\rho}. It is therefore clear that the strategy used to fix the shape of the resolution function in terms of the unknown g_t coefficients plays a crucial role in determining the quality of our estimate of the spectral density via \bar{\rho}. To compute the coefficients g_t, we apply the modified Backus-Gilbert regularization method recently proposed in Ref. [12]. This approach consists in minimizing a functional depending on the difference between the resolution function \Delta(\omega, \bar{\omega}) and some chosen target function \delta(\omega, \bar{\omega}), whose shape is fixed on the basis of physical considerations. Since this procedure is typically extremely noisy, it is customary to regularize it by adding to the minimized functional a term related to the statistical error on the reconstructed quantity. In our case, the functional F[g_t] that is minimized to determine the g_t takes the following form:

F[g_t] = (1 - \lambda) A_\alpha[g_t] + \frac{\lambda}{C} B[g_t], \qquad \lambda \in [0, 1),   (17)

where C is a normalization factor proportional to the square of the value of the correlator at a fixed point (here we used C = G(tT = 0.5)^2), λ is a free parameter whose role will be discussed later, and A_\alpha and B are suitable functionals depending on g_t.
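The kernel entering Eqs. (5) and (13) can be checked by forward-modeling: given a trial spectral density, G(t) follows from a one-dimensional integral and inherits the symmetry G(t) = G(1/T - t) noted for the lattice correlator. A minimal sketch; the trial ρ(ω) is purely illustrative, not the physical QCD spectral density.

```python
import numpy as np

def kernel(w, t, T):
    # integration kernel of Eq. (5) (K'_t of Eq. (13) with f = 1)
    return np.cosh(w / (2 * T) - w * t) / np.sinh(w / (2 * T))

def correlator(t, T, rho, wmax=200.0, n=100_000):
    # G(t) = -(1/pi) * int_0^inf dw rho(w) K(w, t), truncated trapezoid rule
    w = np.linspace(1e-8, wmax, n)
    f = rho(w) * kernel(w, t, T)
    return -(np.sum(f) - 0.5 * (f[0] + f[-1])) * (w[1] - w[0]) / np.pi

T = 1.0
rho = lambda w: w * np.exp(-w)     # trial spectral density, illustration only
G3 = correlator(0.3 / T, T, rho)
G7 = correlator(0.7 / T, T, rho)
print(G3, G7)                       # negative and equal up to rounding
```

The symmetry is exact because cosh(ω/2T - ωt) is invariant under t → 1/T - t; the sign follows from ρ(ω) > 0 and the overall minus in Eq. (5).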
The functional A_\alpha is related to the distance between the resolution function and the given target function \delta(\omega, \bar{\omega}):

A_\alpha[g_t] = \int_0^\infty d\omega \left[\Delta(\omega, \bar{\omega}) - \delta(\omega, \bar{\omega})\right]^2 e^{\alpha\omega}, \qquad \alpha < 2.   (18)

As proposed in Ref. [44], the squared distance between \Delta(\omega, \bar{\omega}) and \delta(\omega, \bar{\omega}) is further multiplied by an exponentially growing factor to promote larger frequencies in the integral defining A_\alpha[g_t]. This is justified by the known one-loop perturbative result for ρ(ω), which predicts that ρ(ω) diverges as a power law in ω at large frequencies [45]. In our analysis we used α = 2^-, i.e., α = 1.99. The second functional is proportional to the uncertainty on the final quantity (i.e., the spectral density):

B[g_t] = \sum_{t, t' = 0}^{1/T} {\rm Cov}_{t t'} \, g_t \, g_{t'},   (19)

where {\rm Cov}_{t t'} = \langle [G(t) - \langle G(t) \rangle][G(t') - \langle G(t') \rangle] \rangle denotes the covariance matrix of the correlator. As proposed in Ref. [4], we used the pseudo-Gaussian target function

\delta(\omega, \bar{\omega} = 0) = \frac{2}{\sigma \pi^2} \, \frac{\omega}{\sinh(\omega/\sigma)},   (20)

which depends on the free parameter σ. Since σ controls the width of the target function, its choice directly affects the width of the resolution function obtained after the minimization procedure outlined above, and thus the quality of our estimate of the spectral function. Choosing larger values of σ yields smaller errors on the rate, as the coefficients g_t have smaller fluctuations, but the results are also less physically reliable. On the other hand, the more peaked the target function, the noisier our determination of the rate. In our analysis we chose σ/T = 1.75, but we also checked that other choices, σ/T = 1.6 and 1.9, gave compatible results for the rate within the errors.^1 Therefore, we fixed σ/T = 1.75 for all analyzed ensembles, meaning that we used this value both for the correlators obtained at finite lattice spacing and for the one obtained in the continuum limit.
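Since F[g_t] in Eqs. (17)-(20) is quadratic in the coefficients, its minimization reduces to a linear system: expanding Eq. (18) gives a Gram matrix M_{tt'} = ∫ K'_t K'_{t'} e^{αω} dω and a vector v_t = ∫ K'_t δ e^{αω} dω, so that the minimizer solves [(1-λ)M + (λ/C) Cov] g = (1-λ) v. The sketch below is our own illustration under simplifying assumptions: α = 0 (the A_0 of Eq. (21) rather than the paper's α = 1.99), a synthetic correlator built from a trial ρ(ω) via Eq. (5), a made-up diagonal covariance, and 13 evenly spaced time slices standing in for the lattice points.

```python
import numpy as np

def bg_solve(times, G, Cov, T, sigma, lam, alpha=0.0, wmax=30.0, nw=3000):
    # minimize Eq. (17); quadratic in g_t, hence a single linear solve
    times = np.asarray(times, float)
    w = np.linspace(1e-6, wmax, nw)
    dw = w[1] - w[0]
    # basis functions K'_t(w) of Eq. (13) with f(w) = w
    K = np.array([w * np.cosh(w / (2 * T) - w * t) / np.sinh(w / (2 * T))
                  for t in times])
    # pseudo-Gaussian target of Eq. (20), centered at wbar = 0
    tgt = (2.0 / (sigma * np.pi ** 2)) * w / np.sinh(w / sigma)
    e = np.exp(alpha * w)
    M = (K * e) @ K.T * dw                 # quadratic part of A_alpha
    v = (K * e) @ tgt * dw                 # linear part of A_alpha
    C = G[np.argmin(np.abs(times - 0.5 / T))] ** 2   # ~ G(tT = 0.5)^2
    W = (1.0 - lam) * M + (lam / C) * Cov
    g = np.linalg.solve(W, (1.0 - lam) * v)
    A0 = np.sum((g @ K - tgt) ** 2) * dw   # Eq. (18) with alpha = 0
    B = g @ Cov @ g                        # Eq. (19)
    rate = 2.0 * T * (-np.pi * g @ G)      # Eqs. (3) and (14)
    return rate, A0 / B                    # second entry: d[g](lambda), Eq. (21)

# synthetic correlator from a trial rho(w) = 0.05 w via Eq. (5), illustration only
T = 1.0
times = np.linspace(0.0, 1.0 / T, 13)
w = np.linspace(1e-6, 30.0, 3000)
Kf = np.array([np.cosh(w / (2 * T) - w * t) / np.sinh(w / (2 * T)) for t in times])
G = -(Kf @ (0.05 * w)) * (w[1] - w[0]) / np.pi
Cov = 1e-6 * np.eye(len(times))            # made-up diagonal covariance
rate_lo, d_lo = bg_solve(times, G, Cov, T, sigma=1.75 * T, lam=0.1)
rate_hi, d_hi = bg_solve(times, G, Cov, T, sigma=1.75 * T, lam=0.9)
print(d_lo, d_hi)   # d[g](lambda) grows with the regularization strength
```

The last line illustrates the Pareto trade-off discussed below Eq. (21): increasing λ decreases B at the price of a larger A_0, so the ratio d[g_t](λ) is monotonically non-decreasing in λ.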
For this value of the width of the target function, the observed relative deviation at the peak between the resolution and the target function was ∼ 5% for λ = 0. Once \bar{\rho}(\bar{\omega})/\bar{\omega}|_{\bar{\omega} = 0} is obtained from the Backus-Gilbert inversion method, we compute the sphaleron rate using Eq. (3). We do so for several values of the free parameter λ ∈ [0, 1) appearing in the functional (17). When λ → 0, i.e., when we neglect the regulator term B[g_t], statistical errors on the sphaleron rate explode, since the inversion problem defining \bar{\rho} is ill-posed, and the coefficients g_t have sizeable fluctuations. As λ is increased, the inversion problem gets regularized and errors on Γ_Sphal decrease. However, when λ → 1, we are neglecting the contribution of the functional A_\alpha, and the resulting resolution function obtained from our minimization procedure is practically unconstrained, and can vary sizeably even upon a small variation of λ. Therefore, in this regime, the result of our inversion cannot be trusted from a physical point of view, and will be dominated by systematic effects. To provide a correct estimate of the sphaleron rate, we therefore chose λ so as to stay within the statistically-dominated region, and we included any observed systematic variation of the rate within this region in our final error budget. More precisely, this is the procedure we have followed to estimate our final error on the rate. First, we compute the sphaleron rate as a function of the quantity

d[g_t](\lambda) \equiv \frac{A_0[g_t]}{B[g_t]},   (21)

where the statistical error on Γ_Sphal was computed, for each value of d[g_t](λ), from a bootstrap analysis carried out over O(1000) bootstrap resamplings. According to the previous discussion, it is clear that, when d[g_t](λ) is small, we are reasonably within the statistically-dominated regime.
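The bootstrap error estimate mentioned above can be sketched generically. In this illustration the sample mean over Gaussian fake data stands in for the full correlator-plus-inversion pipeline; the resampling logic is the same.

```python
import numpy as np

def bootstrap_error(samples, estimator, n_boot=1000, seed=1):
    # samples: array (n_conf, ...) of per-configuration measurements;
    # resample configurations with replacement and re-apply the estimator
    rng = np.random.default_rng(seed)
    n = samples.shape[0]
    est = np.array([estimator(samples[rng.integers(0, n, n)])
                    for _ in range(n_boot)])
    return est.mean(), est.std(ddof=1)

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=400)   # fake measurements
mean, err = bootstrap_error(data, np.mean)
print(mean, err)   # err should come out close to 2/sqrt(400) = 0.1
```

In the paper's setting, `estimator` would map a resampled set of configurations to a sphaleron-rate value through the Backus-Gilbert inversion, for each fixed λ.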
Then, we select a point in the statistically-dominated region, corresponding to a value d[g_t](λ_1) ≪ 1, whose central value will be the central value of our final estimate of Γ_Sphal, and whose statistical error will be the statistical error on our determination of the rate. Finally, we select a second point deeper in the statistically-dominated regime, d[g_t](λ_2) < d[g_t](λ_1), to estimate possible systematics. More precisely, we compute a systematic error which is proportional to the difference between the central values of the rates obtained for λ_1 and λ_2 (according to Eqs. (27) and (28) of Ref. [44]). In the end, the final error on Γ_Sphal(λ_1) is obtained by summing the systematic and the statistical errors in quadrature.

NUMERICAL RESULTS FOR THE SPHALERON RATE

In this section we show and discuss our results for the sphaleron rate, obtained using two different strategies: the standard one, based on the double-extrapolated time correlator of the topological charge density; and the new one, proposed in this paper, in which the double extrapolation is performed directly on the sphaleron rate itself. In both cases, we make use of the modified Backus-Gilbert method described in Sec. 2 C.

A. Rate from the double-extrapolated correlator

To extrapolate the lattice correlator G_L(tT)/T^5 towards the continuum limit at fixed smoothing radius r_s, with our setup it is sufficient to keep n_cool/N_t^2 fixed for each lattice spacing. As a matter of fact, the relation between the smoothing radius r_s in lattice units and the number of cooling steps n_cool is given by [36]

\frac{r_s}{a} \simeq \sqrt{\frac{8 \, n_{\rm cool}}{3}}.   (22)

Therefore, n_cool/N_t^2 ∝ (r_s T)^2. Since n_cool can only assume integer values, in order to keep n_cool/N_t^2 fixed for each ensemble we performed a cubic spline interpolation of our correlators at non-integer values of n_cool.
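The relation (22) and the interpolation at non-integer n_cool can be illustrated as follows. This is our own sketch: plain linear interpolation via `numpy.interp` stands in for the cubic spline actually used, the correlator dependence on n_cool is made up, and the integer-rounding alternative used later in Sec. 3 B is included for comparison.

```python
import numpy as np

def rs_over_a(ncool):
    # Eq. (22): smoothing radius in lattice units
    return np.sqrt(8.0 * ncool / 3.0)

def ncool_at_fixed_radius(rsT, Nt):
    # invert Eq. (22) at fixed r_s T = (r_s/a)/Nt; generally non-integer
    return 3.0 / 8.0 * (rsT * Nt) ** 2

def matched_ncool_rounded(ncool_ref, Nt, Nt_ref=12):
    # integer alternative of Sec. 3 B: n'_cool = round[n_cool (Nt/Nt_ref)^2]
    return round(ncool_ref * (Nt / Nt_ref) ** 2)

# interpolate a correlator measured at integer n_cool to a non-integer target
ncool_grid = np.arange(1.0, 20.0)
G_grid = -0.3 + 0.02 * np.sqrt(ncool_grid)    # made-up smooth dependence
target = ncool_at_fixed_radius(0.25, 14)      # 4.59375 cooling steps
G_at_target = np.interp(target, ncool_grid, G_grid)
print(target, G_at_target)

# the rounded alternative reproduces the values quoted for Fig. 7
print([matched_ncool_rounded(10, Nt) for Nt in (12, 14, 16, 20)])  # [10, 14, 18, 28]
```

Inverting and re-applying Eq. (22) is exact by construction: rs_over_a(ncool_at_fixed_radius(x, Nt))/Nt returns x.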
Moreover, in order to compute the continuum limit of G(tT), we also need the same physical time separation tT for each lattice spacing. Therefore, for each value of n_cool, we also interpolated the correlators obtained on coarser lattices to the values of tT obtainable on the finest one. Also in this case, we performed a cubic spline interpolation of the correlators, similarly to what was done in Ref. [8]. In Fig. 1, we show the behavior of the tT- and n_cool-interpolated correlators for n_cool/N_t^2 ≃ 0.069 as a function of tT for all explored lattice spacings. Moreover, in Fig. 1 we also show the comparison between the correlators obtained for n_cool/N_t^2 ≃ 0.069 at β = 6.440 on a 36^3 × 12 and on a 48^3 × 12 lattice. The results fall on top of each other; thus, we assume that our results obtained on lattices with aspect ratio 3 and spatial extent ∼ 1.66 fm do not suffer from significant finite-size effects. To take the continuum limit, we assume standard O(a^2) = O(1/N_t^2) corrections and fit our data for different values of β according to the following fit function:

\frac{G_L\left(tT, N_t, n_{\rm cool}/N_t^2\right)}{T^5} = \frac{G\left(tT, n_{\rm cool}/N_t^2\right)}{T^5} + c\left(tT, \frac{n_{\rm cool}}{N_t^2}\right) \frac{1}{N_t^2} + o\left(\frac{1}{N_t^2}\right),   (23)

where c is a constant factor that, in principle, depends both on the time separation of the sources in the correlator and on the smoothing radius. Examples of the continuum limit of G_L(tT, N_t, n_cool/N_t^2) for two values of tT according to the fit function (23) are shown in Fig. 2. We observe that results at our 3 finest lattice spacings can be reliably fitted with a linear function in 1/N_t^2. Compatible extrapolations within the errors are obtained by fitting all available points and including further 1/N_t^4 corrections, cf. Fig. 2. Therefore, in what follows we employ the extrapolations obtained with the first fit as our estimates of the continuum limit of the correlator.
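The continuum fit of Eq. (23) at fixed n_cool/N_t^2 is an ordinary linear fit in 1/N_t^2, with the intercept giving the continuum value. A minimal sketch with made-up numbers (the paper's actual fits are correlated and weighted; this only shows the structure):

```python
import numpy as np

# linear fit in 1/Nt^2, Eq. (23): G_L = G_cont + c / Nt^2
Nt = np.array([12, 14, 16, 20])          # the four explored temporal extents
x = 1.0 / Nt ** 2
G_cont_true, c_true = -0.35, 4.0         # made-up values, illustration only
G_L = G_cont_true + c_true * x           # noiseless synthetic data
slope_c, intercept = np.polyfit(x, G_L, 1)
print(intercept)                          # recovers the continuum value, ~ -0.35
```

On noiseless linear data the intercept reproduces the input continuum value to machine precision; with real data one would propagate the covariance of the four points through the fit.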
Once the correlator is extrapolated towards the continuum limit, there is a residual dependence on the smoothing radius r_s. In Ref. [8] it was shown, using the gradient-flow formalism, that the dependence of the continuum-extrapolated correlator is linear in the flow time τ_flow ∝ r_s^2. Given that the linear relation τ_flow/a^2 = n_cool/3 [36] holds for the Wilson action in the pure SU(3) gauge theory, we thus expect to observe a linear dependence of our continuum-extrapolated correlator on n_cool/N_t^2.^2 Therefore, our final double-extrapolated correlator G(tT)/T^5 is obtained from a linear fit in n_cool/N_t^2 according to the fit function

\frac{G\left(tT, n_{\rm cool}/N_t^2\right)}{T^5} = \frac{G(tT)}{T^5} + \tilde{c}(tT) \, \frac{n_{\rm cool}}{N_t^2},   (24)

where \tilde{c} is a constant factor depending on the value of the time separation tT. When performing this zero-cooling extrapolation of G(tT, n_cool/N_t^2), we fixed the fit range following these prescriptions. For the upper bound, we chose n_cool^(max) in order to ensure that r_s T < tT, i.e., cf. Eq. (22),

\frac{n_{\rm cool}^{\rm (max)}}{N_t^2} \lesssim \frac{3}{8} (tT)^2.   (25)

For our largest time separation tT = 0.5, we could extend our linear fit region up to n_cool/N_t^2 ≃ 0.090, corresponding respectively to n_cool ≃ 13, 18, 24, 37 for N_t = 12, 14, 16, 20. For the lower bound, we chose n_cool^(min) in order to ensure that the topological susceptibility^3 a^4 \chi = \langle Q^2 \rangle / (N_s^3 N_t) has reached a plateau (as a function of n_cool/N_t^2) for all the explored values of β, cf. Fig. 3. In our case, it turns out that n_cool/N_t^2 = 0.012 is a reasonable lower bound, corresponding respectively to n_cool ≃ 1, 2, 3, 4 for N_t = 12, 14, 16, 20.

Footnote 2: See also Refs. [46, 47], where a linear behavior in n_cool is observed in 2d CP^{N-1} models for, respectively, the continuum limit at fixed smoothing radius in physical units of the topological susceptibility χ and of the topological susceptibility slope χ'.
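The fit-range bound of Eq. (25) and the zero-cooling extrapolation of Eq. (24) can be sketched as follows. The bound reproduces the n_cool values quoted above for tT = 0.5; the correlator values in the second part are made up, purely to show the fit structure.

```python
import numpy as np

def ncool_max(tT, Nt):
    # upper bound of Eq. (25): n_cool/Nt^2 <~ (3/8)(tT)^2, from r_s T < tT
    return int(np.floor(3.0 / 8.0 * (tT * Nt) ** 2))

print([ncool_max(0.5, Nt) for Nt in (12, 14, 16, 20)])   # [13, 18, 24, 37]

# zero-cooling extrapolation, Eq. (24): linear fit in n_cool/Nt^2, keep intercept
x = np.array([0.02, 0.04, 0.06, 0.08])    # n_cool/Nt^2 values inside the fit range
G_vals = -0.30 + 1.5 * x                   # made-up linear data, illustration only
slope, intercept = np.polyfit(x, G_vals, 1)
print(intercept)                           # recovers the zero-cooling value, ~ -0.30
```

As the text explains, for small tT the allowed window [0.012, (3/8)(tT)^2] in n_cool/N_t^2 closes, which is exactly why the zero-cooling extrapolation fails below tT ≈ 0.2.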
These prescriptions were chosen to ensure that we did enough cooling to correctly identify the topological charge of all lattice configurations, while at the same time not doing so much cooling that the sources in the correlator overlap with each other. However, a drawback of this procedure is that, as tT approaches 0, the fit range becomes narrower and narrower, eventually closing. As a matter of fact, for time separations tT ≤ 0.2 we could not perform a reliable zero-cooling extrapolation. Therefore, we could only compute the double-extrapolated correlator for tT > 0.2. In Fig. 4 we show examples of the zero-cooling extrapolation for two values of tT, while in Fig. 5 we show our complete double-extrapolated correlator G(tT)/T^5. Our final correlator turns out to be negative in all cases, as expected, and in overall good agreement with the double-extrapolated correlator obtained at the same temperature in Ref. [6], where the gradient flow was used as smoothing method to define the lattice topological charge density. We can now consider the determination of the sphaleron rate. Applying the inversion method outlined in Sec. 2 C to our double-extrapolated correlator, we find (see Fig. 6):

\frac{\Gamma_{\rm Sphal}}{T^4} = 0.079(25), \qquad T \simeq 1.24 \, T_c.   (26)

Our result is compatible, within errors, with that of Ref. [6] at the same T, Γ_Sphal/T^4 = 0.12(3), obtained applying the standard Backus-Gilbert method.

B. Double extrapolation of the sphaleron rate

In this section we follow a different strategy to compute the sphaleron rate: namely, we extract Γ_Sphal,L from the correlators G_L(tT) obtained at finite lattice spacing as a function of n_cool, using the same inversion method of Sec. 2 C, with the aim of postponing the double extrapolation so that it is performed directly on the rate itself. A first bonus of this approach is that no time interpolation of the correlators is needed in the double-extrapolation procedure. In Fig.
7 we show examples of the results obtained with the modified Backus-Gilbert method for all available values of N_t and for approximately the same value of n_cool/N_t^2, while in Fig. 8 we collect our final results for the rate at finite lattice spacing as a function of n_cool. On this basis, we can now perform the continuum limit at fixed smoothing radius (r_s T)^2 ∝ n_cool/N_t^2 according to the fit function

\frac{\Gamma_{\rm Sphal,L}}{T^4}\left(N_t, \frac{n_{\rm cool}}{N_t^2}\right) = \frac{\Gamma_{\rm Sphal}}{T^4}\left(\frac{n_{\rm cool}}{N_t^2}\right) + k\left(\frac{n_{\rm cool}}{N_t^2}\right) \frac{1}{N_t^2},   (27)

where k is a constant factor depending on the value of n_cool/N_t^2. Also in this case, in order to keep n_cool/N_t^2 fixed, we performed a cubic spline interpolation of our results for Γ_Sphal,L/T^4 as a function of n_cool. Examples of continuum extrapolations of Γ_Sphal,L for a few values of n_cool/N_t^2 are shown in Fig. 9. Interestingly enough, unlike what has been observed for the topological charge density correlator, we observe a very mild dependence of the sphaleron rate on the lattice spacing. As a matter of fact, it is possible to obtain an excellent best fit of our data with a linear function in 1/N_t^2 using all available values of N_t, and results obtained by restricting the fit to our three finest lattice spacings turn out to be in excellent agreement within the errors. In principle, one could expect to observe a residual dependence of the continuum-extrapolated values Γ_Sphal(n_cool/N_t^2) on the smoothing radius. However, we observe that Γ_Sphal approaches a plateau for small enough values of n_cool/N_t^2, see Fig. 10. This is reasonable, since smoothing only modifies the high-frequency components of the spectral density. Thus, since Γ_Sphal is related to the zero-frequency limit of ρ(ω), it is natural for this quantity to become insensitive to the value of the smoothing radius, provided the latter is sufficiently small. Therefore, in this case we do not perform any zero-cooling extrapolation, and simply take the shaded confidence band depicted in Fig.
10 as our final result for the rate:

\frac{\Gamma_{\rm Sphal}}{T^4} = 0.060(15), \qquad T \simeq 1.24 \, T_c.   (28)

Before commenting further on our final number for the sphaleron rate, let us discuss the n_cool interpolation. From Fig. 8, we observe that the n_cool dependence of Γ_Sphal,L is pretty mild, in particular for smaller values of n_cool; thus, it is reasonable to believe that the rate varies only slightly under the n_cool interpolation. To check this assumption, we have also performed our continuum extrapolation at fixed smoothing radius in the following way: given a value of n_cool for the lattice with the smallest temporal extent N_t = 12, the corresponding (integer) value n'_cool for another temporal extent N_t is given by n'_cool = round[n_cool (N_t/12)^2] (cf. Eq. (22)), where round[x] denotes the closest integer to x. Results obtained with this approximation are shown in Fig. 10 as square points. As can be appreciated, no difference is observed in the final continuum extrapolation for the sphaleron rate compared to the one obtained by interpolating in n_cool (round points). Therefore, we can conclude that, although keeping n_cool/N_t^2 fixed among lattices with different temporal extents N_t is in principle a better approximation, in the end not even the n_cool interpolation is needed with this approach. Let us now discuss the final result for the rate obtained with the approach described in this section and reported in Eq. (28). This result turns out to be compatible with the one found from the inversion of the double-extrapolated correlator illustrated in Sec. 3 A, Γ_Sphal/T^4 = 0.079(25), but has a smaller relative uncertainty. Moreover, this result points to a smaller central value for the sphaleron rate compared to the one reported in Ref. [6] at the same temperature, Γ_Sphal/T^4 = 0.12(3), even if still compatible within less than two standard deviations. We can also compare our results with the recent determinations of Ref.
[9], where a completely different strategy to compute Γ_Sphal from quenched lattice simulations was pursued. The smallest temperature explored in that work is T ≃ 1.3 T_c, which is close to, but not exactly equal to, the one studied here. However, our result turns out to be in perfect agreement with the one reported in that paper at that temperature: Γ_Sphal/T^4 = 0.061(2).

CONCLUSIONS

In this work we have computed the sphaleron rate Γ_Sphal in quenched QCD at a temperature T ≃ 1.24 T_c ≃ 357 MeV from lattice Monte Carlo simulations, using the modified Backus-Gilbert method proposed by the Rome group to invert the integral relation between the Euclidean topological charge density time-correlator and the spectral density, whose zero-frequency limit is directly related to Γ_Sphal.

FIG. 10: Dependence of the continuum-extrapolated sphaleron rate on the smoothing radius (r_s T)^2 ∝ n_cool/N_t^2. The full round point and the shaded area represent our final result for Γ_Sphal/T^4. The full triangle and starred points represent, respectively, the rate obtained from the inversion of the double-extrapolated correlator, and the one computed in Ref. [6] at the same temperature, but adopting the standard Backus-Gilbert method and using the gradient flow as smoothing method.

In this paper we have followed two strategies. The first one is similar to what has been done in the past: we performed a double extrapolation of the topological charge density correlator (continuum limit at fixed smoothing radius in physical units, followed by the zero-smoothing limit) and then extracted the rate from the inversion of the double-extrapolated correlator. The second method, instead, consists in extracting the rate directly from the inversion of finite-lattice-spacing correlators, so that the double extrapolation is performed directly on the rate itself. The two methods give consistent results, but we find that the second is preferable for various reasons.
First, it eliminates both the need to interpolate in tT (as the rate is extracted from finite-lattice-spacing correlators) and the need to interpolate in n_cool (as the rate depends very mildly on the smoothing radius, so that no difference is observed upon interpolating our results for the rate in n_cool rather than just taking the result for the integer n_cool closest to the reference smoothing radius). Second, we find that the rate is affected by smaller lattice artifacts, and that it is practically insensitive to the value of the smoothing radius for small enough values of n_cool. In the end, the second strategy thus turns out to be much simpler and computationally cheaper, and finally yields a smaller error compared to the first one. As we have already discussed above, the reason for this advantage is that the sphaleron rate is naturally sensitive only to the low-frequency part of the two-point function of the topological charge density, so that, as long as the smoothing procedure affects only short-distance physics, it is expected to affect Γ_Sphal only mildly. We find our final result for the rate, quoted in Eq. (28), to be smaller than, but compatible within the errors with, the one reported in Ref. [6] for the same temperature, which was obtained by inverting the double-extrapolated correlator, but using the gradient flow as smoothing method and the standard Backus-Gilbert inversion technique to compute Γ_Sphal. We stress, however, that the possible (mild) tension is likely not related to the different smoothing procedure, since we also find that our double-extrapolated correlator is in perfect agreement with the one computed in Ref. [6] at the same T. Finally, perfect compatibility is found with the result obtained for the sphaleron rate at T ≃ 1.3 T_c in Ref. [9], where a completely different method to extract the rate was pursued (based on the computation of the susceptibility of the so-called "sphaleron topological charge").
Our present results can be considered as a basis for a future application of the new strategy proposed in this paper to the computation of the sphaleron rate in full QCD at finite temperature, this quantity being of great interest both for studying the properties of the quark-gluon plasma and for obtaining intriguing predictions about axion phenomenology.

FIG. 1: Top: determinations of the correlator G_L(tT) for n_cool/N_t^2 ≃ 0.069 for all explored values of the lattice spacing. Bottom: comparison of the correlators obtained at β = 6.440 for n_cool/N_t^2 ≃ 0.069 on a 36^3 × 12 and on a 48^3 × 12 lattice. Lines connecting the points have been plotted just to guide the eye.

FIG. 2: Examples of the continuum extrapolation at fixed n_cool/N_t^2 of the correlator for two different values of tT.

FIG. 3: Behavior of the topological susceptibility as a function of the number of cooling steps for the four explored lattice spacings. The dashed line denotes the minimum value of n_cool/N_t^2 employed for the zero-cooling extrapolation.

FIG. 4: Examples of the zero-cooling extrapolation of the correlator G(tT, n_cool/N_t^2) for two different values of tT.

FIG. 5: Comparison of the results for the double-extrapolated correlator G(tT)/T^5 obtained in this work with those reported in Ref. [6] at the same temperature and using the gradient flow as smoothing method.

FIG. 6: Results for the rate Γ_Sphal as a function of d[g_t], defined in Eq. (21), extracted from the double-extrapolated correlator. The square and diamond points represent, respectively, our choices for λ_1 and λ_2; see the discussion below Eq. (21) for more details. The full point and the shaded area represent our final result for Γ_Sphal.

FIG. 7: Results for the rate Γ_Sphal,L as a function of d[g_t], defined in Eq. (21), extracted from the finite-lattice-spacing correlators and for, respectively, n_cool = 10, 14, 18, 28 for N_t = 12, 14, 16, 20.
Square and diamond points represent, respectively, our choices for λ_1 and λ_2; see the discussion below Eq. (21) for more details. The full points and the shaded areas represent our final results for Γ_Sphal,L.

FIG. 8: Results for Γ_Sphal,L/T^4 as a function of n_cool for all the explored lattice spacings.

FIG. 9: Continuum extrapolation of the sphaleron rate at fixed smoothing radius (r_s T)^2 ∝ n_cool/N_t^2 for a few values of n_cool/N_t^2.

TABLE I: Summary of simulation parameters. Scale setting was done according to the determination of a(β)/r_0 of Ref. [13] (see text for the reference values employed); the total statistics is expressed in thousands (k) of measures, collected every 20 MC updating steps.

N_s  N_t  β      a [fm]   L [fm]  T [MeV]  T/T_c  Stat.
36   12   6.440  0.0460   1.66    357      1.24   80k
42   14   6.559  0.0395   1.66    357      1.24   10k
48   16   6.665  0.0345   1.66    357      1.24   16k
60   20   6.836  0.0276   1.66    357      1.24    5k

Footnote 1: When σ/T = 2, our target function exactly matches the basis function K'_t(ω) in Eq. (13) for t = 1/(2T), which is the time separation expected to give the dominant contribution to ρ(ω̄ = 0). Thus, if σ/T = 2, one would only need to determine one g_t coefficient. Since such a choice would be too simplistic, we restricted ourselves to the case σ/T < 2.

Footnote 3: The topological susceptibility was computed using the so-called α-rounded lattice charge, i.e., defining Q = round[α Q_L(n_cool)], where Q_L(n_cool) is the definition in Eq. (8) computed after n_cool cooling steps and α is found by minimizing the mean squared difference between α Q_L(n_cool) and round[α Q_L(n_cool)] [48, 49].

Acknowledgments. It is a pleasure to thank Giuseppe Gagliardi and Francesco Sanfilippo for useful discussions. The work of Claudio Bonanno is supported by the Spanish Research Agency (Agencia Estatal de Investigación) through the grant IFT Centro de Excelencia Severo Ochoa CEX2020-001007-S and, partially,

[1] K. Fukushima, D. E. Kharzeev, and H. J. Warringa, Phys. Rev. D 78, 074033 (2008), arXiv:0808.3382.
[2] D. E. Kharzeev, Prog. Part. Nucl. Phys. 75, 133 (2014), arXiv:1312.3348.
N. Astrakhantsev, V. V. Braguta, M. D'Elia, A. Y. Kotov, A. A. Nikolaev, and F. Sanfilippo, Phys. Rev. D 102, 054516 (2020), 1910.08516.
G. Almirante, N. Astrakhantsev, V. Braguta, M. D'Elia, L. Maio, M. Naviglio, F. Sanfilippo, and A. Trunin, PoS LATTICE2022, 155 (2023).
A. Notari, F. Rompineve, and G. Villadoro (2022), 2211.03799.
A. Y. Kotov, JETP Letters 108, 352 (2018).
A. Y. Kotov, PoS Confinement2018, 147 (2019).
L. Altenkort, A. M. Eller, O. Kaczmarek, L. Mazur, G. D. Moore, and H.-T. Shu, Phys. Rev. D 103, 114513 (2021), 2012.08279.
M. B. Mancha and G. D. Moore (2022), 2210.05507.
H. B. Meyer, Eur. Phys. J. A 47, 86 (2011), 1104.3708.
G. Backus and F. Gilbert, Geophysical Journal International 16, 169 (1968).
M. Hansen, A. Lupo, and N. Tantalo, Phys. Rev. D 99, 094508 (2019), 1903.06476.
M. Guagnelli, R. Sommer, and H. Wittig (ALPHA), Nucl. Phys. B 535, 389 (1998), hep-lat/9806005.
R. Sommer, PoS LATTICE2013, 015 (2014), 1401.3270.
S. Borsanyi et al., JHEP 09, 010 (2012), 1203.4469.
S. Borsanyi, K. R., Z. Fodor, D. A. Godzieba, P. Parotto, and D. Sexty, Phys. Rev. D 105, 074513 (2022), 2202.05234.
M. Creutz, Phys. Rev. D 36, 515 (1987).
M. Creutz, Phys. Rev. D 21, 2308 (1980).
A. D. Kennedy and B. J. Pendleton, Phys. Lett. B 156, 393 (1985).
N. Cabibbo and E. Marinari, Phys. Lett. B 119, 387 (1982).
M. Campostrini, A. Di Giacomo, and H. Panagopoulos, Phys. Lett. B 212, 206 (1988).
E. Vicari and H. Panagopoulos, Phys. Rept. 470, 93 (2009), 0803.1593.
P. Di Vecchia, K. Fabricius, G. C. Rossi, and G. Veneziano, Nucl. Phys. B 192, 392 (1981).
M. D'Elia, Nucl. Phys. B 661, 139 (2003), hep-lat/0302007.
B. Berg, Phys. Lett. B 104, 475 (1981).
Y. Iwasaki and T. Yoshie, Phys. Lett. B 131, 159 (1983).
S. Itoh, Y. Iwasaki, and T. Yoshie, Phys. Lett. B 147, 141 (1984).
M. Teper, Phys. Lett. B 162, 357 (1985).
E.-M. Ilgenfritz, M. Laursen, G. Schierholz, M. Müller-Preussker, and H. Schiller, Nucl. Phys. B 268, 693 (1986).
M. Campostrini, A. Di Giacomo, H. Panagopoulos, and E. Vicari, Nucl. Phys. B 329, 683 (1990).
B. Alles, L. Cosmai, M. D'Elia, and A. Papa, Phys. Rev. D 62, 094507 (2000), hep-lat/0001027.
M. Albanese et al. (APE), Phys. Lett. B 192, 163 (1987).
C. Morningstar and M. J. Peardon, Phys. Rev. D 69, 054501 (2004), hep-lat/0311018.
M. Lüscher, Commun. Math. Phys. 293, 899 (2010), 0907.5491.
M. Lüscher, JHEP 08, 071 (2010), [Erratum: JHEP 03, 092 (2014)], 1006.4518.
C. Bonati and M. D'Elia, Phys. Rev. D 89, 105005 (2014), 1401.2441.
C. Alexandrou, A. Athenodorou, and K. Jansen, Phys. Rev. D 92, 125014 (2015), 1509.04259.
B. Alles, M. D'Elia, A. Di Giacomo, and R. Kirchner, Nucl. Phys. B Proc. Suppl. 63, 510 (1998), hep-lat/9709074.
E. Vicari, Nucl. Phys. B 554, 301 (1999), hep-lat/9901008.
I. Horvath, A. Alexandru, J. B. Zhang, Y. Chen, S. J. Dong, T. Draper, K. F. Liu, N. Mathur, S. Tamhankar, and H. B. Thacker, Phys. Lett. B 617, 49 (2005), hep-lat/0504005.
A. Chowdhury, A. K. De, A. Harindranath, J. Maiti, and S. Mondal, JHEP 11, 029 (2012), 1208.4235.
H. Fukaya, S. Aoki, G. Cossu, S. Hashimoto, T. Kaneko, and J. Noaki (JLQCD), Phys. Rev. D 92, 111501 (2015), 1509.00944.
L. Mazur, L. Altenkort, O. Kaczmarek, and H.-T. Shu, PoS LATTICE2019, 219 (2020), 2001.11967.
C. Alexandrou et al. (2022), 2212.08467.
M. Laine, A. Vuorinen, and Y. Zhu, JHEP 09, 084 (2011), 1108.1259.
C. Bonanno, M. D'Elia, and F. Margari, Phys. Rev. D 107, 014515 (2023), 2208.00185.
C. Bonanno, Phys. Rev. D 107, 014514 (2023), 2212.02330.
L. Del Debbio, H. Panagopoulos, and E. Vicari, JHEP 08, 044 (2002), hep-th/0204125.
C. Bonati, M. D'Elia, and A. Scapellato, Phys. Rev. D 93, 025028 (2016), 1512.01544.
[]
[ "Optical and infrared photometry of new very low-mass stars and brown dwarfs in the σ Orionis cluster" ]
[ "V J S Béjar \nInstituto de Astrofísica de Canarias\nE-38205 La Laguna, Tenerife, Spain\n", "M R Zapatero Osorio \nLAEFF-INTA\nP.O. Box 50727, E-28080 Madrid, Spain\n", "R Rebolo \nInstituto de Astrofísica de Canarias\nE-38205 La Laguna, Tenerife, Spain\n\nConsejo Superior de Investigaciones Científicas\nMadrid, Spain\n" ]
[ "Instituto de Astrofísica de Canarias\nE-38205 La Laguna, Tenerife, Spain", "LAEFF-INTA\nP.O. Box 50727, E-28080 Madrid, Spain", "Instituto de Astrofísica de Canarias\nE-38205 La Laguna, Tenerife, Spain", "Consejo Superior de Investigaciones Científicas\nMadrid, Spain" ]
[ "Astron. Nachr./AN" ]
We present an RI photometric survey covering an area of 430 arcmin^2 around the multiple star σ Orionis. The observations were conducted with the 0.8 m IAC-80 Telescope at the Teide Observatory. The survey limiting R and I magnitudes are 22.5 and 21, and completeness magnitudes 21 and 20, respectively. We have selected 53 candidates from the I vs. R-I colour-magnitude diagram (I = 14-20) that follow the previously known photometric sequence of the cluster. Adopting an age of 2-4 Myr for the cluster, we find that these objects span a mass range from 0.35 M⊙ to 0.015 M⊙. We have performed J-band photometry of 52 candidates and Ks photometry for 12 of them, with the result that 50 follow the expected infrared sequence for the cluster, thus confirming with great confidence that the majority of the candidates are bona fide members. JHKs photometry from the Two Micron All Sky Survey (2MASS) is available for 50 of the candidates and is in good agreement with our data. Out of 48 candidates, which have photometric accuracies better than 0.1 mag in all bands, only three appear to show near-infrared excesses.
10.1002/asna.200310247
[ "https://export.arxiv.org/pdf/astro-ph/0404125v1.pdf" ]
119,382,487
astro-ph/0404125
3a0494cf0d9f685e48ddda0b7f37812a5e4cd6f7
Optical and infrared photometry of new very low-mass stars and brown dwarfs in the σ Orionis cluster

V. J. S. Béjar (Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain), M. R. Zapatero Osorio (LAEFF-INTA, P.O. Box 50727, E-28080 Madrid, Spain), R. Rebolo (Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain; Consejo Superior de Investigaciones Científicas, Madrid, Spain)

Astron. Nachr./AN (200X) X, XXX-XXX. Received date will be inserted by the editor; accepted date will be inserted by the editor. arXiv:astro-ph/0404125v1, 6 Apr 2004.

Key words: Stars: low-mass, brown dwarfs - Stars: luminosity function, mass function - Stars: colour-magnitude diagrams (H-R diagrams) - Stars: formation - Stars: open clusters and associations - Stars: individual (σ Orionis)

We present an RI photometric survey covering an area of 430 arcmin^2 around the multiple star σ Orionis. The observations were conducted with the 0.8 m IAC-80 Telescope at the Teide Observatory. The survey limiting R and I magnitudes are 22.5 and 21, and completeness magnitudes 21 and 20, respectively. We have selected 53 candidates from the I vs. R-I colour-magnitude diagram (I = 14-20) that follow the previously known photometric sequence of the cluster. Adopting an age of 2-4 Myr for the cluster, we find that these objects span a mass range from 0.35 M⊙ to 0.015 M⊙. We have performed J-band photometry of 52 candidates and Ks photometry for 12 of them, with the result that 50 follow the expected infrared sequence for the cluster, thus confirming with great confidence that the majority of the candidates are bona fide members. JHKs photometry from the Two Micron All Sky Survey (2MASS) is available for 50 of the candidates and is in good agreement with our data. Out of 48 candidates, which have photometric accuracies better than 0.1 mag in all bands, only three appear to show near-infrared excesses.
Correspondence to: [email protected]

Introduction

The Orion complex, because of its relative closeness and youth, is one of the most suitable sites for understanding low-mass star formation processes. Recently, ROSAT pointed observations within this complex have led to the discovery of the very young stellar cluster σ Orionis, around the multiple star of the same name (Walter et al. 1997; Wolk & Walter 1998). Follow-up photometric and spectroscopic studies have revealed a sequence of objects in the colour-magnitude diagram that extends well below the substellar limit (Béjar et al. 1999, hereafter BZOR; Béjar et al. 2001, hereafter BMZO). Studies of the depletion of lithium in the atmospheres of K6-M8.5 type low-mass members of the cluster impose an upper limit of 8 Myr on the age and suggest a most likely cluster age in the interval 2-4 Myr (Zapatero Osorio et al. 2002). Hipparcos provides a distance modulus of m − M = 7.7 ± 0.7 (Perryman et al. 1997) for the central star, σ Orionis. This star is affected by a low extinction value of E(B − V) = 0.05 (Lee 1968), and the associated cluster also seems to exhibit very little reddening (see BMZO and Oliveira et al. 2002). These combined characteristics of youth, proximity and low extinction make σ Orionis one of the most interesting clusters for studying young substellar objects and the substellar mass function.

In this paper we present an extension of the RI survey conducted by BZOR, with the aim of detecting new low-mass star and brown dwarf candidates in the σ Orionis cluster. We also present near-infrared data for these objects. Details of the observations are given in Section 2. In Sections 3.1 and 3.2 we explain the selection of the candidates and discuss their membership and the presence of near-infrared excesses. Our conclusions are given in Section 4.

Observations

Optical Photometry

We obtained RI images with the IAC-80 Telescope, at the Teide Observatory, on 1998 January 22 and 23.
The camera consists of a 1024 × 1024 Thomson CCD detector, providing a pixel projection of 0.4325 arcsec and a field of view of 7.4 × 7.4 arcmin^2 in each exposure. We observed eight different fields in the two filters, covering a total area of 430 arcmin^2. Exposure times were 1800 s in each filter and field. Table 1 lists the central coordinates of the eight fields. The location of the surveyed region is shown in Figure 1, where some bright stars are indicated. The star σ Orionis, shown at the centre, is also included. Raw frames were reduced within the IRAF environment, using the CCDRED package. Images were bias-subtracted and flatfield-corrected. We combined sky flats taken at dusk and dawn to obtain the flatfields. The photometric analysis was performed using routines within DAOPHOT, including the selection of objects with a stellar point spread function (PSF) using the DAOFIND task (extended objects were mostly avoided) and aperture and PSF photometry. The nights were photometric, and instrumental magnitudes were transformed into the Cousins RI system using observations of standard stars from Landolt (1992) made every night. Average seeing ranged from 1.5 to 2 arcsec. The survey completeness and limiting magnitudes were R = 21, I = 20 and R = 22.5, I = 21, respectively. We adopted as the completeness magnitude the value at which the histogram of detections as a function of magnitude reaches its maximum (∼10-σ detection), and as the limiting magnitude the value at which less than 50% of the objects at the maximum of the histogram are detected (∼3-σ detection limit). We have constructed I vs. R − I colour-magnitude diagrams for each field in order to identify cool cluster members. Figure 2 shows the combination of these diagrams for all the fields.
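The histogram criteria just described can be sketched in a few lines of Python. The function name, the bin width and the bin-centre convention are our own illustrative choices, not taken from the paper:

```python
def completeness_and_limit(mags, bin_width=0.5):
    """Estimate completeness and limiting magnitudes from a detection list."""
    # Histogram the detections in magnitude bins.
    lo = min(mags)
    nbins = int((max(mags) - lo) / bin_width) + 1
    counts = [0] * nbins
    for m in mags:
        counts[min(int((m - lo) / bin_width), nbins - 1)] += 1
    # Completeness magnitude: where the detection histogram peaks
    # (~10-sigma detections in the paper's terminology).
    peak = max(range(nbins), key=counts.__getitem__)
    completeness = lo + (peak + 0.5) * bin_width
    # Limiting magnitude: first bin past the peak where counts drop
    # below 50% of the peak value (~3-sigma detection limit).
    limit = None
    for i in range(peak + 1, nbins):
        if counts[i] < 0.5 * counts[peak]:
            limit = lo + (i + 0.5) * bin_width
            break
    return completeness, limit
```

On a synthetic detection list whose counts rise to a peak and then fall off, the function returns the peak bin centre as the completeness magnitude and the next half-depleted bin as the limiting magnitude.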
We consider as candidate members objects redder and brighter than the field stars, following the cluster sequence previously defined in BZOR. The lower envelope of the photometric sequence delineated by previously spectroscopically confirmed members is used to separate our candidates from interlopers and background sources. We have selected 53 candidate members, five of which are known from previous surveys (BZOR). Their magnitudes are in the range I = 14-20 mag, which, according to recent theoretical models, corresponds to masses in the range 0.35-0.015 M⊙ (D'Antona & Mazzitelli 1997; Burrows et al. 1997; Baraffe et al. 1998; Chabrier et al. 2000). The substellar mass limit at the age and distance of the σ Orionis cluster is located at I ∼ 16 mag. Table 2 contains the list of selected candidates: around 31 are stellar and 22 substellar. Photometric data and coordinates are also included. Error bars account for the IRAF magnitude error and the uncertainty of the photometric calibration, which is typically ±0.04-0.06 mag. Astrometry was carried out using the USNO-SA2.0 catalogue (Monet et al. 1996); we estimate having achieved a precision better than 2 arcsec. Finder charts (3.7 × 3.7 arcmin^2) are provided in Figures A1 and A2.

Infrared Photometry

We obtained J- and Ks-band point observations of the selected candidates with the 1.52 m Carlos Sánchez Telescope (TCS), at the Teide Observatory, on 1998 September 18, December 17, 1999 January 23, 24 and 2000 January 27. The infrared camera (CAIN) is equipped with an HgCdTe 256 × 256 detector (Nicmos 3), which, with its wide optics configuration, provides a pixel projection of 1.00 arcsec, covering an area of 4.3 × 4.3 arcmin^2 in each exposure. Total exposure times ranged from 60 to 720 s depending on the filter and the expected magnitude of the candidates. Raw data were processed within the IRAF environment. Each frame consisted of 9-10 exposures obtained using a dithering pattern on the detector.
Final images were obtained by combining the individual images, properly aligned and sky-subtracted. Aperture photometry was performed for each object using the PHOT routine within DAOPHOT. A typical radial aperture of 4-5 times the FWHM was adopted. Weather conditions were photometric. Average seeing ranged from 1.6 to 2.2 arcsec. In order to transform instrumental magnitudes into the UKIRT system, each night we observed several field standards (Hunt et al. 1998) and the Pleiades brown dwarf Calar 3 (Zapatero Osorio, Martín & Rebolo 1997a). The photometry of the candidates is shown in Table 2. Error bars include the IRAF magnitude error and the uncertainty of the photometric calibration, which is typically ±0.05 mag. For 50 of the candidates, all except three, JHKs photometry is available in the Two Micron All Sky Survey (2MASS) Point Source Catalogue, Third Incremental Release (Cutri et al. 2003). These data are also listed in Table 2. We estimate that the average difference between the 2MASS and TCS photometry in the J band is J(2MASS) − J(TCS) = 0.012 ± 0.017 (where 0.017 is the error of the mean; the standard deviation of the differences is 0.116). For the ten candidates for which we have Ks-band photometry in common, the average difference is Ks(2MASS) − Ks(TCS) = 0.080 ± 0.040 (where 0.040 is the error of the mean; the standard deviation of the differences is 0.115). For eight of the objects the differences are larger than 0.2 mag in either of the two filters. One is an unresolved binary and another has large error bars in the 2MASS photometry. Small offsets between the two data sets could be caused by differences in the filter systems, in the shape and strength of water absorption bands above the observatories, or by intrinsic variability in the atmospheres of some of the targets.
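The quoted numbers (mean offset, error of the mean, and scatter of the differences) follow from the standard estimators. A minimal sketch, with the function name and sample magnitudes being our own illustrative choices:

```python
import math

def offset_stats(mags_a, mags_b):
    """Mean offset, scatter and error of the mean between two photometry sets."""
    # Differences between two measurements of the same objects.
    d = [a - b for a, b in zip(mags_a, mags_b)]
    n = len(d)
    mean = sum(d) / n  # average offset between the two data sets
    # Sample standard deviation of the differences (the "scatter").
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
    # Error of the mean = scatter / sqrt(N).
    return mean, std, std / math.sqrt(n)
```

As a consistency check on the J-band numbers quoted above, a scatter of 0.116 over 50 objects gives an error of the mean of 0.116/√50 ≈ 0.016, consistent with the quoted ±0.017.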
In fact, one of the objects with a difference larger than 0.2 mag, S Ori J053825.4-024241, is known to present strong photometric variability (Caballero et al. 2004). Figure 3 shows the I vs. I − J diagrams for the selected candidates with our photometry (top panel) together with that obtained from the 2MASS catalogue (bottom panel). Figure 4 shows an I vs. I − Ks diagram for those objects with available Ks photometry.

Discussion

Selection of Candidates and Cluster Membership

Colour-magnitude diagrams based on the optical filters R, I and Z have proved to be a good technique for distinguishing true low-mass members from background objects in young nearby clusters (Prosser 1994; Zapatero Osorio et al. 1999; BZOR; Bouvier et al. 1998). We have selected our cluster candidates using optical colour-magnitude diagrams in the R and I bands. The most important sources of contamination in our survey are field M dwarfs. Galaxies are mostly resolved within our completeness magnitude and, given the galactic latitude of the σ Orionis cluster (b = −17.3 deg), giant stars are not expected to contribute in a significant number (< 5%) in comparison with main-sequence dwarf stars. According to the density of M field dwarfs obtained by , we expect that our proposed photometric sequence for the cluster is not contaminated by field dwarfs of spectral type earlier than M4 and by no more than three of later spectral type within the completeness of our survey. Contamination becomes important in the fainter part of our diagram, where, in addition to the larger error bars, foreground objects can be located in the cluster sequence reddened by the interstellar medium. To discriminate between bona fide cluster members and field objects, either spectroscopic data or infrared photometry is required. The advantage of the latter is that it can be obtained in a shorter integration time on relatively small telescopes.
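The selection criterion described above — keep objects lying redward of the lower envelope of the member sequence in the I vs. R − I plane — can be sketched as follows. The envelope nodes and the linear interpolation between them are illustrative assumptions, not the paper's actual envelope:

```python
def select_candidates(objects, envelope):
    """Keep (I, R-I) objects at least as red as a lower-envelope curve."""
    # objects: list of (I, R-I) pairs; envelope: (I, R-I) nodes of the
    # lower envelope of the cluster sequence, interpolated linearly.
    pts = sorted(envelope)

    def envelope_colour(i_mag):
        # Clamp outside the node range, interpolate linearly inside it.
        if i_mag <= pts[0][0]:
            return pts[0][1]
        for (i0, c0), (i1, c1) in zip(pts, pts[1:]):
            if i_mag <= i1:
                return c0 + (c1 - c0) * (i_mag - i0) / (i1 - i0)
        return pts[-1][1]

    # Keep objects at least as red as the envelope at their I magnitude.
    return [(i, ri) for i, ri in objects if ri >= envelope_colour(i)]
```

With envelope nodes such as [(14.0, 1.3), (17.0, 1.9), (20.0, 2.3)], an object at I = 15 is kept only if its R − I colour is at least the interpolated envelope value of 1.5.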
The combination of optical and infrared data is a trustworthy technique for distinguishing bona fide cool cluster members (Zapatero Osorio et al. 1997a; Martín et al. 2000; BMZO). The membership of most of the low-mass stars and brown dwarfs (> 90%) identified using both optical and infrared photometric sequences in low-extinction young clusters like the Pleiades has later been confirmed by proper motions, radial velocity or the presence of lithium (Zapatero Osorio et al. 1997b; Moraux, Bouvier & Stauffer 2001). Hence, to confirm that our candidates selected by means of optical diagrams are not reddened field stars, we collected the point near-infrared observations described in Sect. 2.2. It can be seen from the I vs. I − J colour-magnitude diagrams in Figure 3 that 49 objects show redder colours and magnitudes brighter than the 10 Myr isochrone, which corresponds roughly to the lower envelope of the photometric sequence of previously confirmed members; two candidates with I ∼ 17.5 and I − J ∼ 1.8 lie very close to the 10-Myr isochrone. One of them is the strongly variable object S Ori J053825.4-024241, which shows a redder I − J colour according to the 2MASS photometry; the other candidate lies on the 10-Myr isochrone when using the 2MASS photometry (bottom panel). All these 51 objects are considered as very likely members. Only two candidates appear to have colours clearly bluer than the 10 Myr isochrone, and are hereafter considered to be probable non-cluster members. There are five candidates in common with the BZOR survey; their cluster membership is supported in both studies. In BZOR we obtained spectroscopy for ten candidates, of which nine were confirmed as cluster members.
These nine members follow the infrared photometric sequence of the cluster (BMZO), while the rejected candidate (S Ori 44) has bluer infrared colours. In Table 2 we list spectral types for 13 objects in the present survey. Spectroscopic data have been obtained from Béjar (2000), Zapatero Osorio et al. (2002) and Barrado y Navascués et al. (2003). These spectra show the presence of Hα emission in all cases and Li absorption in the eight of them observed at higher resolution. As a result, all the objects confirmed by spectroscopy have also been previously confirmed by infrared photometry, which argues in favour of the reliability of our selection criteria. For those objects with available spectroscopy, we have estimated their R − I colour excesses, E(R − I), according to the R − I colour expected for their spectral types, using the relations derived by Kirkpatrick & McCarthy (1994) and the R − I photometry obtained here. From Table 2 we can see that all 13 objects show a visual extinction lower than 1 (E(R − I) < 0.27, A_V < 1) and all except one show an extinction lower than 0.25 (E(R − I) < 0.07, A_V < 0.25). We have obtained Ks photometry for several objects in both surveys. Figure 4 shows the I vs. I − Ks diagram of the present paper's candidates. These data confirm our previous results in the J band. In conclusion, although we cannot say for individual sources that each candidate belongs to σ Orionis until we have confirmed their cool temperatures and youth spectroscopically, or measured their proper motions, we are confident that most of the candidates confirmed with infrared data are bona fide cluster members.

Near-Infrared Excesses and the Possible Presence of Discs

Using the available JHKs photometry in the 2MASS catalogue, we have constructed the H − Ks vs. J − Ks colour-colour diagram shown in Figure 5, where we also present Next Gen models (solid lines) from the Lyon group (Baraffe et al.
1998) displaced according to different extinctions, the field dwarf sequence (dashed line) from Bessell & Brett (1988) and Kirkpatrick & McCarthy (1994), and the Classical T Tauri (CTT) star loci (dash-dotted line) from Meyer et al. (1997). Only photometry with accuracy better than 0.1 mag is plotted in Figure 5. From this colour-colour diagram we can see that three out of 48 of our candidates show a near-infrared excess (S Ori J054001.9-022133, S Ori J053943.2-023243 and S Ori J053825.4-024241). The first of these is an M4 low-mass star with an R − I colour of 1.52 ± 0.06, consistent with its spectral type and with the presence of negligible interstellar extinction (see Table 2). This suggests that the infrared excess is caused by the presence of a disc. For the second, spectroscopy is unavailable and we cannot estimate its extinction, so we do not know whether the infrared excess is caused by the presence of a disc, interstellar extinction or a local small cloud within the cluster. The last is the strongly variable brown dwarf candidate, with an R − I colour of 1.80 ± 0.08 and no available spectroscopy. Its strong infrared excess cannot be explained by normal interstellar reddening and is most probably related to the presence of a disc. The fraction of objects found with near-infrared excesses is in good agreement with previous studies by Oliveira et al. (2002), who found excesses in only two out of 34 cluster members, and Barrado y Navascués et al. (2003), who found excesses in 5-9% of cluster members.

Conclusions

In this paper we present an RI survey of the σ Orionis cluster, covering an area of 430 arcmin^2. We have selected 53 candidates in the magnitude range 14 < I < 20 that follow the optical photometric sequence of the cluster, corresponding to masses in the range 0.35-0.015 M⊙. All but two of the candidates follow the cluster sequence in the infrared and are considered to be likely cluster members: around 31 are stars and 20 are brown dwarfs.
The available spectroscopy for some of these objects confirms them as bona fide members. Using near-infrared photometry from 2MASS, we conclude that only three of the 48 candidates (6%) show near-infrared excesses possibly related to the presence of discs.

Fig. 1: Location of the surveyed fields (open squares) around the star σ Orionis. Our candidates (from Table 2) are indicated by filled circles, while open circles denote field stars brighter than 13 mag. Relative brightness is indicated by symbol size. North is up and East is to the left.

Fig. 2: I vs. R − I colour-magnitude diagram in the σ Orionis cluster resulting from our survey. Filled circles denote our selected candidates listed in Table 2. Completeness and limiting magnitudes are indicated by a dashed and a solid line, respectively.

Fig. 3: Top panel: I vs. I − J colour-magnitude diagram of the selected candidates in our survey (filled symbols). Open circles denote previously known members taken from Wolk (1996). For comparison, the 1, 3 and 10 Myr Next-Gen theoretical isochrones (solid lines, from right to left) from the Lyon group (Baraffe et al. 1998), the 3 Myr dusty isochrone (dotted line) from the Lyon group (Chabrier et al. 2000) and the models (dashed line) from D'Antona & Mazzitelli (1998) are also indicated. Bottom panel: as above, but with the J-band photometry taken from 2MASS.

Fig. 4: I vs. I − Ks colour-magnitude diagram of the selected candidates. Filled symbols indicate those with 2MASS photometry and open circles with error bars those with TCS photometry. Open circles denote previously known members taken from Wolk (1996). For comparison, the same theoretical isochrones as in the previous figure are plotted.

Fig. 5: J − H vs. H − Ks colour-colour diagram of the selected candidates with available JHKs photometry of accuracy better than 0.1 mag from 2MASS. The 3 Myr Next Gen isochrone from the Lyon group (Baraffe et al. 1998), reddened by visual extinctions of A_V = 0, 1 and 2, is plotted as solid lines from bottom to top. The field dwarf sequence (dashed line) from Bessell & Brett (1988) and Kirkpatrick & McCarthy (1994) and the CTT star loci (dash-dotted line) from Meyer et al. (1997) are also indicated.

Fig. A1: Finder charts in the I band (3.7′ × 3.7′). North is up and East is left.

Fig. A2: Finder charts in the I band (3.7′ × 3.7′). North is up and East is left (continuation).

Table 1: Field coordinates. Columns: Field; R.A. (J2000) (h m s); Dec. (J2000) (° ′ ″).
(J2000) (h m s) ( • ′ ′′ ) 1 5 38 16.8 −2 36 23 2 5 38 48.3 −2 44 06 3 5 38 19.3 −2 44 04 4 5 39 54.9 −2 31 59 5 5 40 05.8 −2 39 23 6 5 39 56.4 −2 24 40 7 5 39 55.5 −2 46 47 8 5 37 08.8 −2 39 26 Table 2 . 2Sigma Ori members candidates studied in this paper c WILEY-VCH Verlag Berlin GmbH, 13086 Berlin, Germany 2001 0044-6337/01/32210-0223 $ 17.50+.50/0 IRAF is distributed by National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation. Acknowledgements. We thank J. Licandro for his help in the acquisition of infrared data at the Carlos Sánchez Telescope and J. A. Caballero and the anonymous referee for their useful comments. We thank I. Baraffe and the Lyon group, F. D'Antona and A. Burrows for sending us electronic versions of their recent models. We are indebted to T. Mahoney for the english revision of this manuscript. This work is based on observations obtained at the TCS and IAC-80 telescope operated by the Instituto de Astrofísica de Canarias at the Spanish Observatorio del Teide (Tenerife, Spain). Partial financial support was provided by the Spanish MCYT project AYA2001-1657. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. . I Baraffe, G Chabrier, F Allard, P H Hauschildt, A&A. 337403Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, A&A, 337, 403 . Barrrado, A&A. 404171Barrrado et al. 2003, A&A, 404, 171 . V J S Béjar, PhD. at Universidad de La LagunaBéjar, V. J. S. 2000, PhD. at Universidad de La Laguna. . V J S Béjar, Astrophys. J. 556830BMZOBéjar, V. J. S. et al. 
[]
[ "ON THE CAUCHY PROBLEM FOR BOLTZMANN EQUATION MODELLING A POLYATOMIC GAS", "ON THE CAUCHY PROBLEM FOR BOLTZMANN EQUATION MODELLING A POLYATOMIC GAS" ]
[ "Irene M Gamba \nDepartment of Mathematics\nDepartment of Mathematics and Informatics Faculty of Sciences\nOden Institute of Computational Engineering and Sciences University of Texas at Austin\n2515 Speedway Stop C1200 Austin78712-1202TexasUSA\n", "Milana Pavić-Čolić \nUniversity of Novi\nSad Trg Dositeja Obradovića 421000Novi SadSerbia\n" ]
[ "Department of Mathematics\nDepartment of Mathematics and Informatics Faculty of Sciences\nOden Institute of Computational Engineering and Sciences University of Texas at Austin\n2515 Speedway Stop C1200 Austin78712-1202TexasUSA", "University of Novi\nSad Trg Dositeja Obradovića 421000Novi SadSerbia" ]
[]
In the present manuscript we consider the Boltzmann equation that models a polyatomic gas by introducing one additional continuous variable, referred to as microscopic internal energy. We establish existence and uniqueness theory in the space homogeneous setting for the full non-linear case, under an extended Grad assumption on transition probability rate, that comprises hard potentials for both the relative speed and internal energy with the rate in the interval (0, 2], which is multiplied by an integrable angular part and integrable partition functions. The Cauchy problem is resolved by means of an abstract ODE theory in Banach spaces, for an initial data with finite and strictly positive gas mass and energy, finite momentum, and additionally finite k * polynomial moment, with k * depending on the rate of the transition probability and the structure of a polyatomic molecule or its internal degrees of freedom. Moreover, we prove that polynomially and exponentially weighted Banach space norms associated to the solution are both generated and propagated uniformly in time. *
10.1063/5.0103621
[ "https://arxiv.org/pdf/2005.01017v1.pdf" ]
218,486,989
2005.01017
e46cd01b3684e45887a628fa9809f5b2507c1b33
ON THE CAUCHY PROBLEM FOR BOLTZMANN EQUATION MODELLING A POLYATOMIC GAS

Irene M. Gamba
Department of Mathematics and Oden Institute of Computational Engineering and Sciences, University of Texas at Austin, 2515 Speedway Stop C1200, Austin, TX 78712-1202, USA

Milana Pavić-Čolić
Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovića 4, 21000 Novi Sad, Serbia

arXiv:2005.01017v1 [math-ph] 3 May 2020

1. Introduction

In this manuscript we consider a single polyatomic gas. The more complex structure of a molecule, which may have more than one atom, gives rise to new phenomena at the level of molecular collisions.
In particular, besides the translational motion in physical space, as in the classical case of monatomic elastic collisions, there appear possibilities of molecular rotations and/or vibrations, referred to as internal degrees of freedom. Collisional kinetic theory captures this feature by introducing the so-called microscopic internal energy of a molecule. An elastic collision of polyatomic molecules then means conservation of the total (kinetic plus internal) energy of the two colliding molecules, when binary interactions are taken into account. Within kinetic theory there is no unique way to treat the microscopic internal energy, which can be understood as a measure of the deviation from the classical case of a single monatomic gas. For example, in the semi-classical approach [19,28,42,20], the internal energy is assumed to take discrete values. The idea is to prescribe one distribution function to each energy level, resulting in a system of equations describing the gas. This model uses experimental data and is adequate for computational tasks. On the other hand, there exist continuous kinetic models that take another path: the idea is to introduce one additional variable, a continuous microscopic internal energy, and to parametrize both the molecular velocity and the internal energy using the so-called Borgnakke-Larsen procedure, which leads to a single Boltzmann equation [18,23,22]. Among continuous models there are subtle differences, caused by the choice of the functional space in which physical intuition is provided, which are reflected in the structure of the cross-section, as pointed out in [24]. Both semi-classical and continuous models abound with formal results. For instance, the Chapman-Enskog method was developed in [1] for the semi-classical models and recently in [10] for the continuous model from [23]. Many macroscopic models of extended thermodynamics are derived starting from the continuous kinetic model [23].
In fact, the additional variable of internal energy fits naturally into the two hierarchies of moment equations for a polyatomic gas, as first observed in [35,36]. The maximum entropy principle was the main tool used to close the systems of equations corresponding to six and fourteen moments [38,14,34,40], and to test those models numerically [29]. An interested reader can consult [39] and the references therein. In addition, these formal results extend to mixtures of polyatomic gases [33,11,12], which may be reactive as well [13,8]. Despite the great interest of the research community, so far there is no mathematically rigorous result addressing a polyatomic gas kinetic model. Our manuscript contributes in this direction: we establish an existence and uniqueness theory for the solution of the space homogeneous Boltzmann equation introduced in [18]. The underlying assumption on the transition probability rate is of extended Grad type, meaning that besides the positive power of the relative speed, we need the very same contribution of the internal energy. Moreover, the relative speed and the internal energy are combined additively, and not multiplicatively as previously thought and used in the literature. Surprisingly enough, such a model of the transition function fits perfectly into the physical interpretation, in full agreement with the models of extended thermodynamics and with experimental data, as shown in [24]. The existence and uniqueness result is obtained by an application of the theory of ODEs in Banach spaces [31], which has been successfully used in many frameworks, such as the mixture setting [25], polymer kinetic problems [3], the quantum Boltzmann equation for bosons at very low temperature [7] and the weak wave turbulence models for stratified flows [27].
The present manuscript is also devoted to the study of norms in the Banach space L^1(R^3 x R_+), with both polynomial and exponential weights, associated to the solution of the time evolution problem for the Boltzmann equation in the space of probability density functions over R^3 x R_+, following the path established for the classical Boltzmann equation for a single elastic monatomic gas. This line of research starts with [21,43] in the case of polynomial moments and with [15], where the concept of exponential moments is presented, as well as the techniques for moment summability, leading to the understanding of the behavior of the high energy tail with a Gaussian weight associated to the solution of the Boltzmann equation for hard spheres (i.e. power exponent γ = 1) and a constant angular part. These results were deepened in [17] for inelastic interactions and non-Gaussian weighted moments, and then in [26] for collision kernels for hard potentials (i.e. γ ∈ (0,1]) with any angular section with L^{1+}-integrability. Further, the generation of exponential moments of order γ/2 with a bounded angular section was shown in [32]. A new approach was taken in [4], based on partial sum summability techniques, which extended the results to collision kernels for hard potentials with γ ∈ (0,2] and any angular section with just L^1-integrability. In particular, exponential moments of order γ are shown to be generated in finite time, while Gaussian moments propagate if they are initially finite, independently of γ. Moreover, all these results were generalized to the case when the angular part is not integrable (the angular non-cutoff regime) [41,30,16,37,2]. A key point in this analysis consists in showing that dissipation is built into the collision operator.
This is manifested by showing the decay of the k-th polynomial moment of the collision operator for a sufficiently large k, for data with finite initial mass, energy and a moment of order 1 + δ, with δ > 0 (in our notation, k = 1 corresponds to the kinetic energy). This property is warranted by the control, from above and below, of the transition rates associated to the velocity and internal energy interaction laws. This approach requires a detailed averaging over parameters in a compact manifold that distributes the scattering mechanism, as a function of the scattering angle, over the sphere of influence of the interaction, and that distributes the total energy among the molecular variables. In the classical case of a single monatomic gas, this result is known as the sharp Povzner Lemma by angular averaging over the sphere, introduced for the first time in [15] for hard spheres in three dimensions and extended in [26] to variable hard potentials in velocity dimension greater than or equal to three; it has been used in many results for kinetic collisional binary transport in inelastic interaction theories, such as granular flow [17] and gas mixtures [25]. This averaging, over compact manifolds, of the transition rate functions involves only the moments of the positive contribution to the Boltzmann flow dynamics; that is, the moments associated to the gain operator produce a dissipative mechanism for a large enough moment k, depending on the collision law as well as on the transition probability functions. For a classical single monatomic gas, dissipation has an immediate effect: the decay is shown in the k-moment of the gain operator for any k > 1, with k = 1 defining the macroscopic gas energy, which is conserved since the local energies associated to the gain and loss operators balance to zero.
Another example of this behavior was recently shown by the authors in the gas mixture setting [25], corresponding to a system of Boltzmann equations with disparate masses, with the corresponding energy identity showing that the positive contribution of k-moments coming from each gain operator associated to the system yields an analogous dissipative effect, albeit for k > k*, with k* depending on the mass ratios of the species taken in pairwise interactions, and with k* shown to grow with the disparateness of the molecular masses. In the present manuscript, the order of moment k* at which the positive contribution becomes submissive is identified by the smallest positive constant A_{k*}, to be characterized in Section 5, which depends on the averaging manifold lemma associated to the potential rate of the transition probabilities, on the internal modes of the molecules, related to the complexity of the molecular structure, as well as on the initial data. This strictly positive constant A_{k*} can be identified as the coercive factor associated to the Boltzmann flow for a probability distribution density f(·,t) in the Banach space L^1(R^3 x R_+), as a priori k-th moment estimates are sufficient to generate infinitely many Ordinary Differential Inequalities (ODIs) with a negative superlinear term proportional to A_{k*}. Such a flow in Banach spaces is solvable globally in time. In this sense, one can view A_{k*} as the analog of the coercive form associated to elliptic and parabolic flows in continuum mechanics models, where coerciveness is crucial for the existence and uniqueness theories in Sobolev spaces. We stress that the conditions on the initial states exclude singular measures and do not require entropy bounds. Yet the resulting construction of a unique solution in the space of polynomial moments secures entropy boundedness at any time, if it is initially bounded. The manuscript is organized as follows.
First we introduce a kinetic model describing a polyatomic gas in Section 2, together with the notation and the main definitions. Then in Section 3 we make precise the sufficient properties for establishing the existence and uniqueness theory, which comprise an assumption on the form of the transition function, namely the additive combination of relative speed and microscopic internal energy with a potential rate γ ∈ (0,2], multiplied by an integrable angular part and integrable partition functions, together with its upper and lower bounds. Then in Section 4 we state and prove the two fundamental lemmas, namely the Energy Identity Decomposition and the Polyatomic Compact Manifold Averaging Lemma. These estimates identify the k*-th moment that will yield the coercive constant A_{k*}. Section 5 deals with the statements and proofs of a priori estimates for k-th moments of any order k ≥ k*, and defines the explicit form of A_{k*}. These results enable us to identify an invariant region in which the collision operator satisfies all the properties needed for the existence and uniqueness result, proved in Section 6 by means of solving a time evolution ODE in a suitable invariant region Ω in the Banach space L^1(R^3 x R_+). Then, the solution of the Boltzmann equation for polyatomic gases has a summability-of-moments property, expressed in the generation and propagation of exponential moments in Section 7. Finally, the Appendix contains some technical results needed for the theory.

2. Kinetic model for a polyatomic gas

In this Section we describe the Boltzmann equation for a polyatomic gas. We adopt the continuous approach, which introduces a single continuous variable I, called the microscopic internal energy, supposed to capture all the phenomena related to the more complex structure of a polyatomic molecule. The main feature is the presence of internal degrees of freedom that a molecule exercises upon an interaction, or collision.
Besides the usual translational motion, a polyatomic molecule may experience rotations and/or vibrations, referred to as internal modes. At the microscopic level of collisions, such motions cause the appearance of the microscopic internal energy, apart from the usual kinetic energy, in the conservation of energy law, under the assumption of elastic collisions. On the other side, at the macroscopic level, the internal modes are reflected in the energy law as well. Since in this manuscript we restrict to polytropic gases (meaning that the macroscopic internal energy is linear with respect to the temperature), the caloric equation of state reads

    e = D k T / (2m),    (2.1)

where e is the internal energy, k the Boltzmann constant, m the mass and T the temperature of the gas. The constant D is related to the degrees of freedom. In the classical case of elastic interactions, only translation is taken into account, corresponding to D taking the value of the space dimension, and to the kinetic collisional model of the classical Boltzmann equation. In general, D is determined by the sum of the total degrees of freedom, for the translational as well as the rotational and vibrational motion associated to the collision. That means, in space dimension three, this constant takes at least the value D = 3, which is the classical case of monatomic gases modeled by the scalar Boltzmann equation; for the polyatomic model the constant D must be larger than the dimension of the space of motion, D > 3.

2.1. Collision modelling. The starting point is to model the collision process between two interacting molecules. We suppose that a colliding pair of molecules has velocities and microscopic internal energies (v', I'), (v'_*, I'_*) ∈ R^3 x [0,∞) before the interaction, which become (v, I) and (v_*, I_*), respectively, after the interaction.
Under the assumption of elastic interactions, these quantities are linked through the conservation laws of local momentum and total (kinetic + microscopic internal) molecular energy, namely

    v + v_* = v' + v'_*,
    (m/2)|v|^2 + I + (m/2)|v_*|^2 + I_* = (m/2)|v'|^2 + I' + (m/2)|v'_*|^2 + I'_*.    (2.2)

It is often more convenient to work in the center of mass reference frame, by introducing the center of mass velocity V and the relative velocity u as follows,

    V := (v + v_*)/2,    u := v - v_*.    (2.3)

Then the total molecular energy law from (2.2) can be simply written as

    (m/4)|u|^2 + I + I_* = (m/4)|u'|^2 + I' + I'_* =: E,    (2.4)

since clearly the conservation of local momentum implies the conservation of the center of mass velocity,

    V = V'.    (2.5)

The collisional laws express the pre-collisional quantities v', I', v'_*, I'_* in terms of the post-collisional ones. This is achieved via a parametrization of the local conservation equations (2.2), according to the Borgnakke-Larsen procedure. To this end, we focus on the energy (2.4) and first introduce a parameter R ∈ [0,1] that distributes the total energy E into a pure kinetic part RE and a pure internal part (1 - R)E, according to

    (m/4)|u'|^2 = RE,    I' + I'_* = (1 - R)E.

In addition, we set a parameter r ∈ [0,1] to distribute the proportion of the total internal energy (1 - R)E between the two incoming molecular internal energies I', I'_*, as follows,

    I' = r(1 - R)E,    I'_* = (1 - r)(1 - R)E.    (2.6)

Finally, we introduce the scattering direction σ ∈ S^2 from the classical elastic collisional theory, in order to parametrize the pre-collisional relative molecular velocity u',

    u' = |u'| σ = 2 sqrt(RE/m) σ.    (2.7)

We note that for the classical monatomic single species model, in the absence of internal energy modes, this relation holds with |u'| = |u|.
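The parametrization above can be checked numerically. The following is a minimal sketch of the Borgnakke-Larsen reconstruction (2.3)-(2.8); the function name, the unit mass and the sample inputs are our own choices, not from the paper:

```python
import math
import random

def borgnakke_larsen(v, v_star, I, I_star, r, R, sigma, m=1.0):
    """Reconstruct the pre-collisional state (v', I'), (v'_*, I'_*) from the
    post-collisional state and the parameters (r, R, sigma) of (2.6)-(2.8).
    Vectors are 3-tuples."""
    V = tuple(0.5 * (a + b) for a, b in zip(v, v_star))         # center of mass (2.3)
    u = tuple(a - b for a, b in zip(v, v_star))                 # relative velocity (2.3)
    E = 0.25 * m * sum(c * c for c in u) + I + I_star           # total energy (2.4)
    speed = math.sqrt(R * E / m)                                # |u'|/2 from (2.7)
    vp = tuple(Vc + speed * s for Vc, s in zip(V, sigma))       # v'   from (2.8)
    vp_star = tuple(Vc - speed * s for Vc, s in zip(V, sigma))  # v'_* from (2.8)
    Ip = r * (1.0 - R) * E                                      # I'   from (2.6)
    Ip_star = (1.0 - r) * (1.0 - R) * E                         # I'_* from (2.6)
    return vp, vp_star, Ip, Ip_star

# Sanity check: local momentum and total energy (2.2) are conserved.
random.seed(0)
m = 1.0
v, v_star = (0.3, -1.2, 0.5), (1.0, 0.4, -0.7)
I, I_star = 0.8, 1.5
r, R = random.random(), random.random()
z = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
nz = math.sqrt(sum(c * c for c in z))
sigma = tuple(c / nz for c in z)                                # random unit vector
vp, vp_star, Ip, Ip_star = borgnakke_larsen(v, v_star, I, I_star, r, R, sigma, m)
mom_post = tuple(a + b for a, b in zip(v, v_star))
mom_pre = tuple(a + b for a, b in zip(vp, vp_star))
E_post = 0.5 * m * sum(c * c for c in v) + I + 0.5 * m * sum(c * c for c in v_star) + I_star
E_pre = 0.5 * m * sum(c * c for c in vp) + Ip + 0.5 * m * sum(c * c for c in vp_star) + Ip_star
assert all(abs(a - b) < 1e-12 for a, b in zip(mom_post, mom_pre))
assert abs(E_post - E_pre) < 1e-12
```

The assertions hold for any (r, R, σ) ∈ [0,1]^2 x S^2, reflecting that the Borgnakke-Larsen parameters sweep the whole manifold of states compatible with (2.2).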
This representation introduces the fundamental set of coordinates given by the center of mass and the pure kinetic energy. The last equation, together with the momentum conservation law from (2.2), yields the expressions for the velocities,

    v' = V + sqrt(RE/m) σ,    v'_* = V - sqrt(RE/m) σ.    (2.8)

2.2. The collision transformation. The first step in modelling the collision operator is to study the transformation from post- to pre-collisional quantities. In particular, we need to compute the Jacobian of this transformation, in order to ensure the invariance of the measure appearing in the weak form of the collision operator.

Lemma 2.1. The Jacobian of the transformation

    T : (v, v_*, I, I_*, r, R, σ) → (v', v'_*, I', I'_*, r', R', σ'),    (2.9)

where the velocities v' and v'_* are defined in (2.8), the energies I' and I'_* in (2.6), and

    r' = I/(I + I_*) = I/(E - (m/4)|u|^2),    R' = m|u|^2/(4E),    σ' = u/|u|,    (2.10)

is given by

    J_T = (1 - R) R^{1/2} / ((1 - R') R'^{1/2}) = (1 - R)|u'| / ((1 - R')|u|).    (2.11)

The proof of this Lemma can be found in Appendix A. For later purposes we also prove the following Lemma, which finds a function, invariant with respect to the collision process, that contains the factor I^α I_*^α, crucial for polyatomic modelling. As we shall see, α is related to the degrees of freedom D from the macroscopic caloric equation of state (2.1). We first introduce the following functions, referred to as the partition functions for the kinetic-internal energy split and for the internal molecular energy split, respectively,

    φ_α(r) := (r(1 - r))^α,    ψ_α(R) := (1 - R)^{2α},    (2.12)

which ensure the expected invariance property for the conservative polyatomic gas model.

Lemma 2.2. Let the functions φ_α(r) and ψ_α(R) be as in (2.12). The following invariance holds,

    I^α I_*^α φ_α(r) ψ_α(R) = I'^α I'^α_* φ_α(r') ψ_α(R'),

for any power α ∈ R, where the involved quantities are linked via the mapping (2.9).

Proof.
We first write

    r(1 - R) = I'/E,    I = r'(1 - R')E,
    (1 - r)(1 - R) = I'_*/E,    I_* = (1 - r')(1 - R')E,

so that

    r(1 - R) · I · (1 - r)(1 - R) · I_* = I' · r'(1 - R') · I'_* · (1 - r')(1 - R').

To conclude the proof, it remains to raise this equation to the power α.

2.3. The Boltzmann collision operator for binary polyatomic gases. In this manuscript we follow the definition of the collision operator from [18]. The natural working functional framework for the evolution of probability densities is then the Banach space L^1(R^3 x R_+) in the variables v and I. This Boltzmann type collision operator, written in strong form, is modeled by the bilinear non-local form

    Q(f,g)(v,I) = ∫_{∆xK} [ f' g'_* (I I_* / (I' I'_*))^α - f g_* ] B (1 - R) R^{1/2} φ_α(r) ψ_α(R) dR dr dσ dI_* dv_*,    (2.13)

with α > -1 and the functions φ_α(r), ψ_α(R) from (2.12). The region of integration is ∆ x K, where ∆ denotes the unbounded region of definition of the molecular velocity v and the internal energy I, and K is a compact manifold embedded in four dimensional space,

    ∆ := R^3 x [0,∞),    K := [0,1]^2 x S^2.    (2.14)

We have used the standard abbreviations f' := f(v', I'), g'_* := g(v'_*, I'_*), f := f(v, I), g_* := g(v_*, I_*). The transition probability rates are, in part, quantified by probability measures denoted by

    B := B(v, v_*, I, I_*, R, r, σ) ≥ 0,    (2.15)

that are assumed to be invariant with respect to the two changes of variables (2.16) and (2.17), expressing the microreversibility properties (2.18). Besides these usual assumptions on the transition function B, we will impose some additional ones, as stated in Section 3.1 below. It is worthwhile to rewrite the strong form (2.13) in a different manner,

    Q(f,g)(v,I) = ∫_{∆xK} [ f' g'_* / (I' I'_*)^α - f g_* / (I I_*)^α ] B (1 - R) R^{1/2} φ_α(r) ψ_α(R) I^α I_*^α dR dr dσ dI_* dv_*,    (2.19)

obtained by just pulling out the factor (I I_*)^α from the gain term.
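The invariance of Lemma 2.2 can also be verified numerically. A minimal sketch (helper names and sample ranges are ours; u2 stands for |u|^2 and the mass is normalized to one):

```python
import math
import random

def phi(r, alpha):
    """phi_alpha(r) = (r(1-r))^alpha, cf. (2.12)."""
    return (r * (1.0 - r)) ** alpha

def psi(R, alpha):
    """psi_alpha(R) = (1-R)^(2 alpha), cf. (2.12)."""
    return (1.0 - R) ** (2.0 * alpha)

def check_invariance(I, I_star, u2, r, R, alpha, m=1.0):
    """Evaluate both sides of Lemma 2.2:
       (I I_*)^a phi_a(r) psi_a(R)  vs  (I' I'_*)^a phi_a(r') psi_a(R'),
    with the primed quantities given by (2.6) and (2.10)."""
    E = 0.25 * m * u2 + I + I_star            # total energy (2.4)
    Ip = r * (1.0 - R) * E                    # I'   from (2.6)
    Ip_star = (1.0 - r) * (1.0 - R) * E       # I'_* from (2.6)
    rp = I / (I + I_star)                     # r'   from (2.10)
    Rp = 0.25 * m * u2 / E                    # R'   from (2.10)
    lhs = (I * I_star) ** alpha * phi(r, alpha) * psi(R, alpha)
    rhs = (Ip * Ip_star) ** alpha * phi(rp, alpha) * psi(Rp, alpha)
    return lhs, rhs

random.seed(1)
for _ in range(100):
    lhs, rhs = check_invariance(I=random.uniform(0.1, 5), I_star=random.uniform(0.1, 5),
                                u2=random.uniform(0.1, 10), r=random.uniform(0.01, 0.99),
                                R=random.uniform(0.01, 0.99), alpha=random.uniform(-0.9, 3))
    assert abs(lhs - rhs) <= 1e-10 * max(abs(lhs), 1.0)
```

The agreement to round-off across random parameters mirrors the algebraic proof: r'(1-r') = I I_*/(I+I_*)^2 and 1 - R' = (I+I_*)/E, so the E factors cancel exactly.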
First, the renormalization of the distribution function f by the factor I^α will allow us to obtain the correct macroscopic energy law (2.1); we shall see below that there is a link between α and the degrees of freedom D introduced in (2.1). Because of the additional factor (I I_*)^α, we need to incorporate the functions φ_α(r) and ψ_α(R) in order to have the invariance property, by virtue of Lemma 2.2. Finally, the term (1 - R) R^{1/2} comes from the Jacobian of the collision transformation computed in Lemma 2.1, which ensures that the weak form is well defined.

Remark 1. It should be noted that the transition probability rate form (2.15) is more general than a differential cross section, which is the usual expression in the classical elastic collisional theory, given by just |u| b(û·σ) in three dimensions. In this work, the form of B may not only include such a differential cross section factor, but also needs to include other factors, in order to obtain an invariant measure that describes the transition states (2.16) and (2.17) involving the internal energies that characterize the modeling of polyatomic gases. Because of this fact, we refer to B φ_α(r)(1 - R)R^{1/2} ψ_α(R) I^α I_*^α as the transition probability rate form. In addition, the role of this factor in the Boltzmann collisional theory is crucial for the theory of existence and uniqueness, as well as for the decay rates to equilibrium.

2.4. Weak form of the collision operator. We first describe an invariant measure which ensures a well defined weak form of the collision operator, by means of the following Lemma.

Lemma 2.3. The measure

    dA = B(v, v_*, I, I_*, R, r, σ) φ_α(r) (1 - R)R^{1/2} ψ_α(R) I^α I_*^α dσ dr dR dI_* dI dv_* dv    (2.20)

is invariant with respect to the collision transformation (2.9).

Proof.
By the invariance of the measure dA in (2.20) under the collision transformation (2.9), for any suitable test function χ(v, I),

    ∫_∆ Q(f,g)(v,I) χ(v,I) dI dv = ∫_{∆^2 x K} (f g_* / (I^α I_*^α)) (χ(v',I') - χ(v,I)) dA
                                 = ∫_{∆^2 x K} (f g_* / (I^α I_*^α)) (χ(v'_*,I'_*) - χ(v_*,I_*)) dA.

Averaging the two identities and expanding dA gives

    ∫_∆ Q(f,g)(v,I) χ(v,I) dI dv = (1/2) ∫_{∆^2 x K} f g_* (χ(v',I') + χ(v'_*,I'_*) - χ(v,I) - χ(v_*,I_*))
                                       x B φ_α(r) (1 - R)R^{1/2} ψ_α(R) dσ dr dR dI_* dv_* dI dv,

which yields the desired estimate (2.21).

2.5. The Boltzmann equation. In order to describe a polyatomic gas, the list of arguments of the distribution function is extended by the microscopic internal energy I, i.e. we take f := f(t, v, I). The evolution of f is governed by the Boltzmann equation

    ∂_t f = Q(f,f)(v,I),    (2.22)

where the collision operator is given in (2.13), or equivalently (2.19).

2.6. H-theorem. A natural dissipative quantity, minimized at the statistical equilibrium, is usually given by the concept of entropy associated to the evolution of the probability density function solving the Boltzmann equation for binary polyatomic gases. In this case, as in the classical case of elastic monatomic gases, this quantity is given by the entropy functional, written in the space homogeneous setting as

    H(f)(t) := ∫_∆ f(t,v,I) log(f(t,v,I) I^{-α}) dI dv.    (2.23)

Then, by means of the weak formulation (2.21) associated to equation (2.22), the evolution of the entropy (2.23) is obtained by multiplying both sides of equation (2.22) by log(f(t,v,I) I^{-α}) and integrating with respect to the pair v and I. This results in the entropy production functional D(f)(t) associated to the Boltzmann collision operator for binary polyatomic gases, that is,

    D(f)(t) := ∫_∆ Q(f,f)(t,v,I) log(f(t,v,I) I^{-α}) dI dv.    (2.24)

The following theorem focuses on the properties of this entropy production functional.

Theorem 2.5 (The H-theorem).
Let the transition function B be positive almost everywhere, and let f ≥ 0 be such that the collision operator Q(f,f) and the entropy production D(f) are well defined. Then the following properties hold.

i. The entropy production is non-positive, that is, D(f) ≤ 0.

ii. The three following properties are equivalent:
(1) D(f) = 0;
(2) Q(f,f)(v,I) = 0 for all v ∈ R^3, I ∈ R_+;
(3) there exist n ≥ 0, U ∈ R^3 and T > 0 such that f is the unit mass renormalized Maxwellian equilibrium for polyatomic gases,

    M_eq(v,I) = (n/Z(T)) (m/(2π k T))^{3/2} I^α e^{-(1/(kT)) ((m/2)|v - U|^2 + I)},    (2.25)

where Z(T) is the partition (normalization) function,

    Z(T) = ∫_0^∞ I^α e^{-I/(kT)} dI = (kT)^{α+1} Γ(α + 1),

with Γ the Gamma function.

The proof can be found in [18].

2.7. Functional space. We first introduce the Lebesgue weight associated to the velocity v ∈ R^3 and the microscopic internal energy I ∈ R_+,

    ⟨v,I⟩ := (1 + (1/2)|v|^2 + I/m)^{1/2},    (2.26)

which is independent of mass units. We will look for a solution of the Boltzmann equation (2.22) in a Banach space weighted polynomially in terms of this Lebesgue weight, without assuming the initial entropy (2.23) to be bounded. More precisely, we define

    L^1_k = { f measurable : ∫_∆ |f(v,I)| ⟨v,I⟩^{2k} dI dv < ∞ },  k ≥ 0,    (2.27)

with the range of integration ∆ = R^3 x [0,∞) from (2.14). Its associated norm is

    ||f||_{L^1_k} = ∫_∆ |f(v,I)| ⟨v,I⟩^{2k} dI dv.    (2.28)

We recall the monotonicity property of the norms weighted with (2.26),

    ||f||_{L^1_{k_1}} ≤ ||f||_{L^1_{k_2}}  whenever 0 ≤ k_1 ≤ k_2.    (2.29)

When we refer to the distribution function, the norm (2.28) is called the polynomial moment, as the following Definition 2.6 makes precise.

Definition 2.6. The polynomial moment of order k ≥ 0 associated to a distribution function f ≥ 0 is defined as

    m_k[f](t) = ∫_∆ f(t,v,I) ⟨v,I⟩^{2k} dI dv,

with the Lebesgue weight from (2.26).

In this manuscript we will study exponentially weighted L^1-norms as well.

Definition 2.7.
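As a quick numerical sanity check of the normalization in Theorem 2.5, the closed form Z(T) = (kT)^{α+1} Γ(α+1) can be compared against direct quadrature of the defining integral. This sketch is ours (function names, quadrature parameters and the truncation of the half-line are illustrative choices):

```python
import math

def Z_closed_form(kT, alpha):
    """Partition function Z(T) = (kT)^(alpha+1) Gamma(alpha+1), as in Theorem 2.5."""
    return kT ** (alpha + 1.0) * math.gamma(alpha + 1.0)

def Z_quadrature(kT, alpha, upper=80.0, n=400_000):
    """Midpoint-rule approximation of  int_0^inf I^alpha exp(-I/kT) dI,
    truncated at I = upper.  The midpoint rule avoids evaluating the
    integrand at I = 0, where it is singular for alpha < 0."""
    h = upper / n
    total = 0.0
    for i in range(n):
        I = (i + 0.5) * h
        total += I ** alpha * math.exp(-I / kT)
    return total * h

for kT, alpha in [(1.0, 0.5), (2.0, 0.0), (0.7, 1.5)]:
    exact = Z_closed_form(kT, alpha)
    approx = Z_quadrature(kT, alpha)
    assert abs(approx - exact) / exact < 1e-3
```

Note that Z(T) is finite precisely for α > -1, the same condition singled out by the polyatomic requirement D > 3.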
The exponential moment of a distribution function f, or its exponentially weighted L^1-norm, of rate β > 0 and order s, 0 < s ≤ 1, is defined by

    E_s[f](β,t) = ∫_∆ f(t,v,I) e^{β ⟨v,I⟩^{2s}} dI dv.    (2.30)

In the following Section we give a physical interpretation of some polynomial moments.

2.8. Macroscopic observables. We first note that for certain test functions the weak form (2.21) vanishes. This is encoded in the collision conservation laws (2.2). Namely, the following Lemma holds: the weak form (2.21) is annihilated by the collision invariants

    χ_1(v,I) = m,    χ_{k+1}(v,I) = m v_k, k = 1,2,3,    χ_5(v,I) = (m/2)|v|^2 + I.

Macroscopic observables are defined as moments of the distribution function f against the collision invariants; more precisely, we define the mass density ρ, the momentum density ρU and the total energy density (ρ/2)|U|^2 + ρe of a polyatomic gas as

    ρ = ∫_∆ m f dI dv,    ρU = ∫_∆ m v f dI dv,    (ρ/2)|U|^2 + ρe = ∫_∆ ((m/2)|v|^2 + I) f dI dv.

Let us highlight the relation between the collision invariants and the Lebesgue weight (2.26),

    ⟨v,I⟩^2 = (1/m)(χ_1 + χ_5).

Therefore, the polynomial moment multiplied by the mass, m m_k(t), is interpreted for k = 0 as the gas mass density, while for k = 1 we get the mass density plus the total energy density of the gas,

    m m_0 = ρ,    m m_1 = ρ + (ρ/2)|U|^2 + ρe.

Moreover, if f solves the Boltzmann equation (2.22), then integration against the collision invariants yields the macroscopic conservation laws,

    ∂_t ρ = 0,    ∂_t (ρU) = 0,    ∂_t ((ρ/2)|U|^2 + ρe) = 0.

We can finally make the connection with the caloric equation of state of a polytropic gas (2.1). Introducing the peculiar velocity c = v - U, we define the internal energy density,

    ρe = ∫_∆ ((m/2)|c|^2 + I) f dI dv.

In equilibrium, when the distribution function has the shape of the local Maxwellian (2.25), the internal energy takes the form

    ρe = (α + 5/2) n k T.    (2.31)

The relation to the caloric equation of state of a polytropic gas (2.1) now becomes evident. We can thus connect D from (2.1) and α from (2.31), which also appears in the definition of the collision operator (2.13),

    α = (D - 5)/2.
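The identification α = (D - 5)/2 follows by matching (2.31) with the caloric equation of state (2.1). A short derivation sketch (using ρ = n m):

```latex
% Matching the equilibrium internal energy (2.31) with the caloric
% equation of state (2.1), using \rho = n m:
\rho e \overset{(2.31)}{=} \Bigl(\alpha + \tfrac{5}{2}\Bigr)\, n k T,
\qquad
\rho e \overset{(2.1)}{=} \rho\,\frac{D k T}{2m} = \frac{D}{2}\, n k T .
% Equating the two expressions and dividing by n k T:
\alpha + \frac{5}{2} = \frac{D}{2}
\quad\Longrightarrow\quad
\alpha = \frac{D-5}{2}.
% Hence D = 3 (translational motion only, monatomic) gives \alpha = -1,
% while the polyatomic requirement D > 3 gives \alpha > -1.
```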
Since for polyatomic gases D > 3, we obtain the overall condition α > −1. We recall that α = −1 corresponds to a monatomic gas (D = 3), when we obtain the classical relation ρ e = (3/2)nkT . 3. Sufficient properties for existence and uniqueness theory In this Section we describe the sufficient tools needed to build the existence and uniqueness theory to be presented in Section 6. Namely, we first choose an appropriate transition function B that corresponds to an extended Grad assumption. Then we prove essential estimates, upper and lower bounds for such a transition function. Moreover, we find relevant physical examples, namely three models for B that satisfy the imposed conditions. With these estimates on the transition function, we obtain a priori estimates for any solution of the Boltzmann equation for polyatomic gases in L 1 k * , for k * determined by the conditions of Lemma 3.3 and the bounds for the transition function B. Finally, we define an invariant region Ω ⊂ L 1 1 for the Boltzmann equation, on which the collision operator Q : Ω → L 1 1 satisfies (i) Hölder continuity, (ii) sub-tangent and (iii) one-sided Lipschitz conditions. These are sufficient conditions to obtain existence and uniqueness of a global in time solution, with regularity to be described in Section 6. 3.1. Transition function B. In this manuscript, we want to keep the transition function B as general as possible, in order to allow our kinetic model to cover a broad class of physical interpretations. One of the essential ingredients for building the existence and uniqueness theory is an assumption on the transition function B, which quantifies the collision frequency through scattering mechanisms and partition functions as a function of the total molecular energy (2.4).
To that end, apart from its positivity and micro-reversibility requirements stated in (2.18), we assume the following minimal mathematical requirements to ensure existence and uniqueness properties associated to the initial value problem for the Boltzmann equation for binary interaction of polyatomic gases as defined in (2.22). Assumption 3.1 (The form of the transition function B). LetB =B(v, v * , I, I * ) be defined as B(v, v * , I, I * ) := |u| γ + I + I * m γ/2 , u := v − v * , γ ∈ (0, 2]. (3.1) We assume that the transition function B := B(v, v * , I, I * , r, R, σ) satisfies the following extended Grad assumption for collision kernels, d lb γ (r) e lb γ (R) b(û · σ)B ≤ B ≤ d ub γ (r) e ub γ (R) b(û · σ)B, (3.2) for every v, v * ∈ R 3 , I, I * , r, R ∈ [0, 1], σ ∈ S 2 ,û = u/ |u|, where functions b, d ub γ (r), d lb γ (r), e ub γ (R) , and e lb γ (R) satisfy the following integrability conditions, (1) the angular function b is integrable b(û · σ) ∈ L 1 (S 2 ; dσ), (3.3) (2) functions d ub γ (r) and d lb γ (r) are integrable with respect to the measure ϕ α (r)dr, with ϕ α (r) from (2.12), more precisely d ub γ (r)ϕ α (r), d lb γ (r)ϕ α (r) ∈ L 1 ([0, 1]; dr),(3. 4) and additionally d ub γ (r) = d ub γ (1 − r), d lb γ (r) = d lb γ (1 − r) , which ensures the second microreversibility property (2.18), (3) functions e ub γ (R) and e lb γ (R) are integrable with respect to the measure ψ α (R)(1− R)R 1/2 dR, where ψ α is introduced in (2.12), namely e ub γ (R)ψ α (R)(1 − R)R 1/2 , e lb γ (R)ψ α (R)(1 − R)R 1/2 ∈ L 1 ([0, 1]; dR). (3.5) Note that assumption 3.1 stresses the transition probability associated to the differential cross section is an extended Grad assumption for hard potentials with a dependance to the internal energy exchange which characterizes polyatomic collisional gas models. 
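Assumption 3.1 can be spot-checked for concrete kernels. The sketch below randomly samples the sandwich bounds (3.2) for the total-energy kernel of Model 1 in Section 3.2, for which d lb = d ub = 1, e lb = m^(γ/2) 2^(−(γ/2+1)) and e ub = m^(γ/2); the angular factor b appears on both sides of (3.2) and is dropped. The mass value and sampling ranges are illustrative assumptions.

```python
import math, random

random.seed(0)
m = 2.0  # illustrative molecular mass (an assumption for the example)

def B_tilde(u2, Isum, gamma):
    # reference kernel (3.1): |u|^gamma + ((I + I*)/m)^(gamma/2), with u2 = |v - v*|^2
    return u2 ** (gamma / 2) + (Isum / m) ** (gamma / 2)

def model1_core(u2, Isum, gamma):
    # Model 1 kernel without the angular factor b: ((m/4)|u|^2 + I + I*)^(gamma/2)
    return ((m / 4) * u2 + Isum) ** (gamma / 2)

violations = 0
for _ in range(20000):
    gamma = random.uniform(1e-3, 2.0)
    u2 = 10.0 ** random.uniform(-6, 6)    # relative speed squared, many scales
    Isum = 10.0 ** random.uniform(-6, 6)  # I + I*
    ref = B_tilde(u2, Isum, gamma)
    lo = m ** (gamma / 2) * 2.0 ** (-(gamma / 2 + 1)) * ref   # e_lb * B_tilde
    hi = m ** (gamma / 2) * ref                               # e_ub * B_tilde
    val = model1_core(u2, Isum, gamma)
    if not (lo <= val * (1 + 1e-12) and val <= hi * (1 + 1e-12)):
        violations += 1
```

The small multiplicative tolerance only guards floating-point rounding; the bounds themselves hold with strict inequality away from the degenerate limits γ = 2, I + I* = 0.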
As such, the assumption of transition function B will be shown to be achievable for at least three choices of models of B(v, v * , I, I * , r, R, σ) satisfying conditions (3.1-3.5), that are sufficient to rigorously prove the existence of finite upper and strictly positive lower bounds sufficient for solving the Cauchy problem associated to a natural Banach space defined by (2.27) with a natural norm characterized by the Lebesgue weight function (2.26) and norm by (2.28). Such upper and lower estimates must allow the control infinity system of ordinary Differential Inequalities (ODI) associated to solutions to the initial value problem (2.22) uniformly in time. This strategy is rather elaborated and shall be presented in several steps. We first start by stating the following two lemmas, whose proofs can be found in the Appendix C-D. Lemma 3.2 (Upper bound). For any γ ∈ (0, 2] the following inequality holds |v − v * | γ + I + I * m γ/2 ≤ 2 3γ 2 −1 ( v, I γ + v * , I * γ ) , (3.6) for v, v * ∈ R 4 and I, I * ∈ [0, ∞). Lemma 3.3 (Lower bound). Let γ ∈ [0, 2]. For some function 0 ≤ {f (t)} t≥0 ⊂ L 1 1 we assume that it satisfies c ≤ ∆ f (t, v, I)dI dv ≤ C, c ≤ ∆ f (t, v, I) 1 2 |v| 2 + I m dI dv ≤ C, ∆ f (t, v, I) v dI dv = 0, (3.7) for some positive constants c and C. Assume also boundedness of the moment ∆ f (t, v, I) 1 √ 2 |v| + I m 2+λ dI dv ≤ B, λ > 0. (3.8) Then there exists a constant c lb > 0 which depends on constants c, C, B, λ and γ from the assumptions (3.7)-(3.8) above such that ∆ f (t, w, J) |v − w| γ + I + J m γ/2 dJ dw ≥ c lb v, I γ . (3.9) In the proof of this Lemma 3.3 given in the Appendix D, the constant c lb is constructed, and its explicit formula can be found in (D.10). Remark 2. 
We observe that conditions (3.4) and (3.5) involve the weighted averages of the factors d lb γ (r) and d ub γ (r) product to the partition function for the molecular energy ϕ α (r); as well as e lb γ (R) and e ub γ (R) product to the partition function for the split of kinetic and internal energy ψ α (R); respectively. We introduce the short hand notation to these averages by defining the following constants, For each choice of the extended Grad decomposition that follows we specify functions d ub γ (r), d lb γ (r), e ub γ (R) and e lb γ (R) that, not only fulfill the integrability conditions, but also provide explicit expressions for controlling constants, from above and below (3.10). These integrals are used in the Section 3. Here we calculate the constants (3.10) only in the first example. c lb γ,α := 1 0 d lb γ (r)ϕ α (r)dr, C lb γ,α := 1 0 e lb γ (R)ψ α (R)(1 − R)R 1/2 dR, c ub γ,α := 1 0 d ub γ (r)ϕ α (r)dr, C ub γ,α := 1 0 e ub γ (R)ψ α (R)(1 − R)R 1/2 dR. 3.2.1. Model 1 (The total energy). We first consider the total energy in the relative velocity-center of mass velocity framework, that is B(v, v * , I, I * , r, R, σ) = b(û · σ) m 4 |v − v * | 2 + I + I * γ/2 , γ ∈ (0, 2]. (3.11) Then B is of the form (3.2) with d lb γ (r) = d ub γ (r) = 1, e lb γ (R) = m γ/2 2 −(γ/2+1) , e ub γ (R) = m γ/2 ,c lb γ,α = c ub γ,α = Γ(α + 1) 2 Γ(2α + 2) , C lb γ,α = m γ/2 2 −(γ/2+1) √ πΓ(2α + 2) 2Γ 2α + 7 2 , C ub γ,α = m γ/2 √ πΓ(2α + 2) 2Γ 2α + 7 2 , (3.12) for α > −1, where Γ represents the Gamma function. Model 2 (kinetic and microscopic internal energy detached). In this model, we split kinetic and microscopic internal energy for the colliding pair of particles, by using parameter R ∈ [0, 1], B(v, v * , I, I * , r, R, σ) = b(û·σ) R γ/2 |v − v * | γ + (1 − R) γ/2 I + I * m γ/2 ,d lb γ (r) = d ub γ (r) = 1, e lb γ (R) = min{R, (1−R)} γ/2 , e ub γ (R) = max{R, (1−R)} γ/2 . 
(3.14) Another possible choice is d lb γ (r) = d ub γ (r) = 1, e lb γ (R) = R γ/2 (1 − R) γ/2 , e ub γ (R) = 1. 3.2.3. Model 3 (kinetic and particle's microscopic internal energies detached). In this model we separate kinetic and microscopic internal energy with the parameter R ∈ [0, 1]. Furthermore, internal energy is divided among colliding particles with the help of parameter r ∈ [0, 1]. More precisely, we consider, B(v, v * , I, I * , r, R) = b(û · σ) R γ/2 |v − v * | γ + r(1 − R) I m γ/2 + (1 − r)(1 − R) I * m γ/2 , (3.15) for γ ∈ (0, 2]. Then the form (3.2) is satisfied with d lb γ (r) = min{r, (1 − r)} γ/2 , d ub γ (r) = 1, e ub γ (R) = 2 1−γ/2 max {R, (1 − R)} γ/2 , e lb γ (R) = min{R, (1 − R)} γ/2 , (3.16) or with d lb γ (r) = r γ/2 (1 − r) γ/2 , d ub γ (r) = 1, e ub γ (R) = 2 1−γ/2 , e lb γ (R) = R γ/2 (1 − R) γ , as shown in B.1.3 and B.2.3. This Model 3 for the transition function B is of particular importance for establishing macroscopic moments models starting from the Boltzmann equation. Namely, in [24] it is shown that the six moments model with this transition function provides a source term which satisfies the macroscopic residual inequality on the whole range of six moments model validity, that provides the total agreement with the extended thermodynamics theory of six moments, as one of the rare systems admitting a non-linear or far from equilibrium closure of the governing equations using the entropy principle. Moreover, for this model 3 of the transition function B, the macroscopic fourteen moments model achieves the matching with experimental data as well. More precisely, [24] shows that the model 3 yields transport coefficients (shear and bulk viscosities and heat conductivity) that provide both experimentally measured dependence of the shear viscosity upon temperature and the Prantdtl number which coincides with its theoretical value at a satisfactory level. 
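The Model 1 constants (3.12) can be verified numerically, assuming the partition functions from (2.12) take the forms ϕ_α(r) = (r(1−r))^α and ψ_α(R) = (1−R)^(2α) (an assumption consistent with the Beta-integral values in (3.12)). Since e lb and e ub are constant in R for Model 1, it suffices to check the two base integrals; the value α = 0.5 below is illustrative.

```python
import math

def quad(f, a, b, n=100000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

alpha = 0.5  # illustrative; any alpha > -1 works analytically

# c_lb = c_ub = int_0^1 phi_alpha(r) dr, with phi_alpha(r) = (r(1-r))^alpha
c_num = quad(lambda r: (r * (1 - r)) ** alpha, 0.0, 1.0)
c_exact = math.gamma(alpha + 1) ** 2 / math.gamma(2 * alpha + 2)

# base integral in C_lb, C_ub: int_0^1 psi_alpha(R)(1-R) R^(1/2) dR,
# with psi_alpha(R) = (1-R)^(2 alpha); the constants e_lb, e_ub multiply it
C_num = quad(lambda R: (1 - R) ** (2 * alpha + 1) * R ** 0.5, 0.0, 1.0)
C_exact = (math.sqrt(math.pi) * math.gamma(2 * alpha + 2)
           / (2 * math.gamma(2 * alpha + 3.5)))
```

Multiplying C_exact by e lb = m^(γ/2) 2^(−(γ/2+1)) and e ub = m^(γ/2) reproduces C lb and C ub of (3.12).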
This remarkable success lies in the new additive form of the transition function that we propose in (3.1), instead of the multiplicative one used so far in the literature. Fundamental lemmas In this Section we prove two fundamental lemmas, that should be used sequentially, as they are presented. All of them are motivated by the search of a proof showing that k-th polynomial moment of the solution will satisfy an ODI with a negative superlinear term, that is d dt m k [f ](t) = ∆ Q(f, f )(v, I) v, I 2k dIdv ≤ −A k m 1+ γ/2 k k + B k m k , for large enough k to be precise later, where the constant A k is strictly positive, that depends on the transition probability and partition functions, is the analogue to a coercive constant in the classical theory for diffusion type equations in continuum mechanics, where the control of the energies or suitable Banach norms is done in Sobolev spaces, while in the framework of statistical mechanics the suitable norms are non-reflexive Banach spaces, as in this case L 1 k . We first define the total energy of the two colliding molecules using the Lebesgue weight (2.26). E := v, I 2 + v * , I * 2 = v ′ , I ′ 2 + v ′ * , I ′ * 2 = 2 + |V | 2 + E m , with E from (2.4). In order to encode behavior of a polyatomic gas, we first need to understand energy recombination during a collision process, using transformations (2.8) and (2.6). This knowledge is crucial for expressing pre-collisional quantities v ′ , I ′ 2 and v ′ * , I ′ Lemma 4.2 (Energy Identity Decomposition). Let v ′ , v ′ * , I ′ and I ′ * be defined in collision transformations (2.8) and (2.6). There exists convex conjugate factors p = p(v, v * , I, I * , r, R) and q = q(v, v * , I, I * , r, R), i.e. p + q = 1, and a function s = s(v, v * , I, I * , r, R) such that the following representation holds v ′ , I ′ 2 = E p + sV · σ , v ′ * , I ′ * 2 = E q − sV · σ . 
Moreover, this representation preserves the total molecular energy, v ′ , I ′ 2 + v ′ * , I ′ * 2 = E (p + sV · σ) + (q − sV · σ) ≡ E . (4.1) This energy identity allows to find a dissipation effect of the collision operator. Namely, we will prove that k-th moment of the gain term decreases with respect to k, allowing the moment of the same order k of the loss term to prevail in the dynamics, when sufficiently large order of moments k is taken into account. The decay of the gain term is attained by averaging k th -power of the postcollisional total molecular energies, that is v ′ , I ′ 2k + v ′ * , I ′ * 2k . Due to an additional variable I in the polyatomic gas model, the averaging needs to be performed with respect to the compact manifold that contains a domain of the two parameters: (i) one angular parameter (scattering direction) σ that splits the kinetic energy on molecular velocities and (ii) one additional parameter r that distributes the total internal energy among colliding molecules. This result can be viewed as an extension of the angular averaging Povzner lemma used for classical elastic and inelastic collisional theory for single of multiple mixture of monatomic gases. v ′ , I ′ 2k + v ′ * , I ′ * 2k b(û · σ) d ub γ (r) ϕ α (r)dr dσ ≤ 2 C k v, I 2 + v * , I * 2 k , (4.2) with the constant C k is a contracting, that is C k ց 0, as k → ∞. and 2 C k < b L 1 (dσ) d ub γ ϕ α L 1 (dr) , k >k. (4.3) Moreover, when b(σ ·û) ∈ L ∞ (S 2 ; dσ) and d ub γ (r)ϕ α (r) ∈ L ∞ ([0, 1]; dr), the following holds 4) and the decay rate is also known C k < 4π b L ∞ (dσ) d ub γ ϕ α L ∞ (dr) , k >k,(4.C k ∼ 1 k , as n → ∞. Therefore, the total energy identity (4.1) enables to obtain a partial crucial result that controls the averaging on the compact manifold K of the k th -power of the postcollisional total molecular energies, that is v ′ , I ′ 2k + v ′ * , I ′ * 2k by the k thpower of the molecular energy, i.e. 
E k time a factor C k is 'contracting", that means it decays as k grows to infinity. This result is an imperative for proving decay of the k-th moment of collision operator gain term when averaged over the compact manifold S 2 × [0, 1] in variables σ and r. This fact allows for the corresponding k-th moment of the loss term to prevail in the dynamics, when sufficiently large order of moment is taken into account. The order of moment needed to guarantee this property is studied in the upcoming Remark 3. Remark 3 (Study of the sufficient moment order to ensure prevail of the loss term). For the single monatomic species, when the averaging is performed only in the scattering direction σ, it was sufficient to take the order of moment k > 1 to prove the dominance of the moment associated to the loss term with respect to the same moment of the gain term. In the monatomic gas mixture setting, the value of k = k * depends on the ratio of mass species, and it is shown that k * grows as this ratio deviates from 1/2, where 1/2 corresponds to the single specie case. In the current setting, corresponding to polyatomic gases, the averaging is performed over the angular scattering direction σ, as well as on the parameter r that redistributes the internal energy. Then the Averaging Lemma ensures property (4.3) for some k >k. However, this is not enough to ensure the prevail of the loss term. The reason is the additive form of transition function (3.2), which implies that bounds of B with respect to r and R variable, d lb γ (r), e lb γ (R) and d ub γ (r), e lb γ (R), may differ. In this case, in addition to (4.3) we will require 2 C k e ub γ (R) (1 − R)R 1/2 ψ α (R) L 1 (dR) < b L 1 (dσ) d lb γ ϕ α L 1 (dr) e lb γ (R) (1 − R)R 1/2 ψ α (R) L 1 (dR) , k >k * , (4.5) or using the notation (3.10), 2 C k C ub γ,α < b L 1 (dσ) c lb γ,α C lb γ,α , k >k * . In other words, one fixes ak * , such that 0 < C k < Ck * < b L 1 (dσ) c lb γ,α C lb γ,α , (2C ub γ,α ) −1 for any k >k * . 
(4.6) It is clear now that the orderk * at which the loss term become dominant in the dynamics, depends on γ, α and the transition function model at hand. It is worthwhile to mention that the Averaging Lemma ensures the existence of suchk * , since only the contracting constant C k depends on k. Note also that (4.6) reduces to (4.3) if d lb γ (r) = d ub γ (r) for all r ∈ [0, 1] and e lb γ (R) = e ub γ (R) for all R ∈ [0, 1], in which casek * =k. The value ofk * can be explicitly computed, under the additional assumption of b(û · σ) ∈ L ∞ (dσ), and d ub γ (r)ϕ α (r) ∈ L ∞ (dr),(4.7) when we can explicitly compute the constant C k from Lemma 4.3, as shown in (4.16). We focus on the three models for transition function B introduced in Section 3.2. Note that for all the three models the condition of boundedness of the function d ub γ ϕ α is fulfilled when α ≥ 0, in which case d ub γ ϕ α L ∞ (dr) = 1. Therefore, the condition (4.6) reduces to C ∞ k < 1 2 c lb γ,α C lb γ,α C ub γ,α := C * γ,α ,(4.8) where C ∞ k is explicitly calculated in (4.17). To complete the study, it remains to calculate the constants c lb γ,α , C lb γ,α and C ub γ,α for the three models. To that end, we need to determine multiplying functions d lb γ (r), e lb γ (R) and e ub γ (R). For the Model 1 we use constants already calculated in (3.12), taking m = 1. The Model 2 takes the bounds from (3.14), while for the Model 3 we assume (3.16). The results are presented in the Figure 1. 4.1. Proof of the Lemma 4.2 (Energy Identity Decomposition). We consider partitions of the energy E obtained by introducing convex combinations associated to functions Θ and Σ that may depend on v, v * , I, I * and R, as follows (i) for Θ ∈ [0, 1] we have θE = 1 + |V | 2 ⇒ (1 − Θ) E = 1 + E m , (i) for Σ ∈ [0, 1] we get Σ (1 − Θ) E = 1 + R E m ⇒ (1 − Σ) (1 − Θ) E = (1 − R) E m . 
Now, using collisional rules (2.8) and (2.6) yield the associated Lebesgue weights for the calculation of total molecular energy of the postcollisional (primed) states v ′ , I ′ 2 = 1 + 1 2 |V | 2 + 1 2 R E m + RE m |V |V · σ + r(1 − R) E m , v ′ * , I ′ * 2 = 1 + 1 2 |V | 2 + 1 2 R E m − RE m |V |V · σ + (1 − r)(1 − R) E m , which can be rewritten in terms of functions Θ and Σ and the parameter r ∈ [0, 1] from (2.6) as follows v ′ , I ′ 2 = E 1 2 Θ + 1 2 Σ(1 − Θ) + r(1 − Σ)(1 − Θ) + (ΘE − 1)(Σ(1 − Θ)E − 1)V · σ, v ′ * , I ′ * 2 = E 1 2 Θ + 1 2 Σ(1 − Θ) + (1 − r)(1 − Σ)(1 − Θ) − (ΘE − 1)(Σ(1 − Θ)E − 1)V · σ . (4.9) Now set the convex factors from (4.9), to be p := 1 2 Θ + 1 2 Σ(1 − Θ) + r(1 − Σ)(1 − Θ) q := 1 2 Θ + 1 2 Σ(1 − Θ) + (1 − r)(1 − Σ)(1 − Θ), which clearly add up to unity. In addition set s := (ΘE − 1)(Σ(1 − Θ)E − 1). (4.10) Recall that the total molecular energy for a polyatomic state interacting (or colliding) pairs (v, I) and (v * , I * ) is given by E := v, I 2 + v * , I * 2 . Hence, adding the two left hand sides of identities from (4.9), the conservation of the total, i.e. kinetic plus internal molecular energy is given by v ′ , I ′ 2 + v ′ * , I ′ * 2 = E (p + sV · σ + q − sV · σ) = E (p + q) = E , so the energy identity (4.1) holds. Proof of the Lemma 4.3 (The Polyatomic Compact Manifold Averaging Lemma). In order to prove this Lemma, we first use energy identity and representation (4.9), that generated the s factor in (4.10), s := (ΘE − 1)(Σ(1 − Θ)E − 1). Using the Young inequality we get an estimate s ≤ 1 2 ΘE + Σ(1 − Θ)E − 2 ≤ E (Θ + Σ(1 − Θ)) 2 . Therefore, denoting µ =V · σ, we have the following estimate ±sµ ≤ E (Θ + Σ(1 − Θ)) 2 |µ| . Therefore, the convex form (4.9) can be estimated pointwise v ′ , I ′ 2 ≤ E (Θ + Σ(1 − Θ)) 1 + |µ| 2 + (1 − Σ)(1 − Θ)r , v ′ * , I ′ * 2 ≤ E (Θ + Σ(1 − Θ)) 1 + |µ| 2 + (1 − Σ)(1 − Θ)(1 − r) . 
(4.11) Moreover, we can write (Θ + Σ(1 − Θ)) 1 + |µ| 2 + (1 − Σ)(1 − Θ)r ≤ max 1 + |µ| 2 , Σ 1 + |µ| 2 + (1 − Σ) r ≤ max 1 + |µ| 2 , r , and (Θ + Σ(1 − Θ)) 1 + |µ| 2 + (1 − Σ)(1 − Θ)(1 − r) ≤ max 1 + |µ| 2 , 1 − r . This allows to estimate (4.11) as follows v ′ , I ′ 2 ≤ E max 1 + |µ| 2 , r , v ′ * , I ′ * 2 ≤ E max 1 + |µ| 2 , 1 − r . Now the left-hand side of (4.2) becomes S 2 1 0 v ′ , I ′ 2k + v ′ * , I ′ * 2k b(û · σ) d ub γ (r) ϕ α (r)dr dσ ≤ 2 E k S 2 1 0 max 1 + |µ| 2 , r k b(û · σ) d ub γ (r) ϕ α (r)dr dσ, (4.12) after using symmetry properties with respect to r and the fact ϕ α (r) = ϕ α (1 − r), as much as d ub γ (r) = d ub γ (1 − r). Let us now study double integral C k = S 2 1 0 max 1 + |µ| 2 , r k b(û · σ) d ub γ (r) ϕ α (r)dr dσ, µ =V · σ. (4.13) We dissociate the following two cases: (i) b is assumed integrable on the sphere S 2 , that is b(û · σ) ∈ L 1 (S 2 ; dσ) and d ub γ (r)ϕ α (r) ∈ L 1 ([0, 1]; dr), when the constant C k has an integral form, or (ii) b is supposed bounded on the sphere S 2 , that is b(û · σ) ∈ L ∞ (S 2 ; dσ) and d ub γ (r)ϕ α (r) ∈ L ∞ ([0, 1]; dr), when we can compute explicitly the constant C k . The case b(σ ·û) ∈ L 1 (S 2 ; dσ) and d ub γ (r)ϕ α (r) ∈ L 1 ([0, 1]; dr). Following ideas from [25], we use polar coordinates for σ andV with zenithû. Denoting with θ the angle between σ andû, we decompose σ as σ = cos θû + sin θ ω, withû · ω = 0 and ω = (cos φ, sin φ), θ ∈ [0, π), φ ∈ [0, 2π). (4.14) In the same fashion we decomposeV , by denoting with δ ∈ [0, π) the angle between V andû,V = cos δû + sin δ Φ, where Φ ∈ S 1 withû · Φ = 0. Then the scalar product µ = σ ·V becomes µ = cos θ cos δ + Φ · ω sin θ sin δ. Defining τ := cos θ and expressing sin θ = √ 1 − τ 2 , since sin θ ≥ 0 on the range of θ, this scalar product reads µ = τ cos δ + Φ · ω 1 − τ 2 sin δ. 
(4.15) In the integral (4.13), we first express σ in its polar coordinates (4.14) and then change variables θ → τ = cos θ, which yields C k = 2π 0 π 0 1 0 max 1 + |µ| 2 , r k b(cos θ) d ub γ (r) ϕ α (r)dr sin θdθdφ = 2π 0 1 −1 1 0 max 1 + |µ| 2 , r k b(τ ) d ub γ (r) ϕ α (r)dr dτ dφ. Note that 1 + |µ| 2 ≤ 1, r ≤ 1, since |µ| ≤ 1, and the equality holds only when |µ| = 1 (in other words σ = ±V ) or r = 1. Therefore, the sequence of functions D k (x, y) := max 1 + x 2 , y k decreases monotonically in k > 1 and tends to 0 as k → ∞ for every x, y ∈ (0, 1) up to a set of measure zero. Finally, we conclude by monotone convergence Theorem that the constant C k is contracting C k ց 0, as k → ∞. The case b(σ ·û) ∈ L ∞ (S 2 ; dσ) and d ub γ (r)ϕ α (r) ∈ L ∞ ([0, 1]; dr). Under the additional assumption on boundedness of b and d ub γ (r)ϕ α (r), we can obtain the explicit decay rate of the constant C k . Indeed, the integral (4.13) does not depend on σ ·û anymore, so we may instead ofû takeV as a zenith of polar coordinates in (4.14), which amounts to take δ = 0 in (4.15) that implies µ = τ . Denoting C ∞ k := 1 0 1 0 max 1 + |µ| 2 , r k dr dµ, in this case we can write C k = 2π 0 1 −1 1 0 max 1 + |µ| 2 , r k b(µ) d ub γ (r) ϕ α (r)dr dµdφ ≤ 4π b L ∞ (dσ) d ub γ ϕ α L ∞ (dr) 1 0 1 0 max 1 + |µ| 2 , r k dr dµ = 4π b L ∞ (dσ) d ub γ ϕ α L ∞ (dr) C ∞ k , (4. 16) the inequality is by exploiting boundedness of b and a γ (r)ϕ α . It is easy to compute the double integral in C ∞ k and we obtain C ∞ k = 1 k + 1 + 2k (k + 1)(k + 2) 1 − 1 2 k+2 , k > 1. (4.17) Therefore, we see that in this case C ∞ k < 1, for any k > 1, and C ∞ k ∼ 1 k , k → ∞, and so we get the desired property (4.4). 5. L 1 k moments a priori estimates The previous polyatomic compact manifold averaging Lemma 4.3 together with the requirement (4.5) is a sufficient to show that the evolution of the k-th polynomial moment of the collision operator will become negative for k > k * . 
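The explicit constant (4.17) can be cross-checked against direct quadrature of its defining double integral, C∞_k = ∫₀¹∫₀¹ max((1+µ)/2, r)^k dr dµ. The sketch below compares the two for a sample k and confirms that C∞_k < 1 decreases in k; the grid size is an arbitrary choice.

```python
def C_inf_closed(k):
    # (4.17): 1/(k+1) + 2k/((k+1)(k+2)) * (1 - (1/2)^(k+2))
    return 1.0 / (k + 1) + 2.0 * k / ((k + 1) * (k + 2)) * (1 - 0.5 ** (k + 2))

def C_inf_numeric(k, n=400):
    # 2D midpoint rule for the double integral of max((1+mu)/2, r)^k over [0,1]^2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        mu = (i + 0.5) * h
        for j in range(n):
            r = (j + 0.5) * h
            total += max((1 + mu) / 2, r) ** k
    return total * h * h
```

For large k the closed form behaves like 3/k, consistent with the stated decay rate C∞_k ∼ 1/k.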
Lemma 5.1 (Moments bound for the collision operator). Let f ∈ L 1 1 satisfying assumptions from Lower bound Lemma 3.3 and condition (3.9). Moreover, suppose that the transition function B satisfies Assumption 3.1. Then for any γ ∈ (0, 2], the following inequality holds ∆ Q(f, f )(v, I) v, I 2k dIdv ≤ −A k m 1+ γ/2 k k + B k m k , (5.1) for large enough k such that k > k * , k * = max k * , 1 + γ, 1 + λ/2 finite (5.2) wherek * is such that (4.6) holds. For such k * , the constant A k ≥ A k * and B k defined below are bounded and strictly positive, with A k * = c lb C − γ/2 k b L 1 (dσ) d lb γ ϕ α L 1 (dr) e lb γ (R) (1 − R)R 1/2 ψ α (R) L 1 (dR) −2 C k * e ub γ (R) (1 − R)R 1/2 ψ α (R) L 1 (dR) , (5.3) B k = C k e ub γ (R) (1 − R)R 1/2 ψ α (R) L 1 (dR) 2 3γ 2 +1 C   ⌊ k+1 2 ⌋ ℓ=1 k ℓ   , and c lb is the lower bound for the collision frequency in terms of the γ-power of Lebesgue weight that enables the super linear behavior for moments of order k > k * , C k is from Lemma 4.3, C is from Lemma 3.3. Proof. For the test function χ(v, I) = v, I 2k , weak form (2.21) yields W := ∆ Q(f, f )(v, I) v, I 2k dIdv = 1 2 ∆ 2 ×K f f * I α I α * v ′ , I ′ 2k + v ′ * , I ′ * 2k − v, I 2k − v * , I * 2k dA. We now use the form of transition probability rate (3.2). Because of the integrability properties of all multiplying functions involved, we can separate the integral W into the gain W + W + = 1 2 ∆ 2 ×K f f * v ′ , I ′ 2k + v ′ * , I ′ * 2k B(v, v * , I, I * , r, R, σ) × ϕ α (r) (1 − R)R 1/2 ψ α (R) dσ dr dR dI * dv * dIdv, (5.4) and loss part W − , W − = 1 2 ∆ 2 ×K f f * v, I 2k + v * , I * 2k B(v, v * , I, I * , r, R, σ) × ϕ α (r) (1 − R)R 1/2 ψ α (R) dσ dr dR dI * dv * dIdv, (5.5) so that W = W + − W − . (5.6) We treat each term separately. 
For the gain part, we use the bound from above stated in (3.2), W + ≤ 1 2 ∆ 2 ×K f f * v ′ , I ′ 2k + v ′ * , I ′ * 2k b(û · σ) d ub γ (r) e ub γ (R)B(v, v * , I, I * ) × ϕ α (r) (1 − R)R 1/2 ψ α (R) dσ dr dR dI * dv * dIdv, = 1 2 C ub γ,α ∆ 2 S 2 ×[0,1] 2 v ′ , I ′ 2k + v ′ * , I ′ * 2k b(û · σ) d ub γ (r) ϕ α (r) dσ dr × f f * B (v, v * , I, I * )dI * dv * dIdv, where the constant is defined in (3.10). The Averaging Lemma 4.3 estimates the primed quantities averaged over the compact set S 2 × [0, 1], and for the gain term it yields W + ≤ C k C ub γ,α ∆ 2 f f * |v − v * | γ + I + I * m γ/2 × v, I 2 + v * , I * 2 k dI * dv * dIdv, (5.7) Now the polynomial inequalities from Lemmas E.1 and E.2 yield v, I 2 + v * , I * 2 k ≤ v, I 2k + v * , I * 2k + ℓ k ℓ=1 k ℓ v, I 2ℓ v * , I * 2(k−ℓ) + v, I 2(k−ℓ) v * , I * 2ℓ , ≤ v, I 2k + v * , I * 2k +c k v, I 2 v * , I * 2(k−1) + v, I 2(k−1) v * , I * 2 , (5.8) withc k = ℓ k ℓ=1 k ℓ , ℓ k = ⌊ k + 1 2 ⌋. (5.9) Thus, the bound for W + becomes W + ≤ C k C ub γ,α ∆ 2 f f * |v − v * | γ + I + I * m γ/2 v, I 2k + v * , I * 2k +c k v, I 2 v * , I * 2(k−1) + v, I 2(k−1) v * , I * 2 dI * dv * dIdv. (5.10) Now we turn to the loss term W − defined in (5.5). We first use the bound from below of the transition function B, and obtain W − ≥ 1 2 b L 1 (dσ) c lb γ,α C lb γ,α ∆ 2 f f * |v − v * | γ + I + I * m γ/2 × v, I 2k + v * , I * 2k dI * dv * dIdv. (5.11) Gathering the estimates for the gain term (5.10) and for the loss term (5.11), the weak form W from (5.6) becomes W ≤ 1 2 ∆ 2 ×K f f * |v − v * | γ + I + I * m γ/2 −à k * v, I 2k + v * , I * 2k +B k v, I 2 v * , I * 2(k−1) + v, I 2(k−1) v * , I * 2 dI * dv * dIdv. (5.12) with the uniform in k constantà k * , for k * chosen in (5.2), defined bỹ A k * = b L 1 (dσ) c lb γ,α C lb γ,α − 2 C k C ub γ,α , (5.13) strictly positive for large enough k > k * , by virtue of (4.6); and the constantB k bounded for each fixed kB k = 2c k C k C ub γ,α ≥ 0. 
(5.14) Now for (5.12) we make of use the upper bound (3.6) and the lower bound (3.9) for the term |v − v * | γ + I + I * m γ/2 . Indeed, (5.12) becomes, W ≤ −c lbÃk * m k+γ/2 + 2 3γ 2 −1B k m 1+γ/2 m k−1 + m k−1+γ/2 m 1 , k ≥ k * , (5.15) where we have used moment notation from the Definition 2.6. For the first term we apply Jensen's inequality, ∆ f (v, I) v, I 2k+γ dIdv ≥ ∆ f (v, I)dIdv − γ/2 k ∆ f (v, I) v, I 2k dIdv 1+ γ/2 k , or in terms of moments, m k+γ/2 ≥ m − γ/2 k 0 m 1+ γ/2 k k . For the second term we use monotonicity of moments (2.29). Thus, (5.15) becomes W ≤ −c lbÃk * m − γ/2 k 0 m 1+ γ/2 k k + 2 3γ 2B k m 1 m k ≤ −c lbÃk * C − γ/2 k m 1+ γ/2 k k + 2 3γ 2B k C m k , where the constant C is from Lemma 3.3. Denoting A k * = c lbÃk * C − γ/2 k , B k = 2 3γ 2B k C. the last inequality concludes the proof. When regularizing properties of the collision operator stated in Lemma 5.1 are combined with the Boltzmann equation (2.22), then we obtain ordinary differential inequality for L 1 polynomially weighted norms or polynomial moments m k in the sense of Definition 2.6. Lemma 5.2 (Ordinary Differential Inequality for Polynomial Moment). If f a solution of the Boltzmann equation for polyatomic gases (2.22) and m k its associated polynomial moment of order k in the sense of Definition 2.6. Then, for any k > k * , with k * finite from (5.2), and γ ∈ (0, 2] the following estimate holds d m k dt ≤ −A k * m 1+ γ/2 k k + B k m k , (5.16) where where both A k * and B k are positive constants form Lemma 5.1, equations (5.13) and (5.14), respectively. Remark 4. By recalling the definition of A k * from (5.13), this constant can be identify as the coercive factor A k * = c lb b L 1 (dσ) c lb γ,α C lb γ,α − 2 C k C ub γ,α C − γ/2 k > 0 for any k > k * , with k * from (5.2). 
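The moment inequality (5.16) of Lemma 5.2 is exploited through comparison with a Bernoulli-type ODE y′ = −a y^(1+c) + b y (see the proof of Theorem 5.3). As a hedged sketch with illustrative parameters a, b, c, y(0) (not the paper's constants A_k*, B_k), the explicit solution obtained via the substitution z = y^(−c), which satisfies z′ = c a − c b z, agrees with direct Runge-Kutta integration:

```python
import math

def y_closed(t, a, b, c, y0):
    # explicit solution of y' = -a y^(1+c) + b y, via z = y^(-c)
    z = (a / b) * (1 - math.exp(-c * b * t)) + y0 ** (-c) * math.exp(-c * b * t)
    return z ** (-1.0 / c)

def y_rk4(T, a, b, c, y0, n=2000):
    # classical 4th-order Runge-Kutta integration of the same ODE up to time T
    f = lambda y: -a * y ** (1 + c) + b * y
    h, y = T / n, y0
    for _ in range(n):
        k1 = f(y); k2 = f(y + h * k1 / 2); k3 = f(y + h * k2 / 2); k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y
```

As t → ∞ the closed form tends to the equilibrium value (a/b)^(−1/c), which is the origin of the k-independent long-time bound in the propagation estimate.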
This strictly positive factor, which controls the lower bound to the absorption term on the moments inequality (5.16), provides a sufficient condition to proceed next with Theorem 5.3 yielding the global in time propagation and generation of k th -moments of any order k > k * , with k * , provided that the initial data f 0 (v, I) has positive mass, positive energy, as much as satisfies conditions of the Lower Bound Lemma 3.3. These estimates are obtained without the need of entropy estimates. Yet if the initial entropy is bounded, the constructed solutions will have well defined entropy that remains bounded for all times by the initial one. Proof. In order to get differential equation for polynomial moment m k from Definition 2.6 we integrate the Boltzmann equation (2.22) over the space (v, I) ∈ R 3 × [0, ∞) against test function χ(v, I) = v, I 2k . Using the weak form (2.21), we get d m k dt = ∆ Q(f, f )(v, I) v, I 2k dIdv. Applying Lemma 5.1 on the right-hand side we get the desired estimate. This differential inequality by means of a comparison principle for ODEs implies the following Theorem. 1. (Generation) There is a constant C m such that for any k > k * , with k * from (5.2), and any γ ∈ (0, 2], m k [f ](t) ≤ B k A k * k γ/2 1 − e − γB k 2k t − 2k γ , ∀t > 0, (5.17) uniformly in k > k * , with A k * > 0 and 0 ≤ B k bounded, for all k > k * , as defined in (5.16). (Propagation) Moreover , if m k [f ](0) < ∞, then m k [f ](t) ≤ max B k A k * 2k γ , m k [f ](0) , (5.18) for all t ≥ 0. Proof. The aim of the proof is to associate an ODE of Bernoulli type to the derived ODI (5.16) from Lemma 5.2, y ′ (t) = −a y(t) 1+c + b y(t), (5.19) whose solution is y(t) = a b 1 − e −c b t + y(0) −c e −c b t − 1 c . (5.20) Dropping initial data we get an estimate y(t) ≤ a b 1 − e −c b t − 1 c , ∀t > 0. 
(5.21) On the other hand, when y(0) is assumed to be finite, after noticing that y(t) is a monotone function of t, which approaches to y(0) as t → 0 on one hand, and converges to (a/b) −1/c when t → ∞ on the other hand, we have the following estimate y(t) ≤ max{y(0), (a/b) −1/c }, ∀t ≥ 0. (5.y(t) ≤ a b − 1 c (c b) − 1 c e b 2 t − 1 c , t < 1 1 − e −c b − 1 c , t ≥ 1. Replacing as in (5.23), we get m k [f ](t) ≤ B m max{1, t − k γ/2 }, ∀t > 0, (5.24) where the constant is B m = B k A k * k γ/2 max γB k 2k − 2k γ e B k 2 , 1 − e − γB k 2k − 2k γ , for any k > k * , and k * is such that (5.2) holds. Existence and Uniqueness Theory It this Section, we establish an existence and uniqueness theorem that solves the Cauchy problem 6.1. The invariant region Ω needed to solve the Cauchy problem for Boltzmann equation. The goal of this Section is to set conditions on initial data that ensures existence and uniqueness of the solution to the Cauchy problem (6.1) associated to the Boltzmann equation for polyatomic gases under conditions described in Section 3. These conditions will include moments with physical interpretation of mass and energy of the gas, and the imposed restrictions will be physically relevant. For instance, we will necessitate bounded mass and energy both from below and above, thus excluding zero and infinitely large mass and energy. Moreover, we will require bounded moment of order k * := max k * , 1 + γ, 1 + λ/2 , (6.2) for γ ∈ (0, 2] related to the potential of the transition function (3.1), λ > 0 is from the lower bound (3.8) andk * from (4.5) is sufficiently large to ensure the prevail of the polynomial moments of loss term with respect to those same moments of the gain term. Suchk * depends on γ, α and the model of transition function at hand. 
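The dependence of k̄* on γ and α can be made concrete. The following hedged sketch evaluates condition (4.8) as stated, C∞_k < (1/2) c lb C lb / C ub, for Model 1, where by (3.12) the ratio C lb / C ub = 2^(−(γ/2+1)) (the factors m^(γ/2) cancel) and c lb = Γ(α+1)² / Γ(2α+2). The sample values of γ and α in the test are illustrative choices, not values singled out by the paper.

```python
import math

def C_inf(k):
    # explicit constant (4.17)
    return 1.0 / (k + 1) + 2.0 * k / ((k + 1) * (k + 2)) * (1 - 0.5 ** (k + 2))

def k_star_model1(gamma, alpha):
    # smallest integer k > 1 with C_inf(k) < C*_{gamma,alpha} of condition (4.8),
    # specialized to Model 1: C* = (1/2) c_lb * 2^(-(gamma/2 + 1))
    c_lb = math.gamma(alpha + 1) ** 2 / math.gamma(2 * alpha + 2)
    C_star = 0.5 * c_lb * 2.0 ** (-(gamma / 2 + 1))
    k = 2
    while C_inf(k) >= C_star:
        k += 1
    return k
```

Since C∞_k ∼ 1/k while C* is fixed, the loop always terminates; the threshold grows as γ increases or c lb shrinks.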
Now we define the bounded, convex and closed subset Ω ⊆ L 1 1 , Ω = f (v, I) ∈ L 1 1 : f ≥ 0 in (v, I), ∆ v f dI dv = 0, ∃ c 0 , C 0 , c 1 , C 1 , C k * > 0, and C 0 < c 1 , such that ∀t ≥ 0, c 0 ≤ m 0 [f ](t) ≤ C 0 , c 1 ≤ m 1 [f ](t) ≤ C 1 , m k * [f ](t) ≤ C k * , with k * from (6.2) . (6. 3) The value of k * which determines how many moments need to be bounded initially in order to guarantee existence and uniqueness to the Cauchy problem (6.1) is strongly related to the collision operator Q(f, f ). More precisely, the study of polynomial moments associated to the collision operator will allow to define the following map L γ,k * : [0, ∞) → R, L γ,k * := −A x 1+ γ/2 k * + Bx, (6.4) where A and B are positive constants for any γ > 0 and k * from (6.2). This map has only one root x * γ,k * at which L γ,k * changes from positive to negative. Therefore, at the interval [0, x * γ,k * ] it reaches its maximum value denoted with L * γ,k * . This implies that the constant C k * := x * γ,k * + L * γ,k * . (6.5) is well-defined and strictly positive, which ensures that the dynamics of collision operator allows to construct the constant C k * from the definition of the region Ω. We emphasize that monotonicity of moments (2.29) implies that the condition of boundedness of the moment of order k * stated in (6.3), implies boundedness of all moments of lower order n for 1 ≤ n ≤ k * . The following result holds. ([0, ∞) , Ω) ∩ C 1 (0, ∞) , L 1 1 . Proof. 
The goal is apply general ODE theory from Appendix F, that is to study collision operator Q as mapping Q : Ω → L 1 1 , and to show (1) Hölder continuity condition Q(f, f ) − Q(g, g) L 1 1 ≤ C H f − g 1/2 L 1 1 ,(6.6) (2) Sub-tangent condition lim h→0+ dist (f + hQ(f, f ), Ω) h = 0, where dist (H, Ω) = inf ω∈Ω h − ω L 1 1 , (3) One-sided Lipschitz condition [Q(f, f ) − Q(g, g), f − g] ≤ C L f − g L 1 1 , (6.7) where brackets [·, ·] by Remark 6 become [Q(f, f ) − Q(g, g), f − g] = lim h→0 − (f − g) + h (Q(f, f ) − Q(g, g)) L 1 1 − (f − g) L 1 1 h ≤ ∆ (Q(f, f )(v, I) − Q(g, g)(v, I)) sign (f (v, I) − g(v, I)) v, I 2 dI dv. First, we check Q : Ω → L 1 1 is well defined. Indeed, for any f ∈ Ω, using |·| = · sign(·) Q(f, f ) L 1 1 = ∆ Q(f, f )(v, I) sign (Q(f, f )(v, I)) v, I 2 dI dv ≤ 1 2 ∆ 2 ×K f f * (II * ) α v ′ , I ′ 2 + v ′ * , I ′ * 2 + v, I 2 + v * , I * 2 dA by virtue of the weak form (2.21) for the test function χ(v, I) = sign (Q(f, f )(v, I)) v, I 2 . Using microscopic energy law (2.2) and the form of transition function (3.2) with the multiplying functions from above together with the upper bound (3.6), as much as monotonicity of moments (2.29) we get Q(f, f ) L 1 1 ≤ C K ∆ 2 f f * v, I 2 + v * , I * 2 ( v, I γ + v * , I * γ ) dI * dv * dI dv = 2 C K f L 1 1+γ/2 f L 1 0 + f L 1 1 f L 1 γ/2 ≤ 4 C K f L 1 1+γ/2 f L 1 1 , where C K = 2 3γ 2 −1 K d ub γ (r) ϕ α (r) b(û · σ) e ub γ (R) ψ α (R) (1 − R)R 1/2 dσ dr dR. (6.8) Since f ∈ Ω, the right hand side is bounded, and thus Q(f, f ) ∈ L 1 1 . Then, the proof consists in three parts. Part I: Hölder continuity condition. We first rewrite difference of the two collision operators acting on distribution functions f and g as collision operator on sums and differences of these two distribution functions, Q(f, f ) − Q(g, g) = 1 2 (Q(f + g, f − g) + Q(f − g, f + g)) ,(6.9) by virtue of the bilinear structure of the strong form of collision operator (2.13). 
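Identity (6.9) is a direct consequence of bilinearity; expanding both terms makes this explicit:

```latex
% Expansion of (6.9) using the bilinearity of Q:
\begin{align*}
\tfrac12\big( Q(f+g,\, f-g) + Q(f-g,\, f+g) \big)
 &= \tfrac12\big( Q(f,f) - Q(f,g) + Q(g,f) - Q(g,g) \big)\\
 &\quad + \tfrac12\big( Q(f,f) + Q(f,g) - Q(g,f) - Q(g,g) \big)\\
 &= Q(f,f) - Q(g,g).
\end{align*}
```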
Using this relation, we write the L 1 2 norm I H := Q(f, f ) − Q(g, g) L 1 2 = ∆ |Q(f, f ) − Q(g, g)| v, I 2 dI dv ≤ 1 2 ∆ (|Q(f + g, f − g)| + |Q(f − g, f + g)|) v, I 2 dI dv. The absolute value of collision operators will be rewritten in terms of the sign function using |·| = · sign(·), that will be understood as a test function. This implies I H ≤ 1 2 ∆ 2 ×K ((f (v, I) + g(v, I)) |f (v * , I * ) − g(v * , I * )| + |f (v, I) − g(v, I)| (f (v * , I * ) + g(v * , I * ))) × v ′ , I ′ 2 + v ′ * , I ′ * 2 + v, I 2 + v * , I * 2 × B dσ ϕ α (r)dr (1 − R)R 1/2 ψ α (R)dR dI * dv * dI dv = ∆ 2 ×K ((f (v, I) + g(v, I)) |f (v * , I * ) − g(v * , I * )| + |f (v, I) − g(v, I)| (f (v * , I * ) + g(v * , I * ))) v, I 2 + v * , I * 2 × B dσ ϕ α (r)dr (1 − R)R 1/2 ψ α (R)dR dI * dv * dI dv, and the last equality is by energy conservation law during collision (2.2). Now we make use of the transition function (3.2), and its bound from above (3.6), I H ≤ C K ∆ 2 ((f (v, I) + g(v, I)) |f (v * , I * ) − g(v * , I * )| + |f (v, I) − g(v, I)| (f (v * , I * ) + g(v * , I * ))) × v, I 2 + v * , I * 2 ( v, I γ + v * , I * γ ) dI * dv * dI dv, (6.10) with C K from (6.8). We rewrite (6.10) in moment notation, I H ≤ 2 C K f + g L 1 1+γ/2 f − g L 1 0 + f + g L 1 1 f − g L 1 γ/2 + f + g L 1 γ/2 f − g L 1 1 + f + g L 1 0 f − g L 1 1+γ/2 . Furthermore, monotonicity of norms (2.29) yields I H ≤ 4 C K f − g L 1 1+γ/2 f + g L 1 1+γ/2 + f + g L 1 1 . Next, we use interpolation inequality (E.1) on f − g L 1 1+γ/2 and get f − g L 1 1+γ/2 ≤ f − g 1/2 L 1 1 f − g 1/2 L 1 1+γ . Moreover, characterization of the set Ω gives the following bounds f − g 1/2 L 1 1+γ ≤ f 1/2 L 1 1+γ + g 1/2 L 1 1+γ ≤ 2 C 1/2 1+γ , and f + g L 1 1+γ/2 ≤ 2 C 1+γ/2 , f + g L 1 1 ≤ 2 C 1 . Finally, denoting C H := 16 C K C 1/2 1+γ C 1+γ/2 C 1 , we get desired estimate (6.6). Part II: Sub-tangent condition. We first study the collision frequency ν(f )(v, I) := ∆×K f (v * , I * )B ϕ α (r)ψ α (R)(1 − R)R 1/2 dσ dr dR dI * dv * . 
Using the form (3.2) of the transition function B together with its bound from above (3.6), we obtain ν(f )(v, I) ≤ C K ∆ f (v * , I * ) ( v, I γ + v * , I * γ ) dI * dv * , ≤ C K C 0 v, I γ + C γ/2 ≤ 2 C K C 0 + C γ/2 1 + 1 2 |v| 2 + I m γ/2 , (6.11) with C K from (6.8) and using characterization of the invariant region Ω as in (6.3). The idea of the proof of sub-tangent condition is to prove that for f ∈ Ω and for any ǫ > 0 there exists h 1 > 0 such that the ball centered at f + hQ(f, f ) with radius hǫ has non-empty intersection with Ω for any 0 < h < h 1 , as formulated in the Proposition 1 below. Then for such h 1 we have h −1 dist (f + hQ(f, f ), Ω) ≤ ǫ, for all 0 < h < h 1 , which concludes the sub-tangent condition. Therefore, it remains to prove the following Proposition 1. for any 0 < h < h 1 . Proof. We first define characteristic function χ ρ1 (v) of the ball of radius ρ 1 ≥ 0 in the velocity space v ∈ R 3 , B vel (0, ρ 1 ) := v ∈ R 3 : |v| ≤ ρ 1 . as much as characteristic function χ ρ2 (I) of the interval [0, ρ 2 ], ρ 2 ≥ 0, in the internal energy space I ∈ [0, ∞), B en (0, ρ 2 ) := I ∈ [0, ∞) : I m ≤ ρ 2 . Then we notice that for (v, I) ∈ B vel (0, ρ 1 ) × B en (0, ρ 2 ) we have 1 2 |v| 2 + I m ≤ ρ 1 √ 2 + ρ 2 . Denoting ρ := ρ 1 √ 2 + ρ 2 ,1 2 |v| 2 + I m ≤ ρ . Then we denote characteristic function χ ρ (v, I) of the ball B(0, ρ) in the velocityinternal energy space, and define truncated distribution function f ρ (t, v, I) := f (t, v, I) χ ρ (v, I). Now consider the following function g ρ = f + h Q(f ρ , f ρ ), (6.13) for h > 0. The aim is to find ρ and h so that g ρ ∈ B(f + hQ(f, f ), hǫ) ∩ Ω. Let us find ρ and h so that f ρ ∈ Ω. We first note that for any f ∈ Ω, its truncation f ρ ∈ Ω as well. Then using definition (2.19) we can estimate Q(f ρ , f ρ )(v, I) ≥ −f ρ ∆×K f ρ * B ϕ α (r)ψ α (R)(1 − R)R 1/2 dσ dr dR dI * dv * , since the first term is positive. 
Using bound on the collision frequency (6.11) we get Q(f ρ , f ρ )(v, I) ≥ −2 C K C 0 + C γ/2 1 + 1 2 |v| 2 + I m γ/2 f ρ ≥ −2 C K C 0 + C γ/2 (1 + ρ γ ) f. Therefore, for g ρ we can bound g ρ ≥ f 1 − 2 C K C 0 + C γ/2 (1 + ρ γ ) h ≥ 0, for any h ∈ (0, Finally, let us prove that L 1 k * norm of g ρ is bounded. To that end we study the the map L γ,k * : [0, ∞) → R from (6.4) L γ,k * := −Ax 1+ γ/2 k * + Bx, where A and B are positive constants for any γ > 0. Denoting with x * γ,k * the only root at which L γ,k * changes from positive to negative, we can write for any x ≥ 0, L γ,k * (x) ≤ max 0≤x≤x * γ,k * L γ,k * (x) =: L * γ,k * . Such defined map L γ,k * allows to write (5.1) for k = k * in terms of it, ∆ Q(f, f )(v, I) v, I k * dIdv ≤ L γ,k * (m k * [f ]) ≤ L * γ,k * . Define C k * as in (6.5), that is C k * = x * γ,k * + L * γ,k * . For any f ∈ Ω, we have two possibilities: either (i) m k * [f ] ≤ x * γ,k * or (ii) m k * [f ] > x * γ,k * . In the first case, for the k * -moment of g ρ we get m k * [g ρ ] ≤ x * γ,k * + h ∆ Q(f ρ , f ρ )(v, I) v, I 2k * dIdv ≤ x * γ,k * + hL * γ,k * ≤ C k * , where we have used h ≤ 1, without loss of generality. In the second case, we take ρ = ρ(f ) > 0 large enough to ensure m k * [f ρ ] > x * γ,k * as well. In that case, L γ,k * is negative, i.e. L γ,k * (m k * [f ρ ]) ≤ 0. Therefore, m k * [g ρ ] ≤ x * γ,k * ≤ C k * . Therefore, in either case g ρ is bounded, and moreover we have constructed the constant of boundedness C k * . We conclude that g ρ ∈ Ω provided that 0 < h < h * , h * = min 1, 1 2 C K C 0 + C γ/2 (1 + ρ(f ) γ ) On the other hand, let us show that g ρ ∈ B(f + hQ(f, f ), hǫ). From the Hölder estimate (6.6) we get h −1 f + hQ(f, f ) − g ρ L 1 1 = Q(f, f ) − Q(f ρ , f ρ ) L 1 1 ≤ C H f − f ρ 1/2 L 1 1 ≤ ǫ, for ρ = ρ(ǫ) large enough. Thus, for this choice of ρ, we have g ρ ∈ B(f + hQ(f, f ), hǫ). 
Finally, we conclude the proof of the Proposition by choosing ρ = max {ρ(f ), ρ(ǫ)} , and h 1 = min 1, 1 2 C K C 0 + C γ/2 (1 + ρ γ ) . Part III: One-sided Lipschitz condition. The left hand side of (6.7) after use of representation (6.9) becomes I L := [Q(f, f ) − Q(g, g), f − g] ≤ ∆ (Q(f, f )(v, I) − Q(g, g)(v, I)) sign (f (v, I) − g(v, I)) v, I 2 dI dv ≤ 1 2 ∆ (Q(f + g, f − g)(v, I) + Q(f − g, f + g)(v, I)) × sign (f (v, I) − g(v, I)) v, I 2 dI dv. Using the weak form (2.21), we get I L ≤ 1 4 ∆ 2 ×K (f + g)(f * − g * ) (II * ) α + (f − g)(f * + g * ) (II * ) α × sign (f ′ − g ′ ) v ′ , I ′ 2 + sign (f ′ * − g ′ * ) v ′ * , I ′ * 2 −sign (f − g) v, I 2 − sign (f * − g * ) v * , I * 2 dA We bound sign function by 1 for the first two terms, and from the last two terms we use |·| = · sign(·) where applicable, I L ≤ 1 4 ∆ 2 ×K ((f + g) |f * − g * | + |f − g| (f * + g * )) × v ′ , I ′ 2 + v ′ * , I ′ * 2 + ((f + g) |f * − g * | − |f − g| (f * + g * )) v, I 2 + (−(f + g) |f * − g * | + |f − g| (f * + g * )) v * , I * 2 dA (II * ) α . Using the energy collision law (2.2), after cancellations of some terms we get I L ≤ 1 2 ∆ 2 ×K (f + g) |f * − g * | v, I 2 + |f − g| (f * + g * ) v * , I * 2 dA (II * ) α = ∆ 2 ×K |f − g| (f * + g * ) v * , I * 2 dA (II * ) α , the last equality is due to the change of variables (2.17) in the first integral. We make use of the transition function B assumption (3.2) and the upper bound (3.6), I L ≤ C K ∆ 2 |f − g| (f * + g * ) v * , I * 2 ( v, I γ + v * , I * γ ) dI * dv * dI dv = C K f − g L 1 γ/2 f + g L 1 1 + f − g L 1 0 f + g L 1 1+γ/2 ≤ 2 C K C 1 + C 1+γ/2 f − g L 1 1 where C K is from (6.8), and we have used monotonicity of norms (2.29) and definition of the set Ω from (6.3), which concludes the proof. Generation and propagation of exponential moments In the case of single monatomic gas [5] and monatomic gas mixtures [25], generation and propagation of polynomial moments implied the same properties of exponential moments. 
The same result holds for polyatomic gases. Theorem 7.1 (Generation and propagation of exponential moments). Let f be the solution of the Cauchy problem (6.1). The following properties hold. (a) (Generation) There exist constants β > 0 and B E > 0 such that E γ/2 [f ](β min {t, 1} , t) ≤ B E , ∀t ≥ 0. (b) (Propagation) Let 0 < s ≤ 1. Suppose that there exists a constant β 0 > 0, such that E s [f ](β 0 , 0) ≤ M 0 < ∞. (7.1) Then there exist constants 0 < β ≤ β 0 and C E > 0 such that E s [f ](β, t) ≤ C E , ∀t ≥ 0. (7.2) Proof. Let f be the solution of the Cauchy problem (6.1). The proof strongly relies on generation and propagation of polynomial moments stated in Theorem 5.3, but it uses polynomial moment ODI written in a slightly different manner. More precisely, we consider polynomial moment m δq [f ](t) =: m δq , 0 < δ ≤ 1, q ≥ 0, with δq > k * , with k * as defined in (5.2). Then, setting k = δq in (5.16) and (5.7) yields m δq ≤ 1 2 ∆ 2 [0,1] f f * 2 C δq C ub γ,α v, I 2 + v * , I * 2 δq , − b L 1 (dσ) c lb γ,α C lb γ,α v, I 2δq + v * , I * 2δq B dI * dv * dIdv. The only difference with respect to the proof of Lemma 5.2 comes with estimate on positive contribution. Here, since δ ≤ 1, we can write v, I 2 + v * , I * 2 δq ≤ v, I 2δ + v * , I * 2δ q Now we apply Lemma E.1, as we did for (5.8) in proof of Lemma 5.2, but because of the previous estimate we get slightly different result, v, I 2δ + v * , I * 2δ q ≤ v, I 2δq + v * , I * 2δq + ℓq ℓ=1 q ℓ v, I 2δℓ v * , I * 2(δq−δℓ) + v, I 2(δq−δℓ) v * , I * 2δℓ , with ℓ q = ⌊ q+1 2 ⌋. Now we use bounds from above (3.6) and below (3.9) for the transition functionB. Following the same ideas and with the same notation (5.13) as in Lemma 5.1, we obtain polynomial moment ODI where K 1 and K 2 are positive constants, K 1 =à δq , K 2 = 2 C ub γ,α , for δq > k * , with k * from (5.2). Part I: propagation of exponential moments. 
Using Taylor series of an exponential function, one can represent exponential moment as E s [f ](β, t) = ∞ k=0 β k k! m sk [f ](t). (7.4) We consider its partial sum an a shifted by γ/2 one, namely, When it will be important to highlight dependence on t and β, we will also, for example, write E n s (β, t) instead of E n s . E n s = n k=0 β k k! m sk , E n s;γ = n k=0 β k k! m sk+γ/2 ,(7. The idea of proof is to show that the partial sum E n s is bounded uniformly in time t and n. Taking derivative with respect to time t of (7.4), we get d dt E n s = n k=0 β k k! m ′ sk = k0−1 k=0 β k k! m ′ sk + n k=k0 β k k! m ′ sk ,(7.6) where k 0 is an index that will be determined later on. Since E n s is written in terms of m sk we derive ordinary differential inequality (ODI) for polynomial moment m sk . Indeed, in polynomial ODI (7.3) we set δ := s, q := k, that yields d dt m sk ≤ −K 1 m sk+γ/2 + K 2 C sk ℓ k ℓ=1 k ℓ m sℓ+γ/2 m sk−sℓ + m sk−sℓ+γ/2 m sℓ . (7.7) Now we make use the last inequality (7.7) for the second term from (7.6) that yields d dt E n s ≤ k0−1 k=0 β k k! m ′ sk − K 1 n k=k0 β k k! m sk+γ/2 + K 2 n k=k0 C sk β k k! ℓ k ℓ=1 k ℓ m sℓ+γ/2 m sk−sℓ + m sk−sℓ+γ/2 m sℓ =: S 0 − K 1 S 1 + K 2 S 2 . (7.8) We estimate each sum S 0 , S 1 and S 2 separately. S 0 ≤ c k0 k0−1 k=0 β k k! ≤ c k0 e β ≤ 2 c k0 ,(7.10) for β small enough to satisfy e β ≤ 2. (7.11) Term S 1 . We complete first the term S 1 to make appear shifted partial sum E n s;γ/2 by means of S 1 = n k=k0 β k k! m sk+γ/2 = E n s;γ − k0−1 k=0 β k k! m sk+γ/2 . From the bound (7.9) we can estimate m sk+γ/2 as well, m sk+γ/2 ≤ c k0 , k = 0, . . . , k 0 − 1, which together with considerations for the term S 0 yields S 1 ≥ E n s;γ − 2c k0 . (7.12) Term S 2 . Term S 2 can be separated into two terms that will be treated in the same way, namely S 2 = n k=k0 C sk β k k! ℓ k ℓ=1 k ℓ m sℓ+γ/2 m sk−sℓ + m sk−sℓ+γ/2 m sℓ =: S 21 + S 22 . Rearranging S 21 we can write S 21 = n k=k0 C sk ℓ k ℓ=1 β ℓ m sℓ+γ/2 ℓ! 
β k−ℓ m sk−sℓ (k − ℓ)! ≤ C sk0 E n s;γ E n s , where the last inequality is due to the monotone decreasing property of C k . Proceeding in the same fashion for S 22 , we can estimate S 2 ≤ 2 C sk0 E n s;γ E n s . (7.13) Finally, plugging all the estimates (7.10), (7.12) and (7.13) into (7.8), we get an ODI for E n s , d dt E n s ≤ −K 1 E n s;γ + 2c k0 (1 + K 1 ) + 2 K 2 C sk0 E n s;γ E n s . (7.14) Next goal is to find a bound on E n s using this ODI. To that end, for each n ∈ N we define T n := sup{t ≥ 0 : E n s (β, τ ) ≤ 4M 0 , ∀τ ∈ [0, t]},(7.15) where M 0 is a bound on initial exponential moment from (7.1). We will show that E n s (t) is uniformly bounded in t and n by proving that T n = ∞ for all n ∈ N. Firstly, we show that the sequence T n is well-defined and positive. Indeed, since β ≤ β 0 , at time t = 0 we have E n s (β, 0) = n k=0 β k k! m sk (0) ≤ n k=0 β k 0 k! m sk (0) ≤ E s (β 0 , 0) < 4M 0 , uniformly in n, by assumption (7.1). Since each term m sk (t) is continuous function of t, so is E n s (β, t). Therefore, E n s (β, t) < 4M 0 on some time interval [0, t n ), t n > 0, which implies that T n is well-defined and positive for every n ∈ N. From definition (7.15) of T n it follows that E n s (β, t) ≤ 4M 0 for t ∈ [0, T n ]. Then ODI (7.14) becomes d dt E n s ≤ −E n s;γ (K 1 − 8 K 2 C sk0 M 0 ) + 2c k0 (1 + K 1 ) . (7.16) Since C sk0 , converges to zero as k 0 goes to infinity, we can choose k 0 > k * s large enough so that the following inequality holds K 1 − 8 K 2 C sk0 M 0 > K 1 2 . (7.17) Therefore, ODI (7.16) becomes d dt E n s ≤ − K 1 2 E n s;γ + 2c k0 (1 + K 1 ) ,(7.18) for k 0 > k * s large enough. It remains to find a lower bound for E n s;γ in terms of E n s . Indeed, we can estimate E n s;γ = n k=0 β k k! m sk+γ/2 ≥ n k=0 β k k! { v,I ≥β −1/2 } f (t, v, I) v, I 2(sk+γ/2) dI dv ≥ β −γ/2 E n s − n k=0 β k k! { v,I <β −1/2 } f (t, v, I) v, I 2sk dI dv ≥ β −γ/2 E n s − n k=0 β k(1−s) k! m 0 (0) ≥ β −γ/2 E n s − m 0 (0)e β 1−s . 
Plugging this result into (7.18) yields d dt E n s ≤ − K 1 2 β −γ/2 E n s + K 1 2 β −γ/2 m 0 (0)e β 1−s + 2c k0 (1 + K 1 ) . By the maximum principle for ODEs, it follows E n s (β, t) ≤ max E n s (β, 0), m 0 (0) e β 1−s + 4c k0 (1 + K 1 ) K 1 β −γ/2 ≤ M 0 + m 0 (0) e β 1−s + β γ/2 4c k0 (1 + K 1 ) K 1 , (7.19) for any t ∈ [0, T n ]. On the other hand, since s ≤ 1, the following limit property holds m 0 (0) e β 1−s + β γ/2 4c k0 (1 + K 1 ) K 1 → m 0 (0), as β → 0, and m 0 (0) < E n s (β 0 , 0) for any β 0 , and therefore, by (7.1), m 0 (0) < M 0 . Thus, we can choose sufficiently small β = β 1 such that m 0 (0) e β 1−s + β γ/2 4c k0 (1 + K 1 ) K 1 < 3M 0 ,(7.20) for any s ≤ 1 and K 1 from (7.17) that depends on k * as well. In that case, inequality (7.19) implies the following strict inequality E n s (β, t) < 4M 0 , (7.21) for any t ∈ [0, T n ] and 0 < β ≤ β 1 , with β depending on k * defined in (5.2). For chosen k 0 depending on k * from (5.2) and such that (7.18) holds, we also take β so that 0 < β ≤ β 0 and (7.11), (7.20) are satisfied, which amounts to take β = min {β 0 , ln 2, β 1 }. In this case, we have strict inequality (7.21), E n s (β, t) < 4M 0 , that holds on the closed interval [0, T n ] uniformly in n. Because of the continuity of E n s (β, t) with respect to time t, this strict inequality actually holds on a slightly larger time interval [0, T n + ε), ε > 0. This contradicts the maximality of T n unless T n = +∞. Therefore, E n s (β, t) ≤ 4M 0 for all t ≥ 0 and n ∈ N. Thus, letting n → ∞ we conclude E s [f ](β, t) = lim n→∞ E n s [f ](β, t) ≤ 4M 0 , ∀t ≥ 0, i. e. the solution f to Boltzmann equation with finite initial exponential moment of order s and rate β 0 will propagate exponential moments of the same order s and a rate β that satisfies β = min {β 0 , ln 2, β 1 }. 
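The maximum principle for ODEs invoked before (7.19) is the following elementary comparison fact; a sketch, stated for a generic differentiable y (the letters K and D are local notation):

```latex
% Comparison fact: if y'(t) \le -K\, y(t) + D with K > 0, then
%   y(t) \le \max\{\, y(0),\; D/K \,\} \quad \text{for all } t \ge 0.
% Sketch: set M := \max\{ y(0), D/K \}. Whenever y(t) > M \ge D/K one has
%   y'(t) \le -K\, y(t) + D < -K\,(D/K) + D = 0,
% so y is strictly decreasing there and can never cross above the level M.
% Applied with y = E^n_s, \; K = \tfrac{K_1}{2}\,\beta^{-\gamma/2} and
%   D = \tfrac{K_1}{2}\,\beta^{-\gamma/2}\, m_0(0)\, e^{\beta^{1-s}} + 2 c_{k_0}(1 + K_1),
% this yields (7.19).
```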
Part II: generation of exponential moments In this Section, we associate an exponential moment of order γ and rate βt, with β depending on k * from (5.2), to the solution f of the Boltzmann equation, E γ/2 [f ](βt, t) = I i=1 R 3 f i (t, v) e βt v γ i dv = ∞ k=0 (βt) k k! m γk/2 [f ](t). (7.22) We define ts partial sum, and a shifted one E n γ/2 [f ](βt, t) = n k=0 (βt) k k! m γk/2 [f ](t), E n γ/2;γ [f ](βt, t) = n k=0 (βt) k k! m γk/2+γ/2 [f ](t). As in the previous Section, we will relieve notation by omitting explicit dependence on time t and relation to f , namely Coming back to (7.23), for the first term we simply re-index the sum and use definition of shifted partial sum, and for the last one we use the above polynomial moment ODI (7.24), which together implies E n γ/2 [f ](βt, t) =: E n γ/2 , E n γ/2;γ [f ](βt, t) := E n γ/2;γ . Taking time derivative of E n γ/2 yields d dt E n γ/2 = β n k=1 (βt) k−1 (k − 1)! m γk/2 + k0−1 k=0 (βt) k k! m ′ γk/2 + n k=k0 (βt) k k! m ′ γk/2 .d dt E n γ/2 ≤ β E n γ/2;γ + k0−1 k=0 (βt) k k! m ′ γk/2 − K 1 n k=k0 (βt) k k! m γk/2+γ/2 + K 2 n k=k0 (βt) k k! C γk 2 ℓ k ℓ=1 k ℓ m γℓ/2+γ/2 m γk/2−γℓ/2 + m γk/2−γℓ/2+γ/2 m γℓ/2 =: β E n γ/2;γ + S 0 − K 1 S 1 + K 2 (S 21 + S 22 ) . (7.25) We treat each term separately. Term S 0 . From polynomial moment generation estimate (5.24) we can bound polynomial moment of any order, as well as its derivative by means of (5.16). In particular, For S 0 taking t ≤ 1, we have m ′ γk/2 ≤c k0 t −k and therefore m γk/2 ≤ B m max t>0 {1, t −k }, m ′ γk/2 ≤ B γk/2 B m max t>0 {1, t −k }.S 0 := k0−1 k=0 (βt) k k! m ′ γk/2 ≤c k0 k0−1 k=0 β k k! ≤c k0 e β ≤ 2c k0 , for β such that e β ≤ 2. (7.26) Term S 1 . Using boundedness of m γk/2+γ/2 , we can write S 1 := n k=k0 (βt) k k! m γk/2+γ/2 = E n γ/2;γ − k0−1 k=0 (βt) k k! m γk/2+γ/2 ≥ E n γ/2;γ − 2c k0 1 t , for β chosen as in (7.26). Term S 2 . Terms S 21 and S 22 are treated in the same fashion. We will detail calculation for S 21 . 
We first reorganize the terms in sum and get . Gathering all estimates together, (7.25) becomes d dt E n γ/2 ≤ β E n γ/2;γ + 2c k0 − K 1 E n γ/2;γ − 2c k0 1 t + K 2 C γk 0 2 E n γ/2;γ E n γ/2 , (7.27) for β satisfying (7.26). For such β we γ fixed, we definē for t = 0, we get E n γ/2 (0, 0) ≤ E γ/2 (0, 0) = m 0 (0) < 4M 0 . Continuity of partial sum E n γ/2 with respect to t implies E n γ/2 (βt, t) ≤ 4M 0 on a slightly larger time interval t ∈ [0, t n ), t n > 0, and thusT n > 0. We now find a bound on E n γ/2 . Consider t ∈ [0,T n ]. On this interval, E n γ/2 (βt, t) ≤ 4M 0 , as well as sinceT n ≤ 1 yields t −1 ≥ 1, which implies for (7.27) the following estimate d dt E n γ/2 ≤ −E n γ/2;γ −β + K 1 − K 2 C γk 0 2 4M 0 + 2c k0 (1 + K 1 ) t . Since C γk 0 2 converges to zero as k 0 grows, we can choose large enough k 0 and small enough β, all depending on k * from (5.2), so that −β + K 1 − K 2 C γk 0 2 4M 0 > K 1 2 , that yields d dt E n γ/2 ≤ − K 1 2 E n γ/2;γ + K 3 t , for K 3 := 2c k0 (1 + K 1 ). Finally, shifted moment can be bounded as follows E n γ/2;γ (βt, t) = n+1 k=1 (βt) k m γk/2 (t) k! k βt ≥ 1 βt n k=2 (βt) k m γk/2 (t) k! ≥ E n γ/2 (βt, t) −M 0 βt , that yields d dt E n γ/2 ≤ − K 1 2βt E n γ/2 −M 0 − 2β K 1 K 3 . Now we choose β small enough so that M 0 + 2β K 1 K 3 < 2M 0 , or, equivalently β < K 1M0 2K 3 , which implies d dt E n γ/2 (βt, t) ≤ − K 1 2βt E n γ/2 (βt, t) − 2M 0 . Integrating this differential inequality with an integrating factor t K 1 2β , yields E n γ/2 (βt, t) ≤ max E n γ/2 (0, 0), 2M 0 ≤ 2M 0 , ∀t ∈ [0,T n ], (7.28) since E γ/2 (0, 0) = m 0 (0) < 2M 0 . Therefore, from (7.28) the following bound on E n γ/2 (βt, t) holds E n γ/2 (βt, t) ≤ 2M 0 < 4M 0 , ∀t ∈ [0,T n ]. Exploring the continuity of the partial sum E n γ/2 (βt, t) this inequality holds on a slightly larger interval, which contradicts maximality ofT n , unlessT n = 1. 
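For completeness, the integrating-factor step leading to (7.28) can be written out; a sketch, abbreviating y(t) := E^n_{γ/2}(βt, t) and c := K_1/(2β) (local notation):

```latex
% From y'(t) \le -\tfrac{c}{t}\,\big( y(t) - 2M_0 \big), \quad c = \tfrac{K_1}{2\beta} > 0:
\begin{align*}
  \frac{d}{dt}\Big( t^{\,c}\,\big( y(t) - 2M_0 \big) \Big)
   = t^{\,c}\Big( y'(t) + \tfrac{c}{t}\big( y(t) - 2M_0 \big) \Big) \le 0,
\end{align*}
% so t \mapsto t^{c}\,( y(t) - 2M_0 ) is non-increasing. Since y is bounded near
% t = 0 and c > 0, the quantity t^{c}( y(t) - 2M_0 ) tends to 0 as t \to 0^+,
% hence t^{c}( y(t) - 2M_0 ) \le 0, i.e. y(t) \le \max\{ y(0),\, 2M_0 \} = 2M_0
% on the time interval considered.
```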
Therefore, we can concludeT n = 1 for all n ∈ N, or in other words E n γ/2 (βt, t) ≤ 4M 0 , ∀t ∈ [0, 1], ∀n ∈ N. Letting n → ∞, we conclude E n γ/2 (βt, t) ≤ 4M 0 , ∀t ∈ [0, 1]. (7.29) In particular, for time t = 1, (7.29) can be seen as an initial condition for propagation (7.1), and thus the exponential moment of the order γ and a rate 0 <β ≤ β that depends on k * from (5.2), stays uniformly bounded for all t > 1. (6) Finally, we pass to pre-collisional quantities with the following mapping T 6 : |u| 2 , u |u| , V, I, E, r, ER, σ → |u ′ | 2 , u ′ |u ′ | , V ′ , I ′ , I ′ * , r ′ , R ′ , σ ′ . Let us compute Jacobian of this central transformation. First, for V we are using conservation law (2.5). Change of the unit vectors u |u| and σ can be considered as a rotation. Thus we eliminate these variables and for the rest of variables, we use the following relations and prove that it satisfies the lower bound stated in Lemma 3.3. To that end, we first define the two balls, as much as we did in the proof of sub-tangent condition in Proposition 1. Namely, we introduce the ball B vel (0, ρ 1 ) in the velocity space v ∈ R 3 and the ball B en (0, ρ 2 ) in the internal energy space I ∈ [0, ∞), Our proof is based on ideas from [5] for a single Boltzmann equation, used in [25] for the mixture setting, as well. It is worthwhile to mention that here f does not need to be a solution to the Boltzmann equation, it is an arbitrary function that satisfies assumption of the Lemma. The idea of proof is to divide the whole space R 3 × [0, ∞) into the ball B(0, ρ) and its complement, and to prove the lower bound (3.9) in these two spaces. Then, considerations for the complement B(0, ρ) c will fix the value for radius ρ = ρ * that ensures the lower bound (3.9) with an explicitly calculated constant c lb . Proof. Since the case γ = 0 is trivial, we suppose γ ∈ (0, 2]. We first consider the model 2 described in (3.13). 
For some ρ > 0 we first take (v, I) ∈ B(0, ρ) c , by following the notation (D.1). For the left-hand side of (3. We pass to the second step of the proof. We take now (v, I) ∈ B(0, ρ * ). For any S 1 , S 2 > 0, to be determined later, we write Appendix E. Some technical results Lemma E.1 (Polynomial inequality I). Assume p > 1, and let n p = ⌊ p+1 2 ⌋. Then for all x, y > 0, the following inequality holds (x + y) p − x p − y p ≤ np n=1 p n x n y p−n + x p−n y n . Lemma E.2 (Polynomial inequality II). Let b + 1 ≤ a ≤ p+1 2 . Then for any x, y ≥ 0, x a y p−a + x p−a y a ≤ x b y p−b + x p−b y b . Lemma E.3 (Interpolation inequality). Let k = αk 1 + (1 − α)k 2 , α ∈ (0, 1), 0 < k 1 ≤ k ≤ k 2 . Then for any g ∈ L 1 k g L 1 k ≤ g α L 1 k 1 g 1−α L 1 k 2 . (E.1) Appendix F. Existence and Uniqueness Theory for ODE in Banach spaces Theorem F.1. Let E := (E, · ) be a Banach space, S be a bounded, convex and closed subset of E, and Q : S → E be an operator satisfying the following properties: (c) One-sided Lipschitz condition [Q[u] − Q[v], u − v] ≤ C u − v , ∀u, v ∈ S, where [ϕ, φ] = lim h→0 − h −1 ( φ + hϕ − φ ). Then the equation ∂ t u = Q[u] , for t ∈ (0, ∞), with initial data u(0) = u 0 in S, has a unique solution in C([0, ∞), S) ∩ C 1 ((0, ∞), E). The proof of this Theorem on ODE flows on Banach spaces can be found in [5]. Remark 6. In Section 6, we will concentrate on E := L 1 1 . Therefore, for one-sided Lipschitz condition, we will use the following inequality, [ϕ, φ] ≤ ∆ ϕ(v, I) sign(φ(v, I)) v, I 2 dI dv, as pointed out in [5]. (v, v * , I, I * , R, r, σ)↔ (v ′ , v ′ * , I ′ , I ′ * , R ′ , r ′ , σ ′ ),(2.16) (v, v * , I, I * , R, r, σ) ↔ (v * , v, I * , I, R, 1 − r, −σ), (2.17) which secures microreversibility. That means the transition function B is invariant for such exchange of state satisfying B(v, v * , I, I * , R, r, σ) = B(v ′ , v ′ * , I ′ , I ′ * , R ′ , r ′ , σ ′ ) = B(v * , v, I * , I, R, 1 − r, −σ). Lemma 2 . 3 . 
For any α > −1, the following measure is invariant with respect to the changes (2.16)-(2.17).

Lemma 2.8. The collision invariants for the collision operator (2.13), i.e. functions χ(v, I) for which the weak form (2.21) annihilates,

∫_Δ Q(f, g)(v, I) χ(v, I) dI dv = 0,

are linear combinations of the following functions.

3.1. Models for transition function B. Next we propose three different choices of the transition function B and prove that each choice satisfies all conditions of Assumption 3.1. We also focus our attention on the multiplicative factors depending on r and R, and on the term b(û · σ)B̃, in the upper and lower bounds of B from (3.2), for γ ∈ (0, 2]. As proven in the appendix sections B.1.2 and B.2.2, this model satisfies the form (3.2), with

Definition 4.1 (The total energy in the Lebesgue weight form). Let v′, v′_*, I′ and I′_* be functions of v, v_*, I, I_*, r, R and σ as given in (2.8) and (2.6). Then we define the total energy in the Lebesgue weight (2.26) form as follows

Lemma 4.3 (The Polyatomic Compact Manifold Averaging Lemma). Let v′, v′_*, I′ and I′_* be given as in (2.8) and (2.6). Suppose that the functions b(σ · û) and d^ub_γ(r) satisfy the integrability conditions (3.3) and (3.4). Then the following estimate holds

Figure 1. Study of the constant C*_{γ,α} defined in (4.8), for the physical values of α = 0 and α = 0.5, while varying γ ∈ (0, 2].

Theorem 5.3 (Generation and propagation of polynomial moments). Let f be a solution of the Boltzmann equation (2.22) with the transition function from Assumption 3.1. Then the following properties hold.

y(t) := m_k[f](t), a := A_{k*}, b := B_k, c := γ/(2k).

(5.21) yields generation of polynomial moments (5.17), with k̄* from (5.2), while (5.22) implies the propagation result (5.18).

Remark 5. For the solution to the Bernoulli equation (5.20) we can write the following estimate.

∂_t f(t, v, I) = Q(f, f)(v, I), f(0, v, I) = f_0(v, I), (6.1)

under Assumption 3.1 on the transition function B.
Theorem 6 . 1 ( 61Existence and Uniqueness). Assume that f (0, v, I) = f 0 (v, I) ∈ Ω. Then the Boltzmann equation (6.1) for the transition function B under the Assumption 3.1 has the unique solution in C Proposition 1 . 1Fix f ∈ Ω. Then for any ǫ > 0 there exists h 1 > 0 such that B(f + hQ(f, f ), hǫ) ∩ Ω = ∅, (6.12) we notice B vel (0, ρ 1 ) × B en (0, ρ 2 ) ⊆ B(0, ρ), with B(0, ρ) := (v, I) ∈ R 3 × [0, ∞) : 1 2 1CK (C0+Cγ/2)(1+ρ γ ) ). Next, weak form (2.21) implies∆ Q(f ρ , f ρ )(v, I) dI dv = 0, ∆ Q(f ρ , f ρ )(v, I) v, I 2 dI dv = 0, which yields m 0 [g ρ ](t) = m 0 [f ](t), m 1 [g ρ ](t) = m 1 [f ](t), independently of ρ and h. Then upper and lower bounds for these polynomial moments of f imply the same kind of estimates for g ρ . m ′ δq (t) ≤ −K 1 m δq+γ/2 + C δq K 2 ℓq ℓ=1q ℓ m δℓ+γ/2 m δq−δℓ + m δq−δℓ+γ/2 m δℓ , (7.3) 2 + K 2 22need to make use of the equation for m ′ γk/2 . To that end, in polynomial ODI (7.3) we set δ := γ/2, q := k and get d dt m γk/2 ≤ −K 1 m γk/2+γ/γℓ/2+γ/2 m γk/2−γℓ/2 + m γk/2−γℓ/2+γ/2 m γℓ/2 . (7.24) Denotec k0 := max k∈{0,...k0−1} B m , B γk/2 B m . T n := sup t ∈ [0, 1] : E n γ/2 [f ](βt, t) ≤ 4M 0 .Let us first show thatT n is well defined. JJ (|u| 2 ,I,E,r,ER) →(|u ′ | 2 ,I ′ ,I ′ * ,r ′ ,R T6 = 1 − R (E − m 4 |u| 2 ) = 1 − R I + I * = 1 − R (1 − R ′ )E .Appendix D. Proof of the Lemma 3.3 (Lower bound)In this Section we considerB from (3.1), namelỹB(v, v * , I, I * ) := |u| γ + I + I * m γ/2 , u := v − v * , γ ∈ [0, 2]. BB vel (0, ρ 1 ) := v ∈ R 3 : |v| ≤ ρ 1 , B en (0, ρ 2 ) := I ∈ [0, ∞) : I m ≤ ρ 2 . as much as characteristic function χ ρ2 (I) of the interval [0, ρ 2 ], ρ 2 ≥ 0, in the internal energy space I ∈ [0, ∞). 
Then for any (v, I) ∈ B vel (0, ρ 1 ) × B en (0, ρ 2 vel (0, ρ 1 ) × B en (0, ρ 2 ) ⊆ B(0, ρ), with B(0, ρ) := (v, I) ∈ ∆ := R 3 × [0, ∞) f (t, w, J) |v − w| γ + Q [u] − Q[v] ≤ C u − v β , β ∈ (0, 1), ∀u, v ∈ S; (b) Sub-tangent condition lim h→0+ dist (u + hQ[u], S) h = 0, ∀u ∈ S; as proven is in the Appendix B.1.1, B.2.1. Therefore, using (2.12), performing the integration to calculate the constants (3.10) by means of the Gamma function, for this Model 1 it follows Term S 0 . Propagation of polynomial moments (5.18) ensures bound on m sk uniformly in time, which implies from (5.16) bound on its derivative as well. Thus, there exists a constant c k0 such that m sk , m ′ sk ≤ c k0 for all k ∈ {0, 1, . . . , k 0 }.(7.9) For S 0 this yields in terms of particular partitions of the total energy, as shows the following Lemma. AcknowledgmentsThe authors would like to thank Professor Thierry Magin for fruitful discussions on the topic. Authors also thank and gratefully acknowledge the hospitality and support from the Oden Institute of Computational Engineering and Sciences and the University of Texas Austin. TheAppendix A. Proof of the Lemma 2.1 (Jacobian of the collision transformation)Proof. Using ideas from[23], we decompose the mapping T from (2.9) into a sequence of mappings and calculate Jacobian of each of them. Then the Jacobian of T will be a product of those Jacobians. More precisely, T can be understood as a composition of the following transformationswhere composition is understood as (f • g)(x) = f (g(x)) and each T i is described below.(1) We first pass to the center-of-mass reference frame where u and V are relative velocity and velocity of center of mass from(2.3). 
It is clear that Jacobian of this transformation is 1,(2) For the relative velocity u we pass to its spherical coordinates |u| , u |u| , where u/ |u| ∈ S 2 is the angular variable, with the transformation T 2 , (u, V, I, I * , r, R, σ) → |u| , u |u| , V, I, I * , r, R, σ , whose Jacobian is(3) In order to facilitate further calculation, we consider square of relative speed instead of relative speed itself, with Jacobian J T5 = E.(7) Now we go back, first from squares to squares of relative speed to relative speed itself,with Jacobian(8) For u ′ we pass from spherical coordinates to Cartesian ones,with Jacobian(9) We go back from center-of-mass reference frame,with unit JacobianFinally, we get the Jacobian of transformation T ,where for the last inequality we have used |u ′ | = 4RE m and |u| = 4R ′ E m .Appendix B. Explicit calculation of multiplicative factors to the transition function modelsThis appendix provides upper and lower estimates for the multiplicative factors d lb γ (r), d ub γ (r), e lb γ (R), e ub γ (R) for the three models of transition function B = B(v, v * , I, I * , r, R, σ) introduced in section 3.1, (3.11), (3.13) and (3.15), namely, for any γ ∈ (0, 2]. Therefore, we can take d ub γ (r) = 1. and e ub γ (R) = m γ/2 ., and thus one possible choice isAnother more course estimate can be obtained by using R ≤ 1, which leads to the choice d ub γ (r) = e ub γ (R) = 1., and therefore d lb γ (r) = 1 and for e lb γ (R) we have two possible choices, Proof. We first writeThen, usingand since γ/2 ≤ 1, we can estimateFor the first integral we develop the square and use the first and the third assumption from (3.7), while for the second one we add and subtract λ, then use domain of integration to manipulate, which together yield Therefore, there we can find a constant c lb > 0 such that the desired lower bound (3.9) holds. As announced, this constant c lb can be constructed by using the inequality (D.7). 
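The technical inequalities collected in Appendix E (the polynomial inequality of Lemma E.1 and the interpolation inequality (E.1)), used throughout the moment estimates, are easy to sanity-check numerically. The sketch below is illustrative only; the discretization, the sample exponents and the tolerance are our own choices, not from the text:

```python
import math
import random

def binom(p, n):
    # Generalized binomial coefficient: Gamma(p+1) / (Gamma(n+1) * Gamma(p-n+1))
    return math.gamma(p + 1) / (math.gamma(n + 1) * math.gamma(p - n + 1))

def lemma_E1_holds(x, y, p, tol=1e-9):
    # Lemma E.1: (x+y)^p - x^p - y^p <= sum_{n=1}^{n_p} C(p,n) (x^n y^{p-n} + x^{p-n} y^n),
    # with n_p = floor((p+1)/2) and p > 1.
    n_p = int(math.floor((p + 1) / 2))
    rhs = sum(binom(p, n) * (x**n * y**(p - n) + x**(p - n) * y**n)
              for n in range(1, n_p + 1))
    return (x + y)**p - x**p - y**p <= rhs + tol

def interpolation_holds(g, v, k1, k2, alpha, tol=1e-9):
    # Discrete analogue of (E.1): m_k <= m_{k1}^alpha * m_{k2}^{1-alpha}
    # for k = alpha*k1 + (1-alpha)*k2, with bracket weights <v> = sqrt(1 + |v|^2).
    # This is exactly Hoelder's inequality with exponents 1/alpha and 1/(1-alpha).
    k = alpha * k1 + (1 - alpha) * k2
    w = [math.sqrt(1 + vi * vi) for vi in v]
    def m(kk):
        return sum(gi * wi**(2 * kk) for gi, wi in zip(g, w))
    return m(k) <= m(k1)**alpha * m(k2)**(1 - alpha) + tol

random.seed(0)
ok1 = all(lemma_E1_holds(random.uniform(0.1, 5.0), random.uniform(0.1, 5.0), p)
          for p in (1.5, 2.0, 3.7) for _ in range(200))
v = [random.uniform(-3.0, 3.0) for _ in range(50)]
g = [random.uniform(0.0, 1.0) for _ in range(50)]
ok2 = all(interpolation_holds(g, v, 1.0, 3.0, a) for a in (0.25, 0.5, 0.75))
print(ok1, ok2)
```

Both checks pass over the random samples; of course this does not replace the proofs, it only illustrates the inequalities in a discrete setting.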
References

[1] B. V. Alexeev, A. Chikhaoui, and I. T. Grushin, Application of the generalized Chapman-Enskog method to the transport-coefficient calculation in a reacting gas mixture, Phys. Rev. E, 49(4): 2809-2825, 1994.
[2] R. Alonso, Emergence of exponentially weighted L^p-norms and Sobolev regularity for the Boltzmann equation, Commun. Part. Diff. Eq., 44(5): 416-446, 2019.
[3] R. Alonso, V. Bagland, Y. Cheng, and B. Lods, One-dimensional dissipative Boltzmann equation: measure solutions, cooling rate, and self-similar profile, SIAM J. Math. Anal., 50(1): 1278-1321, 2018.
[4] R. Alonso, J. A. Cañizo, I. M. Gamba, and C. Mouhot, A new approach to the creation and propagation of exponential moments in the Boltzmann equation, Comm. Partial Differential Equations, 38(1): 155-169, 2013.
[5] R. J. Alonso and I. M. Gamba, Revisiting the Cauchy problem for the Boltzmann equation for hard potentials with integrable cross section: from generation of moments to propagation of L^∞ bounds, preprint, 2018.
[6] R. J. Alonso, I. M. Gamba, and M. Pavić-Čolić, Propagation of weighted Banach space regularity to solutions of the Boltzmann equations for polyatomic gases, work in progress, 2020.
[7] R. J. Alonso, I. M. Gamba, and M. B. Tran, The Cauchy problem and BEC stability for the quantum Boltzmann-Condensation system for bosons at very low temperature, preprint, arXiv:1609.07467v3, 2018.
[8] B. Anwasia, M. Bisi, F. Salvarani, and A. J. Soares, On the Maxwell-Stefan diffusion limit for a reactive mixture of polyatomic gases in non-isothermal setting, Kinet. Relat. Models, 13(1): 63-95, 2020.
[9] T. Arima, T. Ruggeri, M. Sugiyama, and S. Taniguchi, Non-linear extended thermodynamics of real gases with 6 fields, Int. J. Non Linear Mech., 72: 6-15, 2015.
[10] C. Baranger, M. Bisi, S. Brull, and L. Desvillettes, On the Chapman-Enskog asymptotics for a mixture of monoatomic and polyatomic rarefied gases, Kinet. Relat. Models, 11(4): 821-858, 2018.
[11] M. Bisi, G. Martalò, and G. Spiga, Multi-temperature hydrodynamic limit from kinetic theory in a mixture of rarefied gases, Acta Appl. Math., 122: 37-51, 2012.
[12] M. Bisi, G. Martalò, and G. Spiga, Multi-temperature Euler hydrodynamics for a reacting gas from a kinetic approach to rarefied mixtures with resonant collisions, Europhys. Lett., 95: 55002, 2011.
[13] M. Bisi, R. Monaco, and A. J. Soares, A BGK model for reactive mixtures of polyatomic gases with continuous internal energy, J. Phys. A, 51(12): 125501, 2018.
[14] M. Bisi, T. Ruggeri, and G. Spiga, Dynamical pressure in a polyatomic gas: Interplay between kinetic theory and Extended Thermodynamics, Kinet. Relat. Models, 11: 71-95, 2018.
[15] A. V. Bobylev, Moment inequalities for the Boltzmann equation and applications to spatially homogeneous problems, J. Statist. Phys., 88: 1183-1214, 1997.
[16] A. V. Bobylev and I. M. Gamba, Upper Maxwellian bounds for the Boltzmann equation with pseudo-Maxwell molecules, Kinet. Relat. Models, 10: 573-585, 2017.
[17] A. V. Bobylev, I. M. Gamba, and V. A. Panferov, Moment inequalities and high-energy tails for Boltzmann equations with inelastic interactions, J. Statist. Phys., 116: 1651-1682, 2004.
[18] J.-F. Bourgat, L. Desvillettes, P. Le Tallec, and B. Perthame, Microreversible collisions for polyatomic gases and Boltzmann's theorem, Eur. J. Mech. B/Fluids, 13(2): 237-254, 1994.
[19] S. Chapman and T. G. Cowling, The Mathematical Theory of Non-Uniform Gases, 3rd edn., Cambridge University Press, Cambridge, 1990.
[20] S. Dellacherie, On the Wang Chang-Uhlenbeck equations, Discrete Cont. Dyn.-B, 3(2): 229-253, 2003.
[21] L. Desvillettes, Some applications of the method of moments for the homogeneous Boltzmann and Kac equations, Arch. Rational Mech. Anal., 123: 387-404, 1993.
[22] L. Desvillettes, Sur un modèle de type Borgnakke-Larsen conduisant à des lois d'énergie non-linéaires en température pour les gaz parfaits polyatomiques, Ann. Fac. Sci. Toulouse Math., 6(2): 257-262, 1997.
[23] L. Desvillettes, R. Monaco, and F. Salvarani, A kinetic model allowing to obtain the energy law of polytropic gases in the presence of chemical reactions, Eur. J. Mech. B Fluids, 24: 219-236, 2005.
[24] V. Djordjić, M. Pavić-Čolić, and N. Spasojević, Kinetic and macroscopic modelling of a polytropic gas, arXiv:2004.12225, 2020.
[25] I. M. Gamba and M. Pavić-Čolić, On existence and uniqueness to homogeneous Boltzmann flows of monatomic gas mixtures, Arch. Ration. Mech. Anal., 235: 723-781, 2020.
[26] I. M. Gamba, V. Panferov, and C. Villani, Upper Maxwellian bounds for the spatially homogeneous Boltzmann equation, Arch. Ration. Mech. Anal., 194: 253-282, 2009.
[27] I. M. Gamba, L. Smith, and M. B. Tran, On the wave turbulence theory for stratified flows in the ocean, Mathematical Models and Methods in Applied Sciences, 30(1): 105-137, 2020.
[28] V. Giovangigli, Multicomponent Flow Modeling, MESST Series, Birkhauser Boston, 1999.
[29] S. Kosuge and K. Aoki, Shock-wave structure for a polyatomic gas with large bulk viscosity, Phys. Rev. Fluids, 3: 023401, 2018.
[30] X. Lu and C. Mouhot, On measure solutions of the Boltzmann equation, part I: moment production and stability estimates, J. Differential Equations, 252: 3305-3363, 2012.
[31] R. H. Martin, Nonlinear Operators and Differential Equations in Banach Spaces, Pure and Applied Mathematics, Wiley-Interscience, 1976.
[32] C. Mouhot, Rate of convergence to equilibrium for the spatially homogeneous Boltzmann equation with hard potentials, Comm. Math. Phys., 261: 629-672, 2006.
[33] M. Pavić-Čolić, Multi-velocity and multi-temperature model of the mixture of polyatomic gases issuing from kinetic theory, Physics Letters A, 383(24): 2829-2835, 2019.
[34] M. Pavić-Čolić, D. Madjarevic, and S. Simić, Polyatomic gases with dynamic pressure: Kinetic non-linear closure and the shock structure, International Journal of Non-Linear Mechanics, 92: 160-175, 2017.
[35] M. Pavić, T. Ruggeri, and S. Simić, Maximum entropy principle for rarefied polyatomic gases, Physica A, 392: 1302-1317, 2013.
[36] M. Pavić and S. Simić, Moment equations for polyatomic gases, Acta Appl. Math., 132(1): 469-482, 2014.
[37] M. Pavić-Čolić and M. Tasković, Propagation of stretched exponential moments for the Kac equation and Boltzmann equation with Maxwell molecules, Kinet. Relat. Models, 11(3): 597-613, 2018.
[38] T. Ruggeri, Non-linear maximum entropy principle for a polyatomic gas subject to the dynamic pressure, Bull. Inst. Math., Acad. Sin. (New Ser.), 11(1): 1-22, 2016.
[39] T. Ruggeri and M. Sugiyama, Rational Extended Thermodynamics beyond the Monatomic Gas, Springer, New York, 2015.
[40] S. Simić, M. Pavić-Čolić, and D. Madjarevic, Non-equilibrium mixtures of gases: Modelling and computation, Rivista di Matematica della Universita di Parma, 6(1): 135-214, 2015.
[41] M. Tasković, R. J. Alonso, I. M. Gamba, and N. Pavlović, On Mittag-Leffler moments for the Boltzmann equation for hard potentials without cutoff, SIAM J. Math. Anal., 50(1): 834-869, 2018.
[42] C. S. Wang Chang, G. E. Uhlenbeck, and J. de Boer, The heat conductivity and viscosity of polyatomic gases, in: Studies in Statistical Mechanics, vol. II, North-Holland, Amsterdam, 243-268, 1964.
[43] B. Wennberg, Entropy dissipation and moment production for the Boltzmann equation, Jour. Statist. Phys., 86(5-6): 1053-1066, 1997.
DOI: 10.5506/aphyspolbsupp.16.5-a16
arXiv: 2212.01034 (https://export.arxiv.org/pdf/2212.01034v2.pdf)
Accessing GPDs through the exclusive photoproduction of a photon-meson pair with a large invariant mass*

Goran Duplančić (Theoretical Physics Division, Rudjer Bošković Institute, HR-10002 Zagreb, Croatia), Saad Nabeebaccus (IJCLab, Université Paris-Saclay, CNRS/IN2P3, 91405 Orsay, France), Kornelija Passek-Kumerički (Theoretical Physics Division, Rudjer Bošković Institute, HR-10002 Zagreb, Croatia), Bernard Pire (CPHT, CNRS, Ecole polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau, France), Lech Szymanowski (National Centre for Nuclear Research (NCBJ), Warsaw, Poland), Samuel Wallon (IJCLab, Université Paris-Saclay, CNRS/IN2P3, 91405 Orsay, France)

5 Dec 2022. Received December 7, 2022.

We study the exclusive photoproduction of a photon-meson pair with a large invariant mass, working in the QCD factorisation framework. Explicitly, we consider a ρ-meson or a charged π in the final state. This process gives access to chiral-even GPDs as well as chiral-odd GPDs. We focus here on the chiral-even sector. The computation is performed at leading order and leading twist. We discuss the prospects of measuring them in various experiments such as JLab 12-GeV, COMPASS, the future EIC and the LHC (in ultraperipheral collisions). In particular, the high centre of mass energies available at collider experiments can be used to probe GPDs at small skewness ξ. We also compute the polarisation asymmetries with respect to the incoming photon. The results for an alternative distribution amplitude (the 'holographic' form) are also compared with the predictions obtained with an asymptotic distribution amplitude.

Introduction

A new family of 2 → 3 exclusive processes [1, 2, 3, 4, 5, 6] has been shown to be very promising in view of accessing generalised parton distributions (GPDs). In the present work, we focus on the exclusive photoproduction of a photon-meson pair with a large invariant mass.
Work in this direction has already been performed in [7] for the case of a ρ-meson in the final state, and in [8, 9] for a charged pion in the final state. Imposing a large value for the invariant mass of the photon-meson pair provides the hard scale for employing collinear QCD factorisation. Recently, QCD factorisation has been proven for a family of exclusive 2 → 3 processes [10, 11] at leading twist, which includes the process we study. The proof of factorisation relies on the transverse momenta of the outgoing photon and meson being large, rather than on their invariant mass, which is a stricter condition.

On the one hand, one of the main advantages of studying this channel is that, for a transversely polarised ρ-meson, the process gives access to chiral-odd GPDs at leading twist, unlike deeply virtual meson production (DVMP). Since chiral-odd GPDs are not well known experimentally, this provides an excellent opportunity to study them. On the other hand, these new channels with 3 particles in the final state offer complementary ways to access the chiral-even sector of GPDs, besides deeply virtual Compton scattering and DVMP. We presently focus on the chiral-even sector, which we illustrate with the case of a charged pion. More specifically, the process we study is

γ(q) + N(p_1) → γ(k) + N′(p_2) + m(p_m),    (1)

where m = ρ^{0,±}_{L,T}, π^±. We denote the masses of the nucleon and the meson by M and M_m respectively. The use of collinear QCD factorisation requires that −u′, −t′ and M^2_{γm} be large, where u′ = (p_m − q)^2, t′ = (k − q)^2 and M^2_{γm} = (p_m + k)^2, while −t, with t = (p_2 − p_1)^2, needs to be small. For this, we employ the cuts −u′, −t′ > 1 GeV^2 and −t < 0.5 GeV^2. We note that these cuts are sufficient to ensure that M^2_{γm} > 1 GeV^2. More details regarding the kinematics can be found in [7, 8]. The results will be expressed as functions of (−u′), (−t) and M^2_{γm}.

Computation

The chiral-even light-cone distribution amplitude (DA), e.g.
for the π^+ meson, is defined at leading twist 2 by the matrix element

⟨π^+(p_π)| ū(y) γ^5 γ^μ d(−y) |0⟩ = i f_π p_π^μ ∫_0^1 dz e^{−i(z−z̄) p_π·y} φ_π(z),    (2)

with z̄ = 1 − z and the decay constant f_π = 131 MeV. For the computation, we use the asymptotic form of the DA, as well as an alternative form, which we call the 'holographic' DA; both are normalised to 1 and given by

φ_as(z) = 6 z (1 − z),    φ_hol(z) = (8/π) √(z(1 − z)).    (3)

The chiral-even vector GPDs of a quark q (where q = u, d) in the nucleon target are defined by

⟨p(p_2, λ′)| q̄(−y/2) γ^+ q(y/2) |p(p_1, λ)⟩ = ∫_{−1}^{1} dx e^{−(i/2) x (p_1^+ + p_2^+) y^−} ū(p_2, λ′) [ γ^+ H^q(x, ξ, t) + (i/(2M)) σ^{+α} Δ_α E^q(x, ξ, t) ] u(p_1, λ),    (4)

and analogously for the chiral-even axial GPDs. In our analysis, the contributions from the GPDs E^q and Ẽ^q are neglected, since they are suppressed by kinematical factors at the cross-section level. The GPDs are parametrised through double distributions. We note that for the modelling of the chiral-even axial GPDs, we use two different parametrisations for the input PDFs: the standard scenario, for which the light sea quark and anti-quark distributions are flavour-symmetric, and the valence scenario, which corresponds to completely flavour-asymmetric light sea quark densities. More details can be found in [7, 8].

The amplitude for the process is expressed as the convolution over x and z of the coefficient function (hard part), the GPD and the DA. The fully differential cross-section, as a function of −u′, −t and M^2_{γm}, is then given by

dσ / (dt du′ dM^2_{γm}) |_{−t = (−t)_min} = |M|^2 / (32 S^2_{γN} M^2_{γm} (2π)^3),    (5)

where −t has been set to the minimum value (−t)_min allowed by the kinematics, including the imposed cuts, and is in general a function of M^2_{γm} and S_{γN}. We refer the reader to [7, 8, 9] for the details regarding the computation, the integration over the phase space, and the computation of the linear polarisation asymmetry (LPA) with respect to the incoming photon.
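As a quick numerical cross-check of the normalisation statement below Eq. (3), both DA models integrate to 1 on z ∈ [0, 1]. The following sketch (an illustration, not part of the paper's computation) verifies this with a simple composite midpoint rule:

```python
import math

def phi_as(z):
    # asymptotic DA: 6 z (1 - z)
    return 6.0 * z * (1.0 - z)

def phi_hol(z):
    # 'holographic' DA: (8 / pi) * sqrt(z (1 - z))
    return (8.0 / math.pi) * math.sqrt(z * (1.0 - z))

def midpoint(f, n=200_000):
    # composite midpoint rule on [0, 1]
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

print(midpoint(phi_as), midpoint(phi_hol))  # both ≈ 1
```

The midpoint rule is used because it never evaluates the integrand at the endpoints, where the square root in φ_hol has an infinite derivative.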
Note that the circular asymmetry vanishes for the present unpolarised target case.

Results

We present only a few plots which are representative, focusing on the π^+ case. In Fig. 1 on the left, we show the fully differential rate as a function of −u′, for different values of M^2_{γπ^+}. The effect of using the two different models for the distribution amplitude, as well as that of using the valence and standard scenarios for modelling the GPDs, is also illustrated. We thus find that using the holographic DA gives a result that is roughly twice that of the asymptotical DA. Still, to properly distinguish between the two models, one would need to include NLO corrections, since they can be large. The single differential cross-section as a function of M^2_{γπ^+} for different values of S_{γN} is shown on the right plot in Fig. 1.

Figure 1: Left: The fully differential cross-section for π^+ as a function of −u′ is shown. M^2_{γπ} = 3, 4, 5 GeV^2 correspond to black, red and blue respectively. The difference between standard (dotted) and valence (solid) scenarios for an asymptotical DA, and between standard (dot-dashed) and valence (dashed) scenarios for a holographic DA is also illustrated. S_{γN} is fixed at 20 GeV^2. Right: The single differential cross-section for π^+ as a function of M^2_{γπ^+}. The values S_{γN} = 8, 14, 20 GeV^2 correspond to brown, green and blue respectively. The same line style conventions for the GPD and DA models are used for both plots.

We note that while the fully differential cross-section is largest for smaller M^2_{γπ^+}, the range of −u′ is more restricted, due to the shrinking of the phase space.
In fact, there is a compromise between the two effects, and this explains the position of the peak around M^2_{γπ^+} ≈ 3 GeV^2 in the single differential cross-section plot on the right of Fig. 1. More complete results, see [9], show that the position of this peak is more or less the same as S_{γN} increases beyond 20 GeV^2.

Figure 2: Left: The plot shows the cross-section σ_{γπ^+} as a function of the centre of mass energy S_{γN}. Right: The LPA wrt the incoming photon for the π^+ case is shown as a function of M^2_{γπ^+}, at the single differential level. S_{γN} = 8, 14, 20 GeV^2 correspond to brown, green and blue respectively. In both plots, the same line style conventions as in Fig. 1 are used.

Finally, in Fig. 2, we show the variation of the cross-section as a function of S_{γN} (left). The cross-section drops rather rapidly with S_{γN}, and has a peak at around 20 GeV^2 (note the log scales for both axes). We note that while the LHC can access very high energies, the photon flux from the Pb nucleus in p-Pb collisions decreases very rapidly with S_{γN}. This, coupled with the fact that the cross-section itself decreases with increasing S_{γN}, implies that the total cross-section is dominated by the region of relatively small S_{γN}. The plot on the right of Fig. 2 corresponds to the LPA at the single differential level, as a function of M^2_{γπ^+}. An interesting feature of the plot is that the shape of the curves is very different for the two GPD models we consider, and therefore the LPA could be used to distinguish them.

The counting rates for ρ^0_L, ρ^+_L and π^+ mesons for LHC in UPC and the future EIC are shown in Table 1. For LHC, we used an integrated luminosity of 1200 nb^-1, while for the EIC, we used an expected integrated luminosity of 10^7 nb^-1. The range for the counting rates in each case is obtained by considering the minimum and maximum obtained from the different models (holographic DA vs asymptotical DA, and valence vs standard scenarios).
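The counting rates follow from the simple arithmetic N = σ × L_int, with the integrated luminosities quoted above (1200 nb^-1 for LHC in UPC, 10^7 nb^-1 for the EIC). The sketch below illustrates this conversion; the example cross-section values are placeholders for illustration, not the paper's predictions.

```python
# Integrated luminosities quoted in the text, in nb^-1.
LUMI_NB = {"LHC in UPC": 1.2e3, "future EIC": 1.0e7}

def expected_events(sigma_nb, experiment):
    """N = sigma * integrated luminosity, with sigma in nb."""
    return sigma_nb * LUMI_NB[experiment]

# Hypothetical cross-sections, to show the conversion:
print(expected_events(10.0, "LHC in UPC"))    # 12000.0 events
print(expected_events(1.0e-3, "future EIC"))  # 10000.0 events
```

Working backwards, a quoted rate of about 10^4 events at the LHC corresponds to a cross-section of order 10 nb for the 1200 nb^-1 sample.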
Two sets of counting rates are shown, one without any cut in S_{γN} and the other with a cut of S_{γN} ≥ 300 GeV^2. Introducing a lower bound on S_{γN} allows us to study GPDs in the small ξ region. At S_{γN} = 300 GeV^2, we find that the region of M^2_{γm} where the cross-section is maximum (see Fig. 1) corresponds to ξ ≈ 5 · 10^-3, and it goes down to ξ ≈ 7.5 × 10^-5 at S_{γN} = 20000 GeV^2. Despite the fact that the number of events is dominated by the region of S_{γN} ≤ 300 GeV^2, we find that there may still be reasonable statistics to prompt a study of our process in the small ξ region at the LHC and the EIC.

Table 1: The counting rates for ρ^0_L, ρ^+_L and π^+ mesons for LHC in UPC and the future EIC. The third column shows the counting rates without any cuts in S_{γN}, while the fourth corresponds to having a cut of S_{γN} ≥ 300 GeV^2, which gives access to the small ξ region.

Experiment     Meson    Without cut        S_{γN} ≥ 300 GeV^2
LHC in UPC     ρ^0_L    8.7-16 × 10^3      4.1-8.1 × 10^2
LHC in UPC     ρ^+_L    4.8-11 × 10^3      2.1-6.4 × 10^2
LHC in UPC     π^+      1.6-9.3 × 10^3     1.0-3.4 × 10^2
future EIC     ρ^0_L    13-24 × 10^3       5.9-12 × 10^2
future EIC     ρ^+_L    7.0-15 × 10^3      3.1-9.3 × 10^2
future EIC     π^+      2.3-13 × 10^3      1.4-5.0 × 10^2

The counting rates for the JLab 12-GeV experiment, which are roughly one order of magnitude larger than those reported in the third column of Table 1, can be found in [9]. Although the statistics are lower for p-Pb UPCs at the LHC and at the EIC, the energies that can be accessed are higher. This may enable a study of GPDs at small skewness ξ to be performed.

References

[1] D. Y. Ivanov, B. Pire, L. Szymanowski and O. V. Teryaev, Probing chiral odd GPD's in diffractive electroproduction of two vector mesons, Phys. Lett. B 550 (2002) 65 [hep-ph/0209300].
[2] M. El Beiyad, B. Pire, M. Segond, L. Szymanowski and S. Wallon, Photoproduction of a π ρ_T pair with a large invariant mass and transversity generalized parton distribution, Phys. Lett. B 688 (2010) 154 [1001.4491].
[3] A. Pedrak, B. Pire, L. Szymanowski and J. Wagner, Hard photoproduction of a diphoton with a large invariant mass, Phys. Rev. D 96 (2017) 074008 [1708.01043].
[4] B. Pire, L. Szymanowski and S. Wallon, Diffractive deeply virtual Compton scattering, Phys. Rev. D 101 (2020) 074005 [1912.10353].
[5] A. Pedrak, B. Pire, L. Szymanowski and J. Wagner, Electroproduction of a large invariant mass photon pair, Phys. Rev. D 101 (2020) 114027 [2003.03263].
[6] W. Cosyn and B. Pire, Diffractive rho plus lepton pair production at an electron-ion collider, Phys. Rev. D 103 (2021) 114002 [2103.01411].
[7] R. Boussarie, B. Pire, L. Szymanowski and S. Wallon, Exclusive photoproduction of a γρ pair with a large invariant mass, JHEP 02 (2017) 054 [1609.03830].
[8] G. Duplančić, K. Passek-Kumerički, B. Pire, L. Szymanowski and S. Wallon, Probing axial quark generalized parton distributions through exclusive photoproduction of a γπ^± pair with a large invariant mass, JHEP 11 (2018) 179 [1809.08104].
[9] G. Duplančić, S. Nabeebaccus, K. Passek-Kumerički, B. Pire, L. Szymanowski and S. Wallon, Accessing chiral-even quark generalised parton distributions in the exclusive photoproduction of a γπ^± pair with large invariant mass in both fixed-target and collider experiments, 2212.00655.
[10] J.-W. Qiu and Z. Yu, Exclusive production of a pair of high transverse momentum photons in pion-nucleon collisions for extracting generalized parton distributions, 2205.07846.
[11] J.-W. Qiu and Z. Yu, Single diffractive hard exclusive processes for the study of generalized parton distributions, 2210.07995.
DOI: 10.48550/arxiv.2212.03559
arXiv: 2212.03559 (https://export.arxiv.org/pdf/2212.03559v1.pdf)
Contrastive Deep Graph Clustering with Learnable Augmentation
Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Xinwang Liu, En Zhu
Index Terms: Graph Node Clustering, Contrastive Learning, Graph Neural Network, Learnable Augmentation
INTRODUCTION
In recent years, graph learning methods [1], [2], [3], [4], [5] have attracted considerable attention in various applications, e.g., node classification [6], [7], [8], graph anomaly detection [9], collaborative filtering [10], [11], molecular graphs [12], [13], recommendation [14], [15], [16], etc. Among all directions, deep graph clustering [17], [18], [19], [20], [21], which aims to encode nodes with neural networks and divide them into disjoint clusters without manual labels, has become a hot research topic. With its strong capability of capturing implicit supervision, contrastive learning has become an important technique in deep graph clustering. In general, the existing methods first generate augmented graph views by perturbing node connections or attributes, and then keep the same samples in different views consistent while enlarging the difference between distinct samples. Although verified to be effective, we find that the performance of the existing contrastive methods [18], [22], [23] heavily depends on the quality of the random augmentation operations, leading to uncertain performance. To alleviate this problem, in graph classification, JOAO [24] selects a proper augmentation type among several pre-defined candidates. Although better performance is achieved, the specific augmentation process is still based on pre-defined schemes and is not learnable. To fill this gap, AD-GCL [25] proposes a learnable augmentation scheme that drops edges according to a Bernoulli distribution, while neglecting augmentations on node attributes. More recently, AutoGCL [26] proposes an automatic augmentation strategy that masks or drops nodes by learning a probability distribution. These algorithms take a large step forward by making augmentation learnable. However, these strategies only focus on augmenting affinity matrices while neglecting the learning of good attribute augmentations.
Moreover, previous methods isolate the representation learning process from the downstream tasks, making the learned representations less suitable for the final learning task and degrading algorithm performance. To solve these issues, we propose a fully learnable augmentation strategy for deep contrastive clustering at both the structure and attribute levels. In particular, we design an adversarial mechanism to keep cross-view consistency in the latent space while ensuring the diversity of the augmented views. To be specific, we design structure and attribute augmentors to learn the structure and attribute information dynamically, thus avoiding the complex and random selection among existing, pre-defined augmentations. In addition, we refine the learned structure with the high-confidence clustering pseudo-label matrix and the cross-view sample similarity matrix. Moreover, during model training, we present a two-stage training strategy to obtain reliable clustering information. With these settings, we integrate the clustering task and augmentation learning into a unified framework. On the one hand, the high-quality augmented graph improves the discriminative capability of the embeddings, thus better assisting the clustering task. On the other hand, the high-confidence clustering results are utilized to refine the augmented graph structure: samples within the same cluster are more likely to be linked, while edges between samples from different clusters are removed. The key contributions of this paper are listed as follows:
• We propose a fully learnable data augmentation framework for deep contrastive clustering, termed GCC-LDA, which designs a structure augmentor and an attribute augmentor to dynamically learn the structure and attribute information.
• Under clustering guidance, GCC-LDA refines the augmented graph structure with the cross-view similarity matrix and the high-confidence pseudo-label matrix.
• The clustering task and the augmentation learning are integrated into a unified framework, where they promote each other.
• Extensive experimental results have demonstrated that GCC-LDA outperforms the existing state-of-the-art deep graph clustering competitors.

RELATED WORK
Contrastive Deep Graph Clustering
Clustering aims to divide nodes into disjoint clusters [27], [28], [29], [30], [31], [32], [33], [34], [35]. Among these methods, deep graph clustering has attracted great attention in recent years. The existing deep graph clustering methods can be roughly categorized into three classes: generative methods [17], [19], [21], [36], [37], [38], [39], [40], adversarial methods [20], [41], [42], and contrastive methods [18], [22], [23], [43], [44], [45]. More details of deep graph clustering can be found in the survey paper [46]. In recent years, contrastive learning has achieved great success in vision [47], [48], [49], [50] and graphs [5], [51], [52], [53], [54], [55]. In this paper, we focus on the data augmentation of contrastive deep graph clustering methods. Concretely, the pioneering AGE [43] conducts contrastive learning with a designed adaptive encoder. Besides, MVGRL [22] generates two augmented graph views. Subsequently, DCRN [18] and IDCRN [56] aim to alleviate representation collapse by reducing correlation at both the sample and feature levels. Meanwhile, positive and negative sample selection has attracted great attention from researchers; concretely, GDCL [23] develops a debiased sampling strategy to correct the bias for negative samples. Although promising performance has been achieved, previous methods generate different graph views by adopting uniform data augmentations such as graph diffusion, edge perturbation, and feature disturbance. Moreover, these augmentations are manually selected and cannot be optimized by the network, thus limiting the performance.
To solve this problem, we propose a novel contrastive deep graph clustering framework with learnable graph data augmentations.

Data Augmentation in Graph Contrastive Learning
Graph data augmentation [57], [58], [59] is an important component of contrastive learning. Existing data augmentation methods in graph contrastive learning can be roughly divided into three categories, i.e., augmentation-free methods [60], adaptive augmentation methods [24], [61], [62], and learnable data augmentation methods [25], [26], [63]. AFGRL [60] generates an alternative view by discovering nodes that carry local and global information, without augmentation; however, the diversity of the constructed view is limited, leading to poor performance. Furthermore, to make graph augmentation adaptive to different tasks, JOAO [24] learns a sampling distribution over pre-defined augmentations to select them automatically. GCA [61] proposes an adaptive augmentation that incorporates various priors on the topological and semantic aspects of the graph. However, in these adaptive methods the augmentation itself is still not learnable. Besides, in the field of graph classification, AD-GCL [25] proposes a learnable edge-level augmentation while neglecting augmentations at the node level. More recently, AutoGCL [26] proposes a probability-based learnable augmentation. Although promising performance has been achieved, these methods still rely on existing, pre-defined data augmentations. In this work, we propose a fully learnable augmentation strategy: compared with existing algorithms, GCC-LDA generates the augmented view in a learnable way at both the structure and attribute levels.
The discriminative capacity of the network can be improved by the learnable augmentation.

Notation summary (Table 1):
X ∈ R^{N×D}: attribute matrix
A ∈ R^{N×N}: original adjacency matrix
D ∈ R^{N×N}: degree matrix
E^{v_k} ∈ R^{N×d}: node embeddings in the k-th view
ϕ(·): non-parametric metric function
S ∈ R^{N×N}: cross-view sample similarity matrix
Z ∈ R^{N×N}: high-confidence pseudo-label matrix
Aug_X ∈ R^{N×d}: augmented attribute matrix
Aug_S ∈ R^{N×N}: augmented structure matrix

METHOD
In this section, we propose a novel Graph Contrastive Clustering method with Learnable Data Augmentation (GCC-LDA). The overall framework of GCC-LDA is shown in Fig. 1. The main components of the proposed method are the learnable graph augmentation module and the reliable refinement module. We detail the proposed GCC-LDA in the following subsections.

Notation Definitions
For an undirected graph G = {X, A}, X ∈ R^{N×D} is the attribute matrix and A ∈ R^{N×N} is the original adjacency matrix. D = diag(d_1, d_2, ..., d_N) ∈ R^{N×N} is the degree matrix, where d_i = Σ_{(v_i, v_j)∈E} a_ij. The graph Laplacian matrix L = D − A is normalized as L̃ = D^{−1/2} L D^{−1/2}. Moreover, ϕ(·) denotes a non-parametric metric function used to calculate pair-wise similarity, e.g., the cosine similarity function. Aug_S and Aug_X denote the augmented structure and attribute matrices, respectively. The basic notations are summarized in Table 1.

Learnable Graph Augmentation Module
In this subsection, we propose a learnable graph augmentation strategy at both the structure and attribute levels. To be specific, we design a structure augmentor and an attribute augmentor to dynamically learn the structure and the attributes, respectively. In the following, we introduce these augmentors in detail.

MLP-based Structure Augmentor
The structure Aug_S is learned by a Multi-Layer Perceptron (MLP) in the MLP structure augmentor as follows:

A_MLP = ϕ(E) = ϕ(MLP(A)),   (1)

where E ∈ R^{N×D} is the embedding of the original adjacency.
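As a concrete illustration of Eq. (1), the following is a minimal NumPy sketch of the MLP structure augmentor, not the authors' implementation: the toy graph size, the random (untrained) MLP weights, and the helper names are illustrative assumptions, and ϕ(·) is taken to be row-wise cosine similarity as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # one hidden ReLU layer; in the paper these weights would be trained
    return np.maximum(x @ w1, 0.0) @ w2

def cosine_similarity_matrix(e):
    # phi(.): pairwise cosine similarity between row embeddings
    e = e / (np.linalg.norm(e, axis=1, keepdims=True) + 1e-12)
    return e @ e.T

n, d = 6, 4                                   # toy graph: 6 nodes
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, no self-loops
w1 = rng.standard_normal((n, 8))              # untrained MLP weights
w2 = rng.standard_normal((8, d))

E = mlp(A, w1, w2)                            # embed each adjacency row
A_mlp = cosine_similarity_matrix(E)           # learned structure, Eq. (1)
print(A_mlp.shape)                            # (6, 6)
```

Because ϕ(·) is a cosine similarity over row embeddings, the resulting A_mlp is symmetric with entries in [−1, 1]; a trained MLP would shape E so that this matrix reflects the desired augmented structure.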
Here, we adopt the cosine similarity function as ϕ(·) to calculate the learned structure matrix A_MLP.

Figure 1: Illustration of the learnable augmentation algorithm for graph contrastive clustering. In our proposed algorithm, an adversarial learning mechanism is designed to keep cross-view consistency in the latent space while ensuring the diversity of the augmented views. Besides, we design the structure and attribute augmentors to dynamically learn the structure and attribute information, respectively. Moreover, we optimize the structure of the augmented view from two aspects. On the one hand, with the two-stage training strategy, we obtain the high-confidence clustering pseudo-label matrix Z_ij. On the other hand, we calculate the cross-view similarity matrix S_ij to reflect the nodes' adjacency relationships. After that, we refine the learnable structure Aug_S with Z_ij and S_ij, thus integrating the clustering task and the augmentation learning into a unified framework.

GCN-based Structure Augmentor
The GCN-based structure generator embeds the attribute matrix X and the original adjacency matrix A into the latent space. For simplicity, we define the GCN-based structure augmentor as:

A_GCN = ϕ(E) = σ(D^{−1/2} A D^{−1/2} X),   (2)

where, similar to Eq. (1), E is the embedding extracted by a GCN network, e.g., GCN [6] or GCN-Cheby [64], and σ(·) is a non-linear operation.

Attention-based Structure Augmentor
Inspired by GAT [7], we design an attentive network to capture the important structure of the input graph G. To be specific, the raw and normalized attention coefficients between nodes x_i and x_j are computed as:

Â_att,ij = n^T (W x_i || W x_j),
A_att,ij = exp(Â_att,ij) / Σ_{k∈N_i} exp(Â_att,ik),   (3)

where n and W are the learnable weight vector and weight matrix, respectively, || denotes the concatenation of W x_i and W x_j, and N_i represents the indices of the neighbors of node x_i.
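Eq. (3) can be sketched as follows. This is a hedged toy example rather than the paper's code: the fully connected toy graph, the random (untrained) weights, and the omission of GAT's LeakyReLU non-linearity (which Eq. (3) does not show) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out = 5, 3, 4
X = rng.standard_normal((n, d_in))            # node attributes
A = np.ones((n, n))                           # toy graph: fully connected
W = rng.standard_normal((d_in, d_out))        # learnable weight matrix
a = rng.standard_normal(2 * d_out)            # attention vector n in Eq. (3)

H = X @ W
# raw coefficient for every pair (i, j): n^T (W x_i || W x_j)
raw = np.array([[a @ np.concatenate([H[i], H[j]]) for j in range(n)]
                for i in range(n)])
# softmax over each node's neighborhood, Eq. (3)
raw = np.where(A > 0, raw, -np.inf)
att = np.exp(raw - raw.max(axis=1, keepdims=True))
att = att / att.sum(axis=1, keepdims=True)
print(att.sum(axis=1))                        # each row sums to 1
```

Each row of att is a probability distribution over a node's neighbors, so the learned structure preserves the relative importance of each connection.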
By this setting, the model can preserve important topological and semantic graph patterns via the attention mechanism. To make the augmented view fully learnable, we design the attribute augmentor to dynamically learn the original attributes.

MLP-based Attribute Augmentor
Similar to the MLP-based structure augmentor, we utilize a Multi-Layer Perceptron (MLP) to learn the original attribute matrix X. The learned attribute matrix X_MLP ∈ R^{N×D} can be presented as:

X_MLP = MLP(X),   (4)

where MLP(·) is the MLP network that learns the attributes.

Attention-based Attribute Augmentor
To guide the network to pay more attention to the important node attributes, we also design an attention-based attribute augmentor. Specifically, we map the node attributes into three different latent spaces:

Q = W_q X^T, K = W_k X^T, V = W_v X^T,   (5)

where W_q, W_k, W_v ∈ R^{D×D} are learnable parameter matrices, and Q, K, V ∈ R^{D×N} denote the query, key, and value matrices, respectively. The attention-based attribute matrix Aug_X can be calculated by:

Aug_X = softmax(K^T Q / √D) V^T.   (6)

After the structure augmentor and the attribute augmentor, we obtain the augmented view G^{v2} = (Aug_S, Aug_X), which is fully learnable.

Reliable Refinement Module
In this subsection, we first embed the nodes into the latent space:

E = F(G),   (7)

where F(·) denotes the encoder of our feature extraction framework. Subsequently, we obtain the ℓ2-normalized embeddings of the original view G^{v1} and the augmented view G^{v2} as follows:

E^{v1} = F(G^{v1}); E^{v2} = F(G^{v2}).   (8)

We then fuse the two views of the node embeddings:

E = (E^{v1} + E^{v2}) / 2.   (9)

Then we perform K-means [65] on E and obtain the clustering results. After that, we refine the learned view in two manners, i.e., similarity matrix refinement and pseudo-label matrix refinement.
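The attention-based attribute augmentor of Eqs. (5)-(6) amounts to one round of scaled dot-product attention over the attribute matrix. A minimal NumPy sketch under assumed toy shapes and untrained projection matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 5, 4
X = rng.standard_normal((N, D))               # node attribute matrix
# untrained stand-ins for the learnable W_q, W_k, W_v of Eq. (5)
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))

Q = Wq @ X.T                                  # D x N query matrix
K = Wk @ X.T                                  # D x N key matrix
V = Wv @ X.T                                  # D x N value matrix

def softmax(m, axis):
    m = m - m.max(axis=axis, keepdims=True)
    e = np.exp(m)
    return e / e.sum(axis=axis, keepdims=True)

# Eq. (6): Aug_X = softmax(K^T Q / sqrt(D)) V^T
Aug_X = softmax(K.T @ Q / np.sqrt(D), axis=1) @ V.T
print(Aug_X.shape)                            # (5, 4)
```

The output has the same N x D shape as X, so it can directly replace the attributes of the augmented view G^{v2}.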
Similarity Matrix Refinement
Through F(·), we obtain the embeddings of each view. The similarity matrix S then represents the similarity between the i-th sample in the first view and the j-th sample in the second view:

S_ij = (E^{v1}_i)^T E^{v2}_j,  i, j ∈ [1, 2, ..., N],   (10)

where S is the cross-view similarity matrix. The proposed similarity matrix S measures the similarity between samples by comprehensively considering both attribute and structure information, so the connected relationships between different nodes can be reflected by S. Therefore, we utilize S to refine the structure of the augmented view with the Hadamard product:

Aug_S = Aug_S ⊙ S.   (11)

Pseudo-label Matrix Refinement
To further improve the reliability of the learned structure matrix, we extract reliable clustering information to construct a matrix that further refines the structure of the augmented view. Concretely, we utilize the top-τ high-confidence pseudo labels p to construct the matrix as follows:

Z_ij = 1 if p_i = p_j, and Z_ij = 0 if p_i ≠ p_j,   (12)

where Z_ij denotes the category relation between the i-th and j-th samples: Z_ij = 1 means the two samples have the same pseudo label, while Z_ij = 0 implies they have different pseudo labels. Since the pseudo-label matrix is constructed from high-confidence category information, the adjacency relations in the graph are well reflected, which helps optimize the learned structure in the augmented view. The pseudo-label matrix refines the learned structure with the Hadamard product:

Aug_S = Aug_S ⊙ Z.   (13)

In summary, in this subsection we propose two strategies to refine the structure of the augmented view. With these settings, the reliability of the learned structure Aug_S is improved and the important topology is preserved. Moreover, we introduce the detailed implementation of our method with PyTorch-style pseudo code in Algorithm 2.
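The two refinement steps of Eqs. (10)-(13) reduce to Hadamard products. A small sketch, with random toy embeddings and hand-picked pseudo labels standing in for the learned quantities:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 6, 4
E1 = rng.standard_normal((N, d))              # embeddings of view 1
E2 = rng.standard_normal((N, d))              # embeddings of view 2
Aug_S = rng.random((N, N))                    # learned structure, pre-refinement

S = E1 @ E2.T                                 # cross-view similarity, Eq. (10)
labels = np.array([0, 0, 1, 1, 2, 2])         # toy high-confidence pseudo labels
Z = (labels[:, None] == labels[None, :]).astype(float)  # Eq. (12)

Aug_S = Aug_S * S                             # Eq. (11), Hadamard product
Aug_S = Aug_S * Z                             # Eq. (13), Hadamard product
# entries linking samples with different pseudo labels are zeroed out
print(Aug_S[0, 2])                            # 0.0
```

Note how the pseudo-label mask acts as a hard filter (edges across clusters vanish), while the similarity matrix rescales the surviving edges by cross-view agreement.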
Loss Function
The proposed GCC-LDA framework follows the common contrastive learning paradigm, where the model maximizes the agreement across views [61], [66], [67], [68], [69]. In detail, GCC-LDA jointly optimizes two loss functions: the learnable augmentation loss L_a and the contrastive loss L_c. To be specific, L_a is the negative Mean Squared Error (MSE) between the original graph and the learnable graph, which can be formulated as:

L_a = −(‖A − Aug_S‖²₂ + ‖X − Aug_X‖²₂),   (14)

where A and Aug_S are the original and learned structure, and X and Aug_X are the original and learned attributes, respectively. In GCC-LDA, we utilize the normalized temperature-scaled cross-entropy loss (NT-Xent) to pull positive samples close while pushing negative samples away. The contrastive loss L_c is defined as:

l_{i,j} = −log [ exp(sim(E^{v1}_i, E^{v2}_i)/temp) / Σ_{k=1, k≠i}^{N} exp(sim(E^{v1}_i, E^{v2}_k)/temp) ],
L_c = (1/2N) Σ_{k=1}^{N} [l(2k−1, 2k) + l(2k, 2k−1)],   (15)

where temp is a temperature parameter and sim(·) denotes a function to calculate similarity, e.g., the inner product. The total loss of GCC-LDA is calculated as:

L = L_a + αL_c,   (16)

where α is the trade-off between L_a and L_c. The first term in Eq. (16) encourages the network to generate an augmented view with distinct semantics to ensure diversity in the input space, while the second term is the contrastive paradigm that learns the consistency of the two views in the latent space. The discriminative capacity of the network is improved by minimizing the total loss function in an adversarial manner. The detailed learning process of GCC-LDA is shown in Algorithm 1. The memory cost of L is acceptable; the detailed experiments are shown in Section 4.3. Besides, we design a two-stage training strategy for the overall training procedure. To be specific, the discriminative capacity of the network is improved in the first training stage.
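The NT-Xent loss of Eq. (15) can be sketched as below. This is a simplified NumPy version, not the authors' PyTorch code: cosine similarity is used as sim(·) via ℓ2-normalization, and the negatives are the cross-view pairs with k ≠ i as in the equation.

```python
import numpy as np

def nt_xent(E1, E2, temp=0.5):
    """Simplified NT-Xent: the positive pair for node i is (E1_i, E2_i);
    all cross-view pairs (E1_i, E2_k) with k != i act as negatives."""
    # l2-normalise rows so the inner product equals cosine similarity
    E1 = E1 / np.linalg.norm(E1, axis=1, keepdims=True)
    E2 = E2 / np.linalg.norm(E2, axis=1, keepdims=True)
    sim = np.exp(E1 @ E2.T / temp)        # N x N cross-view similarities
    pos = np.diag(sim)                    # exp(sim(E1_i, E2_i)/temp)
    neg = sim.sum(axis=1) - pos           # sum over k != i
    return float(np.mean(-np.log(pos / neg)))

rng = np.random.default_rng(4)
E1 = rng.standard_normal((8, 16))
loss_aligned = nt_xent(E1, E1)                           # identical views
loss_random = nt_xent(E1, rng.standard_normal((8, 16)))  # unrelated views
print(loss_aligned, loss_random)
```

With identical views the positives dominate the negatives, so the loss is far lower than for unrelated views; minimizing L_c therefore pulls the two views of each node together while pushing other nodes apart.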
Then, in the second stage, we refine the learned structure Aug_S in the augmented view with the more reliable similarity matrix and pseudo-label matrix.

EXPERIMENT
Experimental Setup
Benchmark Datasets. The experiments are implemented on six widely-used benchmark datasets: CORA [43], BAT [76], EAT [76], AMAP [18], CITESEER, and UAT [76]. The summarized information is shown in Table 3.
Training Details. The experiments are conducted on the PyTorch deep learning platform with an Intel Core i7-7820x CPU, one NVIDIA GeForce RTX 2080Ti GPU, and 64GB RAM. The maximum number of training epochs is set to 400. For fairness, we conduct ten runs for all methods. For the baselines, we adopt their source code with the original settings and reproduce the results.

Algorithm 1 (steps 2-16):
2: Obtain the learned structure matrix Aug_S and attribute matrix Aug_X with our augmentors.
3: Encode the nodes with the network F(·) to obtain the node embeddings E^{v1} and E^{v2} with Eq. (7).
4: Fuse E^{v1} and E^{v2} to obtain E with Eq. (9).
5: Perform K-means on E to obtain the clustering result.
6: Calculate the similarity matrix of E^{v1} and E^{v2}.
7: Obtain the high-confidence pseudo-label matrix.
8: if i > num then
9: Refine the learned structure matrix Aug_S with Eq. (11) and Eq. (13).
10: end if
11: Calculate the learnable augmentation loss L_a with Eq. (14).
12: Calculate the contrastive loss L_c with Eq. (15).
13: Update the whole network by minimizing L in Eq. (16).
14: end for
15: Perform K-means on E to obtain the final clustering result R.
16: return R

Evaluation Metrics. The clustering performance is evaluated by four metrics: Accuracy (ACC), Normalized Mutual Information (NMI), Average Rand Index (ARI), and macro F1-score (F1) [77], [78], [79].
Parameter Settings. In our model, the learning rate is set to 1e-3 for UAT, 1e-4 for CORA/CITESEER, 1e-5 for AMAP/BAT, and 1e-7 for EAT. The threshold τ is set to 95% for all datasets. The epoch num at which the second training stage begins is set to 200.
The trade-off α is set to 0.5.

Performance Comparison
In this subsection, to verify the superiority of GCC-LDA, we compare the clustering performance of our proposed algorithm with 18 baselines on six datasets with four metrics. We divide these methods into four categories, i.e., classical deep clustering methods (DEC [70], DCN [71], MGAE [80], DAEGC [36], ARGA [20], SDCN [17], AdaGAE [72]), contrastive deep graph clustering methods (AGE [43], MVGRL [22], DFCN [21], GDCL [23], DCRN [18], AGC-DRR [73]), graph structure learning methods (SLAPS [74], SUBLIME [75]), and graph augmentation methods (GCA [61], AFGRL [60], AutoSSL [63]). Here, we adopt the attention structure augmentor and the MLP attribute augmentor to generate the augmented view in a learnable way. From the results in Table 2, Table 4, and Table 5, we observe and analyze the following. 1) GCC-LDA obtains better performance than classical deep graph clustering methods. The reason is that they rarely consider the topological information in the graph. 2) Contrastive deep graph clustering methods achieve sub-optimal performance compared with ours. We conjecture that the discriminative capacity of GCC-LDA is improved by the learnable augmentation and the optimization strategies. 3) The classical graph augmentation methods achieve unsatisfactory clustering performance. This is because they merely consider the learnability of the structure while neglecting the attributes; moreover, most of these methods cannot be optimized jointly with the downstream tasks. 4) The graph structure learning methods are not comparable with ours. We attribute this to the fact that those methods refine the structure with an unreliable strategy at the beginning of training. In summary, our method outperforms most other algorithms on six datasets with four metrics.
Taking the result on the CORA dataset for example, GCC-LDA exceeds the runner-up by 1.41%, 0.58%, 3.72%, and 4.05% with respect to ACC, NMI, ARI, and F1.

Time Cost and Memory Cost
In this subsection, we carry out time and memory cost experiments to demonstrate the efficiency of the proposed GCC-LDA. Specifically, we test the training time of GCC-LDA against the baselines on four datasets. For fairness, we train all algorithms for 400 epochs. From the results in Table 7, we observe that the training time of GCC-LDA is comparable with that of the other algorithms. We attribute this to the following reason: instead of using a GCN as the encoder network, we adopt a graph filter to smooth the features, which effectively reduces time consumption. Moreover, we conduct experiments to test the GPU memory costs of our proposed GCC-LDA against five methods (i.e., DAEGC [36], SDCN [17], AGE [43], MVGRL [22], SCAGC [44]) on six datasets. From the results in Fig. 3, we observe that the memory costs of GCC-LDA are also comparable with those of the other algorithms.

Ablation Studies
In this section, we first conduct ablation studies to verify the effectiveness of the proposed modules, then analyze the robustness of GCC-LDA to the hyper-parameters, and last conduct experiments to verify the effectiveness of our proposed loss function.

Figure 2: 2D t-SNE visualization of seven methods on two benchmark datasets. The first row and the second row correspond to the CORA and AMAP datasets, respectively.

Effectiveness of the Structure and Attribute Augmentor
To verify the effect of the proposed structure and attribute augmentors, we conduct extensive experiments as shown in Table 6. Here, we adopt "(w/o) Aug_X", "(w/o) Aug_S", and "(w/o) Aug_X & Aug_S" to represent the reduced models obtained by removing the attribute augmentor, the structure augmentor, and both, respectively.
From the observations, it is apparent that the performance decreases without either of our proposed augmentors, revealing that both augmentors make essential contributions to boosting the performance. Taking the result on the CORA dataset for example, the model performance is improved substantially by utilizing the attribute augmentor.

Effectiveness of the Similarity and Pseudo-label Matrix Optimization
In this subsection, we carry out experiments to verify the effectiveness of our optimization strategies, i.e., similarity matrix optimization and pseudo-label matrix optimization. Here, we adopt the model without any optimization strategy as the baseline. For simplicity, we denote by "NP+NS", "S", "P", and "P+S" the baseline, the baseline with similarity matrix optimization, the baseline with pseudo-label matrix optimization, and ours, respectively.

Table 6: Ablation studies of the learnable graph augmentation module of GCC-LDA on four datasets. "(w/o) Aug_X", "(w/o) Aug_S", and "(w/o) Aug_X & Aug_S" represent the reduced models obtained by removing the attribute augmentor, the structure augmentor, and both, respectively. Additionally, our algorithm is compared with four classic data augmentations.

From the results in Fig. 4, we observe that the performance of GCC-LDA decreases when any one of the aforementioned components is dropped. Overall, extensive experiments demonstrate the effectiveness of our optimization strategies.

Effectiveness of our Learnable Augmentation
To avoid the existing and pre-defined augmentations on graphs, we design a novel learnable augmentation method for graph clustering. In this part, we compare our view construction method with other classical graph data augmentations, including feature masking [23], edge dropping [44], edge adding [44], and graph diffusion [56].
Concretely, in Table 6, we adopt the data augmentations of randomly dropping 20% of the edges ("Drop Edges"), randomly adding 20% of the edges ("Add Edges"), graph diffusion ("Diffusion") with a 0.20 teleportation rate, and randomly masking 20% of the features ("Mask Feature"). From the results, we observe that the performance of these commonly used graph augmentations is not comparable with ours. In summary, extensive experiments have demonstrated the effectiveness of the proposed learnable augmentation.

Hyper-parameter Analysis
Sensitivity Analysis of the hyper-parameter α. We verify the sensitivity of α; the experimental results are shown in Fig. 5. From these results, we observe that the performance does not fluctuate greatly as α varies, which demonstrates that GCC-LDA is insensitive to α. Moreover, we also investigate the influence of the hyper-parameter threshold τ.
Sensitivity Analysis of the hyper-parameter τ. To investigate the influence of the hyper-parameter threshold τ, we conduct experiments on four datasets, as shown in Fig. 6. From the results, we observe that the model obtains promising performance as τ increases. The reason is that the pseudo labels are more reliable with a high threshold.

Visualization Analysis
In this subsection, we visualize the distribution of the learned embeddings to show the superiority of GCC-LDA on the CORA and AMAP datasets via the t-SNE algorithm [82]. Six baselines and GCC-LDA are shown in Fig. 2, from which we can conclude that GCC-LDA better reveals the intrinsic clustering structure.

CONCLUSION
In this work, we propose a learnable augmentation method for graph contrastive clustering, termed GCC-LDA. To be specific, we design a fully learnable augmentation with a structure augmentor and an attribute augmentor to dynamically learn the structure and attribute information, respectively. Besides, an adversarial mechanism is designed to keep cross-view consistency in the latent space while ensuring the diversity of the augmented views.
Meanwhile, we propose a two-stage training strategy to obtain more reliable clustering information during the model training. Benefiting with the clustering information, we refine the learned structure with the high-confidence pseudo-label matrix. Moreover, we refine the augmented view with the cross-view sample similarity matrix to further improve the discriminative capability of the learned structure. Extensive experiments on four datasets demonstrate the effectiveness of our proposed method. Figure 4: Ablation studies over the effectiveness of the proposed similarity matrix and pseudo labels matrix refinement strategy on six benchmark datasets."NP+NS", "S", "P", and "P+S" denotes the baseline, baseline with similarity matrix optimization, baseline with pseudo-label matrix optimization, and ours, respectively. Figure 5 : 5Sensitivity analysis of the hyper-parameter α. Table 1 : 1Notation summary. Table 2 : 2Clustering performance on CORA and BAT datasets (mean ± std). Best results are bold values and the second best values are unerlined.Input: The input graph G = {X, A}; The iteration number I; num: epoch to begin second training stage; Hyper-parameters τ, α.CORA BAT Methods ACC (%) NMI (%) ARI (%) F1 (%) ACC (%) NMI (%) ARI (%) F1 (%) DEC [70] ICML 2016 46.50±0.26 23.54±0.34 15.13±0.42 39.23±0.17 42.09±2.21 14.10±1.99 07.99±1.21 42.63±2.35 DCN [71] ICML 2017 49.38±0.91 25.65±0.65 21.63±0.58 43.71±1.05 47.79±3.95 18.03±7.73 13.75±6.05 46.80±3.44 MGAE [19] CIKM 2019 43.38±2.11 28.78±2.97 16.43±1.65 33.48±3.05 53.59±2.04 30.59±2.06 24.15±1.70 50.83±3.23 DAEGC [36] IJCAI 2019 70.43±0.36 52.89±0.69 49.63±0.43 68.27±0.57 52.67±0.00 21.43±0.35 18.18±0.29 52.23±0.03 ARGA [20] TCYB 2019 71.04±0.25 51.06±0.52 47.71±0.33 69.27±0.39 67.86±0.80 49.09±0.54 42.02±1.21 67.02±1.15 SDCN [17] WWW 2020 35.60±2.83 14.28±1.91 07.78±3.24 24.37±1.04 53.05±4.63 25.74±5.71 21.04±4.97 46.45±5.90 AdaGAE [72] TPAMI 2021 50.06±1.58 32.19±1.34 28.25±0.98 53.53±1.24 43.51±0.48 15.84±0.78 
07.80±0.41 43.15±0.77 AGE [43] SIGKDD 2020 73.50±1.83 57.58±1.42 50.10±2.14 69.28±1.59 56.68±0.76 36.04±1.54 26.59±1.83 55.07±0.80 MVGRL [22] ICML 2020 70.47±3.70 55.57±1.54 48.70±3.94 67.15±1.86 37.56±0.32 29.33±0.70 13.45±0.03 29.64±0.49 DFCN [21] AAAI 2021 36.33±0.49 19.36±0.87 04.67±2.10 26.16±0.50 55.73±0.06 48.77±0.51 37.76±0.23 50.90±0.12 GDCL [23] IJCAI 2021 70.83±0.47 56.60±0.36 48.05±0.72 52.88±0.97 45.42±0.54 31.70±0.42 19.33±0.57 39.94±0.57 DCRN [18] AAAI 2022 61.93±0.47 45.13±1.57 33.15±0.14 49.50±0.42 67.94±1.45 47.23±0.74 39.76±0.87 67.40±0.35 AGC-DRR [73] IJCAI 2022 40.62±0.55 18.74±0.73 14.80±1.64 31.23±0.57 47.79±0.02 19.91±0.24 14.59±0.13 42.33±0.51 SLAPS [74] NeurIPS 2021 64.21±0.12 41.16±1.24 35.96±0.65 63.72±0.26 41.22±1.25 17.05±0.87 06.86±2.14 37.64±0.57 SUBLIME [75] WWW 2022 71.14±0.74 53.88±1.02 50.15±0.14 63.11±0.58 45.04±0.19 22.03±0.48 14.45±0.87 44.00±0.62 GCA [61] WWW 2021 53.62±0.73 46.87±0.65 30.32±0.98 45.73±0.47 54.89±0.34 38.88±0.23 26.69±2.85 53.71±0.34 AFGRL [60] AAAI 2022 26.25±1.24 12.36±1.54 14.32±1.87 30.20±1.15 50.92±0.44 27.55±0.62 21.89±0.74 46.53±0.57 AutoSSL [63] ICLR 2022 63.81±0.57 47.62±0.45 38.92±0.77 56.42±0.21 42.43±0.47 17.84±0.98 13.11±0.81 34.84±0.15 GCC-LDA Ours 74.91±1.78 58.16±0.83 53.82±2.25 73.33±1.86 75.50±0.87 50.58±0.90 47.45±1.53 75.40±0.88 Algorithm 1 GCC-LDA Output: The clustering result R. 1: for i = 1 to I do 2: Table 3 : 3Dataset information.Dataset Type Sample Dimension Edge Class CORA Graph 2708 1433 5429 7 AMAP Graph 7650 745 119081 8 CITESEER Graph 3327 3703 4732 6 UAT Graph 1190 239 13599 4 BAT Graph 131 81 1038 4 EAT Graph 399 203 5994 4 Table 4 : 4Clustering performance on AMAP and EAT datasets (mean ± std). 
Best results are in bold and second-best results are underlined.

                         AMAP                                            EAT
Methods                  ACC (%)    NMI (%)    ARI (%)    F1 (%)        ACC (%)    NMI (%)    ARI (%)    F1 (%)
DEC [70] ICML 2016       47.22±0.08 37.35±0.05 18.59±0.04 46.71±0.12    36.47±1.60 04.96±1.74 03.60±1.87 34.84±1.28
DCN [71] ICML 2017       48.25±0.08 38.76±0.30 20.80±0.47 47.87±0.20    38.85±2.32 06.92±2.80 05.11±2.65 38.75±2.25
MGAE [19] CIKM 2019      71.57±2.48 62.13±2.79 48.82±4.57 68.08±1.76    44.61±2.10 15.60±2.30 13.40±1.26 43.08±3.26
DAEGC [36] IJCAI 2019    75.96±0.23 65.25±0.45 58.12±0.24 69.87±0.54    36.89±0.15 05.57±0.06 05.03±0.08 34.72±0.16
ARGA [20] TCYB 2019      69.28±2.30 58.36±2.76 44.18±4.41 64.30±1.95    52.13±0.00 22.48±1.21 17.29±0.50 52.75±0.07
SDCN [17] WWW 2020       53.44±0.81 44.85±0.83 31.21±1.23 50.66±1.49    39.07±1.51 08.83±2.54 06.31±1.95 33.42±3.10
AdaGAE [72] TPAMI 2021   67.70±0.54 55.96±0.87 46.20±0.45 62.95±0.74    32.83±1.24 04.36±1.87 02.47±0.54 32.39±0.47
AGE [43] SIGKDD 2020     75.98±0.68 65.38±0.61 55.89±1.34 71.74±0.93    47.26±0.32 23.74±0.90 16.57±0.46 45.54±0.40
MVGRL [22] ICML 2020     41.07±3.12 30.28±3.94 18.77±2.34 32.88±5.50    32.88±0.71 11.72±1.08 04.68±1.30 25.35±0.75
DFCN [21] AAAI 2021      76.82±0.23 66.23±1.21 58.28±0.74 71.25±0.31    49.37±0.19 32.90±0.41 23.25±0.18 42.95±0.04
GDCL [23] IJCAI 2021     43.75±0.78 37.32±0.28 21.57±0.51 38.37±0.29    33.46±0.18 13.22±0.33 04.31±0.29 25.02±0.21
DCRN [18] AAAI 2022      OOM        OOM        OOM        OOM           50.88±0.55 22.01±1.23 18.13±0.85 47.06±0.66
AGC-DRR [73] IJCAI 2022  76.81±1.45 66.54±1.24 60.15±1.56 71.03±0.64    37.37±0.11 07.00±0.85 04.88±0.91 35.20±0.17
SLAPS [74] NeurIPS 2021  60.09±1.14 51.15±0.87 42.87±0.75 47.73±0.98    48.62±1.65 28.33±2.56 24.59±0.58 40.42±1.44
SUBLIME [75] WWW 2022    27.22±1.56 06.37±1.89 05.36±2.14 15.97±1.53    38.80±0.35 14.96±0.75 10.29±0.88 32.31±0.97
GCA [61] WWW 2021        56.81±1.44 48.38±2.38 26.85±0.44 53.59±0.57    48.51±1.55 28.36±1.23 19.61±1.25 48.22±0.33
AFGRL [60] AAAI 2022     75.51±0.77 64.05±0.15 54.45±0.48 69.99±0.34    37.42±1.24 11.44±1.41 06.57±1.73 30.53±1.47
AutoSSL [63] ICLR 2022   54.55±0.97 48.56±0.71 26.87±0.34 54.47±0.83    31.33±0.52 17.63±0.85 12.13±0.67 21.82±0.98
GCC-LDA Ours             77.24±0.87 67.12±0.92 58.14±0.82 73.02±2.34    57.22±0.73 33.47±0.34 26.21±0.81 57.53±0.67

Table 5: Clustering performance on UAT and CITESEER datasets (mean ± std). Best results are in bold and second-best results are underlined.

                         UAT                                             CITESEER
Method                   ACC (%)    NMI (%)    ARI (%)    F1 (%)        ACC (%)    NMI (%)    ARI (%)    F1 (%)
DEC [70] ICML 2016       45.61±1.84 16.63±2.39 13.14±1.97 44.22±1.51    55.89±0.20 28.34±0.30 28.12±0.36 52.62±0.17
DCN [71] ICML 2017       46.82±1.14 17.18±1.60 13.59±2.02 45.66±1.49    57.08±0.13 27.64±0.08 29.31±0.14 53.80±0.11
MGAE [19] CIKM 2019      48.97±1.52 20.69±0.98 18.33±1.79 47.95±1.52    61.35±0.80 34.63±0.65 33.55±1.18 57.36±0.82
DAEGC [36] IJCAI 2019    52.29±0.49 21.33±0.44 20.50±0.51 50.33±0.64    64.54±1.39 36.41±0.86 37.78±1.24 62.20±1.32
ARGA [20] TCYB 2019      49.31±0.15 25.44±0.31 16.57±0.31 50.26±0.16    61.07±0.49 34.40±0.71 34.32±0.70 58.23±0.31
SDCN [17] WWW 2020       52.25±1.91 21.61±1.26 21.63±1.49 45.59±3.54    65.96±0.31 38.71±0.32 40.17±0.43 63.62±0.24
AdaGAE [72] TPAMI 2021   52.10±0.87 26.02±0.71 24.47±0.13 43.44±0.85    54.01±1.11 27.79±0.47 24.19±0.85 51.11±0.64
AGE [43] SIGKDD 2020     52.37±0.42 23.64±0.66 20.39±0.70 50.15±0.73    69.73±0.24 44.93±0.53 45.31±0.41 64.45±0.27
MVGRL [22] ICML 2020     44.16±1.38 21.53±0.94 17.12±1.46 39.44±2.19    62.83±1.59 40.69±0.93 34.18±1.73 59.54±2.17
DFCN [21] AAAI 2021      33.61±0.09 26.49±0.41 11.87±0.23 25.79±0.29    69.50±0.20 43.90±0.20 45.50±0.30 64.30±0.20
GDCL [23] IJCAI 2021     48.70±0.06 25.10±0.01 21.76±0.01 45.69±0.08    66.39±0.65 39.52±0.38 41.07±0.96 61.12±0.70
DCRN [18] AAAI 2022      49.92±1.25 24.09±0.53 17.17±0.69 44.81±0.87    69.86±0.18 44.86±0.35 45.64±0.30 64.83±0.21
AGC-DRR [73] IJCAI 2022  42.64±0.31 11.15±0.24 09.50±0.25 35.18±0.32    68.32±1.83 43.28±1.41 45.34±2.33 64.82±1.60
SLAPS [74] NeurIPS 2021  49.77±1.24 12.86±0.65 17.36±0.98 10.56±1.34    64.14±0.65 39.08±0.25 39.27±0.78 61.00±0.15
SUBLIME [75] WWW 2022    48.74±0.54 21.85±0.62 19.51±0.45
46.19±0.87    68.25±1.21 43.15±0.14 44.21±0.54 63.12±0.42
GCA [61] WWW 2021        39.39±1.46 24.05±0.25 14.37±0.19 35.72±0.28    60.45±1.03 36.15±0.78 35.20±0.96 56.42±0.94
AFGRL [60] AAAI 2022     41.50±0.25 17.33±0.54 13.62±0.57 36.52±0.89    31.45±0.54 15.17±0.47 14.32±0.78 30.20±0.71
AutoSSL [63] ICLR 2022   42.52±0.64 17.86±0.22 13.13±0.71 34.94±0.87    66.76±0.67 40.67±0.84 38.73±0.55 58.22±0.68
GCC-LDA Ours             54.76±1.42 25.23±0.96 22.44±1.69 53.61±2.61    70.12±0.36 43.56±0.35 44.85±0.69 65.01±0.39

Table 7: Training time comparison on six datasets with nine methods. The algorithms are measured in seconds. Avg. represents the average time cost over the six datasets. OOM denotes out-of-memory during the training process.

Methods: DEC [70] (ICML 2016), DCN [71] (ICML 2017), MGAE [19] (SIGKDD 2019), DAEGC [36] (IJCAI 2019), SDCN [17] (WWW 2020), AGE [43] (SIGKDD 2020), MVGRL [22] (ICML 2020), MCGC [81] (NeurIPS 2021), SCAGC [44] (TMM 2022), GCC-LDA (Ours).

Dataset    DEC     DCN    MGAE   DAEGC  SDCN   AGE     MVGRL   MCGC    SCAGC   GCC-LDA
CORA       91.13   47.31  7.38   12.97  11.32  46.65   14.72   118.07  54.08   10.06
BAT        21.37   7.46   3.83   4.79   11.50  2.49    3.19    2.28    93.79   1.89
EAT        26.99   9.56   4.64   5.14   12.12  3.86    3.32    2.87    47.79   2.21
AMAP       264.20  94.48  18.64  39.62  19.28  377.49  131.38  OOM     150.54  83.26
CITESEER   223.95  74.49  6.69   14.70  11.00  70.76   18.31   126.06  50.00   17.24
UAT        42.30   29.57  4.75   6.44   10.64  8.95    4.27    23.10   64.70   4.15
Avg.       111.66  43.85  7.66   13.94  12.64  85.01   29.20   -       76.82   19.80

Algorithm 2 PyTorch-style Pseudo Code of Our Method.

    # X: Original Attribute, A: Original Structure
    # AG: Attribute Augmentor
    # SG: Structure Augmentor
    # P: High-confidence Pseudo Labels
    # num: Epoch to begin second training stage
    # sim: Similarity Function, simclr: simclr loss
    # alpha: trade-off parameter
    for epoch in range(epoch_num):
        # Attribute Matrix and Adjacency Matrix
        Aug_X = AG(X)
        Aug_S = SG(A)
        # Net Encoding
        E1 = F.normalization((X, A), dim=1, p=2)
        E2 = F.normalization((Gen_X, Gen_A), dim=1, p=2)
        # Clustering and High-confidence Pseudo Label
        clu_res, P = clustering((E1 + E2) / 2)
        # Cross-view Similarity Matrix
        M = E1 @ E2.T
        # Pseudo Label Matrix
        Q = (P == P.T).int()
        loss_c = simclr(E1, E2)
        loss = loss_c
        if epoch > num:
            # Structure Refine
            Gen_A = Aug_S * M
            Gen_A = Aug_S * Q
            loss_a = -(MSE(X, Aug_X) + MSE(A, Aug_S))
            loss = loss_c + alpha * loss_a
        # optimization
        loss.backward()
        optimizer.step()
    clu_res, _, _ = clustering((E1 + E2) / 2)
    return clu_res

http://citeseerx.ist.psu.edu/index

Figure 3: GPU memory costs on six datasets with five methods.

Lasagne: A multi-layer graph convolutional network framework via node-aware deep architecture. X Miao, W Zhang, Y Shao, B Cui, L Chen, C Zhang, J Jiang, IEEE Transactions on Knowledge and Data Engineering. X. Miao, W. Zhang, Y. Shao, B. Cui, L. Chen, C. Zhang, and J. Jiang, "Lasagne: A multi-layer graph convolutional network framework via node-aware deep architecture," IEEE Transactions on Knowledge and Data Engineering, 2021.
Mulgrn: Multi-level graph relation network for few-shot node classification. L Zhang, S Wang, J Liu, Q Lin, X Chang, Y Wu, Q Zheng, IEEE Transactions on Knowledge and Data Engineering. L. Zhang, S. Wang, J. Liu, Q. Lin, X. Chang, Y. Wu, and Q. Zheng, "Mulgrn: Multi-level graph relation network for few-shot node classification," IEEE Transactions on Knowledge and Data Engineering, 2022.
Graph self-supervised learning: A survey. Y Liu, M Jin, S Pan, C Zhou, Y Zheng, F Xia, P Yu, IEEE Transactions on Knowledge and Data Engineering. Y. Liu, M. Jin, S. Pan, C. Zhou, Y. Zheng, F. Xia, and P.
Yu, "Graph self-supervised learning: A survey," IEEE Transactions on Knowledge and Data Engineering, 2022.
Ccgl: Contrastive cascade graph learning. X Xu, F Zhou, K Zhang, S Liu, IEEE Transactions on Knowledge and Data Engineering. X. Xu, F. Zhou, K. Zhang, and S. Liu, "Ccgl: Contrastive cascade graph learning," IEEE Transactions on Knowledge and Data Engineering, 2022.
Simgrace: A simple framework for graph contrastive learning without data augmentation. J Xia, L Wu, J Chen, B Hu, S Z Li, arXiv:2202.03104arXiv preprintJ. Xia, L. Wu, J. Chen, B. Hu, and S. Z. Li, "Simgrace: A simple framework for graph contrastive learning without data augmentation," arXiv preprint arXiv:2202.03104, 2022.
Figure 6: Sensitivity analysis of the hyper-parameter τ.
Semi-supervised classification with graph convolutional networks. T N Kipf, M Welling, International Conference on Learning Representations. T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," in International Conference on Learning Representations, 2017.
P Veličković, G Cucurull, A Casanova, A Romero, P Lio, Y Bengio, arXiv:1710.10903Graph attention networks. arXiv preprintP. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, "Graph attention networks," arXiv preprint arXiv:1710.10903, 2017.
Progcl: Rethinking hard negative mining in graph contrastive learning. J Xia, L Wu, G Wang, J Chen, S Z Li, International Conference on Machine Learning. PMLR, 2022. J. Xia, L. Wu, G. Wang, J. Chen, and S. Z. Li, "Progcl: Rethinking hard negative mining in graph contrastive learning," in International Conference on Machine Learning. PMLR, 2022, pp. 24332-24346.
Gadmsl: Graph anomaly detection on attributed networks via multi-scale substructure learning. D Jingcan, W Siwei, L Xinwang, Z Haifang, H Jingtao, J Hu, arXiv:2211.15255arXiv preprintD.
Jingcan, W. Siwei, L. Xinwang, Z. Haifang, H. Jingtao, and J. Hu, "Gadmsl: Graph anomaly detection on attributed networks via multi- scale substructure learning," arXiv preprint arXiv:2211.15255, 2022. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. F Fouss, A Pirotte, J.-M Renders, M Saerens, IEEE Transactions on knowledge and data engineering. 193F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens, "Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation," IEEE Transactions on knowledge and data engineering, vol. 19, no. 3, pp. 355-369, 2007. Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach. L Chen, L Wu, R Hong, K Zhang, M Wang, Proceedings of the AAAI conference on artificial intelligence. the AAAI conference on artificial intelligence34L. Chen, L. Wu, R. Hong, K. Zhang, and M. Wang, "Revisiting graph based collaborative filtering: A linear residual graph convolutional net- work approach," in Proceedings of the AAAI conference on artificial intelligence, vol. 34, no. 01, 2020, pp. 27-34. Towards effective and generalizable fine-tuning for pre-trained molecular graph models. J Xia, J Zheng, C Tan, G Wang, S Z Li, bioRxivJ. Xia, J. Zheng, C. Tan, G. Wang, and S. Z. Li, "Towards effective and generalizable fine-tuning for pre-trained molecular graph models," bioRxiv, 2022. Pre-training graph neural networks for molecular representations: retrospect and prospect. J Xia, Y Zhu, Y Du, S Z Li, ICML 2022 2nd AI for Science Workshop. J. Xia, Y. Zhu, Y. Du, and S. Z. Li, "Pre-training graph neural networks for molecular representations: retrospect and prospect," in ICML 2022 2nd AI for Science Workshop, 2022. Dynamic connection-based social group recommendation. D Qin, X Zhou, L Chen, G Huang, Y Zhang, IEEE Transactions on Knowledge and Data Engineering. 323D. Qin, X. Zhou, L. Chen, G. 
Huang, and Y. Zhang, "Dynamic connection-based social group recommendation," IEEE Transactions on Knowledge and Data Engineering, vol. 32, no. 3, pp. 453-467, 2018. Self-propagation graph neural network for recommendation. W Yu, X Lin, J Liu, J Ge, W Ou, Z Qin, IEEE Transactions on Knowledge and Data Engineering. W. Yu, X. Lin, J. Liu, J. Ge, W. Ou, and Z. Qin, "Self-propagation graph neural network for recommendation," IEEE Transactions on Knowledge and Data Engineering, 2021. Learning hierarchical review graph representations for recommendation. Y Liu, S Yang, Y Zhang, C Miao, Z Nie, J Zhang, IEEE Transactions on Knowledge and Data Engineering. Y. Liu, S. Yang, Y. Zhang, C. Miao, Z. Nie, and J. Zhang, "Learning hierarchical review graph representations for recommendation," IEEE Transactions on Knowledge and Data Engineering, 2021. Structural deep clustering network. D Bo, X Wang, C Shi, M Zhu, E Lu, P Cui, Proceedings of The Web Conference. The Web ConferenceD. Bo, X. Wang, C. Shi, M. Zhu, E. Lu, and P. Cui, "Structural deep clustering network," in Proceedings of The Web Conference 2020, 2020, pp. 1400-1410. Deep graph clustering via dual correlation reduction. Y Liu, W Tu, S Zhou, X Liu, L Song, X Yang, E Zhu, AAAI Conference on Artificial Intelligence. Y. Liu, W. Tu, S. Zhou, X. Liu, L. Song, X. Yang, and E. Zhu, "Deep graph clustering via dual correlation reduction," in AAAI Conference on Artificial Intelligence, 2022. Mgae: Marginalized graph autoencoder for graph clustering. C Wang, S Pan, G Long, X Zhu, J Jiang, Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. the 2017 ACM on Conference on Information and Knowledge ManagementC. Wang, S. Pan, G. Long, X. Zhu, and J. Jiang, "Mgae: Marginalized graph autoencoder for graph clustering," in Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017, pp. 889-898. Learning graph embedding with adversarial training methods. 
S Pan, R Hu, S Fung, G Long, J Jiang, C Zhang, IEEE transactions on cybernetics. 506S. Pan, R. Hu, S.-f. Fung, G. Long, J. Jiang, and C. Zhang, "Learning graph embedding with adversarial training methods," IEEE transactions on cybernetics, vol. 50, no. 6, pp. 2475-2487, 2019. Deep fusion clustering network. W Tu, S Zhou, X Liu, X Guo, Z Cai, J Cheng, arXiv:2012.09600arXiv preprintW. Tu, S. Zhou, X. Liu, X. Guo, Z. Cai, J. Cheng et al., "Deep fusion clustering network," arXiv preprint arXiv:2012.09600, 2020. Contrastive multi-view representation learning on graphs. K Hassani, A H Khasahmadi, International Conference on Machine Learning. PMLR, 2020. K. Hassani and A. H. Khasahmadi, "Contrastive multi-view represen- tation learning on graphs," in International Conference on Machine Learning. PMLR, 2020, pp. 4116-4126. Graph debiased contrastive learning with joint representation clustering. H Zhao, X Yang, Z Wang, E Yang, C Deng, Proc. IJCAI. IJCAIH. Zhao, X. Yang, Z. Wang, E. Yang, and C. Deng, "Graph debiased contrastive learning with joint representation clustering," in Proc. IJCAI, 2021, pp. 3434-3440. Graph contrastive learning automated. Y You, T Chen, Y Shen, Z Wang, International Conference on Machine Learning. PMLR12Y. You, T. Chen, Y. Shen, and Z. Wang, "Graph contrastive learning automated," in International Conference on Machine Learning. PMLR, 2021, pp. 12 121-12 132. Adversarial graph augmentation to improve graph contrastive learning. S Suresh, P Li, C Hao, J Neville, Advances in Neural Information Processing Systems. 34S. Suresh, P. Li, C. Hao, and J. Neville, "Adversarial graph augmentation to improve graph contrastive learning," Advances in Neural Information Processing Systems, vol. 34, pp. 15 920-15 933, 2021. Autogcl: Automated graph contrastive learning via learnable view generators. Y Yin, Q Wang, S Huang, H Xiong, X Zhang, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence36Y. Yin, Q. 
Wang, S. Huang, H. Xiong, and X. Zhang, "Autogcl: Au- tomated graph contrastive learning via learnable view generators," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 8, 2022, pp. 8892-8900. Efficient one-pass multi-view subspace clustering with consensus anchors. S Liu, S Wang, P Zhang, K Xu, X Liu, C Zhang, F Gao, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence36S. Liu, S. Wang, P. Zhang, K. Xu, X. Liu, C. Zhang, and F. Gao, "Efficient one-pass multi-view subspace clustering with consensus anchors," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 7, 2022, pp. 7576-7584. Consensus one-step multi-view subspace clustering. P Zhang, X Liu, J Xiong, S Zhou, W Zhao, E Zhu, Z Cai, IEEE Transactions on Knowledge and Data Engineering. P. Zhang, X. Liu, J. Xiong, S. Zhou, W. Zhao, E. Zhu, and Z. Cai, "Consensus one-step multi-view subspace clustering," IEEE Transactions on Knowledge and Data Engineering, 2020. Scalable multi-view subspace clustering with unified anchors. M Sun, P Zhang, S Wang, S Zhou, W Tu, X Liu, E Zhu, C Wang, Proceedings of the 29th ACM International Conference on Multimedia. the 29th ACM International Conference on MultimediaM. Sun, P. Zhang, S. Wang, S. Zhou, W. Tu, X. Liu, E. Zhu, and C. Wang, "Scalable multi-view subspace clustering with unified anchors," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 3528-3536. Robust subspace clustering for multi-view data by exploiting correlation consensus. Y Wang, X Lin, L Wu, W Zhang, Q Zhang, X Huang, IEEE Transactions on Image Processing. 2411Y. Wang, X. Lin, L. Wu, W. Zhang, Q. Zhang, and X. Huang, "Robust subspace clustering for multi-view data by exploiting correlation con- sensus," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3939-3949, 2015. 
Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering. Y Wang, W Zhang, L Wu, X Lin, M Fang, S Pan, arXiv:1608.05560arXiv preprintY. Wang, W. Zhang, L. Wu, X. Lin, M. Fang, and S. Pan, "Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering," arXiv preprint arXiv:1608.05560, 2016. Beyond low-rank representations: Orthogonal clustering basis reconstruction with optimized graph structure for multiview spectral clustering. Y Wang, L Wu, Neural Networks. 103Y. Wang and L. Wu, "Beyond low-rank representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi- view spectral clustering," Neural Networks, vol. 103, pp. 1-8, 2018. Multi-view attributed graph clustering. Z Lin, Z Kang, L Zhang, L Tian, IEEE Transactions on Knowledge and Data Engineering. Z. Lin, Z. Kang, L. Zhang, and L. Tian, "Multi-view attributed graph clustering," IEEE Transactions on Knowledge and Data Engineering, 2021. Multi-task multi-view clustering. X Zhang, X Zhang, H Liu, X Liu, IEEE Transactions on Knowledge and Data Engineering. 2812X. Zhang, X. Zhang, H. Liu, and X. Liu, "Multi-task multi-view cluster- ing," IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 12, pp. 3324-3338, 2016. Multiple kernel clustering with dual noise minimization. J Zhang, L Li, S Wang, J Liu, Y Liu, X Liu, E Zhu, Proceedings of the 30th ACM International Conference on Multimedia. the 30th ACM International Conference on MultimediaJ. Zhang, L. Li, S. Wang, J. Liu, Y. Liu, X. Liu, and E. Zhu, "Multiple kernel clustering with dual noise minimization," in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 3440- 3450. Attributed graph clustering: A deep attentional embedding approach. C Wang, S Pan, R Hu, G Long, J Jiang, C Zhang, arXiv:1906.06532arXiv preprintC. Wang, S. Pan, R. Hu, G. Long, J. Jiang, and C. 
Zhang, "Attributed graph clustering: A deep attentional embedding approach," arXiv preprint arXiv:1906.06532, 2019. Attributed graph clustering via adaptive graph convolution. X Zhang, H Liu, Q Li, X.-M Wu, Proceedings of the 28th International Joint Conference on Artificial Intelligence. the 28th International Joint Conference on Artificial IntelligenceX. Zhang, H. Liu, Q. Li, and X.-M. Wu, "Attributed graph clustering via adaptive graph convolution," in Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019, pp. 4327-4333. Symmetric graph convolutional autoencoder for unsupervised graph representation learning. J Park, M Lee, H J Chang, K Lee, J Y Choi, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionJ. Park, M. Lee, H. J. Chang, K. Lee, and J. Y. Choi, "Symmetric graph convolutional autoencoder for unsupervised graph representation learning," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6519-6528. Multi-view attribute graph convolution networks for clustering. J Cheng, Q Wang, Z Tao, D Xie, Q Gao, Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence. the Twenty-Ninth International Conference on International Joint Conferences on Artificial IntelligenceJ. Cheng, Q. Wang, Z. Tao, D. Xie, and Q. Gao, "Multi-view attribute graph convolution networks for clustering," in Proceedings of the Twenty- Ninth International Conference on International Joint Conferences on Artificial Intelligence, 2021, pp. 2973-2979. Attention-driven graph clustering network. Z Peng, H Liu, Y Jia, J Hou, Proceedings of the 29th ACM International Conference on Multimedia. the 29th ACM International Conference on MultimediaZ. Peng, H. Liu, Y. Jia, and J. 
Hou, "Attention-driven graph clustering network," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 935-943. Adversarially regularized graph autoencoder for graph embedding. S Pan, R Hu, G Long, J Jiang, L Yao, C Zhang, Proceedings of the 27th International Joint Conference on Artificial Intelligence. the 27th International Joint Conference on Artificial IntelligenceS. Pan, R. Hu, G. Long, J. Jiang, L. Yao, and C. Zhang, "Adversarially regularized graph autoencoder for graph embedding," in Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018, pp. 2609-2615. Adversarial graph embedding for ensemble clustering. Z Tao, H Liu, J Li, Z Wang, Y Fu, International Joint Conferences on Artificial Intelligence Organization. Z. Tao, H. Liu, J. Li, Z. Wang, and Y. Fu, "Adversarial graph embedding for ensemble clustering," in International Joint Conferences on Artificial Intelligence Organization, 2019. Adaptive graph encoder for attributed graph embedding. G Cui, J Zhou, C Yang, Z Liu, Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningG. Cui, J. Zhou, C. Yang, and Z. Liu, "Adaptive graph encoder for attributed graph embedding," in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020, pp. 976-985. Self-consistent contrastive attributed graph clustering with pseudo-label prompt. W Xia, Q Wang, Q Gao, M Yang, X Gao, IEEE Transactions on Multimedia. W. Xia, Q. Wang, Q. Gao, M. Yang, and X. Gao, "Self-consistent contrastive attributed graph clustering with pseudo-label prompt," IEEE Transactions on Multimedia, 2022. Simple contrastive graph clustering. Y Liu, X Yang, S Zhou, X Liu, arXiv:2205.0786arXiv preprintY. Liu, X. Yang, S. Zhou, and X. Liu, "Simple contrastive graph clustering," arXiv preprint arXiv:2205.0786, 2022. 
A survey of deep graph clustering: Taxonomy, challenge, and application. Y Liu, J Xia, S Zhou, S Wang, X Guo, X Yang, K Liang, W Tu, Z S Li, X Liu, arXiv:2211.12875arXiv preprintY. Liu, J. Xia, S. Zhou, S. Wang, X. Guo, X. Yang, K. Liang, W. Tu, Z. S. Li, and X. Liu, "A survey of deep graph clustering: Taxonomy, challenge, and application," arXiv preprint arXiv:2211.12875, 2022. A simple framework for contrastive learning of visual representations. T Chen, S Kornblith, M Norouzi, G Hinton, International conference on machine learning. PMLR, 2020. T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple frame- work for contrastive learning of visual representations," in International conference on machine learning. PMLR, 2020, pp. 1597-1607. Bootstrap your own latent: A new approach to self-supervised learning. J.-B Grill, F Strub, F Altché, C Tallec, P H Richemond, E Buchatskaya, C Doersch, B A Pires, Z D Guo, M G Azar, arXiv:2006.07733arXiv preprintJ.-B. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar et al., "Bootstrap your own latent: A new approach to self-supervised learning," arXiv preprint arXiv:2006.07733, 2020. Barlow twins: Self-supervised learning via redundancy reduction. J Zbontar, L Jing, I Misra, Y Lecun, S Deny, arXiv:2103.03230arXiv preprintJ. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, "Barlow twins: Self-supervised learning via redundancy reduction," arXiv preprint arXiv:2103.03230, 2021. Exploring simple siamese representation learning. X Chen, K He, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition15X. Chen and K. He, "Exploring simple siamese representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15 750-15 758. Relational symmetry based knowledge graph contrastive learning. 
K Liang, Y Liu, S Zhou, X Liu, W Tu, arXiv:2211.10738arXiv preprintK. Liang, Y. Liu, S. Zhou, X. Liu, and W. Tu, "Relational symmetry based knowledge graph contrastive learning," arXiv preprint arXiv:2211.10738, 2022. Deep Graph Contrastive Representation Learning. Y Zhu, Y Xu, F Yu, Q Liu, S Wu, L Wang, ICML Workshop on Graph Representation Learning and Beyond. Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, "Deep Graph Contrastive Representation Learning," in ICML Workshop on Graph Representation Learning and Beyond, 2020. [Online]. Available: http://arxiv.org/abs/2006.04131 Graph contrastive learning with augmentations. Y You, T Chen, Y Sui, T Chen, Z Wang, Y Shen, Advances in Neural Information Processing Systems. 33Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen, "Graph con- trastive learning with augmentations," Advances in Neural Information Processing Systems, vol. 33, pp. 5812-5823, 2020. An empirical study of graph contrastive learning. Y Zhu, Y Xu, Q Liu, S Wu, arXiv:2109.01116arXiv preprintY. Zhu, Y. Xu, Q. Liu, and S. Wu, "An empirical study of graph contrastive learning," arXiv preprint arXiv:2109.01116, 2021. Structure-enhanced heterogeneous graph contrastive learning. Y Zhu, Y Xu, H Cui, C Yang, Q Liu, S Wu, Proceedings of the 2022 SIAM International Conference on Data Mining (SDM). SIAM, 2022. the 2022 SIAM International Conference on Data Mining (SDM). SIAM, 2022Y. Zhu, Y. Xu, H. Cui, C. Yang, Q. Liu, and S. Wu, "Structure-enhanced heterogeneous graph contrastive learning," in Proceedings of the 2022 SIAM International Conference on Data Mining (SDM). SIAM, 2022, pp. 82-90. Improved dual correlation reduction network. Y Liu, S Zhou, X Liu, W Tu, X Yang, arXiv:2202.12533arXiv preprintY. Liu, S. Zhou, X. Liu, W. Tu, and X. Yang, "Improved dual correlation reduction network," arXiv preprint arXiv:2202.12533, 2022. Sail: Self-augmented graph contrastive learning. 
L Yu, S Pei, L Ding, J Zhou, L Li, C Zhang, X Zhang, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence36L. Yu, S. Pei, L. Ding, J. Zhou, L. Li, C. Zhang, and X. Zhang, "Sail: Self-augmented graph contrastive learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 8, 2022, pp. 8927-8935. Nodeaug: Semi-supervised node classification with data augmentation. Y Wang, W Wang, Y Liang, Y Cai, J Liu, B Hooi, Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningY. Wang, W. Wang, Y. Liang, Y. Cai, J. Liu, and B. Hooi, "Nodeaug: Semi-supervised node classification with data augmentation," in Proceed- ings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020, pp. 207-217. Mixup for node and graph classification. Y Wang, W Wang, Y Liang, Y Cai, B Hooi, Proceedings of the Web Conference. the Web ConferenceY. Wang, W. Wang, Y. Liang, Y. Cai, and B. Hooi, "Mixup for node and graph classification," in Proceedings of the Web Conference 2021, 2021, pp. 3663-3674. Augmentation-free self-supervised learning on graphs. N Lee, J Lee, C Park, arXiv:2112.02472arXiv preprintN. Lee, J. Lee, and C. Park, "Augmentation-free self-supervised learning on graphs," arXiv preprint arXiv:2112.02472, 2021. Graph contrastive learning with adaptive augmentation. Y Zhu, Y Xu, F Yu, Q Liu, S Wu, L Wang, Proceedings of the Web Conference. the Web ConferenceY. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, "Graph contrastive learning with adaptive augmentation," in Proceedings of the Web Confer- ence 2021, 2021, pp. 2069-2080. Adaptive data augmentation on temporal graphs. Y Wang, Y Cai, Y Liang, H Ding, C Wang, S Bhatia, B Hooi, Advances in Neural Information Processing Systems. 34Y. Wang, Y. Cai, Y. Liang, H. Ding, C. Wang, S. Bhatia, and B. 
Hooi, "Adaptive data augmentation on temporal graphs," Advances in Neural Information Processing Systems, vol. 34, pp. 1440-1452, 2021. Automated selfsupervised learning for graphs. W Jin, X Liu, X Zhao, Y Ma, N Shah, J Tang, arXiv:2106.05470arXiv preprintW. Jin, X. Liu, X. Zhao, Y. Ma, N. Shah, and J. Tang, "Automated self- supervised learning for graphs," arXiv preprint arXiv:2106.05470, 2021. Convolutional neural networks on graphs with fast localized spectral filtering. M Defferrard, X Bresson, P Vandergheynst, Advances in neural information processing systems. M. Defferrard, X. Bresson, and P. Vandergheynst, "Convolutional neural networks on graphs with fast localized spectral filtering," Advances in neural information processing systems, 2016. Algorithm as 136: A k-means clustering algorithm. J A Hartigan, M A Wong, Journal of the royal statistical society. series c (applied statistics). 281J. A. Hartigan and M. A. Wong, "Algorithm as 136: A k-means clustering algorithm," Journal of the royal statistical society. series c (applied statistics), vol. 28, no. 1, pp. 100-108, 1979. Interpolation-based contrastive learning for few-label semi-supervised learning. X Yang, X Hu, S Zhou, X Liu, E Zhu, IEEE Transactions on Neural Networks and Learning Systems. X. Yang, X. Hu, S. Zhou, X. Liu, and E. Zhu, "Interpolation-based contrastive learning for few-label semi-supervised learning," IEEE Trans- actions on Neural Networks and Learning Systems, pp. 1-12, 2022. Deep graph contrastive representation learning. Y Zhu, Y Xu, F Yu, Q Liu, S Wu, L Wang, arXiv:2006.04131arXiv preprintY. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, "Deep graph contrastive representation learning," arXiv preprint arXiv:2006.04131, 2020. An empirical study of graph contrastive learning. Y Zhu, Y Xu, Q Liu, S Wu, arXiv:2109.01116arXiv preprintY. Zhu, Y. Xu, Q. Liu, and S. Wu, "An empirical study of graph contrastive learning," arXiv preprint arXiv:2109.01116, 2021. 
Interpolation-based correlation reduction network for semi-supervised graph learning. X Yang, Y Liu, S Zhou, X Liu, E Zhu, arXiv:2206.02796arXiv preprintX. Yang, Y. Liu, S. Zhou, X. Liu, and E. Zhu, "Interpolation-based correlation reduction network for semi-supervised graph learning," arXiv preprint arXiv:2206.02796, 2022. Unsupervised deep embedding for clustering analysis. J Xie, R Girshick, A Farhadi, International conference on machine learning. PMLRJ. Xie, R. Girshick, and A. Farhadi, "Unsupervised deep embedding for clustering analysis," in International conference on machine learning. PMLR, 2016, pp. 478-487. Towards k-meansfriendly spaces: Simultaneous deep learning and clustering. B Yang, X Fu, N D Sidiropoulos, M Hong, international conference on machine learning. B. Yang, X. Fu, N. D. Sidiropoulos, and M. Hong, "Towards k-means- friendly spaces: Simultaneous deep learning and clustering," in interna- tional conference on machine learning. PMLR, 2017, pp. 3861-3870. Adaptive graph auto-encoder for general data clustering. X Li, H Zhang, R Zhang, IEEE Transactions on Pattern Analysis and Machine Intelligence. X. Li, H. Zhang, and R. Zhang, "Adaptive graph auto-encoder for general data clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. Attributed graph clustering with dual redundancy reduction. L Gong, S Zhou, X Liu, W Tu, IJCAI. L. Gong, S. Zhou, X. Liu, and W. Tu, "Attributed graph clustering with dual redundancy reduction," in IJCAI, 2022. Slaps: Self-supervision improves structure learning for graph neural networks. B Fatemi, L El Asri, S M Kazemi, Advances in Neural Information Processing Systems. 34B. Fatemi, L. El Asri, and S. M. Kazemi, "Slaps: Self-supervision improves structure learning for graph neural networks," Advances in Neural Information Processing Systems, vol. 34, pp. 22 667-22 681, 2021. Towards unsupervised deep graph structure learning. 
Y Liu, Y Zheng, D Zhang, H Chen, H Peng, S Pan, Proceedings of the ACM Web Conference 2022. the ACM Web Conference 2022Y. Liu, Y. Zheng, D. Zhang, H. Chen, H. Peng, and S. Pan, "Towards unsupervised deep graph structure learning," in Proceedings of the ACM Web Conference 2022, 2022, pp. 1392-1403. Rethinking graph auto-encoder models for attributed graph clustering. N Mrabah, M Bouguessa, M F Touati, R Ksantini, arXiv:2107.08562arXiv preprintN. Mrabah, M. Bouguessa, M. F. Touati, and R. Ksantini, "Rethink- ing graph auto-encoder models for attributed graph clustering," arXiv preprint arXiv:2107.08562, 2021. Multiple kernel clustering with neighbor-kernel subspace segmentation. S Zhou, X Liu, M Li, E Zhu, L Liu, C Zhang, J Yin, IEEE transactions on neural networks and learning systems. 31S. Zhou, X. Liu, M. Li, E. Zhu, L. Liu, C. Zhang, and J. Yin, "Multiple kernel clustering with neighbor-kernel subspace segmentation," IEEE transactions on neural networks and learning systems, vol. 31, no. 4, pp. 1351-1362, 2019. Fast parameter-free multi-view subspace clustering with consensus anchor guidance. S Wang, X Liu, X Zhu, P Zhang, Y Zhang, F Gao, E Zhu, IEEE Transactions on Image Processing. 31S. Wang, X. Liu, X. Zhu, P. Zhang, Y. Zhang, F. Gao, and E. Zhu, "Fast parameter-free multi-view subspace clustering with consensus anchor guidance," IEEE Transactions on Image Processing, vol. 31, pp. 556- 568, 2021. Late fusion multiple kernel clustering with proxy graph refinement. S Wang, X Liu, L Liu, S Zhou, E Zhu, IEEE Transactions on Neural Networks and Learning Systems. S. Wang, X. Liu, L. Liu, S. Zhou, and E. Zhu, "Late fusion multiple kernel clustering with proxy graph refinement," IEEE Transactions on Neural Networks and Learning Systems, 2021. Variational graph auto-encoders. T N Kipf, M Welling, arXiv:1611.07308arXiv preprintT. N. Kipf and M. Welling, "Variational graph auto-encoders," arXiv preprint arXiv:1611.07308, 2016. Multi-view contrastive graph clustering. 
E Pan, Z Kang, Advances in neural information processing systems. 34E. Pan and Z. Kang, "Multi-view contrastive graph clustering," Advances in neural information processing systems, vol. 34, pp. 2148-2159, 2021. Visualizing data using t-sne. L Van Der Maaten, G Hinton, Journal of machine learning research. 911L. Van der Maaten and G. Hinton, "Visualizing data using t-sne." Journal of machine learning research, vol. 9, no. 11, 2008.
[]
[ "EVIDENCE FOR SELF-ORGANIZED CRITICALITY PHENOMENA IN PROMPT PHASE OF SHORT GAMMA-RAY BURSTS", "EVIDENCE FOR SELF-ORGANIZED CRITICALITY PHENOMENA IN PROMPT PHASE OF SHORT GAMMA-RAY BURSTS" ]
[ "Xiu-Juan ‡ Li \nSchool of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina\n", "Wen-Long Zhang \nSchool of Cyber Science and Engineering\nQufu Normal University\n273165QufuChina\n\nSchool of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina\n", "Yi Shuang-Xi \nSchool of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina\n", "Yu-Peng Yang \nSchool of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina\n", "Li ", "Jia-Lun " ]
[ "School of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina", "School of Cyber Science and Engineering\nQufu Normal University\n273165QufuChina", "School of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina", "School of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina", "School of Physics and Physical Engineering\nQufu Normal University\n273165QufuChina" ]
[]
The prompt phase of gamma-ray burst (GRB) contains essential information regarding the physical nature and central engine, which are as yet unknown. In this paper, we investigate the self-organized criticality (SOC) phenomena in GRB prompt phase as done in X-ray flares of GRBs. We obtain the differential and cumulative distributions of 243 short GRB pulses, such as peak flux, FWHM, rise time, decay time, and peak time in the fourth BATSE TTE Catalog with the Markov Chain Monte Carlo (MCMC) technique. It is found that these distributions can be well described by power-law models. In particular, comparisons are made in 182 short GRB pulses in the third Swift GRB Catalog from 2004 December to 2019 July. The results are essentially consistent with those in BATSE ones. We notice that there is no obvious power-law index evolution across different energy bands for either BATSE or Swift sGRBs. The joint analysis suggests that GRB prompt phase can be explained by a Fractal-Diffusive, Self-Organized Criticality (FD-SOC) system with the spatial dimension S = 3 and the classical diffusion β = 1. Our findings show that GRB prompt phases and X-ray flares possess the very same magnetically dominated stochastic process and mechanism.
10.3847/1538-4365/acc398
[ "https://export.arxiv.org/pdf/2303.06667v1.pdf" ]
257,496,871
2303.06667
cb9aaae78967fb02512a41960905b193d35a3a76
EVIDENCE FOR SELF-ORGANIZED CRITICALITY PHENOMENA IN PROMPT PHASE OF SHORT GAMMA-RAY BURSTS

12 Mar 2023

Xiu-Juan Li (School of Physics and Physical Engineering, Qufu Normal University, 273165 Qufu, China), Wen-Long Zhang (School of Cyber Science and Engineering, and School of Physics and Physical Engineering, Qufu Normal University, 273165 Qufu, China), Shuang-Xi Yi (School of Physics and Physical Engineering, Qufu Normal University, 273165 Qufu, China), Yu-Peng Yang (School of Physics and Physical Engineering, Qufu Normal University, 273165 Qufu, China), and Jia-Lun Li

(Received 2023; Revised 2023; Accepted 2023). Submitted to ApJS. arXiv:2303.06667v1 [astro-ph.HE]. Draft version March 14, 2023.

Keywords: High energy astrophysics (739); Gamma-ray bursts (629)

INTRODUCTION

Gamma-ray burst (GRB) is a sudden release of gamma-ray emission, which lasts from milliseconds to thousands of seconds. Despite the many studies of GRBs, their nature is still strongly debated. The lightcurves of GRB prompt phases are imprints of the activity of the central engine and contain key information on internal energy dissipation and physical mechanisms. In general, it is difficult to accurately extract the properties of GRB lightcurves because of their notoriously complex and irregular structures. Fortunately, a fraction of GRBs are overwhelmingly single-peaked or double-peaked (e.g. Hakkila et al. 2018; Li et al. 2020, 2021), providing valuable insights into the physical processes by which GRBs release energy. Bak et al. (1987) reported that a real, many-bodied physical system in an external field assembles itself into a critical state which can be triggered by a small perturbation and give rise to an avalanche-like chain reaction of any size due to some driving forces. This is known as the concept of self-organized criticality (SOC), proposed as an attempt to explain the existence of self-similarities over extended ranges of spatial and temporal scales in a wide variety of systems. It is worth noting that SOC phenomena are commonly discussed in many astrophysical systems, such as solar flares, stellar flares, lunar craters, the asteroid belt, Saturn ring particles, magnetospheric substorms, radiation belt electrons, pulsar glitches, soft gamma-ray repeaters, black-hole objects, blazars, cosmic rays, X-ray bursts, and fast radio bursts (e.g.
Lu & Hamilton 1991; Melatos et al. 2008; Aschwanden 2014; Aschwanden & Dudok de Wit 2021; Aschwanden 2022; Li et al. 2015; Wang & Dai 2013; Yi et al. 2016, 2022; Liu et al. 2017; Du et al. 2021; Wang et al. 2021; Wei et al. 2021; Cheng et al. 2020; Zhang et al. 2022), etc. As common astrophysical phenomena in the universe, X-ray flares are observed in a good fraction of GRB afterglows (e.g. Chincarini et al. 2010; Margutti et al. 2010). It is well known that GRB X-ray flares are attributed to the erratic activities of the GRB central engine, just like the GRB prompt emission components, and also provide important clues to the nature of the central engines (e.g. Burrows et al. 2005; Margutti et al. 2010; Abdo et al. 2011; Yi et al. 2015; Chang et al. 2021). For example, the similar lag-luminosity relation provides strong evidence for a direct link between GRB X-ray flares and prompt phases. Wang & Dai (2013) investigated the statistical properties of X-ray flares of GRBs and found that both GRB X-ray flares and solar flares might originate from a similar physical mechanism, i.e., they might be produced by a magnetic reconnection process, and thus both can be well explained within the physical framework of a SOC system. Moreover, further investigations based on a large sample of Swift GRB X-ray flares reached similar conclusions (e.g. Yi et al. 2016; Wei 2022). Thus, it is necessary to determine whether the GRB prompt phases can also be explained by the SOC model and, if so, to probe the possible SOC behavior in GRB prompt phases. From a more physical point of view, Aschwanden (2011, 2012, 2014) developed the definition of SOC systems and proposed an analytical macroscopic SOC model. Adopting the expectation criteria proposed by this model, we can identify an SOC system by examining the power-law-like distributions of the relevant observed parameters (e.g. Wang & Dai 2013; Yi et al. 2016; Zhang et al. 2022). Recently, Lyu et al.
(2020) investigated the properties of BATSE GRBs with multiple pulses in their prompt lightcurves, as reported by Hakkila & Preece (2011). By fitting the distributions of several observed pulse parameters, they tentatively suggested that SOC phenomena do exist in the prompt phase of GRBs. However, the indisputable fact is that their investigations were mainly performed for bursts with at least three pulses, and single-peaked and double-peaked bursts were not included (e.g. Hakkila et al. 2018; Li et al. 2020, 2021). In this study, we systematically compile the temporal properties of the single-peaked and double-peaked GRB pulses in the fourth BATSE TTE Catalog and the third Swift/BAT catalog given by our recent works (Li et al. 2020, 2021) and perform an analysis focusing on the differential and cumulative distributions of characteristic pulse parameters to search for evidence of an SOC system. In particular, in order to avoid instrumental selection effects, we examine whether the distributions evolve among different energy channels for the bursts of both satellites. Sample selection and data analysis are presented in Section 2. Section 3 displays our main results. We discuss some possible physical explanations of the results in Section 4. Finally, we summarize the results in Section 5.

DATA AND METHOD

Our recent works (Li et al. 2020, 2021) utilized the empirical "KRL" function proposed by Kocevski et al. (2003) to fit the lightcurves of short GRBs in the fourth BATSE TTE Catalog and the third Swift GRB Catalog from 2004 December to 2019 July. For BATSE sGRBs, the photon counts are accumulated into four standard energy channels, labeled Ch1 (25-55 keV), Ch2 (55-110 keV), Ch3 (110-320 keV), and Ch4 (≥320 keV). Similarly, the mask-weighted lightcurve data of Swift sGRBs are taken from the Swift website (Lien et al. 2016) for four energy channels: Ch1 (15-25 keV), Ch2 (25-50 keV), Ch3 (50-100 keV), and Ch4 (100-350 keV). In total, 243 BATSE and 182 Swift pulses are obtained.
Note that the pulse numbers in different energy channels differ due to either selection effects or low signal-to-noise ratios. We then extract the parameters of these pulses, including peak flux (f_m), full width at half-maximum (FWHM), peak time (t_m), rise time (t_r), and decay time (t_d). The detailed data processing refers to our previous works (Li et al. 2020, 2021). To identify the possible SOC features in GRB prompt phases, we study in detail the differential and cumulative distributions of these temporal parameters. Here, an empirical thresholded power-law distribution function is used to fit the differential occurrence frequency distributions of GRBs (Aschwanden 2015; Lyu et al. 2020), which can be written as

N_diff = dN(x)/dx ∝ (x_{0d} + x)^{−α_d}, x_1 ≤ x ≤ x_2, (1)

where N is the number of events, x_{0d} is a constant accounting for threshold effects, x_1 and x_2 are the minimum and maximum values of the scale-free range, and α_d is the power-law index of the differential distribution. This size function is identical to a "Generalized Pareto distribution" (Hosking & Wallis 1987), the "Generalized Pareto Type II distribution" (e.g. Johnson 1994; Arnold 2015), and the "Lomax distribution" (Lomax 1954). The uncertainty of the differential distribution is given by σ_{diff,i} = √N_{bin,i} / Δx_i, where N_{bin,i} refers to the number of events in the i-th bin and Δx_i is the bin size. According to Aschwanden (2015), the cumulative distribution function corresponding to Equation (1) can be obtained as

N_cum(> x) = 1 + (N − 1) × [(x_2 + x_{0c})^{1−α_c} − (x + x_{0c})^{1−α_c}] / [(x_2 + x_{0c})^{1−α_c} − (x_1 + x_{0c})^{1−α_c}], (2)

where N is the total number of events, x_{0c} is a constant accounting for threshold effects, and α_c is the power-law index of the cumulative distribution. The uncertainty of the cumulative distribution in a given bin i is estimated with σ_{cum,i} = √N_i, where N_i refers to the number of events in the bin.
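As an illustration only (our own sketch, not code from the paper; the names `n_diff` and `n_cum` are ours), Equations (1) and (2) translate directly into small model functions:

```python
def n_diff(x, x0d, alpha_d, norm=1.0):
    """Thresholded power-law differential size distribution, Eq. (1):
    dN/dx proportional to (x0d + x)**(-alpha_d) for x1 <= x <= x2."""
    return norm * (x0d + x) ** (-alpha_d)


def n_cum(x, x1, x2, x0c, alpha_c, n_total):
    """Cumulative size distribution, Eq. (2):
    number of events with size > x, normalized so that
    n_cum(x1) = n_total and n_cum(x2) = 1."""
    p = 1.0 - alpha_c
    num = (x2 + x0c) ** p - (x + x0c) ** p
    den = (x2 + x0c) ** p - (x1 + x0c) ** p
    return 1.0 + (n_total - 1) * num / den
```

By construction the cumulative form returns the total event number at the lower edge of the scale-free range and 1 at the upper edge, which is a convenient sanity check when fitting.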
We use the standard reduced chi-square (χ²_ν) goodness of fit to identify the best fit. It can be written as

χ²_{ν,diff} = [1/(n_x − n_par)] Σ_{i=1}^{n_x} [N_{fit,diff}(x_i) − N_{obs,diff}(x_i)]² / σ²_{diff,i} (3)

for the differential distribution function, and

χ²_{ν,cum} = [1/(n_x − n_par)] Σ_{i=1}^{n_x} [N_{fit,cum}(x_i) − N_{obs,cum}(x_i)]² / σ²_{cum,i} (4)

for the cumulative distribution function (Aschwanden 2015), where n_x is the number of logarithmic bins, n_par is the number of free parameters, N_{obs,diff}(x_i) and N_{obs,cum}(x_i) are the observed values, and N_{fit,diff}(x_i) and N_{fit,cum}(x_i) are the corresponding theoretical values for the differential and cumulative distributions, respectively. Note that the points below the threshold x_0 are just noise and do not contribute to the accuracy of the best-fit power-law index; they are therefore ignored when the reduced chi-square is calculated. The Markov chain Monte Carlo (MCMC) method is used to fit the GRB data to the self-organized criticality model with the PYTHON package pymc (1), yielding the optimal distribution parameters and the 95% confidence regions.

RESULTS

Distributions

The differential and cumulative distributions are fitted with Equations (1) and (2). The fitting results are shown in Figures 1 - 5 for BATSE bursts and Figures 6 - 10 for Swift bursts, respectively. In each figure, the differential and cumulative distributions for the total sample are shown in panels (a) and (b). Note that we adopt a rank-order plot to obtain the differential distributions; the detailed method refers to Section 7.1.3 in Aschwanden (2011). In addition, in order to analyse the possible evolution from low to high energy channels, the distributions of the different energy channels are fitted for the bursts of both satellites. The fitting results are shown in panels (c) - (f).
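As a minimal sketch of the goodness-of-fit statistic above (our own illustration, not the authors' code; the reduced chi-square here is the standard per-degree-of-freedom form of Equations (3)-(4)):

```python
import numpy as np


def reduced_chi_square(n_obs, n_fit, sigma, n_par):
    """Reduced chi-square of Eqs. (3)-(4):
    chi2_nu = sum((N_fit - N_obs)**2 / sigma**2) / (n_x - n_par).
    Bins below the threshold x0 should be excluded before calling
    this function, as described in the text."""
    n_obs = np.asarray(n_obs, dtype=float)
    n_fit = np.asarray(n_fit, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n_x = n_obs.size  # number of (logarithmic) bins actually fitted
    return float(np.sum((n_fit - n_obs) ** 2 / sigma ** 2) / (n_x - n_par))


# Per-bin uncertainties are Poisson-like: sigma_cum_i = sqrt(N_i) for the
# cumulative distribution, and sigma_diff_i = sqrt(N_bin_i) / dx_i for the
# differential one.
```

A perfect fit returns 0, and values near 1 indicate that the residuals are consistent with the assumed Poisson uncertainties.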
(1) https://pypi.org/project/pymc/

It is important to note that the cumulative distribution rather than the differential distribution is used here because the sample number in each individual energy channel is not sufficient to bin the data. The detailed fitting results are summarized in Tables 1 and 2. From Figures 1 - 5 (a), we find that all the parameters of BATSE GRBs possess similar power-law differential distributions. In Figure 1 (a), the best-fitting power-law index of peak flux for the total BATSE sample is 2.48 ± 0.02. This is larger than the value of 2.09 ± 0.18 measured by Lyu et al. (2020) for a sample of BATSE bursts whose lightcurves have more than two pulses. However, it can be found from Figure 2 (a) that the best-fitting power-law index of FWHM for the total BATSE sample is 1.89 ± 0.12, which is quite close to the value of 1.82 +0.14 −0.15 given by Lyu et al. (2020). In addition, similar power-law differential distributions of rise time, decay time, and peak time, with indexes of 1.89 ± 0.12, 1.92 ± 0.10, and 2.46 ± 0.05, can be found in Figures 3 - 5 (a), respectively. For the cumulative distributions, the best-fitting power-law index of peak flux for the total BATSE sample is 3.05 ± 0.02 in Figure 1 (b), which is larger than the value of 1.99 +0.16 −0.19 reported by Lyu et al. (2020). In particular, it can be seen from Figure 2 (b) that the best-fitting power-law index of FWHM for the total BATSE sample is 2.00 ± 0.01, which is also larger than the value of 1.75 +0.11 −0.13 given by Lyu et al. (2020). The best-fitting power-law indexes of rise time, decay time, and peak time are 2.10 ± 0.01, 2.20 ± 0.01, and 2.50 ± 0.01 for the total BATSE sample (see Figures 3 - 5 (b)).
We find that there is almost no significant power-law index evolution across different energy bands, indicating that the self-organized criticality may quite be likely to exist in GRB systems or, in other words, be intrinsic. Significantly, similar results are obtained for Swift GRBs in Figures 6 -10. Overall, the results for Swift GRBs are approximately consistent with those of BATSE GRBs. In Figures 6 -7 (a), the best-fitting power-law indexes of differential distributions of peak flux and FWHM for the whole Swift sample are 2.44 ± 0.07 and 1.87 ± 0.13, both similar to the values of BATSE GRBs. Similarly, we can see from Figures 8 -10 (a) that the power-law differential distributions of rise time, decay time, and peak time for the total Swift sample with the indexes of 1.82 ± 0.13, 1.86 ± 0.14, and 1.75 ± 0.17, respectively, which are slightly smaller than those of BATSE GRBs. Meanwhile, we find that the cumulative distributions of the total Swift sample can also be well described by power-law model with the indexes of 2.50 ± 0.01, 1.99 ± 0.01, 1.99 ± 0.01, 1.99 ± 0.02, and 1.97 ± 0.03 for peak flux, FWHM, rise time, decay time, and peak time, respectively, as shown in Figures 8 -10 (b). The mean values of these indexes of four individual channels are 2.40 ± 0.05, 1.81 ± 0.08, 1.80 ± 0.07, 1.80 ± 0.08, and 1.83 ± 0.07, respectively. Above all, it can be found that there is also no significant power-law index evolution across different energy bands for Swift GRBs. On the other hand, the fact that there is no significant instrumental effect between BATSE and Swift bursts, strengthens the evidences of the self-organized criticality in GRB systems. SOC For a fractal-diffusive SOC model, Aschwanden (2012Aschwanden ( , 2014 predicted power-law distributions for duration and peak flux with the indexes α T = (1 + S)β/2 and α P = 2 − 1/S, where S = 1, 2, 3 is Euclidean space dimensions of SOC system and β is the diffusive spreading exponent. 
According to this prediction, it is easy to obtain the indexes α_P = 1.67 and α_T = 2 for S = 3 and classical diffusion with β = 1. Our results can be used to derive the possible Euclidean space dimension of the GRB SOC system. Owing to the small differences between the best-fitting power-law indexes of the differential and cumulative distributions, we choose the cumulative case to examine the GRB system. Furthermore, considering the invariance of the power-law index with energy, we adopt the mean value of the power-law indexes of the four individual channels for this check. For example, the mean value of the power-law index of FWHM for BATSE GRBs is 1.87 ± 0.07, which determines the Euclidean space dimension S = 3 according to the prediction of the FD-SOC model with the theoretical index α_T = 2. The result is very consistent with those of BATSE GRBs with at least three pulses reported by Lyu et al. (2020). Similarly, the Euclidean space dimension can be obtained using the results of rise time, decay time, and peak time. Although these results are slightly smaller than the theoretical index α_T = 2, the Euclidean space dimension of the GRB SOC system is essentially in agreement with the model prediction for S = 3. It is worth pointing out that it is difficult to determine the dimension from the peak flux, since its value is obviously larger than the theoretical index 1.67 for S = 3. This phenomenon is similar to that reported by Lyu et al. (2020). The steeper index of the peak flux distributions can be explained by the fainter peak fluxes at small bin sizes (5 ms and 8 ms).

DISCUSSIONS

After Swift was launched, bright X-ray flares were detected in nearly half of GRBs (e.g. Burrows et al. 2005; Romano et al. 2006). Generally, X-ray flares are characterized as late central engine activities, through a mechanism similar to that of GRB prompt emissions (e.g. Margutti et al. 2010; Chang et al. 2021). At present, some theoretical models of GRBs and X-ray flares involve a magnetic reconnection scenario (Zhang et al.
2006; Giannios 2006; Kumar & Zhang 2015). Dai et al. (2006) suggested that the differential rotation of a millisecond magnetar after a compact star merger can lead to the windup of interior poloidal magnetic fields into toroidal fields, which are strong enough to float up and break through the stellar surface. Once they penetrate the surface, toroidal fields of different polarity may reconnect and give rise to the original GRB and multiple X-ray flares. In the Internal Collision-induced Magnetic Reconnection and Turbulence (ICMART) model proposed by Zhang & Yan (2011), internal collisions distort the ordered magnetic field lines in the ejecta; GRBs and X-ray flares can then be triggered by magnetic reconnection in the distorted magnetic field. The ICMART model can thus reproduce the properties of GRB prompt phases and X-ray flares simultaneously. Our statistical results, showing a similar SOC statistical framework for GRBs and X-ray flares, support this model and can impose strong constraints on the same magnetically dominated stochastic process and mechanism behind both. Wang & Dai (2013) studied three statistical properties, namely the power-law frequency distributions of energies, durations, and waiting times of GRB X-ray flares with known redshift. They suggested that GRB X-ray flares can be explained within the same statistical framework as solar flares and correspond to a one-dimensional SOC system. In our previous work (Yi et al. 2016), we studied the peak times, rise times, decay times, waiting times, and durations of a larger GRB X-ray flare sample and further strengthened the case for self-organized criticality in GRB X-ray flares. Nevertheless, the Euclidean space dimension S = 3 is different from the result reported by Wang & Dai (2013). As to GRB prompt phases, it is very exciting that our conclusion is consistent with that reported by Lyu et al. (2020) despite the different GRB samples.
This can be naturally explained by the theory that GRB prompt phases and X-ray flares arise from different active stages of internal shocks (e.g. Burrows et al. 2005; Mészáros 2006). The radial component of the magnetic field decays faster with radius (Dai et al. 2006; Wang & Dai 2013). Thus, the early GRB prompt phase close to the central engine is in a three-dimensional form, while the X-ray flares are closer to one dimension rather than three dimensions (Zhang & Zhang 2014; Wang & Dai 2013; Lazarian et al. 2020; Lyu et al. 2020). It is worth noting that GRB optical flares are found to share a similar physical origin with X-ray flares, and their similar frequency distributions further confirm the SOC nature of the GRB system (Yi et al. 2017).

CONCLUSIONS

In this paper, we have systematically studied the differential and cumulative distributions of short GRBs from the fourth BATSE TTE Catalog and the third Swift GRB Catalog from 2004 December to 2019 July. For the first time, we presented a joint analysis among the different individual energy bands in both BATSE and Swift sGRBs. Our major results are summarized as follows:

1. We find that GRBs have power-law distributions of peak flux, FWHM, rise time, decay time, and peak time similar to those of GRB X-ray flares; thus both can be attributed to a SOC process.

2. There is no obvious power-law index evolution across different energy bands.

3. The results for Swift sGRBs are essentially consistent with those for BATSE ones.

4. According to Aschwanden (2012, 2014), our results can be used to derive the possible Euclidean space dimension of the GRB SOC system, and we obtain the spatial dimension S = 3.

In this work, our survey is restricted to the BATSE and Swift short GRBs and sheds new light on the physical processes of compact binary mergers.
In fact, some long GRBs are found to originate from moderately magnetized millisecond pulsars with hyperaccreting accretion disks after the collapse of massive stars (e.g. Dai et al. 2006; Tang et al. 2019; Xie et al. 2022). Thus, it is worth exploring the SOC behavior in long GRB prompt emissions. On the one hand, the longer durations enhance the chance of analyzing multiple pulses (more than three) within a single burst. On the other hand, the higher detection rate of long GRBs makes it possible to obtain enough pulses to perform the statistics. Therefore, a further search for GRBs, especially those with X-ray flares observed simultaneously, in the Fermi, HXMT, GECAM, and SVOM catalogs can help to unveil the real physical mechanism of GRBs in the future.

Figure 1 (c) - (f) show the cumulative distributions of peak flux from BATSE Ch1 to Ch4. Similar results can be seen in Figures 2 - 5 (c) - (f).

Figure 1. The distributions of f_m for BATSE GRBs. (a) The differential distribution of f_m for the total BATSE sample. (b) The cumulative distribution of f_m for the total BATSE sample. (c) The cumulative distribution of f_m in Ch1. (d) The cumulative distribution of f_m in Ch2. (e) The cumulative distribution of f_m in Ch3. (f) The cumulative distribution of f_m in Ch4. The gray region represents the 95% confidence level, the green solid line is the best fit, and the dotted line marks the threshold x_0.

Figure 2. The distributions of FWHM for BATSE GRBs. The symbols are the same as those in Figure 1.

Figure 3. The distributions of t_r for BATSE GRBs. The symbols are the same as those in Figure 1.

Figure 4. The distributions of t_d for BATSE GRBs. The symbols are the same as those in Figure 1.

Figure 5. The distributions of t_m for BATSE GRBs. The symbols are the same as those in Figure 1.

Figure 6. The distributions of f_m for Swift GRBs. (a) The differential distribution of f_m for the total Swift sample.
(b) The cumulative distribution of f_m for the total Swift sample. (c) The cumulative distribution of f_m in Ch1. (d) The cumulative distribution of f_m in Ch2. (e) The cumulative distribution of f_m in Ch3. (f) The cumulative distribution of f_m in Ch4. The gray region represents the 95% confidence level, the green solid line is the best fit, and the blue dotted line marks the threshold x_0.

Figure 7. The distributions of FWHM for Swift GRBs. The symbols are the same as those in Figure 6.

Figure 8. The distributions of t_r for Swift GRBs. The symbols are the same as those in Figure 6.

Figure 9. The distributions of t_d for Swift GRBs. The symbols are the same as those in Figure 6.

Figure 10. The distributions of t_m for Swift GRBs. The symbols are the same as those in Figure 6.

Table 1. The best-fitting parameters with the power-law models for BATSE GRBs.

Parameters | Energy Band | Satellite | x_0d | x_0c | α_d | α_c | χ²_ν
peak flux | total sample | BATSE | 7.34±0.40 | -- | 2.48±0.02 | -- | 2.67
peak flux | total sample | BATSE | -- | 14.94±0.08 | -- | 3.05±0.02 | 1.28
peak flux | Ch1 | BATSE | -- | 6.81±2.70 | -- | 2.17±0.31 | 0.22
peak flux | Ch2 | BATSE | -- | 7.18±0.38 | -- | 2.48±0.03 | 0.85
peak flux | Ch3 | BATSE | -- | 10.88±0.53 | -- | 2.48±0.02 | 1.03
peak flux | Ch4 | BATSE | -- | 6.32±2.65 | -- | 2.02±0.40 | 0.11
FWHM | total sample | BATSE | 0.10±0.03 | -- | 1.89±0.12 | -- | 1.36
FWHM | total sample | BATSE | -- | 0.20±0.00 | -- | 2.00±0.01 | 2.93
FWHM | Ch1 | BATSE | -- | 0.11±0.05 | -- | 1.77±0.21 | 0.21
FWHM | Ch2 | BATSE | -- | 0.20±0.01 | -- | 1.98±0.03 | 1.45
FWHM | Ch3 | BATSE | -- | 0.19±0.01 | -- | 1.97±0.04 | 1.07
FWHM | Ch4 | BATSE | -- | 0.07±0.03 | -- | 1.75±0.20 | 0.11
t_r | total sample | BATSE | 0.04±0.01 | -- | 1.89±0.12 | -- | 2.83
t_r | total sample | BATSE | -- | 0.08±0.00 | -- | 2.10±0.01 | 2.86
t_r | Ch1 | BATSE | -- | 0.08±0.04 | -- | 1.65±0.29 | 0.35
t_r | Ch2 | BATSE | -- | 0.08±0.00 | -- | 1.97±0.04 | 1.08
t_r | Ch3 | BATSE | -- | 0.08±0.00 | -- | 1.97±0.04 | 0.98
t_r | Ch4 | BATSE | -- | 0.02±0.01 | -- | 1.82±0.16 | 0.15

ACKNOWLEDGEMENTS

We thank the referee for very helpful suggestions and comments. This work is supported by the National Natural Science Foundation of China (Grant No.
U2038106), and China Manned Spaced Project (CMS-CSST-2021-A12).
[ "High-Order Incremental Potential Contact for Elastodynamic Simulation on Curved Meshes" ]
[ "Zachary Ferguson ", "Pranav Jain [email protected] ", "Denis Zorin [email protected] ", "Teseo Schneider ", "Daniele Panozzo [email protected] ", "\nNew York University\nUSA\n", "\nUniversity of Victoria\nCanada\n" ]
[ "New York University\nUSA", "New York University\nUSA", "University of Victoria\nCanada" ]
[ "Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings (SIGGRAPH '23 Conference Proceedings)" ]
Figure 1: High-order armadillo-rollers (t = 0.0 s, 2.5 s, 5.0 s, 7.5 s). A simulation of an armadillo squished by rollers. We use a high-order volumetric mesh (top row) and deform it with quadratic displacement. To solve collision and compute contact forces, we use a dense linear surface mesh (bottom row) and transfer the deformation and contact forces between the two meshes.

ABSTRACT
High-order bases provide major advantages over linear ones in terms of efficiency, as they provide (for the same physical model) higher accuracy for the same running time, and reliability, as they are less affected by locking artifacts and mesh quality.
10.1145/3588432.3591488
[ "https://export.arxiv.org/pdf/2205.13727v3.pdf" ]
249,151,992
2205.13727
84e534bd43d4be44e1384c023d41d11109baf9e5
High-Order Incremental Potential Contact for Elastodynamic Simulation on Curved Meshes
Zachary Ferguson, Pranav Jain, Denis Zorin, Daniele Panozzo (New York University, USA); Teseo Schneider (University of Victoria, Canada)
SIGGRAPH '23 Conference Proceedings, Los Angeles, CA, USA, August 06-10, 2023. ACM. DOI: 10.1145/3588432.3591488

CCS CONCEPTS: • Computing methodologies → Physical simulation
KEYWORDS: Finite element method; Elastodynamics; Frictional contact

Figure 1: High-order armadillo-rollers (t = 0.0 s, 2.5 s, 5.0 s, 7.5 s). A simulation of an armadillo squished by rollers. We use a high-order volumetric mesh (top row) and deform it with quadratic displacement. To solve collision and compute contact forces, we use a dense linear surface mesh (bottom row) and transfer the deformation and contact forces between the two meshes.

ABSTRACT
High-order bases provide major advantages over linear ones in terms of efficiency, as they provide (for the same physical model) higher accuracy for the same running time, and reliability, as they are less affected by locking artifacts and mesh quality.
Thus, we introduce a high-order finite element (FE) formulation (high-order bases) for elastodynamic simulation on high-order (curved) meshes with contact handling based on the recently proposed Incremental Potential Contact (IPC) model. Our approach is based on the observation that each IPC optimization step used to minimize the elasticity, contact, and friction potentials leads to linear trajectories even in the presence of nonlinear meshes or nonlinear FE bases. It is thus possible to retain the strong non-penetration guarantees and large time steps of the original formulation while benefiting from high-order bases and high-order geometry. We accomplish this by mapping displacements and resulting contact forces between a linear collision proxy and the underlying high-order representation. We demonstrate the effectiveness of our approach in a selection of problems from graphics, computational fabrication, and scientific computing.

INTRODUCTION
Elastodynamic simulation of deformable and rigid objects is used in countless algorithms and applications in graphics, robotics, mechanical engineering, scientific computing, and biomechanics. While the elastodynamic formulations used in these fields are similar, the accuracy requirements differ: graphics and robotics applications usually favor high efficiency to fit within strict time budgets, while other fields require higher accuracy. In both regimes, finite element (FE) approaches based on a conforming mesh to explicitly partition the object volume are a popular choice due to their maturity, flexibility in handling non-linear material models and contact/friction forces, and convergence guarantees under refinement. In an FE simulation, a set of elements is used to represent the computational domain and a set of basis functions is used within each element to represent the physical quantities of interest (e.g., the displacement in an elastodynamic simulation).
Many options exist for both elements and bases. Due to the simplicity of their creation, linear tetrahedral elements are a common choice for the element shape. Similarly, linear Lagrangian functions (often called the hat functions) are often used to represent the displacement field. The linearity in both shape and basis leads to a major and crucial benefit for dynamic simulations: after the displacement is applied to the rest shape, the resulting mesh remains a piece-wise linear mesh. This is an essential property in order to robustly and efficiently detect and resolve collisions [Wang et al. 2021]. Collisions between arbitrary curved meshes or between linear meshes over curved trajectories are computationally expensive, especially if done in a conservative way. However, these two choices are restrictive: meshes with curved edges represent shapes, at a given accuracy, with a lower number of elements than linear meshes, especially if tight geometric tolerances are required. Curved meshes are often favored over linear meshes in mechanical engineering [Hughes et al. 2005]. The use of linear bases, especially on simplicial meshes, is problematic as it introduces arbitrary stiffness (a phenomenon known as locking [Schneider et al. 2018]). Additionally, high-order bases are more efficient, in the sense that they provide the same accuracy (compared to a reference solution) as linear bases for a lower running time [Babuška and Guo 1992;Schneider et al. 2022]. Elasto-static problems in computational fabrication (e.g., [Panetta et al. 2015]), mechanics, and biomechanics [Maas et al. 2012] often use high-order bases, but their use for dynamic problems with contact is very limited or the high-order displacements are ignored for contact purposes.

Contribution. We propose a novel elastodynamic formulation supporting both high-order geometry and high-order bases (Figure 1).
Our key observation is that a linear transformation of the displacement degrees of freedom leads to linear trajectories of a carefully designed collision proxy. We use this observation to extend the recently proposed Incremental Potential Contact (IPC) formulation, enabling us to use both high-order geometry and high-order bases. Additionally, we can now use arbitrary collision proxies in lieu of the boundary of the FE mesh, a feature that is useful, for example, for the simulation of nearly rigid materials. To evaluate the effectiveness of our approach, we explore its use in graphics applications, where we use the additional flexibility to efficiently simulate complex scenes with a low error tolerance, and we show that our approach can be used to capture complex buckling behaviors with a fraction of the computational cost of traditional approaches. Note that in this work we focus on tetrahedral meshes, but there are no theoretical limitations to applying our method to hexahedral or other polyhedral elements.

Reproducibility. To foster further adoption of our method we release an open-source implementation based on PolyFEM [Schneider et al. 2019b] which can be found at polyfem.github.io.

RELATED WORK
High-Order Contacts. Contact between curved geometries has been investigated in multiple communities, as the benefits of p-refinement (i.e., refinement of the basis order) for elasticity have been shown to transfer to problems with contact in cases where an analytic solution is known, such as Hertzian contact [Aldakheel et al. 2020;Franke et al. 2010, 2008;Konyukhov and Schweizerhof 2009]. One of the simplest forms of handling contact, penalty methods [Moore and Wilhelms 1988;Terzopoulos et al. 1987] apply a penalty force when objects contact and intersect.
However, despite their simplicity and computational advantages, it is well known that the behavior of penalty methods strongly depends on the choice of penalty stiffness (and a global and constant-in-time choice ensuring stability may not be possible). Li et al. [2020] propose IPC to address these issues, and we choose to use their formulation and benefit from their strong robustness guarantees. Mortar methods [Belgacem et al. 1998;Hüeber and Wohlmuth 2006;Puso and Laursen 2004] are also a popular choice for contact handling, especially in engineering [Krause and Zulian 2016] and biomechanics [Maas et al. 2012]. Extensions to high-order non-uniform rational B-spline (NURBS) surfaces have also been proposed [Seitz et al. 2016]. Mortar methods require marking the contacting surfaces a priori. A clear limitation of these methods is that they cannot handle collisions in regions with more than two contacting surfaces or self-collisions. Li et al. [2020] provide a didactic comparison of the IPC method and one such mortar method ([Krause and Zulian 2016]). They show such methods enforce contact constraints weakly and therefore allow intersections (especially at large time steps and/or velocities). Nitsche's method is a method for soft Dirichlet boundary conditions (eliminating the need to tune the penalty stiffness) [Nitsche 1971]. Stenberg [1998] and recent work [Chouly et al. 2022;Gustafsson et al. 2020] extend Nitsche's method to handle contacts through a penalty or mortaring method. While this eliminates the need to tune penalty stiffnesses, these methods still suffer from the same limitations as mortaring methods. Another way to overcome the challenges with high-order contact is the use of a third medium mesh to fill the empty space between objects [Wriggers et al. 2013]. This mesh is handled as a deformable material with carefully specified material properties and internal forces which act in lieu of the contact forces.
In this setting, high-order formulations using p-refinement have been shown to be very effective [Bog et al. 2015]. Similar methods have been used in graphics (referred to as an "air mesh"), as a replacement for traditional collision detection and response methods [Jiang et al. 2017;Müller et al. 2015]. The challenge for these approaches is the maintenance of a high-quality tetrahedral mesh in the thin contact regions, a problem that is solved in 2D, but still open for tetrahedral meshes. The detection and response to collisions between spline surfaces are major open problems in isogeometric analysis, where over a hundred papers have been published on this topic (we refer to Temizer et al. [2011] and Cardoso and Adetoro [2017] for an overview). However, automatic mesh generation for isogeometric analysis (IGA) is still an open issue, limiting the applicability of these methods to simple, manually modeled geometries, and often to surface-only problems. In comparison, we introduce the first technique using the IPC formulation to solve elastodynamic problems with contact and friction forces on curved meshes using high-order elements. We also show that an automatic high-order meshing and simulation pipeline is possible when our algorithm is paired with an automatic curved mesher.

High-Order Collision Detection. IPC utilizes continuous collision detection (CCD) to ensure that every step taken is intersection-free. The numerical exactness of CCD can make or break the guarantees provided by the IPC algorithm [Wang et al. 2021]. While several authors have proposed methods for collision detection between curved surfaces and nonlinear trajectories [Kry and Pai 2003;Nelson and Cohen 1998;Nelson et al. 2005;Snyder et al. 1993;Von Herzen et al. 1990], there still does not exist a method that is computationally efficient while being conservative (i.e., never misses collisions).
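To illustrate what "conservative" means here, one classic strategy that can never miss a collision (at the cost of many small steps) is conservative advancement: repeatedly advance time by the current separation distance divided by a bound on the approach speed. The sketch below is illustrative and is not the CCD method of Wang et al. [2021]; the distance function and velocity bound are hypothetical stand-ins.

```python
import numpy as np

def conservative_advancement(dist, vmax, t_end=1.0, tol=1e-6, max_iters=10_000):
    # dist(t) gives the separation distance at time t; vmax bounds how fast
    # that distance can shrink. Advancing by dist(t)/vmax can never skip a
    # contact, so the returned time of impact is conservative (never too late).
    t = 0.0
    for _ in range(max_iters):
        d = dist(t)
        if d <= tol:
            return t                      # (near-)contact reached
        t = min(t_end, t + d / vmax)
        if t >= t_end and dist(t_end) > tol:
            return t_end                  # no impact within this step
    return t

# a point at height 1 falling at unit speed toward the floor y = 0
dist = lambda t: 1.0 - t
toi = conservative_advancement(dist, vmax=1.0, t_end=2.0)
```

The returned time is a lower bound on the true time of impact, which is exactly the property an intersection-free line search needs; the price is that the step count grows as objects get close.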
Therefore, we are unable to utilize existing methods and instead propose a method of coupling linear surface representations with curved volumetric geometry.

High-Order Bases. Linear FE bases are overwhelmingly used in graphics applications, as they have the smallest number of degrees of freedom (DOF) per element and are simpler to implement. High-order bases have been shown to be beneficial to animate deformable bodies [Bargteil and Cohen 2014], to accelerate approximate elastic deformations [Mezger et al. 2009], and to compute displacements for embedded deformations [Longva et al. 2020]. Higher-order bases have also been used in meshless methods for improved accuracy and faster convergence [Faure et al. 2011;Martin et al. 2010]. High-order bases are routinely used in engineering analysis [Jameson et al. 2002] where p-refinement is often favored over h-refinement (i.e., refinement of the number of elements) as it reduces the geometric discretization error faster and with fewer degrees of freedom [Babuska and Guo 1988;Babuška and Guo 1992;Bassi and Rebay 1997;Luo et al. 2001;Oden 1994]. We propose a method that allows using high-order bases within the IPC framework, thus enabling us to resolve the IPC contact model at a higher efficiency for elastodynamic problems with complex geometry, i.e., we can obtain similar accuracy as with linear bases with a lower computation budget. Additionally, our method allows us to explicitly control the accuracy of the collision approximation by changing the collision mesh sampling (Section 4). High-order bases can be used as a reduced representation, and the high-order displacements can be transferred to higher-resolution meshes for visualization purposes [Suwelack et al. 2013]. We use this approach to extend our method to support arbitrary collision proxies, which enables us to accelerate elastodynamic simulations by sacrificing accuracy in the elastic forces.

Physically-Based Simulation.
There is a large literature on the simulation of deformable and rigid bodies in graphics [Bargteil and Shinar 2018;Kim and Eberle 2022], mechanics, and robotics [Choi et al. 2021]. In particular, a large emphasis is on the modeling of contact and friction forces [Brogliato 1999;Kikuchi and Oden 1988;Stewart 2001;Wriggers 1995]. Longva et al. [2020] propose a method for embedding geometries in coarser FE meshes. By doing so they can reduce the complexity while utilizing higher-order elements to generate accurate elastic deformations. To apply Dirichlet boundary conditions they design the spaces such that they share a common boundary. This scheme, however, cannot capture self-contacts without resorting to using the full mesh. As such they do not consider the handling of contacts. They do, however, suggest a variant of the mortar method could be future work, but this has known limitations as outlined above. We do not provide a comparison against this method as it does not support contact, and adding contact to it is a major research project on its own, as discussed by the authors. In our work, we build upon the recently introduced IPC [Li et al. 2020] approach, as it offers higher robustness and automation compared to traditional formulations allowing interpenetrations between objects. We review only papers using the IPC formulation in this section, and we refer to [Li et al. 2020] for a detailed overview of the state of the art. Li et al. [2020] propose using a linear FE method to model the elastic potential, and an interior point formulation to ensure that the entire trajectory is free of collisions. While the approach leads to accurate results when dense meshes are used, the computational cost is high, spawning a series of works proposing to use reduced models to accelerate the computation.
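The interior-point idea behind IPC can be illustrated in one dimension: each implicit-Euler step minimizes an inertia term plus a log barrier that diverges at contact, so the minimizer can never penetrate. The barrier below is the one of Li et al. [2020]; the stiffness kappa, time step, and initial state are illustrative choices, not values from any paper, and a real solver would use projected Newton rather than dense sampling.

```python
import numpy as np

dhat, kappa, h = 5e-3, 1e3, 1e-2       # barrier range, stiffness, time step (illustrative)

def barrier(d):
    # log barrier of Li et al. [2020]: nonzero only within dhat of contact,
    # and diverging to +inf as the distance d goes to zero
    return -((d - dhat) ** 2) * np.log(d / dhat) if d < dhat else 0.0

# one implicit-Euler step of a unit point mass above a floor at y = 0
y_t, v_t = 4e-3, -1.0
y_pred = y_t + h * v_t                 # inertial prediction: would penetrate (y_pred < 0)

def incremental_potential(y):
    return 0.5 * (y - y_pred) ** 2 / h**2 + kappa * barrier(y)

# minimize by dense sampling (for illustration only)
ys = np.linspace(1e-6, y_t, 20001)
y_next = ys[np.argmin([incremental_potential(y) for y in ys])]
```

Even though the unconstrained inertial update would pass through the floor, the minimizer of the incremental potential stays strictly positive and within the barrier's activation distance, which is the qualitative behavior the reduced-model works above all inherit from IPC.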
Li et al. [2021] propose Codimensional IPC (C-IPC), a new formulation for codimensional objects that optionally avoids using volumetric elements to model thin sheets and rod-like objects. An acceleration of multiple orders of magnitude is possible for specific scenes where the majority of objects are codimensional. Ferguson et al. [2021] propose a formulation of IPC for rigid body dynamics, dramatically reducing the number of DOF but adding a major cost and complexity to the collision detection stage, as the trajectories spanned by rigid objects are curved. Longva et al. [2020] demonstrate their ability to approximately model a rigid body using a single stiff element. This idea is further expanded upon by Lan et al. [2022] who propose to relax the rigidity assumption: they use an affine transformation to approximate the rigid transformations, thus reducing the problem of collision detection to a much more tractable linear CCD. Massive speedups are possible for rigid scenes, up to three orders of magnitude compared to the original formulation. While these methods provide major acceleration for specific types of scenes, they are not directly usable for scenes with deformable objects. Lan et al. [2021] propose using medial elastics [Lan et al. 2020], a family of reduced models tailored for real-time graphics applications. In their work, the shape is approximated by a medial skeleton which is used both to model the elastic behavior and as a proxy for collision detection. The approach can simulate deformable objects; however, it cannot reproduce a given polyhedral mesh and it is also specialized for medial elasticity simulations. In our work, we enable the use of high-order meshes and high-order elements in a standard FE framework. Our approach decouples the mesh used to model the elastic potential from the mesh used for the contact and friction potentials, thus providing finer-grained control between efficiency and accuracy.

Convergence and Use of C0 Lagrangian Elements.
Studies compare C0 (p-FEM) and IGA bases' convergence under p-refinement [Sevilla et al. 2011], in the presence of contact [Seitz et al. 2016;Temizer et al. 2011], and in other settings such as electromechanics [Poya et al. 2018]. IGA bases have been shown, in specific problems with simple geometries, to have slightly higher accuracy compared to Lagrangian C0 elements. In this work, we favor Lagrangian C0 elements as IGA meshes are hard to generate for complex geometries and, additionally, some of their benefits are lost when non-regular grid meshes are required to represent complex geometry [Schneider et al. 2019a, 2022]. Our paper does not study the convergence of the method; we leave a convergence (h and p) study as future work, jointly with a convergence study for the IPC contact model. Our goal is restricted to showing that elastodynamic simulations with high-order geometry and bases are possible on complex geometry and provide a practical speedup over the linear geometry representation and linear bases that are commonly used in graphics applications.

IPC OVERVIEW
Our approach builds upon the IPC solver introduced in [Li et al. 2020]. In this section, we review the original formulation and introduce the notation. Li et al. [2020] compute the updated displacements u^{t+1} of the objects at the next time step by solving an unconstrained non-linear energy minimization:

u^{t+1} = \argmin_{u} E(x, u, v^t) + B(x + u, \hat{d}) + D(x + u, \epsilon_v),    (1)

where x is the vector of vertex coordinates of the rest position, u is the displacement at the current step, v^t the velocities, E(x, u, v^t) is a time-stepping incremental potential (IP) [Kane et al. 2000],

Figure 2: Linearity of displacement update. Even with nonlinear bases \phi_i, the update to the displacement still constitutes a linear combination of nodal displacements. Therefore, from a starting position (in red), the update to the displacement of any point on the surface (in blue) is linear, and as such we need not use expensive nonlinear CCD.
\sum_{i=1}^{n} \phi_i u_i^{t+1} - \sum_{i=1}^{n} \phi_i u_i^{t} = \sum_{i=1}^{n} \phi_i (u_i^{t+1} - u_i^{t}) = \sum_{i=1}^{n} \phi_i \Delta u_i

B is the barrier potential, and D is the lagged dissipative potential for friction [Li et al. 2020]. The user-defined geometric accuracy \hat{d} controls the maximal distance at which the barrier potential will have an effect. Similarly, the smooth friction parameter \epsilon_v controls the smooth transition between static and dynamic friction. We refer to Li et al. [2020] for a complete description of the potentials, as for our purposes we will not need to modify them.

Solver and Line Search CCD. The advantage of the IPC formulation is that it is possible to prevent intersections from happening by using a custom Newton solver with a line search that explicitly checks for collisions using a continuous collision detection algorithm [Provot 1997;Wang et al. 2021], while keeping the overall simulation cost comparable to the more established linear complementarity problem (LCP) based contact solvers [Li et al. 2020].

METHOD
We introduce an extension of IPC for a curved mesh M = (V^M, T^M), where V^M and T^M are the nodes and volumetric elements of M, respectively. The formulation reduces to standard IPC when linear meshes and linear bases are used, but other combinations are also possible: for example, it is possible to use high-order bases on standard piece-wise linear meshes, as we demonstrate in Section 5. We first introduce explicit definitions for functions defined on the volume and the contact surface corresponding to its boundary. Let f^M : M \to \mathbb{R}^3 be a volumetric function (in our case the volumetric displacement u) defined as

f^M = \sum_{i=1}^{n} \phi_i^M f_i^M,    (2)
We can now rewrite Equation (1) to make explicit that the potential depends on M, while and only depend on S: +1 = argmin M ( , , ) + S ( S + Φ( ),ˆ) + S ( S + Φ( ), ),(4) where Φ : Θ M → Θ S is an operator where Θ M = span{ M } and Θ S = span{ S } that transfers volumetric functions on M to S. In the context of [Li et al. 2020] (i.e., Equation (1)), Φ is a restriction of the volumetric function to its surface. While in general, Φ could be an arbitrary operator, IPC takes advantage of its linearity: if Φ is linear, then the trajectories of surface vertices in one optimization step of Equation (4) will be linear (Figure 2), and it is thus possible to use standard continuous collision detection methods [Provot 1997;Wang et al. 2021]. If Φ is nonlinear, for example in the rigidbody formulation introduced by , the collision detection becomes considerably more expensive [Lan et al. 2022]. We observe that arbitrary linear operators can be used for Φ, and note that increasing the order of the bases used to represent M and S does not affect the linearity of the operator. An additional advantage of this reformulation is that the space Θ S does not have to be a subspace of Θ M . For example, the collision mesh can be at a much higher resolution than the volumetric mesh used to resolve the elastic forces (Section 5). We first discuss how to build a linear operator Φ for high-order meshes, high-order elements, and arbitrary collision proxies, and we postpone the discussion on how to adapt the IPC algorithm to work with arbitrary Φ to Section 4.2. Construction of Φ We present two methods for constructing Φ: upsampling the surface of M to obtain a dense piecewise linear approximation of its boundary, which we use as S (Section 4.1.1), or using an arbitrary surface triangle mesh as S and determining closest point correspondences used to evaluate bases (Section 4.1.2). 
Our results in Section 5 show a mix of both approaches: Figures 3 to 5, 10, and 11 use an upsampling, while Figures 1, 4, 6 to 9, and 11 use an arbitrary triangle mesh proxy. Since \Phi is a linear operator, a discrete function f^M \in \Theta^M with coefficients f_i^M can be transferred to f^S \in \Theta^S with coefficients f_i^S as f^S = T f^M, where f^M and f^S are the stacked coefficients f_i^M and f_i^S, respectively. The tetrahedron t^M \in T^M of a high-order mesh M is defined as the image of the geometric mapping g applied to the reference right-angle tetrahedron \hat{t}; that is, t^M = g(\hat{t}). On S, the geometric map is a vectorial function and has the same form as Equation (3).

4.1.1 Upsampled Linear Boundary. To construct S we need to use the geometric map to find the initial vertex positions, while to define the operator to transfer functions from the volumetric mesh to S we will use the basis functions of M.

Vertex Positions. Every vertex v^S \in V^S of the piece-wise linear approximation S has coordinates \hat{x} in the reference tetrahedron of its element.

Transfer. To construct the linear operator \Phi, encoded with the matrix T transferring from a higher-order polynomial basis on the boundary of M to the piecewise linear approximation S, we observe that, since S is an upsampling of M, we can use \hat{x}_i to directly evaluate the bases of M (for all non-zero bases) and use them as weights to transfer the function from M to S, defining T_{ij} = \phi_j^M(\hat{x}_i), which is a linear operator independent of the degree of the basis functions.

4.1.2 Arbitrary Triangle Mesh Proxy. The same construction applies to arbitrary mesh proxies (e.g., Figure 1), but we need to compute \hat{x} for every vertex. When M is linear we can simply compute \hat{x} as the barycentric coordinates of the closest tetrahedron in M, but when M is nonlinear we use an optimization to invert g [Suwelack et al. 2013]. However, unlike Suwelack et al.
[2013], we found that using a normal field to define correspondences is fragile when the surfaces have very different geometric shapes, so we opt for a simpler formulation based on distances. Algorithm 1 outlines our method for computing Φ for an arbitrary triangle proxy. Namely, given a volumetric mesh M and an arbitrary triangle mesh S, we do not have the pre-image under the geometric mapping of the vertices x_v ∈ S, so we compute one by determining the closest element in M to x_v and using an optimization to compute the inverse geometric mapping, obtaining the coordinates x̂_v. This procedure only needs to be performed once, because W depends only on the rest geometry.

4.2 Gradient and Hessian of Surface Terms

Adapting IPC to work with an arbitrary linear Φ requires only changing the assembly phase, which needs the gradients and Hessians of the surface potentials. Similar to IPC, we use Newton's method to minimize the newly formulated potential in Equation (4), and we thus need its gradient and Hessian.

Figure 3: Bending beam. Squared-section coarse beam pressed by two planes. Linear elements exhibit artificial stiffness as they cannot bend. The reference P1 solution and P3 are rendered in isolation on the right. The results are indistinguishable, but P3 is an order of magnitude faster.

For a surface potential B_S(x_S + Φ(u), d̂) and transfer

Φ(u) = Φ( Σ_{i=1}^n u_i φ_i^M ) = Σ_{j=1}^m (W u)_j φ_j^S,

where u is the vector containing all the coefficients u_i, we use the definition of W to express the gradient of the barrier (or the friction) potential as

∇_u B_S(x_S + Φ(u), d̂) = [∇_u (x_S + Φ(u))]^⊤ ∇_{x̄_S} B_S(x̄_S, d̂)
                       = [∇_u (x_S + W u)]^⊤ ∇_{x̄_S} B_S(x̄_S, d̂)
                       = W^⊤ ∇_{x̄_S} B_S(x̄_S, d̂),

where x̄_S = x_S + Φ(u). The Hessian is computed similarly:

∇²_u B_S(x_S + Φ(u), d̂) = W^⊤ [∇²_{x̄_S} B_S(x̄_S, d̂)] W.

The formulas for ∇_{x̄_S} B_S(x̄_S, d̂), ∇_{x̄_S} D_S(x̄_S, v), and their Hessians are the same as in [Li et al. 2020], thus requiring minimal modifications to an existing implementation. As in [Li et al.
2020], we mollify the edge-edge distance computation to avoid numerical issues with nearly parallel edges.

RESULTS

All experiments are run on individual nodes of a high-performance computing (HPC) cluster, each using two Intel Xeon Platinum 8268 24C 205W 2.9 GHz processors and 16 threads. All results are generated using the PolyFEM library [Schneider et al. 2019b] coupled with the IPC Toolkit [Ferguson et al. 2020], and use the direct linear solver Pardiso [Alappat et al. 2020; Bollhöfer et al. 2019, 2020]. We use the notation P_k to denote the FE bases' order (e.g., P2 indicates quadratic Lagrange bases), and all our curved meshes are quartic. All simulation parameters and a summary of the results can be found in Tables 1 and 2.

Test Cases

Bending beam. We first showcase the advantages of high-order bases and meshes. Figure 3 shows that linear bases on a coarse mesh introduce artificial stiffness, and the result is far from the reference (a dense P1 mesh). As we increase the order, the beam bends more. Using P3 on such a coarse mesh leads to results indistinguishable from the reference at a fraction of the cost. We also compare the results of a higher-resolution P1 mesh with a limited time budget; that is, the number of elements is chosen to produce a running time similar to the P3 result (1,124 tetrahedra compared to 52 in the coarse version). Even in this case, the differences are obvious and far from the expected results.

Bouncing ball. Figure 4 shows the movement of the barycenter of a coarse bouncing sphere on a plane. When using linear bases on the coarse mesh, the ball tips over and starts rolling as the geometry is poorly approximated (yellow line).
Replacing the coarse collision mesh using our method (blue line) improves the results at a small cost (125 frames/s versus 83.3 frames/s); however, since the sphere boundary is poorly approximated and the bases are linear, the results are still far from the accurate trajectory (green line). Finally, replacing M with a curved mesh and using P2 bases leads to almost the correct dynamics (red line) while maintaining a real-time simulation (38.4 frames/s). As a reference, the dense linear P1 mesh (green line) runs at 3.9 frames/s.

Rolling ball. Figure 7 shows that our method is able to maintain purely tangential friction forces on the FE mesh while rolling a ball down a slope. The baseline spherical FE mesh (8.8K P1 tetrahedra) and our method using a cube FE mesh (26 P1 tetrahedra), both using the same collision geometry, produce very similar dynamics, but our method is 7.5× faster. However, while the ball's material is stiff (E = 10^9 Pa), it is not rigid, so the baseline model deforms slightly at the point of contact. Our model exhibits extra numerical stiffness from the large linear elements and so deforms less. This results in a 5% difference on average in the minimum distance, which translates to a normal force (and ultimately a friction force) that is 2× greater. This inaccuracy is a limitation of using such a coarse FE mesh within our framework.

Figure 5: Mat-twist. Simulation of twisting for different bases' orders and mesh resolutions: P1 coarse (2m 47s), P1 time-budgeted (6h 7m 12s), P2 (6h 19m 52s), and P1 (2d 14h 13m 0s). The cross-section (bottom row) shows that the coarse linear mesh (left) has huge artifacts. The coarse P2 bases (middle-right) produce smooth results similar to a dense mesh (right) for a tenth of the time. A "time-budgeted" version shows similar results but exhibits checker patterns around the folds.

Examples

Mat twist. We reproduce the mat twist example in [Li et al.
2020] using a thin linear mesh M with 2K tetrahedra and simulate the self-collisions arising from rotating the two sides using a collision mesh S with 65K vertices (Figure 5). Simulating this result using standard IPC on the coarse mesh (left) is fast but leads to visible artifacts; by using P2 bases for the displacements, the results are smooth and the simulation is fast (91 s/frame). For reference, a finer linear solution with more elements, needed to get a result similar to ours using only linear elements, requires 230K elements and a runtime 10× higher. We find a P1 mesh with 51K tetrahedra (25× the number used in the P2 variant) that produces a similar running time. The P2 collision mesh uses 3.5× more triangles, leading to 3.1× slower collision detection, while the linear solver for the P1 mesh is only 2.2× slower. This results in similar dynamics and final state (see Figure 5), with some notable differences around the folds of the mat.

Microstructure. In Figure 10, we simulate the compression of an extremely coarse (6K P4 tetrahedra) curved microstructure mesh from . We upsample its surface to generate a collision mesh with 143K triangles. We demonstrate our method's ability to simulate anisoparametric scenarios (i.e., the shape and basis functions differ) by using P1 and P2 bases. In this case, both simulations take a similar amount of time (6h 34m 9s versus 6h 4m 48s).

Armadillo on a Roller. In Figure 11, we replicate the armadillo roller from [Verschoor and Jalba 2019] and use fTetWild [Hu et al. 2020] to generate M with 1.8K tetrahedra (the original mesh has 386K). With our method, we combine M with the original surface (21K faces) using linear elements and obtain a speedup of 60× (row ★). We used [Jiang et al. 2021] to generate a coarse curved mesh (with only 4.7K tetrahedra), used an optimization to invert the geometric mapping, and simulated the result using P2; this leads to a simulation 30× faster (row †). Finally, we upsampled the surface of the curved mesh to generate a new collision mesh S with 20K faces; this simulation is only 8× faster (row ‡).

Figure 6: Balancing armadillo. Initial config.; Fine (17s); Coarse (19s); Optimized (9s). We simulate the dancing armadillo from [Prévost et al. 2013] falling on a plane (left). The coarse model (middle) tips over because the center of mass falls outside the foot. We optimize the density (shown in red) to match the input center of mass, and the armadillo is balanced (right). Differences in running time can be attributed to the different dynamics (i.e., the coarse model experiences more contacts when it falls over).

Trash-compactor. We reproduce the trash compactor from [Li et al. 2020] using a coarse mesh M with 21K tetrahedra and compress it with five planes. Since the input mesh is already coarse and the model has thin features in the tentacles, we use fTetWild to generate a coarser mesh with 3.5K tetrahedra. Using this mesh with P1 displacements, while keeping the same surface mesh for collisions, provides a 2.5× speedup. Since both the coarse and input meshes have a similar resolution, using P2 leads to a more accurate but much slower (around 10×) result, as the number of DOF for P2 is similar to the denser mesh but with 5× the number of surface triangles.

Extreme coarsening

Nut and Bolt. As mentioned in Section 4, our method can be used with linear meshes and linear bases. This is best suited to stiff objects where the deformation is minimal. Figure 8 shows an example of a nut sliding inside a bolt; since both materials are stiff (E = 200 GPa), we coarsen M using fTetWild [Hu et al. 2020] from 6K tetrahedra and 1.7K vertices to 492 and 186, respectively. This change allows our method to be twice as fast without visible differences.

Balancing Armadillo. When generating a coarse mesh M, the center of mass and the mass of the object might change dramatically. Figure 6 shows that the coarse mesh cannot balance anymore, as the center of mass is outside the contact area.
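The density correction described in the next paragraph (moving the center of mass by rescaling per-region densities, in the spirit of [Prévost et al. 2013]) amounts to a small linear solve. A toy 1D sketch, where the two-region split, volumes, centroids, and targets are all made up for illustration:

```python
import numpy as np

# Choose densities rho1, rho2 of two regions of a coarse mesh so that the
# combined mass and (1D) center of mass match targets taken from the
# input model. All numbers below are hypothetical.
V1, V2 = 2.0, 1.0            # region volumes
c1, c2 = 0.2, 0.9            # region centroids (x-coordinate)
m_t, c_t = 3.5, 0.4          # target total mass and center of mass

# Solve  rho1*V1 + rho2*V2 = m_t  and  rho1*V1*c1 + rho2*V2*c2 = m_t*c_t.
A = np.array([[V1, V2], [V1 * c1, V2 * c2]])
rho = np.linalg.solve(A, np.array([m_t, m_t * c_t]))

# Verify: combined mass and center of mass of the re-weighted regions.
m = rho[0] * V1 + rho[1] * V2
c = (rho[0] * V1 * c1 + rho[1] * V2 * c2) / m
```

In 3D the same idea gives one mass equation plus three center-of-mass equations, solved in a least-squares or constrained sense over many regions, with positivity constraints on the densities.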
To prevent this artifact, similarly to [Prévost et al. 2013], we modify the density of the material (shown in red in the third figure) to move the center of mass.

CONCLUDING REMARKS

We introduce a robust and efficient simulator for deformable objects with contact, supporting high-order meshes and high-order bases to simulate geometrically complex scenes. We show that there are major computational advantages in increasing the order of the geometric map and of the bases, and that they can be used in the IPC formulation with modest code changes.

Limitations. At a high level, we propose to use p-refinement for elasticity, coupled with an h-refinement approach for contacts, to sidestep the high computational cost of curved continuous collision detection. The downside of our approach is that our contact surface is still an approximation of the curved geometry, and while we can reduce the error by further refinement, we cannot reduce it to zero. For graphics applications this is an acceptable compromise, as the scene we use for collision is guaranteed to be collision-free and we inherit the robustness properties of the original IPC formulation, but there could be engineering applications where it is important to model a high-order surface exactly. In this case, our approach could not be used, as we might miss collisions of the curved FE mesh.

A second limitation of our approach is that the definition of a robust, guaranteed positivity check for high-order elements is still an open research problem [Johnen et al. 2013]. In our implementation, we check positivity only at the quadrature points, which is a reasonable approximation but might still lead to unphysical results, as the element might have a negative Jacobian determinant at other interior points.
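This pitfall is easy to reproduce even in one dimension. In the sketch below (an illustrative example, not taken from the paper), a quadratic geometric map has a positive Jacobian at both nodes of a 2-point Gauss rule, so a quadrature-only check passes, yet the Jacobian is negative near one end of the element:

```python
import numpy as np

# 1D toy: geometric map g(t) = t**2/2 - 0.1*t on [0, 1], with Jacobian
# g'(t) = t - 0.1. The 2-point Gauss-Legendre rule on [0, 1] samples
# t ~ 0.211 and t ~ 0.789, where g' > 0, so a quadrature-only positivity
# check passes even though g' < 0 on [0, 0.1) and the map folds there.
def jac(t):
    return t - 0.1

# Gauss-Legendre nodes on [-1, 1], mapped to [0, 1].
nodes, _ = np.polynomial.legendre.leggauss(2)
quad_pts = 0.5 * (nodes + 1.0)

passes_quadrature_check = bool(np.all(jac(quad_pts) > 0))   # True
dense = np.linspace(0.0, 1.0, 1001)
truly_positive = bool(np.all(jac(dense) > 0))               # False
```

Denser sampling shrinks the undetected region but, without a guaranteed bound (e.g., on the Bézier coefficients of the Jacobian), cannot certify positivity everywhere.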
While our method for mapping between an arbitrary triangle mesh proxy and the curved tetrahedral mesh works well enough for the examples shown in this paper, it is not a robust implementation, as the closest-point query can lead to wrong correspondences. In the future, it will be interesting to explore the use of bijective maps between the two geometries to avoid this issue (for example, by using the work of ). Our choice of Φ is not unique, as there is a large number of basis functions to choose from. We explored other options, such as mean value coordinates and a linearized L2-projection, but we found that their global mappings produce dense weight matrices. This results in slower running times with only minor quality improvements. A future direction might be the exploration of more localized operators, such as bounded biharmonic weights [Jacobson et al. 2011].

Future Work. Beyond these limitations, we see three major avenues for future work: (1) existing curved mesh generators are still not as reliable in producing high-quality meshes as their linear counterparts; more work is needed in this direction, and our approach can be used as a testbed for evaluating the benefits curved meshes provide in the context of elastodynamic simulations; (2) our approach could be modified to work with hexahedral elements, spline bases, and isogeometric analysis simulation frameworks; and (3) we speculate that integrating our approach with high-order time integrators could provide additional benefits in further reducing numerical damping, and we believe this is a promising direction for a future study.

Our approach is a first step toward the introduction of high-order meshes and high-order FEM in elastodynamic simulation with the IPC contact model, and we believe that our reference implementation will reduce the entry barrier for the use of these approaches in industry and academia.
ACKNOWLEDGMENTS

This work was supported in part through the NYU IT High Performance Computing resources, services, and staff expertise. This work was also partially supported by the NSF CAREER award under Grant No. 1652515, the NSF grants OAC-1835712, OIA-1937043, CHS-1908767, and CHS-1901091, the NSERC grants DGECR-2021-00461 and RGPIN-2021-03707, a Sloan Fellowship, a gift from Adobe Research, and a gift from Advanced Micro Devices, Inc.

Figure 4: Bouncing ball. Simulation of a bouncing sphere on a plane. The yellow image and line are the baseline, a coarse linear mesh with linear displacement. The results can be improved using our method, replacing S with a dense sphere (blue). When using a high-order mesh with P2 displacement (red), the results are similar to the dense linear simulation (green). be3D, under CC BY-SA 3.0.

Figure 7: Rolling-ball. We demonstrate that a ball rolling down a slope, while maintaining non-slip rolling contact, produces purely tangential friction forces on the FE mesh. Our method uses a symmetric cube mesh (black wireframe) as the FE mesh and a high-resolution sphere (green) as the collision mesh. The friction forces f on the FE mesh are shown as pink arrows. We plot the out-of-plane friction force (f · n̂) and the norm of the in-plane friction force (∥f − (f · n̂)n̂∥). Compared to a high-resolution baseline, the out-of-plane error shows negligible differences, but the in-plane force is around 2× greater. This is due to the increased numerical stiffness of our coarse mesh, leading to less localized deformation, smaller distances, and, ultimately, a larger normal force. Baseline: 5m 52s; ours: 47s; snapshots at t = 0.075 s and t = 1.25 s.

Figure 8: Nut-and-bolt. Simulation of a nut rotating into a bolt under gravity. Directly meshing the input mesh (top) generates results similar to using our method with a coarse simulation mesh (right). Enigma, under CC BY-SA 3.0.

Figure 9: Trash compactor. The Octocat model is compressed by five planes. Using the original input mesh (top) is two times slower than using our method with linear elements (middle). Since we cannot coarsen the input too much without losing the tentacles, using P2 leads to longer running times and similar results (bottom).

Figure 10: Microstructure. Compression of a curved microstructure using linear and quadratic bases. While the choice of bases only leads to marginal running-time savings, it demonstrates our method's ability to simulate anisoparametric scenarios where the P4 shape functions differ from the P1/P2 bases.

Figure 11: Armadillo-rollers. Armadillo roller simulation for the different variants of our method (coarse FE mesh shown on the left). Ours★ uses a coarse linear mesh with linear displacement and the original geometry for the collision. Ours† uses a curved mesh with P2 displacement and an upsampled geometry for the collision. Ours‡ uses a curved mesh with P2 displacement and the original geometry for the collision.

Table 1: Simulation parameters used in the results. For each example, we report the time step size (h, in s), density (ρ, in kg/m³, with ★ indicating multi-density), Young's modulus (E, in Pa), Poisson's ratio (ν), barrier activation distance (d̂, in m), coefficient of friction (μ), friction smoothing parameter (εv, in m/s), maximum friction iterations, and Newton tolerance (in m). For all examples, we use implicit Euler time integration and the Neo-Hookean material model.
Armadillo-rollers (Figures 1 and 11): h = 0.025; ρ = 1e3, E = 5e5, ν = 0.2; d̂ = 1e-3; μ = 0.5, εv = 1e-3, 1 friction iteration; Newton tolerance 1e-3.

Table 2: Summary of results shown in Section 5. For each example, we report the number of tetrahedra (#T) used for elasticity, the number of surface triangles (#F) used for collision processing, and the total running time of the simulation. Names correspond to those given in each figure.

REFERENCES

Christie Alappat, Achim Basermann, Alan R. Bishop, Holger Fehske, Georg Hager, Olaf Schenk, Jonas Thies, and Gerhard Wellein. 2020. A Recursive Algebraic Coloring Technique for Hardware-Efficient Symmetric Sparse Matrix-Vector Multiplication. ACM Transactions on Parallel Computing 7, 3, Article 19 (June 2020), 37 pages.

Fadi Aldakheel, Blaž Hudobivnik, Edoardo Artioli, Lourenço Beirão da Veiga, and Peter Wriggers. 2020. Curvilinear virtual elements for contact mechanics. Computer Methods in Applied Mechanics and Engineering 372 (2020), 113394.

I. Babuska and B. Q. Guo. 1988. The h-p Version of the Finite Element Method for Domains with Curved Boundaries. SIAM J. Numer. Anal. 25, 4 (1988), 837-861.

I. Babuška and B. Q. Guo. 1992. The h, p and h-p version of the finite element method; basis theory and applications. Advances in Engineering Software 15, 3 (1992), 159-174.

Adam Bargteil and Tamar Shinar. 2018. An Introduction to Physics-Based Animation. In ACM SIGGRAPH 2018 Courses (Vancouver, British Columbia, Canada). ACM, New York, NY, Article 6, 1 page.

Adam W. Bargteil and Elaine Cohen. 2014. Animation of deformable bodies with quadratic Bézier finite elements. ACM Transactions on Graphics 33, 3 (2014), Article 27.

F. Bassi and S. Rebay. 1997. High-Order Accurate Discontinuous Finite Element Solution of the 2D Euler Equations. J. Comput. Phys. 138, 2 (1997), 251-285.

F. B. Belgacem, P. Hild, and P. Laborde. 1998. The mortar finite element method for contact problems. Mathematical and Computer Modelling 28, 4 (1998), 263-271. Recent Advances in Contact Mechanics.

Tino Bog, Nils Zander, Stefan Kollmannsberger, and Ernst Rank. 2015. Normal contact with high order finite elements and a fictitious contact material. Computers & Mathematics with Applications 70, 7 (2015), 1370-1390. High-Order Finite Element and Isogeometric Methods.

Matthias Bollhöfer, Aryan Eftekhari, Simon Scheidegger, and Olaf Schenk. 2019. Large-scale Sparse Inverse Covariance Matrix Estimation. SIAM Journal on Scientific Computing 41, 1 (2019), 380-401.

Matthias Bollhöfer, Olaf Schenk, Radim Janalik, Steve Hamm, and Kiran Gullapalli. 2020. State-of-the-Art Sparse Direct Solvers. Parallel Algorithms in Computational Science and Engineering (2020), 3-33.

Bernard Brogliato. 1999. Nonsmooth Mechanics. Springer-Verlag.

R. P. R. Cardoso and O. B. Adetoro. 2017. On contact modelling in isogeometric analysis. European Journal of Computational Mechanics 26, 5-6 (2017), 443-472.

HeeSun Choi, Cindy Crump, Christian Duriez, Asher Elmquist, Gregory Hager, David Han, Frank Hearl, Jessica Hodgins, Abhinandan Jain, Frederick Leve, Chen Li, Franziska Meier, Dan Negrut, Ludovic Righetti, Alberto Rodriguez, Jie Tan, and Jeff Trinkle. 2021. On the use of simulation in robotics: Opportunities, challenges, and suggestions for moving forward. Proceedings of the National Academy of Sciences 118, 1 (2021).

Franz Chouly, Patrick Hild, Vanessa Lleras, and Yves Renard. 2022. Nitsche method for contact with Coulomb friction: Existence results for the static and dynamic finite element formulations. J. Comput. Appl. Math. 416 (2022), 114557.

François Faure, Benjamin Gilles, Guillaume Bousquet, and Dinesh K. Pai. 2011. Sparse Meshless Models of Complex Deformable Solids. ACM Transactions on Graphics 30, 4, Article 73 (July 2011), 10 pages.

Zachary Ferguson et al. 2020. IPC Toolkit. https://ipc-sim.github.io/ipc-toolkit/.

Zachary Ferguson, Minchen Li, Teseo Schneider, Francisca Gil-Ureta, Timothy Langlois, Chenfanfu Jiang, Denis Zorin, Danny M. Kaufman, and Daniele Panozzo. 2021. Intersection-Free Rigid Body Dynamics. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40, 4, Article 183 (July 2021), 16 pages.

David Franke, A. Düster, V. Nübel, and E. Rank. 2010. A comparison of the h-, p-, hp-, and rp-version of the FEM for the solution of the 2D Hertzian contact problem. Computational Mechanics 45, 5 (April 2010), 513-522.

David Franke, Alexander Düster, and Ernst Rank. 2008. The p-version of the FEM for computational contact mechanics. PAMM 8, 1 (2008), 10271-10272.

Tom Gustafsson, Rolf Stenberg, and Juha Videman. 2020. On Nitsche's Method for Elastic Contact Problems. SIAM Journal on Scientific Computing 42, 2 (2020), B425-B446.

Yixin Hu, Teseo Schneider, Bolun Wang, Denis Zorin, and Daniele Panozzo. 2020. Fast Tetrahedral Meshing in the Wild. ACM Transactions on Graphics 39, 4, Article 117 (July 2020), 18 pages.

S. Hüeber and B. I. Wohlmuth. 2006. Mortar methods for contact problems. Springer Berlin Heidelberg, Berlin, Heidelberg, 39-47.

T. J. R. Hughes, J. A. Cottrell, and Y. Bazilevs. 2005. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering 194, 39 (2005), 4135-4195.

Alec Jacobson, Ilya Baran, Jovan Popović, and Olga Sorkine. 2011. Bounded Biharmonic Weights for Real-Time Deformation. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 30, 4 (2011), 78:1-78:8.

A. Jameson, J. Alonso, and M. McMullen. 2002. Application of a non-linear frequency domain solver to the Euler and Navier-Stokes equations. In 40th AIAA Aerospace Sciences Meeting & Exhibit.

Zhongshi Jiang, Scott Schaefer, and Daniele Panozzo. 2017. Simplicial Complex Augmentation Framework for Bijective Maps. ACM Transactions on Graphics 36, 6, Article 186 (Nov. 2017), 9 pages.

Zhongshi Jiang, Teseo Schneider, Denis Zorin, and Daniele Panozzo. 2020. Bijective Projection in a Shell. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 39, 6, Article 247 (Nov. 2020), 18 pages.

Zhongshi Jiang, Ziyi Zhang, Yixin Hu, Teseo Schneider, Denis Zorin, and Daniele Panozzo. 2021. Bijective and Coarse High-Order Tetrahedral Meshes. ACM Transactions on Graphics 40, 4, Article 157 (July 2021), 16 pages.

Amaury Johnen, J.-F. Remacle, and Christophe Geuzaine. 2013. Geometrical validity of curvilinear finite elements. J. Comput. Phys. 233 (2013), 359-372.

Couro Kane, Jerrold E. Marsden, Michael Ortiz, and Matthew West. 2000. Variational integrators and the Newmark algorithm for conservative and dissipative mechanical systems. Internat. J. Numer. Methods Engrg. 49, 10 (Dec. 2000), 1295-1325.

Noboru Kikuchi and John Tinsley Oden. 1988. Contact Problems in Elasticity: A Study of Variational Inequalities and Finite Element Methods. SIAM Studies in App. and Numer. Math., Vol. 8. Society for Industrial and Applied Mathematics.

Theodore Kim and David Eberle. 2022. Dynamic Deformables: Implementation and Production Practicalities (Now with Code!). In ACM SIGGRAPH 2022 Courses (Vancouver, British Columbia, Canada). ACM, New York, NY, Article 7, 259 pages.

Alexander Konyukhov and Karl Schweizerhof. 2009. Incorporation of contact for high-order finite elements in covariant form. Computer Methods in Applied Mechanics and Engineering 198, 13 (2009), 1213-1223.

Rolf Krause and Patrick Zulian. 2016. A Parallel Approach to the Variational Transfer of Discrete Fields between Arbitrarily Distributed Unstructured Finite Element Meshes. SIAM Journal on Scientific Computing 38, 3 (2016).

Paul G. Kry and Dinesh K. Pai. 2003. Continuous contact simulation for smooth surfaces. ACM Transactions on Graphics 22, 1 (2003), 106-129.

Lei Lan, Danny M. Kaufman, Minchen Li, Chenfanfu Jiang, and Yin Yang. 2022. Affine Body Dynamics: Fast, Stable and Intersection-Free Simulation of Stiff Materials. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 41, 4, Article 67 (July 2022), 14 pages.

Lei Lan, Ran Luo, Marco Fratarcangeli, Weiwei Xu, Huamin Wang, Xiaohu Guo, Junfeng Yao, and Yin Yang. 2020. Medial Elastics: Efficient and Collision-Ready Deformation via Medial Axis Transform. ACM Transactions on Graphics 39, 3, Article 20 (April 2020), 17 pages.

Lei Lan, Yin Yang, Danny Kaufman, Junfeng Yao, Minchen Li, and Chenfanfu Jiang. 2021. Medial IPC: Accelerated Incremental Potential Contact with Medial Elastics. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40, 4, Article 158 (July 2021), 16 pages.

Minchen Li, Zachary Ferguson, Teseo Schneider, Timothy Langlois, Denis Zorin, Daniele Panozzo, Chenfanfu Jiang, and Danny M. Kaufman. 2020. Incremental Potential Contact: Intersection- and Inversion-free Large Deformation Dynamics. ACM Transactions on Graphics 39, 4, Article 49 (July 2020), 20 pages.

Minchen Li, Danny M. Kaufman, and Chenfanfu Jiang. 2021. Codimensional Incremental Potential Contact. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40, 4, Article 170 (2021).

Andreas Longva, Fabian Löschner, Tassilo Kugelstadt, José Antonio Fernández-Fernández, and Jan Bender. 2020. Higher-Order Finite Elements for Embedded Simulation. ACM Transactions on Graphics 39, 6, Article 181 (Nov. 2020), 14 pages.

Xiaojuan Luo, Mark S. Shephard, and Jean-Francois Remacle. 2001. The influence of geometric approximation on the accuracy of high order methods. Rensselaer SCOREC report 1 (2001).

Steve A. Maas, Benjamin J. Ellis, Gerard A. Ateshian, and Jeffrey A. Weiss. 2012. FEBio: Finite Elements for Biomechanics. Journal of Biomechanical Engineering 134, 1 (Feb. 2012).

Sebastian Martin, Peter Kaufmann, Mario Botsch, Eitan Grinspun, and Markus Gross. 2010. Unified Simulation of Elastic Rods, Shells, and Solids. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 29, 3 (2010), 39:1-39:10.

Johannes Mezger, Bernhard Thomaszewski, Simon Pabst, and Wolfgang Straßer. 2009. Interactive physically-based shape editing. Computer Aided Geometric Design 26, 6 (2009), 680-694. Solid and Physical Modeling 2008.

Matthew Moore and Jane Wilhelms. 1988. Collision Detection and Response for Computer Animation. Computer Graphics (Proceedings of SIGGRAPH) 22, 4 (June 1988), 289-298.

Matthias Müller, Nuttapong Chentanez, Tae-Yong Kim, and Miles Macklin. 2015. Air Meshes for Robust Collision Handling. ACM Transactions on Graphics 34, 4, Article 133 (July 2015), 9 pages.

Donald D. Nelson and Elaine Cohen. 1998. User interaction with CAD models with nonholonomic parametric surface constraints. In ASME International Mechanical Engineering Congress and Exposition, Vol. 15861. American Society of Mechanical Engineers, 235-242.

Donald D. Nelson, David E. Johnson, and Elaine Cohen. 2005. Haptic Rendering of Surface-to-Surface Sculpted Model Interaction. In ACM SIGGRAPH 2005 Courses (Los Angeles, California). ACM, New York, NY, 97-es.

Johannes C. C. Nitsche. 1971. Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 36 (1971), 9-15.

J. Tinsley Oden. 1994. Optimal h-p finite element methods. Computer Methods in Applied Mechanics and Engineering 112, 1 (1994), 309-331.

Julian Panetta, Qingnan Zhou, Luigi Malomo, Nico Pietroni, Paolo Cignoni, and Denis Zorin. Elastic Textures for Additive Fabrication. ACM Transactions on Graphics.
34ArticleJulian Panetta, Qingnan Zhou, Luigi Malomo, Nico Pietroni, Paolo Cignoni, and Denis Zorin. 2015. Elastic Textures for Additive Fabrication. ACM Transactions on Graphics 34, 4, Article 135 (July 2015), 12 pages. A curvilinear high order finite element framework for electromechanics: From linearised electro-elasticity to massively deformable dielectric elastomers. Roman Poya, Antonio J Gil, Rogelio Ortigosa, Ruben Sevilla, Javier Bonet, Wolfgang A Wall, Computer Methods in Applied Mechanics and Engineering. 329Roman Poya, Antonio J. Gil, Rogelio Ortigosa, Ruben Sevilla, Javier Bonet, and Wolf- gang A. Wall. 2018. A curvilinear high order finite element framework for elec- tromechanics: From linearised electro-elasticity to massively deformable dielectric elastomers. Computer Methods in Applied Mechanics and Engineering 329 (2018), 75-117. Make It Stand: Balancing Shapes for 3D Fabrication. Romain Prévost, Emily Whiting, Sylvain Lefebvre, Olga Sorkine-Hornung, ACM Transactions on Graphics (Proceedings of SIGGRAPH). 3210Romain Prévost, Emily Whiting, Sylvain Lefebvre, and Olga Sorkine-Hornung. 2013. Make It Stand: Balancing Shapes for 3D Fabrication. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 32, 4 (2013), 81:1-81:10. Collision and Self-Collision Handling in Cloth Model Dedicated to Design Garments. Xavier Provot, Computer Animation and Simulation. SpringerXavier Provot. 1997. Collision and Self-Collision Handling in Cloth Model Dedicated to Design Garments. In Computer Animation and Simulation. Springer, 177-189. A mortar segment-to-segment contact method for large deformation solid mechanics. A Michael, Tod A Puso, Laursen, Computer Methods in Applied Mechanics and Engineering. 193Michael A Puso and Tod A Laursen. 2004. A mortar segment-to-segment contact method for large deformation solid mechanics. Computer Methods in Applied Mechanics and Engineering 193, 6-8 (2004), 601-629. Poly-Spline Finite-Element Method. 
Teseo Schneider, Jérémie Dumas, Xifeng Gao, Mario Botsch, Daniele Panozzo, Denis Zorin, ACM Transactions on Graphics. 38Teseo Schneider, Jérémie Dumas, Xifeng Gao, Mario Botsch, Daniele Panozzo, and Denis Zorin. 2019a. Poly-Spline Finite-Element Method. ACM Transactions on Graphics 38, 3, Article 19 (March 2019), 16 pages. . Teseo Schneider, Jérémie Dumas, Xifeng Gao, Denis Zorin, Daniele Panozzo, Teseo Schneider, Jérémie Dumas, Xifeng Gao, Denis Zorin, and Daniele Panozzo. 2019b. PolyFEM. https://polyfem.github.io/. Decoupling Simulation Accuracy from Mesh Quality. Teseo Schneider, Yixin Hu, Jérémie Dumas, Xifeng Gao, Daniele Panozzo, Denis Zorin, ACM Transactions on Graphics. 37Teseo Schneider, Yixin Hu, Jérémie Dumas, Xifeng Gao, Daniele Panozzo, and Denis Zorin. 2018. Decoupling Simulation Accuracy from Mesh Quality. ACM Transactions on Graphics 37, 6 (Oct. 2018). A Large-Scale Comparison of Tetrahedral and Hexahedral Elements for Solving Elliptic PDEs with the Finite Element Method. Teseo Schneider, Yixin Hu, Xifeng Gao, Jérémie Dumas, Denis Zorin, Daniele Panozzo, ACM Transactions on Graphics. 412314 pagesTeseo Schneider, Yixin Hu, Xifeng Gao, Jérémie Dumas, Denis Zorin, and Daniele Panozzo. 2022. A Large-Scale Comparison of Tetrahedral and Hexahedral Elements for Solving Elliptic PDEs with the Finite Element Method. ACM Transactions on Graphics 41, 3, Article 23 (March 2022), 14 pages. Isogeometric high order mesh generation. Teseo Schneider, Daniele Panozzo, Xianlian Zhou, Computer Methods in Applied Mechanics and Engineering. 386114104Teseo Schneider, Daniele Panozzo, and Xianlian Zhou. 2021. Isogeometric high order mesh generation. Computer Methods in Applied Mechanics and Engineering 386 (2021), 114104. Isogeometric dual mortar methods for computational contact mechanics. Alexander Seitz, Philipp Farah, Johannes Kremheller, Barbara I Wohlmuth, Wolfgang A Wall, Alexander Popp, Computer Methods in Applied Mechanics and Engineering. 
301Alexander Seitz, Philipp Farah, Johannes Kremheller, Barbara I. Wohlmuth, Wolf- gang A. Wall, and Alexander Popp. 2016. Isogeometric dual mortar methods for computational contact mechanics. Computer Methods in Applied Mechanics and Engineering 301 (2016), 259-280. Comparison of high-order curved finite elements. Ruben Sevilla, Sonia Fernández-Méndez, Antonio Huerta, Internat. J. Numer. Methods Engrg. 87Ruben Sevilla, Sonia Fernández-Méndez, and Antonio Huerta. 2011. Comparison of high-order curved finite elements. Internat. J. Numer. Methods Engrg. 87, 8 (2011), 719-734. Interval Methods for Multi-Point Collisions between Time-Dependent Curved Surfaces. John M Snyder, Adam R Woodbury, Kurt Fleischer, Bena Currin, Alan H Barr, Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. the 20th Annual Conference on Computer Graphics and Interactive TechniquesAnaheim, CAAnnual Conference Series (Proceedings of SIGGRAPHJohn M. Snyder, Adam R. Woodbury, Kurt Fleischer, Bena Currin, and Alan H. Barr. 1993. Interval Methods for Multi-Point Collisions between Time-Dependent Curved Surfaces, In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (Anaheim, CA). Annual Conference Series (Proceedings of SIGGRAPH), 321-334. Mortaring by a method of. Rolf Stenberg, J. A. Nitsche. Computational Mechanics. Rolf Stenberg. 1998. Mortaring by a method of J. A. Nitsche. Computational Mechanics (Jan. 1998). Finite-dimensional contact mechanics. E David, Stewart, Philosophical Transactions of the Royal Society A. 359David E Stewart. 2001. Finite-dimensional contact mechanics. Philosophical Transac- tions of the Royal Society A 359 (2001), 2467-2482. Accurate Surface Embedding for Higher Order Finite Elements. Stefan Suwelack, Dimitar Lukarski, Vincent Heuveline, Rüdiger Dillmann, Stefanie Speidel, Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 
the 12th ACM SIGGRAPH/Eurographics Symposium on Computer AnimationAnaheim, California; New York, NYACMSCA '13)Stefan Suwelack, Dimitar Lukarski, Vincent Heuveline, Rüdiger Dillmann, and Stefanie Speidel. 2013. Accurate Surface Embedding for Higher Order Finite Elements. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Anaheim, California) (SCA '13). ACM, New York, NY, 187-192. Contact treatment in isogeometric analysis with NURBS. I Temizer, P Wriggers, T J R Hughes, Computer Methods in Applied Mechanics and Engineering. 200I. Temizer, P. Wriggers, and T.J.R. Hughes. 2011. Contact treatment in isogeometric analysis with NURBS. Computer Methods in Applied Mechanics and Engineering 200, 9 (2011), 1100-1112. Elastically Deformable Models. Demetri Terzopoulos, John Platt, Alan Barr, Kurt Fleischer, Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '87). the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '87)New York, NYACMDemetri Terzopoulos, John Platt, Alan Barr, and Kurt Fleischer. 1987. Elastically Deformable Models. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '87). ACM, New York, NY, 205-214. Efficient and accurate collision response for elastically deformable models. Mickeal Verschoor, Andrei C Jalba, ACM Transactions on Graphics. 38Mickeal Verschoor and Andrei C Jalba. 2019. Efficient and accurate collision response for elastically deformable models. ACM Transactions on Graphics 38, 2, Article 17 (March 2019), 20 pages. Geometric Collisions for Time-Dependent Parametric Surfaces. Alan H Brian Von Herzen, Harold R Barr, Zatz, Computer Graphics (Proceedings of SIG-GRAPH). 24Brian Von Herzen, Alan H. Barr, and Harold R. Zatz. 1990. Geometric Collisions for Time-Dependent Parametric Surfaces. Computer Graphics (Proceedings of SIG- GRAPH) 24, 4 (Sept. 1990), 39-48. 
A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection. Bolun Wang, Zachary Ferguson, Teseo Schneider, Xin Jiang, Marco Attene, Daniele Panozzo, ACM Transactions on Graphics. 4016Bolun Wang, Zachary Ferguson, Teseo Schneider, Xin Jiang, Marco Attene, and Daniele Panozzo. 2021. A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection. ACM Transactions on Graphics 40, 5, Article 188 (Oct. 2021), 16 pages. Finite Element Algorithms for Contact Problems. Peter Wriggers, Archives of Computational Methods in Engineering. 2Peter Wriggers. 1995. Finite Element Algorithms for Contact Problems. Archives of Computational Methods in Engineering 2 (Dec. 1995), 1-49. A finite element method for contact using a third medium. P Wriggers, J Schröder, A Schwarz, Computational Mechanics. 52P. Wriggers, J. Schröder, and A. Schwarz. 2013. A finite element method for contact using a third medium. Computational Mechanics 52, 4 (Oct. 2013), 837-847.
[]
[ "The SSL Interplay: Augmentations, Inductive Bias, and Generalization", "The SSL Interplay: Augmentations, Inductive Bias, and Generalization" ]
[ "Vivien Cabannes ", "Bobak T Kiani ", "Randall Balestriero ", "Yann Lecun ", "Alberto Bietti " ]
[]
[]
Self-supervised learning (SSL) has emerged as a powerful framework to learn representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study such an interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in a theory friendly setup, and highlight several insights for SSL practitioners that arise from our theory.
10.48550/arxiv.2302.02774
[ "https://export.arxiv.org/pdf/2302.02774v2.pdf" ]
256,615,325
2302.02774
601e24bf710bad5f691c4af2b40b8a7402f10d68
The SSL Interplay: Augmentations, Inductive Bias, and Generalization

Vivien Cabannes, Bobak T. Kiani, Randall Balestriero, Yann LeCun, Alberto Bietti

Self-supervised learning (SSL) has emerged as a powerful framework to learn representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study such an interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in a theory-friendly setup, and highlight several insights for SSL practitioners that arise from our theory.

Introduction

Self-supervised learning (SSL) aims to construct useful representations of data without the need for pre-constructed labels. Due to the recent success and widespread applicability of SSL, established methods for training large neural networks now incorporate pre-training of models in an unsupervised manner over large amounts of data, before fine-tuning/probing them over downstream datasets (Devlin et al., 2019; Chen et al., 2020; Brown et al., 2020; Radford et al., 2021). Self-supervised pretraining generally aims to render the model invariant to certain distortions/views of the inputs, in order to capture useful features for downstream tasks (e.g., Chen et al., 2020; Caron et al., 2020; Grill et al., 2020; Caron et al., 2021; Bardes et al., 2022). Though very powerful, SSL methods can be challenging to implement properly. They tend to suffer from various practical issues, such as instability and collapse during training and the need to carefully tune parameters related to the architecture, optimization algorithm, representation dimension, and form of augmentations.
These different aspects of pretraining can lead to widely different behaviors and representations, as illustrated for instance in Figure 1. These challenges motivate new theoretical insights to better understand why such issues arise and how to better address them. Our study focuses on the joint-embedding framework and characterizes learned representations for given choices of input distributions, data augmentations, and architecture. To obtain a fine-grained picture, we study linear classes of functions endowed with a reproducing kernel, and analyze a theoretically friendly loss function that models both contrastive and non-contrastive methods. Our work generalizes the discrete data setting of HaoChen et al. (2021) and the finite-dimensional setting of Saunshi et al. (2022), encompassing more expressive nonparametric models, potentially with universal approximation properties, which can capture certain properties of architectures through their limiting kernels (Jacot et al., 2018). Our contributions are as follows:

1. We unveil two central integral operators: an "intrinsic" one that depends on the input distribution and choice of augmentations, and another capturing the inductive bias associated with the model of computation.
2. We provide new bounds on the downstream generalization error that are sharper than previous work, and which can handle distribution shift between data before and after performing augmentations.
3. We propose new generalization bounds on the pretraining excess risk via tools from convex analysis. This analysis yields novel insights, including an understanding of the benefits of using multiple augmentations per sample (e.g., "multi-crop").
4. We detail several examples where optimal representations are found in closed form, illustrating the role of augmentations, architecture, and regularization in forming representations.
5. We discuss several practical insights for SSL practitioners that emerge from our theory, in particular on how design choices in pretraining may affect downstream performance, and on how to avoid collapse of representations.

Related work. Foundations for theoretically analyzing SSL have emerged in the past few years; particularly relevant to our work are Balestriero & LeCun (2022) and Kiani et al. Notably, HaoChen et al. (2021) recently leveraged tools in spectral graph theory to characterize guarantees on SSL performance under clustering assumptions. These assumptions were deemed impractical by Saunshi et al. (2022), who highlighted the importance of incorporating inductive bias to obtain provable guarantees. This line of work was extended to multi-modal SSL by Lee et al. (2021), where in essence the central symmetric operator T is replaced by a non-symmetric one, and the eigendecomposition is replaced by the singular one. The role of inductive bias has also been scrutinized through analyses of feature learning in training dynamics by Wen & Li (2021) and Tian (2022).

Setup

Machine learning streamlines the task of creating algorithms for finding patterns in data. An algorithm is conceptualized as a mapping f from an input x ∈ X to an output y ∈ Y. To construct this mapping f : X → Y, one can choose a measure of disagreement ℓ : Y × Y → R, and minimize the risk

    R(f) = E_{(X,Y)∼ρ}[ℓ(f(X), Y)],    (1)

for ρ ∈ Δ_{X×Y} a distribution on I/O pairs. We denote by f* ∈ arg min R an optimal I/O map according to the risk. Mapping raw inputs (e.g., arrays of pixels) to outputs (e.g., classifying an animal in an image) is in general a challenging task. An effective technique consists of first extracting (or engineering) meaningful features ψ : X → R^k from input data before using those features to search for f under the form g ∘ ψ, for g : R^k → Y a simple function.
Though features ψ can be hand-engineered, representation learning aims at improving such designs via unsupervised learning procedures. On the one hand, reconstruction-based methods mask or add noise to inputs via a mapping M_x and aim to reconstruct the original input x from the features g ∘ ψ, using g, a simple prediction head. Large language models largely rely on this paradigm, usually learning ψ by completing sentences M_x where word tokens are masked (e.g., Devlin et al., 2019). On the other hand, joint-embedding methods learn ψ by leveraging invariance to small perturbations of the semantic information contained in inputs. This is the paradigm we shall focus on.

Table 1. Analogy between practice and theory that this paper proposes to help disentangle the various phenomena of SSL training.

    Practice          | Theory              | Quantity
    ------------------|---------------------|---------
    Augmentation      | Spectral embedding  | T
    Architecture      | Space of functions  | K
    Optimization      | Regularization      | λ
    Subtle interplay  |                     | T_λ

Recently, joint-embedding methods have relied heavily on the concept of data augmentation, such as small rotations, translations, and color jittering of images. In particular, contrastive methods learn ψ by enforcing that if two augmentations ξ and ξ′ come from the same data point, their representations ψ(ξ) and ψ(ξ′) are close, while if they come from different data points, their representations are far away from one another (e.g., Chen et al., 2020). Non-contrastive methods only enforce similarities of augmented data points and avoid collapse by enforcing richness of the representation (see, e.g., Bardes et al., 2022). In the following, we focus on a theoretically friendly variant of VICReg (Balestriero & LeCun, 2022) with parameter β > 0, defined for ψ : X → R^k by

    L(ψ) = β E_X E_{ξ,ξ′}[∥ψ(ξ) − ψ(ξ′)∥² | X] + ∥E_ξ[ψ(ξ)ψ(ξ)^⊤] − I∥²₂,    (2)

where pairs of inputs/augmentations (X, ξ) follow a distribution µ ∈ Δ_{X×X}, whose conditional (ξ | X) arises from the choice of augmentation.
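As a concrete illustration of the loss in Eq. (2), a minimal NumPy estimate from a minibatch of paired augmentations can be sketched as follows (the function and variable names are ours, not from the paper):

```python
import numpy as np

def ssl_loss(psi_a, psi_b, beta=1.0):
    """Minibatch estimate of the loss in Eq. (2) (a sketch).

    psi_a, psi_b: (N, k) embeddings of two augmentations of the same N
    inputs. First term: beta * E ||psi(xi) - psi(xi')||^2 (invariance).
    Second term: ||E[psi psi^T] - I||^2 (squared Frobenius norm here),
    pushing the k coordinates of psi towards orthonormality in L2.
    """
    invariance = beta * np.mean(np.sum((psi_a - psi_b) ** 2, axis=1))
    z = np.concatenate([psi_a, psi_b], axis=0)     # pool both views
    second_moment = z.T @ z / z.shape[0]           # estimate of E[psi psi^T]
    collapse = np.sum((second_moment - np.eye(z.shape[1])) ** 2)
    return invariance + collapse

# A perfectly invariant embedding with orthonormal coordinates attains 0.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((1000, 3)))   # orthonormal columns
psi = q * np.sqrt(1000.0)    # rescale so that psi^T psi / N = I
loss_ideal = ssl_loss(psi, psi)
```

Both terms are visible in isolation: feeding the same view twice zeroes the invariance term, while a non-orthonormal embedding pays through the second term.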
The first term in L enforces invariance of the representation ψ to two augmentations ξ and ξ′ of the same input X, while the second term lowers the risk of collapse by pushing the coordinates ψ_i : X → R of ψ = (ψ_i)_{i∈[k]} to be orthogonal in L².

Remark 1 (Contrastive learning with L). When β = 1, the population loss L is equivalent to the spectral contrastive loss studied in HaoChen et al. (2021) as a theoretically friendly proxy for SimCLR (Chen et al., 2020). In other terms, L analyzes both contrastive and non-contrastive approaches to representation learning.

Given a representation ψ, one can optimize for f through linear probing by constructing f = g ∘ ψ, where g is a linear function; f thereby lies in the class of functions

    F = {x → w^⊤ψ(x) | w ∈ R^k}.    (3)

In practice, one might not know the optimal ψ, but can estimate it as ψ̂ from empirical data, leading to an estimate F̂ of this class of functions.

Representation learning

In this section, we study the representations induced by pretraining with specific augmentations and inductive biases.

Closed-form solution

Equation (2) admits a closed-form solution for ψ upon noting that the invariant part is a quadratic form.

Lemma 2 (Spectral embedding). There exists a linear positive symmetric operator L in L² for which the operator I − T is positive and

    E_X E_{ξ,ξ′}[∥ψ(ξ) − ψ(ξ′)∥² | X] = Σ_{i∈[k]} ψ_i^⊤ L ψ_i.

To be consistent with previous literature, we will rather use T = I − L/2, which is also a linear positive symmetric operator, and is defined, for ψ₁, ψ₂ ∈ L², by

    ψ₁^⊤ T ψ₂ = E_X E_{ξ,ξ′}[ψ₁(ξ) ψ₂(ξ′) | X].

As a consequence, if (λ_i) are the eigenvalues of T and (f_i) are the corresponding eigenvectors, a minimizer of L is ψ_i = √µ_i f_i with µ_i = 1 − β + βλ_i.
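Lemma 2 can be checked numerically on a matrix discretization of T; the toy matrix below is our own choice for illustration:

```python
import numpy as np

def spectral_embedding(T, k, beta=1.0):
    """Lemma 2 on a matrix discretization of T: with eigenpairs
    (lambda_i, f_i) of T kept in decreasing order, return the k columns
    psi_i = sqrt(mu_i) f_i where mu_i = 1 - beta + beta * lambda_i."""
    lam, F = np.linalg.eigh(T)                     # ascending order
    lam, F = lam[::-1][:k], F[:, ::-1][:, :k]
    mu = 1.0 - beta + beta * lam
    return F * np.sqrt(np.clip(mu, 0.0, None))     # columns are psi_i

# Toy T with spectrum {1.0, 0.8, 0.5, 0.2, 0.0}, so that 0 <= T <= I.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
T = Q @ np.diag([1.0, 0.8, 0.5, 0.2, 0.0]) @ Q.T
psi = spectral_embedding(T, k=2, beta=1.0)   # with beta = 1, mu_i = lambda_i
```

Since the eigenvectors f_i are unit-norm, the column norms of the returned embedding directly expose the weights √µ_i.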
Lemma 2 is closely tied to the guiding principle in unsupervised learning that a good representation of data should minimize variations over the manifold of the data (Cabannes et al., 2023), and to techniques that learn such representations through spectral decomposition of a central operator (see, e.g., Coifman & Lafon, 2006).

Search within a linear class of functions

In this more technical section, we study solutions of L for ψ belonging to a linear class of functions. The coordinates of the mapping ψ : X → R^k are typically searched within a space of functions Ψ ⊂ R^X, leading to ψ ∈ Ψ^k. In our theoretically friendly setup, we assume that Ψ is a linear class of functions endowed with a Hilbertian topology such that the linear evaluations ψ → ψ(x) are continuous for almost all x ∈ X. The theory of reproducing kernel Hilbert spaces (Scholkopf & Smola, 2001) asserts that Ψ can be parameterized by a Hilbert space H and a mapping φ : X → H such that

    Ψ = {x → f_θ(x) | f_θ(x) = ⟨θ, φ(x)⟩_H, θ ∈ H}.    (4)

This generalizes the setting of HaoChen et al. (2021), where X is assumed to be finite and Ψ is parameterized by H = R^X and φ(x) = δ_x, as well as the setting of Saunshi et al. (2022), where H is assumed to be finite dimensional. To describe architectures such as neural networks with such a linear structure, it is common to linearize those models (e.g., Jacot et al., 2018) as

    ψ_θ(x) = ψ_{θ₀}(x) + ⟨∇_{θ₀} ψ_{θ₀}(x), θ − θ₀⟩ + o(∥θ − θ₀∥),

where θ are the network parameters, assumed close to their initialization θ₀, and ψ_θ is the neural network. In this case, we may take φ = ∇_{θ₀} ψ_{θ₀}, which arguably describes some regimes of wide neural networks (Lee et al., 2019).

Figure 2. Interplay between T and K as a function of λ. Illustration of Proposition 4 in a setting where (λ_i) = (.9, .75, .5) and (∥θ_i∥²) = (.4, .25, .125). The plot displays the eigenvalues λ_i − λ∥θ_i∥² associated with three different eigenfunctions as a function of λ; β is set to one for convenience. When λ = 0, the minimizer ψ* : X → R of (2) is defined through T, here ψ* = f₁ (i = 1, shown in blue); when λ is big, ψ* = f₃ (green) mainly depends on K. In the middle, there is an interplay between these two regimes leading to ψ* = f₂ (orange). The three regimes are named the "augmentation", the "architecture" (or VCReg), and the "interplay" regimes respectively. This abstract setting can be instantiated with a two-layer ReLU network and cropping, as detailed in Figure 8.

To minimize L in practice and improve generalization, a regularization parameter is typically introduced. The following lemma provides a closed-form solution of the regularized variant of L.

Lemma 3 (Regularized population loss). For Θ ∈ R^k ⊗ H and a regularizer λ > 0, the regularized loss L(SΘ) + λ∥Θ∥²₂ can be minimized in closed form with the operator

    T_λ = (1 − β)I + βT − λK⁻¹,    (5)

where K = SS^⊤ for S : H → L²(µ_Ξ); θ → f_θ the embedding of H in L². Specifically, if (λ_i) are the (decreasing) eigenvalues of T_λ and (f_i) the corresponding eigenfunctions, a minimizer is given by ψ_i = √(max{λ_i, 0}) f_i.

The need for inductive bias

Two different points of view motivate the introduction of the regularizer λ∥Θ∥² leading to the operator T_λ. In the classical viewpoint of statistical learning theory, one would like to retrieve the eigenfunctions of T to minimize L (Lemma 2). However, when solely accessing finitely many samples of data, eigenfunctions of T should be searched within a space of finite capacity (i.e., {f ∈ Ψ | ∥f∥²_Ψ ≤ λ⁻¹}). Though fewer samples are needed for smaller models (e.g., the fewer the neurons and layers in a deep network), such small models are unlikely to be expressive enough to represent the ideal solutions. This echoes the classical trade-off between approximation and estimation error.
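Lemma 3 can likewise be illustrated with matrices standing in for the operators T and K; the diagonal matrices below, built from the values quoted in the Figure 2 caption, are our own stand-ins, and the square root on max{λ_i, 0} follows the weighting of Lemma 2:

```python
import numpy as np

def regularized_embedding(T, K, k, beta=1.0, lam=0.1):
    """Sketch of Lemma 3 with matrices standing in for the operators:
    form T_lambda = (1 - beta) I + beta T - lam K^{-1}, then embed with
    psi_i = sqrt(max(lambda_i, 0)) f_i for the k largest eigenpairs
    (lambda_i, f_i) of T_lambda. K must be invertible."""
    n = T.shape[0]
    T_lam = (1.0 - beta) * np.eye(n) + beta * T - lam * np.linalg.inv(K)
    T_lam = (T_lam + T_lam.T) / 2                  # symmetrize for safety
    eigvals, F = np.linalg.eigh(T_lam)             # ascending order
    order = np.argsort(eigvals)[::-1][:k]
    return F[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))

# Figure 2 setting: (lambda_i) = (.9, .75, .5) and (||theta_i||^2) = (.4, .25, .125).
# Taking the eigenvalues of K to be 1 / ||theta_i||^2 makes lam * K^{-1}
# penalize each eigenfunction by lam * ||theta_i||^2, as in Proposition 4.
T = np.diag([0.9, 0.75, 0.5])
K = np.diag([1 / 0.4, 1 / 0.25, 1 / 0.125])
psi = regularized_embedding(T, K, k=1, beta=1.0, lam=1.5)  # "interplay" regime
```

At λ = 1.5 the top eigenfunction of T_λ is the second one, matching the middle regime described in the Figure 2 caption.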
In the case of Laplacians, one can assume that the eigenfunctions of T are smooth and thereby belong to a small space of functions that is well approximated with a finite model of computation. We refer the curious reader to Cabannes et al. (2021a) for results in this vein when I − T is the real Laplacian in L². Another take was suggested by Saunshi et al. (2022), who pointed out that eigenvalues of T can have large multiplicity in realistic situations (in particular in the non-localized augmentation setting of Section 4.2), meaning that the space F is not uniquely defined from the loss L. As a consequence, defining the optimal solution solely from T is somewhat ill-posed, whereas, when K is properly chosen, T_λ could define a "more principled" representation ψ. Paradoxically, with this viewpoint, bias could reduce the approximation error. Figure 3 illustrates such an idea. It leverages the following interpretation of the inductive bias in the friendly setting where T and K commute.

Proposition 4. If T and K commute, and if (λ_i) are the eigenvalues of T and (f_i) its eigenfunctions, then there exist (θ_i) such that f_i = f_{θ_i} (4). Moreover, the optimal representations minimizing the regularized loss are the f_i that maximize βλ_i − λ∥θ_i∥².

In other terms, the regularization biases towards representations that have a small complexity with respect to the model of computation. Lemma 3 shows an interesting behavior of the VCReg loss (β = 0, i.e., VICReg without the invariance term). In this setting, the optimal ψ retrieves the largest eigenfunctions of K, recovering kernel PCA. Learning downstream tasks with linear probing of the resulting ψ is equivalent to linear regression with an eigenvalue cut-off, which is a powerful spectral filtering technique (see, e.g., Bun et al., 2017).
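The selection rule of Proposition 4 can be reproduced directly with the eigenvalues quoted in the Figure 2 caption; sweeping λ recovers the three regimes:

```python
import numpy as np

# Proposition 4 with the values from Figure 2: the learned eigenfunction is
# the one maximizing beta * lambda_i - lam * ||theta_i||^2. Sweeping lam
# recovers the three regimes (augmentation, interplay, architecture).
beta = 1.0
lam_T = np.array([0.9, 0.75, 0.5])        # eigenvalues of T
theta_sq = np.array([0.4, 0.25, 0.125])   # ||theta_i||^2

def selected(lam):
    return int(np.argmax(beta * lam_T - lam * theta_sq))

regimes = [selected(lam) for lam in (0.0, 1.5, 3.0)]  # f1, then f2, then f3
```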
Illustrative examples

The analysis of this paper relies on two central operators: T, which is "intrinsically" defined from the data distribution and augmentations, and K, which relates to the model of computation (e.g., the network architecture). Once those operators are chosen, Section 5 provides a sharp analysis of convergence and generalization with SSL in the kernel regime. In essence, Assumption 3 requires that the target function (downstream) align well with the learned representation (upstream) when given infinite data. Here, the effect of T and of the inductive bias introduced by λK⁻¹ on the learned representation can appear abstract. To provide intuition and outline important properties of these operators, this section lays out several concrete examples to help practitioners better understand the role of augmentations and their interplay with the inductive bias of the architecture.

Figure 3. Trade-off on eigenvalues between T and K. Illustration of a harmonic setting where T and K are diagonalized in the same basis. This basis is parametrized by an "invariance score" (x = m in (54)) and a "complexity score" (y = |S| in (54)). The eigenvalues λ_{x,y}(A) for A ∈ {T, K, T_λ} are represented with colors and displayed in a grid associated with x ∈ [15] and y ∈ [8]. The sole use of the operator T biases towards invariance (lower x) with high complexity (lower y), while the sole use of K biases toward low complexity. The interplay between the two results in T_λ, whose biggest eigenfunctions have high invariance and low complexity, and corresponds to an ideal representation ψ.

Two different perspectives have emerged to understand learned representations in SSL. One intuition comes from the spectral clustering literature and is the object of Subsection 4.1. The other intuitive way to understand SSL is based on harmonic analysis and is the object of Subsection 4.2. All in all, this section generalizes previous works by dropping strong clustering assumptions on the data, showing that what really matters are the eigenfunctions of T, which eventually capture clustering structures when such clustering assumptions are invoked. It further uses harmonic analysis tools to better describe these eigenfunctions, as suggested by Saunshi et al. (2022) and detailed in Table 2.

Low variation with localized augmentations

When augmentations (ξ | X) are localized around the input X, optimizing the loss L (2) biases towards small gradients of ψ along the directions of augmentations. Formally, for ψ : X → R, using a first-order Taylor expansion,

    E[∥ψ(ξ) − ψ(ξ′)∥² | X] ≃ E[⟨∇ψ(X), ξ − ξ′⟩² | X].

Under isotropic augmentations, the objective simplifies as

    E[⟨∇ψ(X), ξ − ξ′⟩² | X] ∝ ∥∇ψ(X)∥²,

which enforces ψ to have small variations on densely populated regions of the input space, reminiscent of popular approaches to tackle representation and semi-supervised learning in the last two decades (van Engelen & Hoos, 2020). More generally, augmentations govern the important directions of invariance for ψ, recovering a finite-differences approach to the Tangent Prop algorithm (Simard et al., 1991). Low-variation methods are particularly useful when data display a clustering structure (cf. Figure 10 for an illustration with neural networks). If augmentations preserve the clustering structure, L is minimized by piecewise constant functions on each cluster, leading to useful features for downstream tasks that involve classifying different clusters (HaoChen et al., 2021; Schiebinger et al., 2015). The inductive bias further deforms those top eigenfunctions to be regular in a sense defined by Ψ (4), e.g., analytic if we use a radial basis function kernel (Sun & Zhou, 2008).
The role of augmentations

When augmentations are not localized, which is often the case in practice, harmonic analysis provides useful tools to study in depth the role of augmentations, in particular when data are uniform on the sphere or the hypercube. Our findings on the hypercube are summarized in Table 2. In such a setting, we show that common augmentations enforce smoothness, locality, or invariance to certain symmetries. For example, crops push ψ to focus on details that can appear within the crop size, filtering out long-range interactions between parts of the input that are likely to be spurious features. The following example formalizes this.

Example 1 (Cropping). Consider the hypercube setting where X = {−1, 1}^n and X is uniformly distributed. A basis of L²(X, R) is given by the parity functions χ_S : x ↦ ∏_{i∈S} x_i for all subsets S ⊆ [n]. Pre-training via cropping with window size v × w sets Tχ_S = 0 for all S whose support spans a window larger than v × w. For all other S, Tχ_S = λ_S χ_S, where λ_S decreases with the diameter of S.

In other words, pre-training with 2-D cropping eliminates the influence of functions which act globally outside of the cropping window. This, in effect, imparts a locality to the induced representation ψ which is often desirable for generalization. This example suggests that the ideal crop size should match the desired scale of details for ψ; e.g., on a dataset with fine-grained details such as iNaturalist, one should reduce the crop window size in comparison to a dataset such as ImageNet. Appendix D discusses further examples of augmentations, such as random noise or translations, and shows how they bias towards smooth or invariant eigenfunctions.

Interplay with the architecture

While the design of augmentations and architecture can be done separately, changes to the architecture and optimization scheme play an important role in the resulting optimal ψ.
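Before turning to this interplay, the mechanism of Example 1 can be reproduced in a stylized 1-D crop model. The instantiation below is our own, not the paper's 2-D setting: a "crop" keeps the bits inside a random contiguous window of size w and re-randomizes the rest uniformly, so that E[χ_S(ξ) | X = x] = χ_S(x) when S lies inside the window and 0 otherwise. The eigenvalue ⟨χ_S, Tχ_S⟩ can then be computed exactly by enumeration, and it decays with the diameter of S, vanishing once the diameter exceeds the window.

```python
from itertools import product

# Stylized 1-D crop on the hypercube {-1,1}^n (our illustration): keep bits
# inside a random contiguous window of size w, re-randomize the rest.
n, w = 8, 4
windows = [set(range(s, s + w)) for s in range(n - w + 1)]

def chi(S, x):
    out = 1
    for i in S:
        out *= x[i]
    return out

def eigval(S):
    # <chi_S, T chi_S> = E_x[ E[chi_S(xi) | X=x]^2 ], computed exactly.
    # Re-randomized bits average to 0, so E[chi_S(xi)|x] = chi_S(x) when S
    # fits inside the window, and 0 otherwise.
    total = 0.0
    for x in product((-1, 1), repeat=n):
        cond = sum(chi(S, x) for W in windows if S <= W) / len(windows)
        total += cond ** 2
    return total / 2 ** n

lams = [eigval({2, 3}), eigval({2, 3, 4}), eigval({2, 3, 4, 5}), eigval({1, 6})]
print(lams)  # decays with diam(S); zero once diam(S) > w
assert abs(lams[0] - 0.36) < 1e-9
assert lams[0] > lams[1] > lams[2] > lams[3] == 0
```

In this toy model the eigenvalue equals P(S ⊆ window)², so the decay rate is governed purely by how many crop windows can contain the support of χ_S.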
Generally, increasing the amount of inductive bias by increasing λ shifts ψ towards smoother functions, in the sense captured by the H norm, which we illustrate in Figure 4. In practice, the right amount of inductive bias to enforce (captured here by the parameter λ) is often set by a mix of intuition, common knowledge, and empirical evidence. For example, Caron et al. (2021) link the inductive bias of early stopping to beneficial outcomes, noting that "training longer [...] has been leading to worse performance".

Example 2 (Dot-product kernel). In the Boolean hypercube setting of Example 1, many linear models (4) take the form φ(x)^⊤φ(y) = h(⟨x, y⟩) (e.g., the classical NTK linearization of a fully connected layer), leading to an integral operator K that is diagonalizable by parity functions. More precisely, there exist coefficients (ν_i) such that Kχ_S = ν_{|S|} χ_S, where |S| is the cardinality of S and ν_{|S|} decreases with |S|. In the setting of crops, T pushes towards representations on parity functions with small diameter (ψ = (χ_S)_S for S with small diameters), while the inductive bias acts on the cardinality of the sets S, pushing towards the χ_S that maximize ν_{|S|}.

Formal derivations and additional examples are provided in Appendix D. For instance, in the case of translations, there is a similar interplay between a low-degree bias in K versus an invariance bias in T. We also consider convolutional architectures, which can impose locality through constraints on diam(S), on top of a low-degree bias. Figure 3 shows such trade-offs in eigenvalues, Figure 4 visualizes how this interplay may affect leading eigenfunctions in a spherical setup, and Figure 5 illustrates the resulting effect on different downstream tasks.

Convergence analysis

The following section analyzes guarantees on both the pretraining and downstream tasks. For simplicity, we consider the mean-square loss ℓ(y, y′) = ∥y − y′∥² with Y = R^{d_y}.
The studies of many losses can be reduced to the least-squares case thanks to calibration inequalities (Bartlett et al., 2006) or self-concordance properties (Ostrovskii & Bach, 2018). To precisely study convergence rates, we consider the kernel regime of Section 3.2, where F is specified through Θ* of Lemma 3 as

F = { x ↦ w^⊤Θ*φ(x) | w ∈ R^k },   (3)

and F̂ is defined similarly with Θ̂ as an estimate of Θ*. In the following, (f_i) denote the eigenfunctions of T_λ ordered by decreasing eigenvalues, and λ is considered to be fixed throughout this section.

Dealing with distribution shift

Self-supervised learning algorithms often incorporate strong augmentations, leading to potentially different marginal distributions over inputs and augmentations. This discrepancy is often overlooked, with many theoretical works implicitly assuming ρ_X = µ_Ξ. In practice, the marginal distribution ρ_X of inputs in the downstream task can be meaningfully different from the marginal distribution of augmentations µ_Ξ, on which we have imposed orthogonality of the representation ψ in the pretraining task. However, the optimal representation ψ is likely to be invariant to augmentations, meaning that ideally, ψ(X) should have the same distribution when X ∼ µ_Ξ or X ∼ ρ_X, which we write formally as ψ_#µ_Ξ = ψ_#ρ_X. Moreover, augmentations are likely to spread the input data distribution, leading to the domination ρ_X ≪ µ_Ξ. This motivates the following assumptions and definitions.

[Table 2. Effect of common augmentations on the optimal representation ψ through the operator T. Without augmentations, ψ could match any Fourier basis function. Augmentations filter out some of those by attenuating their eigenvalues in T, and the architecture will push ψ to pick some specific frequencies among the remaining ones through the operator K. The table stylizes the effect of usual augmentations on parity functions over bit streams. We refer the reader to Appendix D for further details and derivations.]
Assumption 1 (Low expansion). There exists c_r > 0 such that for any function f in the original space of functions Ψ defined in (4), ∥f∥²_{L²(ρ_X)} ≤ c_r ∥f∥²_{L²(µ_Ξ)}.

Assumption 2. For any i smaller than the number of positive eigenvalues of T_λ, the projection of the target f* on f_i in L²(µ_Ξ) coincides with the projection on f_i in L²(ρ_X).

To make those two concepts more concrete, we provide three examples below.

Example 3. If ρ_X has a density against µ_Ξ which is bounded from below by δ ∈ (0, 1] on its support, i.e., µ_Ξ = δρ_X + (1 − δ)µ⊥ with µ⊥ ∈ ∆_X, then Assumption 1 is met with c_r = 1/δ.

Example 4. Let Σ_τ = E_{X∼τ}[φ(X)φ(X)^⊤] be the covariance matrix of φ under the distribution τ. When there exists c such that Σ_{ρ_X} ⪯ cΣ_{µ_Ξ} (i.e., cΣ_{µ_Ξ} − Σ_{ρ_X} is positive semidefinite), then Assumption 1 holds with c_r = c.

[Figure caption: "... with only f*_3 invariant to translations. K is built from a dot-product kernel that acts as a regularizer on degrees, while T is built from local translations. Designing ψ from T alone (λ = 0) is helpful to learn globally invariant polynomials in the downstream, while increasing the regularization λ helps to learn polynomials of small degree. Experiment details in Appendix E.2, and Figure 11 showcases a similar behavior for neural networks."]

Example 5. If ψ_♯µ_Ξ = ψ_♯ρ_X holds for the optimal representation ψ = (f_i), with (f_i) the positive eigenfunctions of T_λ, and there exists a measurable function g : R^k → Y such that f* = g ∘ ψ, then Assumption 2 is verified.

In essence, Assumptions 1 and 2 allow for the incorporation of augmented data that does not resemble the original data, as long as the model of computation (Assumption 1) and training via the VICReg loss (Assumption 2) do not bias too much towards this aberrant augmented data. Example 3 states that when the augmented data mostly looks like the original samples, one does not have to worry about bias introduced by the model of computation.
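The constant c_r = 1/δ of Example 3 can be checked directly on a finite space; the toy distributions below are arbitrary and purely illustrative. If µ = δρ + (1 − δ)µ⊥, then ∥f∥²_{L²(µ)} ≥ δ∥f∥²_{L²(ρ)} for every f, hence the ratio of the two norms never exceeds 1/δ.

```python
import numpy as np

rng = np.random.default_rng(1)

m, delta = 50, 0.3
rho = rng.dirichlet(np.ones(m))          # downstream input distribution
perp = rng.dirichlet(np.ones(m))         # "extra" augmentation mass
mu = delta * rho + (1 - delta) * perp    # distribution of augmentations

fs = rng.normal(size=(1000, m))          # random test functions on the atoms
ratios = (fs**2 @ rho) / (fs**2 @ mu)    # ||f||^2_rho / ||f||^2_mu
assert ratios.max() <= 1 / delta + 1e-9
print("max ratio:", ratios.max(), "bound 1/delta:", 1 / delta)
```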
Example 4 gives a more relaxed guarantee based on second-order moments. Finally, Example 5 states that one need not worry about the idiosyncrasies of the augmented data if the learned representation confounds augmented data with their original samples.

Generalization on downstream tasks

This section provides comprehensive generalization guarantees on supervised downstream tasks. The following assumption ensures that the target function f*(x) = E_ρ[Y | X = x] of the downstream task is well represented by the pretraining problem.

Assumption 3 (Source condition). f* belongs to the positive eigenspace of T_λ, i.e., f* ∈ Span{f_i | λ_i > 0}.

Example 6 (Cluster assumption). If the support of the density µ_Ξ has k connected components, f* is constant on those clusters, and λ = 0, then Assumption 3 holds.

We now give a simplified version of our downstream guarantee. See Theorem 4 in Appendix B for the full statement.

Theorem 1 (Downstream error). Let (X_i, Y_i) ∼ ρ^⊗n be n samples drawn from the distribution for the downstream task and ℓ be the square loss. Define k_λ < +∞ as the number of strictly positive eigenvalues of T_λ. Under Assumptions 1, 2, and 3, after a transitory regime, the average excess risk of the optimally-regularized empirical risk minimizer f_n is

E[R(f_n) − R(f*)] ≤ 2k_e ε²/n + (log(n)^{1.1}/n) ∥f*∥_{L²(ρ)} + c²_{f,T_λ} (L_k(SΘ) − L_k(SΘ*)) + c_{f,k},   (6)

where ε² is the noise level of Y (the supremum of conditional variances), k_e ≤ k is the effective dimension of the representation ψ = Θφ on the downstream task, c_{f,k} ≤ (k_λ − k)_+ ∥f*∥²_{L²(ρ_X)} is a constant relating to the concentration of the energy of f*, the target function on the downstream task, with respect to the eigenspaces of T_λ, c_{f,T_λ} ≤ ∥T_λ^{-1} f*∥ is a similar constant taking into account the decay of eigenvalues of T_λ, and the index k in L_k indicates that we search over ψ : X → R^k. The results of Theorem 1 can be seen as a bias-variance decomposition.
A variance term, due to misspecified linear regression, displays rates in k log(n)/n. The log(n) factor is actually an artefact of our derivations, and could be removed with Theorem 1 of Mourtada et al. (2022). A bias term relates to the approximation error. It captures both the hardness of learning f* with T_λ, through the constants c_{f,T_λ} and c_{f,k}, and the error made on the pre-training task through L − L*. Note that the proof of Theorem 1 mindfully avoids bounding c_{f,T_λ} by ∥T_λ^{-1}∥_op ∥f*∥, which would introduce the inverse of the spectral gap of T_λ in the bound, and would not characterize well situations where the target function f* is actually easy to learn with T_λ. We also remark that for classification tasks, recent work shows that under mild assumptions on ρ_X and low-noise conditions, it should be possible to convert the rates of Theorem 1 into exponentially fast rates for the zero-one loss (Cabannes et al., 2021b). This is particularly the case under the cluster setting studied by HaoChen et al. (2021); see also Rigollet (2007) for fast rates in this setting. The theoretical convergence rates of Theorem 1 are validated experimentally in Figure 6.

Pretraining guarantees

Theorem 1 above states that representations with small pretraining loss can solve downstream tasks that satisfy Assumption 3, but it does not address how difficult it is to find such representations. This section aims to bridge that gap. The following theorem details convergence rates of the empirical risk minimizer using Rademacher complexity arguments.

Theorem 2 (Empirical risk minimizer). Let Θ_n ∈ R^k ⊗ H be the minimizer of the unbiased regularized empirical version of L based on a dataset D_n. Assume that D_n is built from n input samples (X_i) ∼ µ_X^⊗n and m augmentations per sample (ξ_ij) ∼ µ|_{X_i}^⊗m. Then the average excess risk is bounded by

E_{D_n}[L(SΘ_n)] − L(SΘ) ≤ (12κ² k_λ/√n)(1 + κ² k_λ),   (7)

where κ is a bound on ∥φ(X)∥.
Note that the proof of Theorem 2 proceeds with a loose bound on the variance of the empirical risk, which is mainly due to the difficulty of dealing with the non-exchangeability of the samples (ξ_ij). In essence, the ease of minimizing L depends on both the variance of L when estimated with empirical data (or the variance of stochastic gradients when performing SGD), and the size of the space where we aim to find representations ψ : X → R^k. With stronger assumptions on the distribution of φ(ξ) (e.g., data are clustered, and the law of (ξ | X) is invariant per cluster), one could show much better behavior of the excess risk with respect to the number of augmentations (e.g., replacing n by the minimum number of points in one cluster multiplied by the number of views). The following theorem states convergence rates with a stochastic gradient descent algorithm capturing such a potential situation. Proofs and technicalities, based on the convex optimization literature, are detailed in Appendix C.

Theorem 3 (Sharper bounds). There exists an implementable algorithm that guarantees an average excess risk

E_{D_n}[L(SΘ_n)] − L(SΘ) ≤ 3κ² c_λ c′_λ (σ²_X/n + σ²_ξ/(nm)) + 4κ⁶ c²_λ/n,   (8)

where c_λ = 1 + κ²k_λ/λ, c′_λ = 1 + k²_λ/λ², k_λ is the number of positive eigenvalues of T_λ, κ is a bound on ∥φ∥, σ_X relates to the variance of E[ψ(ξ) | X], and σ_ξ relates to the average variance of (ξ | X). Moreover, when K = SS^⊤ or the covariance of the φ(ξ) has a finite number of positive eigenvalues (e.g., X finite or H finite-dimensional), with c_K a constant that relates to the condition number of K, this bound can be tightened to

E_{D_n}[L(SΘ_n)] − L(SΘ) ≤ 4c²_K c²_λ/n.   (9)

In the setting studied by HaoChen et al. (2021), we stress that Theorem 3 guarantees convergence rates of O(n^{-1}) rather than O(n^{-1/2}) on the upstream loss. In effect, we improve the rates of HaoChen et al. (2021, Theorem 4.3) from n^{-1/2} to n^{-1} on both pretraining and downstream tasks.
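The two variance terms σ²_X/n and σ²_ξ/(nm) in (8) mirror the law of total variance for an empirical mean over n inputs with m views each. A toy simulation (our own model ψ(ξ) = ξ = X + ε, not taken from the paper) illustrates both this decomposition and the fixed-budget trade-off between fresh samples and extra views:

```python
import numpy as np

rng = np.random.default_rng(2)

# Variance of the empirical mean of psi(xi) over n inputs and m views each:
#   sigma_X^2 / n + sigma_eps^2 / (n m)        (law of total variance).
sigma_X, sigma_eps = 1.0, 2.0

def mc_var(n, m, trials=20_000):
    X = rng.normal(0, sigma_X, size=(trials, n, 1))
    eps = rng.normal(0, sigma_eps, size=(trials, n, m))
    return (X + eps).mean(axis=(1, 2)).var()

def formula(n, m):
    return sigma_X**2 / n + sigma_eps**2 / (n * m)

assert abs(mc_var(10, 5) / formula(10, 5) - 1) < 0.05

# Fixed budget n*m = 100: many inputs with m = 2 beat few inputs with m = 10.
assert formula(50, 2) < formula(10, 10)
print(formula(50, 2), formula(10, 10))
```

More views per sample only shrink the second term, which is why, at a fixed preprocessing budget, fresh inputs reduce the overall variance faster than extra augmentations.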
Practical insights

In this section, we relate our theory to commonly observed phenomena when training SSL algorithms and offer best practices informed by our findings.

The downstream problem

Two regimes should be distinguished for the downstream problem. When few downstream samples are available, few-shot learning requires a small effective dimension k_e (6) to lower the estimation error and avoid fitting noise. Limiting k_e (or equivalently the capacity of F̂) can be done either by decreasing the representation dimension k or by applying regularization on downstream tasks. This theoretical trade-off between effective dimension and number of downstream examples is illustrated empirically by He & Ozay (2020, Figure 6). On the contrary, when a substantial amount of data is available for training downstream tasks, one could confidently increase the representation dimension k to decrease the approximation error. This was notably observed on large-scale datasets by Garrido et al. (2022, Figure 1): as k increases, the effective dimension k_e converges to a limit, and the downstream performance keeps increasing until this limit is reached. Remarkably, our theory explains this phenomenon: since k_λ is finite, as k increases, the effective dimension k_e will be bounded by the limiting case where k = k_λ.

The pretraining problem

Usefulness of multiple augmentations per sample. Theorem 3 shows how multiple augmentations, such as multi-crop, can result in faster convergence to an optimal representation ψ. There, the variance of the empirical risk depends on both σ_X, due to variation over inputs, and σ_ξ, due to variations over resulting views after augmentation. With multiple augmentations per sample, one can reduce the latter variance and improve performance, which was observed with the introduction of multi-crop in Caron et al. (2020).
However, when the total amount m × n of pre-processed data is held fixed, it is generally better to process many inputs with two views (m = 2) rather than a few inputs with many augmentations. This finding matches the empirical observation of Bardes et al. (2022) that, if available, fresh samples are always better than more views.

Capacity trouble in pretraining. Theorems 2 and 3 show that, without regularization restricting the capacity of the model of computation, one cannot expect to meaningfully solve the pretraining task. This is captured by the quantity c_λ that goes to infinity as λ goes to zero. Such issues related to the lack of regularization commonly arise in practice. Given n × m upstream samples (ξ_ij), the empirical minimization of VICReg can be implemented by approximating µ with (1/nm) Σ_ij δ_{(i,ξ_ij)}. In this setting, T is the adjacency matrix of a graph with as many connected components as there are inputs n, as detailed in Appendix E. Each connected component defines a maximal eigenvector of the empirical approximation of T, leading to "collapsed" representations of the form ψ = (1/m) Σ_j δ_{ξ_ij}. Regularizing forces the optimizer to search for representations inside the space Ψ, which mixes those small clusters and lets meaningful eigenfunctions emerge (see Figure 7 for an illustration).

[Figure 7. Capacity trouble. Level lines of the top eigenfunction of an empirical estimate of T_λ for negligible regularization (left) and small regularization λ (right). Experiments are done with a Gaussian kernel with scale about one tenth of the problem diameter; augmentations are represented as black dots, connected by a line when they come from the same input X. When λ is negligibly small, capacity troubles arise, preventing the recovery of the cluster structure on the right.]

Guidelines for practitioners

Our theoretical study provides several insights that may be useful for SSL practitioners. We highlight a few below.

Avoiding collapse. The common collapse phenomenon, where pretraining ends up fitting noise instead of learning useful features, may be addressed in several ways. Our theory suggests to:

• Reduce the model capacity, through regularization (e.g., early stopping) or simpler architectures (e.g., a shallow CNN instead of an MLP). As a consequence, Ψ will have a lower effective dimension, and K will encourage "simpler" representations that can be learned with less data, even without any data augmentation.

• Use stronger augmentations. T will become more compact, reducing k_λ, the dimension of the "positive eigenspace" of T_λ. The ideal ψ will exhibit more structure, so its search can be reduced to smaller spaces, making it harder to collapse.

Incorporating priors. Representations are typically used for solving downstream tasks, thus it is crucial to incorporate the right priors during pretraining. Our theory showcases the important role of several factors. (i) Augmentations determine the nature of the invariance that is enforced (e.g., low variations, short-range dependencies, translation invariance); this affects the top eigenfunctions of T. (ii) Architecture promotes "simple" representations (e.g., smoothness, locality); this affects the top eigenfunctions of K. (iii) Regularization balances the interplay between augmentations and architecture; this affects the top eigenfunctions of T_λ. (iv) Pretraining data impacts both T and K and their eigenfunctions, e.g., through clustering structure or natural image statistics.

Conclusion

This paper presents a theoretical framework for studying self-supervised learning in the kernel regime. It examines three key operators and their impact on convergence and generalization: T, linked with augmentations; K, linked with architecture choices; and T_λ, resulting from their interplay and tuned by the parameter λ. Our analysis offers useful guarantees and practical guidelines for practitioners to improve the stability and performance of SSL algorithms.
Looking beyond the kernel regime, we leave for future work the extension of our analysis to non-linear training dynamics in finite-width neural networks and to feature learning capabilities within layers. Moreover, future studies could encompass more techniques that enhance performance in SSL; these include projecting representations before enforcing losses, batching the data, or applying different loss functions.

References

Cabannes, V., Pillaud-Vivien, L., Bach, F., and Rudi, A. Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning. In NeurIPS, 2021a.

Cabannes, V., Rudi, A., and Bach, F. Fast rates in structured prediction. In Conference on Learning Theory, 2021b.

Cabannes, V., Bietti, A., and Balestriero, R. On minimal variations for unsupervised representation learning.

Favero, A., Cagnetta, F., and Wyart, M. Locality defeats the curse of dimensionality in convolutional teacher-student scenarios. In Advances in Neural Information Processing Systems, 2021.

Garrido, Q., Balestriero, R., Najman, L., and LeCun, Y. RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank. arXiv preprint arXiv:2210.02885, 2022.

Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent: A new approach to self-supervised learning. In Advances in Neural Information Processing Systems, 2020.

HaoChen, J., Wei, C., Gaidon, A., and Ma, T. Provable guarantees for self-supervised deep learning with spectral contrastive loss. In NeurIPS, 2021.

HaoChen, J. Z. and Ma, T. A theoretical study of inductive biases in contrastive learning. arXiv preprint arXiv:2211.14699, 2022.

He, B. and Ozay, M. Exploring the gap between collapsed & whitened features in self-supervised learning. In ICML, 2020.

Jacot, A., Gabriel, F., and Hongler, C.
Neural tangent kernel: Convergence and generalization in neural networks. In NeurIPS, 2018.

Kato, T. Perturbation Theory for Linear Operators. Springer, 1995.

Kiani, B. T., Balestriero, R., Chen, Y., Lloyd, S., and LeCun, Y. Joint embedding self-supervised learning in the kernel regime. arXiv preprint arXiv:2209.14884, 2022.

Kolmogorov, A. and Tikhomirov, V. ε-entropy and ε-capacity of sets in functional spaces. Uspekhi Matematicheskikh Nauk, 1959.

Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent. In NeurIPS, 2019.

Lee, J., Lei, Q., Saunshi, N., and Zhuo, J. Predicting what you already know helps: Provable self-supervised learning. In Advances in Neural Information Processing Systems, 2021.

Lin, J., Rudi, A., Rosasco, L., and Cevher, V. Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces. Applied and Computational Harmonic Analysis, 2020.

A. Mathematical details and simple proofs

A.1. Technical details

Several technicalities were left implicit in the main text; we discuss them now. In particular, we assumed that there exists a minimizer f* of the risk R, which is true when Y is finite, or when ℓ is the least-squares loss and (Y | X) has a second-order moment almost everywhere. Moreover, in the proof for least-squares, we will assume for simplicity that Y = R. The same derivations still hold true when Y = R^k, although this requires slight precautions, such as working in Y ⊗ H rather than in H (see Cabannes et al., 2021b, for example).

Integers. The representation dimension k is an integer, and [k] denotes the set {1, 2, · · · , k}. For simplicity, we abuse notation and denote by N the set of strictly positive integers.

Geometry. The product A × B denotes the set of elements (a, b) with a ∈ A and b ∈ B.
The notation a^⊤ denotes the adjoint of a, which depends on the Hilbert space one considers a to be part of (e.g., the adjoint in L²(µ_Ξ) is not the same as the adjoint in L²(ρ_X)). The notation a^⊤b denotes the scalar product ⟨a, b⟩ in the Hilbert space a and b are understood to be part of. The Hilbertian norm on matrices or operators is denoted by ∥·∥_F (Frobenius), ∥·∥₂, or ∥·∥_HS (Hilbert-Schmidt). The operator norm is denoted by ∥·∥_op. Moreover, the identity is always denoted by I.

Distributions. In order to define probabilities, X and Y are assumed to be Polish spaces endowed with the Borel topology. We use the simplex notation ∆_A to denote the set of probability measures on A, and the tensor notation ρ^⊗n to denote the measure of n independent random variables all distributed according to ρ. The notation φ_#ρ denotes the law of φ(X) when X is distributed according to the measure ρ. The notation ρ ≪ µ means that, for any measurable set X, the fact that µ(X) = 0 implies that ρ(X) = 0. The notation δ_x denotes the Dirac distribution, which satisfies ⟨f, δ_x⟩ = f(x) using the duality bracket between functions and distributions. For any distribution p, the space L²(p) is made of measurable functions that are square-integrable.

Functions. All functions, such as ℓ, f, ψ, φ, and so on, are restricted to be measurable. The notation ∘ denotes the composition of functions, f ∘ g(·) = f(g(·)). A function f : X → Y is understood as an element of Y^X, and we use isomorphisms such as (R^k)^X = (R^X)^k. We use the notation R^k ⊗ H to denote bounded linear operators from H to R^k. This tensor-product notation generalizes matrix notations, with R^k ⊗ R^{d_h} = R^{k×d_h}. In particular, Ψ_k = { x ↦ Θφ(x) | Θ ∈ R^k ⊗ H }. For Θ ∈ R^k ⊗ H, one can write Θ in row style as an element of H^k, as well as its adjoint Θ^⊤ ∈ H ⊗ R^k in column style, which follows from the fact that H^k is self-dual when endowed with the ℓ²-product topology.

A.2.
Proof of Remark 1

Let us characterize (2) in order to easily implement it with unbiased stochastic gradients. We need to get the expectation outside the norm. This can be done with the following derivations:

∥E[ψ(ξ)ψ(ξ)^⊤] − I∥² = Tr[(E[ψ(ξ)ψ(ξ)^⊤] − I)(E[ψ(ξ′)ψ(ξ′)^⊤] − I)]
= E_{ξ,ξ′} Tr[ψ(ξ)ψ(ξ)^⊤ψ(ξ′)ψ(ξ′)^⊤] − 2 E_ξ Tr[ψ(ξ)ψ(ξ)^⊤] + Tr(I)
= E_{ξ,ξ′}[(ψ(ξ′)^⊤ψ(ξ))²] − 2 E_ξ[∥ψ(ξ)∥²] + k.

For the first part, we get

E_X E_{ξ,ξ′}[∥ψ(ξ) − ψ(ξ′)∥² | X] = 2 E_ξ[∥ψ(ξ)∥²] − 2 E_X E_{ξ,ξ′}[ψ(ξ)^⊤ψ(ξ′) | X].

As a consequence,

L(ψ; β) = 2(β − 1) E_ξ[∥ψ(ξ)∥²] − 2β E_X E_{ξ,ξ′}[ψ(ξ)^⊤ψ(ξ′) | X] + E_{ξ,ξ′}[(ψ(ξ′)^⊤ψ(ξ))²] + k.   (10)

In particular, when β = 1, we retrieve the spectral contrastive loss introduced by HaoChen et al. (2021):

L(ψ) = −2 E_X E_{ξ,ξ′}[ψ(ξ)^⊤ψ(ξ′) | X] + E_{ξ,ξ′}[(ψ(ξ)^⊤ψ(ξ′))²] + k.

A.3.
Because 0 ⪯ L ⪯ 2I, let us introduce T = (2I − L)/2, we have 0 ⪯ T ⪯ 1 and, with the L 2 (µ Ξ ) geometry, for ψ : X → R k L(ψ) = β k i=1 ψ ⊤ i Lψ i + E[ψ(ξ)ψ(ξ ′ ) ⊤ ] − I 2 = 2β k i=1 ψ ⊤ i (I − T )ψ i + E[ψ(ξ)ψ(ξ ′ ) ⊤ ] − I 2 . In order to diagonalize all operators without relying on integral formulations of the spectral theorem, we introduce the following mild assumption. Assumption 4. Assume that T has a pure point spectrum. Example 7. When the distribution of augmentation have a density p with respect to a any measure and (x, ξ) → p (ξ | x) /p(ξ) is in L 2 (µ), or when X is finite, T can be shown to be a compact operator, hence to have a pure point spectrum according to the spectral theorem. Proof. When X is finite, the L 2 spaces are finite dimensional, hence locally compact, which implies that all operators are compact. To prove the case with density, let us develop T as an integral operator. We have, in L 2 (µ Ξ ) geometry, for f : X → R 2f ⊤ (I − T )f = E X E ∥f (ξ) − f (ξ ′ )∥ 2 X = E X E ∥f (ξ)∥ 2 + ∥f (ξ ′ )∥ 2 − 2 ⟨f (ξ), f (ξ ′ )⟩ X = 2f ⊤ f − 2 E X E [⟨f (ξ), f (ξ ′ )⟩ | X] . This allow us to identify T with the inner product, we have for g : X → R and p the density of augmentations f ⊤ T g = E X E [⟨f (ξ), g(ξ ′ )⟩ | X] = ⟨f (ξ), g(ξ ′ )⟩ p (ξ | x) p (ξ ′ | x) dξ ′ dξµ X (dx) = µ Ξ (dξ) f (ξ), µ Ξ (dξ ′ )g(ξ ′ ) µ X (dx)p (ξ | x) p (ξ ′ | x) p(ξ)p(ξ ′ ) . As a consequence, one can consider T as the integral operator in L 2 (µ Ξ ) linked with the kernel k(ξ, ξ ′ ) = µ X (dx)p (ξ | x) p (ξ ′ | x) p(ξ)p(ξ ′ ) . When this kernel is bounded, or simply when ξ → k(ξ, ξ) belongs to L 2 (µ Ξ ), T is trace-class hence compact. Let us now prove in order to minimize L, one should take the eigenfunctions of the operator (1 − β)I + βT whose corresponding eigenvalues are the biggest positives ones. It can be proven with simple geometry in a somewhat abstract space. 
To do so, remark that ψ : X → R k in L 2 (µ Ξ , X , R k ) can be represented asψ ∈ R k ⊗ L 2 (µ Ξ , X , R) with the linear map that associates (ψ ⊤ i φ) to a function φ ∈ L 2 (µ Ξ , X , R). Let denote T β = (1 − β)I + βT , the upstream loss can be characterized as L(ψ) = 2β i∈[k] ⟨ψ i , (I − T )ψ i ⟩ + E ξ [ψ(ξ)ψ(ξ) ⊤ ] − I 2 = 2β i∈[k] e ⊤ i ψ, (I − T )ψ ⊤ e i + E ξ [ i,j∈[k] e ⊤ i ψ(ξ)ψ(ξ) ⊤ e j e i e ⊤ j ] − I 2 = 2β i∈[k] e iψ (I − T )ψ ⊤ e i + i,j∈[k] e ⊤ iψψ ⊤ e j e i e ⊤ j − I 2 = 2β Tr ψ (I − T )ψ ⊤ + ψψ ⊤ − I 2 = 2β Tr ψ (I − T )ψ ⊤ + Tr ψψ ⊤ − I 2 . = Tr 2βψ(I − T )ψ ⊤ +ψψ ⊤ψψ⊤ − 2ψψ ⊤ + I . = Tr L 2 (µΞ) ψ ⊤ψψ⊤ψ + (2β(I − T ) − 2I)ψ ⊤ψ + k. = Tr L 2 (µΞ) ψ ⊤ψψ⊤ψ − 2T βψ ⊤ψ + k. = Tr L 2 (µΞ) (ψ ⊤ψ − T β ) 2 − T 2 β + k. In order to find the minimizer of L with this new characterization, slight precautions are needed here since the two operators are not trace-class. The following lemma takes those precautions in order to finish the proof. Lemma 6. Let A be a self-adjoint operator on L 2 (µ Ξ ). Assume that there exists c such that A ⪯ cI and that A is pure-point spectrum. Then if (λ i , f i ) denote the eigen-decomposition of A with λ i in decreasing order, the minimization of Tr (B − A) 2 − B 2 under the constraint that B is a self-adjoint positive operator of rank at most k, is reached for B =ψ ⊤ψ with ψ : X → R k such that ψ i = max(0, λ i ) 1/2 f i . Proof. Let us decompose A into a positive part A + ⪰ 0 and a negative part A − ⪰ 0 such that A = A + − A − . Using the fact that B is positive self-adjoint, we get Tr (B − A) 2 − A 2 = Tr B 2 − 2B 1/2 AB 1/2 = Tr B 2 − 2B 1/2 A + B 1/2 + 2 Tr B 1/2 A − B 1/2 ≥ Tr B 2 − 2B 1/2 A + B 1/2 . Let us decompose B into k symmetric operators of rank at most one as B = k i=1 B i such that B i B j = 0 for any i ̸ = j ∈ [k]. Using the different properties of the operators introduced, we proceed with The following is a direct corollary of the proof above. 
Tr (B − A) 2 − A 2 ≥ k i=1 Tr B 2 i − 2 Tr (B i A + ) = k i=1 ∥B i ∥ 2 op − 2 ∥B i A + ∥ op ≥ k i=1 ∥B i ∥ 2 op − 2 ∥B i ∥ op ∥Π Bi A + ∥ op ≥ k i=1 ∥B i ∥ 2 op − 2 ∥B i ∥ op j<i (I − Π Bj )A + op = k i=1   ∥B i ∥ op − j<i (I − Π Bj )A + op   2 − j<i (I − Π Bj )A + 2 op ≥ − k i=1 j<i (I − Π Bj )A + 2 op ≥ − k i=1 σ i (A + ) where Π B denote Proposition 7 (Uniqueness of minimizers). The minimizers of L are unique up to orthogonal transformations and eigenfunction picking. More specifically, if U ∈ R k×k is orthogonal, i.e. U ⊤ U = I, then L(ψ) = L(U ψ); and if λ k = λ k+1 , one can choose different eigenfunctions as f k in the eigen-decomposition (λ i , f i ) of T β . A.4. Proof of Lemma 3 Let us consider ψ : X → R k with ψ i = f θi for θ i ∈ H and S : H → L 2 (µ Ξ ); θ → θ ⊤ φ(·). We can use the tensor notations introduced earlier to parameterized ψ = SΘ ⊤ with Θ = (θ i ) i∈[k] seen as an element of R k ⊗ H. The proof of Lemma 3 follows from the fact that ∥Θ∥ 2 2 = Θ ⊤ 2 2 = SΘ ⊤ , (S ⊤ S) −1 SΘ ⊤ L 2 (µΞ) = k i=1 Sθ ⊤ i K −1 Sθ i = k i=1 ψ ⊤ i K −1 ψ i . Since Ψ = S(H) = im S = im K 1/2 = K 1/2 (L 2 (µ Ξ )), one can consider K −1 as the inverse of K such that for ψ i ∈ ker K, ψ ⊤ i K −1 ψ i = +∞. This is what we implicitly assumed in the main paper, which lead to the (ψ i ) being all in Ψ. Note that in many cases, Ψ is dense in L 2 (µ Ξ ) (Micchelli et al., 2006), and one does not need to take such a precaution since the ker K = {0}, and there is only one way to define K −1 on L 2 (µ Ξ ). A.5. Second proof of Lemma 3 with covariance operators The proof given above of Lemma 3 might seem quite abstract for the reader unfamiliar with reproducing kernel Hilbert space. In this subsection, we provide a somewhat more accessible proof of this Lemma based on covariance operators. Reusing the unbiased characterization of L we have L(ψ; β) = 2(β − 1) E ξ [∥ψ(ξ)∥ 2 ] − 2β E X E ξ,ξ ′ ψ(ξ) ⊤ ψ(ξ ′ ) X + E ξ,ξ ′ (ψ(ξ ′ ) ⊤ ψ(ξ)) 2 + k. 
= 2(β − 1) Tr E ξ [ψ(ξ)ψ(ξ) ⊤ ] − 2β Tr E X E ξ,ξ ′ ψ(ξ))ψ(ξ ′ ) ⊤ X + Tr E ξ ψ(ξ)ψ(ξ) ⊤ 2 + k, where the last term provides from the fact that E ξ,ξ ′ (ψ(ξ) ⊤ ψ(ξ ′ )) 2 = E ξ,ξ ′ ψ(ξ) ⊤ ψ(ξ ′ )ψ(ξ ′ ) ⊤ ψ(ξ) = E ξ,ξ ′ Tr ψ(ξ ′ )ψ(ξ ′ ) ⊤ ψ(ξ)ψ(ξ) ⊤ = Tr E ξ [ψ(ξ)ψ(ξ) ⊤ ] E ξ ′ [ψ(ξ ′ )ψ(ξ ′ ) ⊤ ] = Tr E ξ [ψ(ξ)ψ(ξ) ⊤ ] 2 . A.5.1. OPERATOR TECHNICALITIES The search for ψ will be done under the form Θφ for Θ ∈ R k ⊗ H and φ : X → H. Let us discuss technicalities related to the infinite dimensional operators that will appear. Assumption 5. The Hilbert space H is separable, and the mapping φ belongs to L 2 (µ X ) endowed with Borel topology on both X and H. Remark 8. The operator Σ = E ξ [φ(ξ)φ(ξ) ⊤ ] ∈ H ⊗ H is trace-class. Proof. This follows from linearity of traces, expectations, together with the fact that Tr AB = Tr BA, Tr Σ = E ξ Tr φ(ξ)φ(ξ) ⊤ = ∥φ∥ 2 L 2 (µ X ) < +∞. As a consequence, Σ is compact, hence has a pure point spectrum, and since H is separable it can be diagonalized with its eigenvectors forming a basis of H. We will see later that Σ −1/2 Σ X Σ −1/2 is indeed isometric to T . Hence, under Assumption 4, Σ −1/2 Σ X Σ −1/2 has a pure-point spectrum. However, the following lemma shows that this operator is bounded without using the fact that T ⪯ I. Remark 9. The operator Σ X = E X E ξ,ξ ′ φ(ξ)φ(ξ ′ ) ⊤ X ∈ H ⊗ H verifies 0 ⪯ Σ X ⪯ Σ with ⪯ the Loewner order (A ⪯ B if B − A is semi-definite positive) . As a consequence, Σ X is trace-class and Σ −1/2 Σ X Σ −1/2 is continuous. Proof. This follows from Jensen inequality applies to A → AA ⊤ , which can be proven using the positivity of covariance operator. 0 ⪯ E[(A − E[A])(A − E[A]) ⊤ ], ⇒ E[A] E[A] ⊤ ⪯ E[AA ⊤ ]. As a consequence, E ξ,ξ ′ φ(ξ)φ(ξ ′ ) ⊤ X = x ⪯ E ξ φ(ξ)φ(ξ) ⊤ X = x , which implies that Σ X ⪯ Σ. As a consequence, Tr Σ X ≤ Tr Σ < +∞ and Σ −1/2 Σ X Σ −1/2 ⪯ I, and Σ −1/2 Σ X Σ −1/2 op ≤ 1. 
The positivity follows from the fact that Σ X is a covariance operator Σ X = E X E ξ [φ(ξ) | X] E ξ [φ(ξ) | X] ⊤ . A.5.2. OPERATOR FORMULATION Let us begin by proving a variant of the lemma where everything is expressed in H. We expand later on the isometry between H and L 2 (µ Ξ ) (due to the isometry between S and Σ 1/2 ) that allows us to transfer it to the lemma written in the paper. Lemma 10. For (θ i ) ∈ H k and f θ : x → ⟨φ(x), θ⟩, and a regularizer λ ∈ R L((f θi ) i∈[k] ) + λ i∈[k] ∥θ i ∥ 2 2 = Tr   Σ 1/2 ( i∈[k] θ i θ ⊤ i )Σ 1/2 − A) 2 − A 2   + k, with A and Σ being operator on H defined as A = Σ −1/2 ((1 − β)Σ + βΣ X − λI)Σ −1/2 , Σ = E ξ φ(ξ)φ(ξ) ⊤ , Σ X = E X [E ξ,ξ ′ φ(ξ)φ(ξ ′ ) ⊤ X ]. As a consequence, a minimizer Θ * of L is such that Θ * matches the eigenvalue decomposition of A on positive eigenvalues up to the k-th. Formally, if A = i∈N λ i u i ⊗ u i with u i ∈ H and (λ i ) in decreasing order, Θ * = (θ i ) i∈[k] , with θ i = max(λ i , 0)Σ −1/2 u i . Moreover, (f θi ) are orthogonal in L 2 (µ Ξ ), where µ Ξ denotes the marginal distribution over augmentations. Proof. Let us now rewrite the different quantities appearing in L based on the parameterization ψ = Θφ. We have Tr E[ψ(ξ)ψ(ξ) ⊤ ] = Tr E[Θφ(ξ)φ(ξ) ⊤ Θ ⊤ ] = Tr Θ E[φ(ξ)φ(ξ) ⊤ ]Θ ⊤ = Tr ΘΣΘ ⊤ = Tr Σ 1/2 Θ ⊤ ΘΣ 1/2 . The adjoint Θ ⊤ is taken with respect to the canonical topology on H and R k . Similarly, Tr E X E ψ(ξ)ψ(ξ ′ ) ⊤ X = Tr ΘΣ X Θ ⊤ = Tr Σ −1/2 Σ X Σ −1/2 Σ 1/2 Θ ⊤ ΘΣ 1/2 . For the last term, we get Tr E[ψ(ξ)ψ(ξ) ⊤ ] 2 = Tr (ΘΣΘ ⊤ ) 2 = Tr ΘΣΘ ⊤ ΘΣΘ ⊤ = Tr Σ 1/2 Θ ⊤ ΘΣΘ ⊤ ΘΣ 1/2 Collecting the different terms, we get L(Θφ) + 2λ Tr(Θ ⊤ Θ) − k = Tr 2(β − 1)Σ 1/2 Θ ⊤ ΘΣ 1/2 − 2βΣ −1/2 Σ X Σ −1/2 Σ 1/2 Θ ⊤ ΘΣ 1/2 + Σ 1/2 Θ ⊤ ΘΣΘ ⊤ ΘΣ 1/2 + 2λΣ −1 Σ 1/2 Θ ⊤ ΘΣ 1/2 = Tr Σ 1/2 Θ ⊤ ΘΣ 1/2 + (β − 1)I − βΣ −1/2 Σ X Σ −1/2 + λΣ −1 2 − (β − 1)I − βΣ −1/2 Σ X Σ −1/2 + λΣ −1 2 = Tr Σ 1/2 Θ ⊤ ΘΣ 1/2 − Σ −1/2 ((1 − β)Σ + βΣ X − λ)Σ −1/2 2 − Σ −1/2 ((1 − β)Σ + βΣ X − λ)Σ −1/2 2 . 
This proves the first part of the lemma. Remark that the expression of the lemma is slightly different from the generalization to continuous X suggested by HaoChen et al. (2021), which is based on q^{−1/2}(x) E[φ(ξ) | X = x], where q : x → E_{X∼µ_Ξ}[k(x, X)], rather than on Σ^{−1/2}Σ_XΣ^{−1/2}.

Finally, let us prove that the f_{θ_i} are orthogonal in L². We have

⟨f_{θ_i}, f_{θ_j}⟩_{L²(µ_Ξ)} = max(λ_i, 0) max(λ_j, 0) E[⟨Σ^{−1/2}u_i, φ(ξ)⟩⟨Σ^{−1/2}u_j, φ(ξ)⟩]
= max(λ_i, 0) max(λ_j, 0) E[u_i^⊤Σ^{−1/2}φ(ξ)φ(ξ)^⊤Σ^{−1/2}u_j]
= max(λ_i, 0) max(λ_j, 0) u_i^⊤Σ^{−1/2}E[φ(ξ)φ(ξ)^⊤]Σ^{−1/2}u_j
= max(λ_i, 0) max(λ_j, 0) u_i^⊤Σ^{−1/2}ΣΣ^{−1/2}u_j
= max(λ_i, 0) max(λ_j, 0) u_i^⊤u_j
= max(λ_i, 0) max(λ_j, 0) δ_{ij}.

This proves the orthogonality of the f_{θ_i} in L²(µ_Ξ).

A.5.3. ISOMETRIC FORMULATION

Let us consider S : H → L²(µ_Ξ); θ → f_θ, the embedding of the RKHS in L²(µ_Ξ).

Lemma 11. S is isometric to Σ^{1/2}, and K = SS^⊤ is an integral operator that maps f ∈ L²(µ_Ξ) to Kf ∈ L²(µ_Ξ) defined for ξ ∈ X as

Kf(ξ) = E_{ξ′}[φ(ξ)^⊤φ(ξ′)f(ξ′)]. (11)

Proof. This follows from the fact that both S and Σ^{1/2} are square roots of Σ. Indeed, Σ = S^⊤S, since for θ ∈ H,

⟨θ, S^⊤Sθ⟩_H = ⟨Sθ, Sθ⟩_{L²(µ_Ξ)} = E_ξ[Sθ(ξ)²] = E_ξ[⟨θ, φ(ξ)⟩²] = E_ξ[⟨θ, φ(ξ) ⊗ φ(ξ)θ⟩] = ⟨θ, E[φ(ξ) ⊗ φ(ξ)]θ⟩ = ⟨θ, Σθ⟩.

As a consequence, S is isometric to Σ^{1/2} (if we write the singular value decomposition of S as UDV^⊤, then Σ^{1/2} = VDV^⊤). Regarding the part in K, one can check with the same derivation that S^⊤f = E[f(ξ)φ(ξ)] ∈ H, hence the value (Kf)(ξ) = (S^⊤f)^⊤φ(ξ) = E_{ξ′}[f(ξ′)φ(ξ′)^⊤φ(ξ)].

Using the isometry, one can replace ∥Sθ∥ = ∥Σ^{1/2}θ∥ with the Hilbertian norms on H and L²(µ_Ξ), so that for C operating on H, Tr(SCS^⊤) = Tr(Σ^{1/2}CΣ^{1/2}). Going back to the proof in H, one can replace all the Σ^{1/2} by S or its adjoint at the right places to get the following statement.

Lemma 12.
For Θ ∈ R k ⊗ H, and a regularized λ ∈ R L(SΘ) + λ ∥Θ∥ 2 2 = Tr((SΘ ⊤ ΘS ⊤ − T λ ) 2 − T 2 λ ) + k where T = S −⊤ Σ X S −1 , T λ = (1 − β)I + βT − λK −1 , K = SS ⊤ , with S : H → L 2 (µ Ξ ); θ → f θ the embedding of H in L 2 (µ Ξ ), where µ Ξ denotes the marginal distribution over augmentations. As a consequence, a minimizer Θ * of L; λ is such that SΘ ⊤ * matches the eigenvalue decomposition of T λ on positive eigenvalues up to the k-th. Proof. This lemma follows from the previous discussion. The fact that S −⊤ ΣS −1 equates to T on the L 2 (µ Ξ )-closure of Ψ is due to the characterization in Lemma 3. We can nonetheless prove it in a more direct fashion, by adapting Lemma B.9 of Saunshi et al. (2022) to our case. A.6. Proof of Proposition 4 Proposition 4 relies on the fact that when two operators commutes, they can be diagonalized in the same basis. Lemma 13. When K and T commute, K and T can be diagonalized by the same eigenfunctions (f i ). Proof. When the operators commute, if f is an eigenfunction of T with T f = λf , then T Kf = KT f = λKf . This means that the eigenspace of T , i.e. ker(T − λI) are stable by K. As a consequence, K can be decomposed with respect to the summation L 2 = ⊕ λ∈spec(T ) ker(T − λI). By diagonalizing the restrictions of K on each of those spaces, there exists a basis that diagonalizes both K and T . While we did not discuss it in the main text, one should not consider any eigenvalue decomposition of T but only the eigenfunctions that jointly diagonalize T and K. However, note that to find those eigenfunctions, based on Courant-Fisher principle, one can take, recursively on i ∈ N, f i = f θi an eigenfunction in ker(T − λ i I) that maximizes or minimizes ∥θ i ∥. Those eigenfunctions (f i ) will diagonalize T λ , and the optimal representation will pick the ones that maximize f ⊤ i T λ f i as long as this quantity is positive. If f i diagonalize K then f i ∈ im K 1/2 = Ψ = im S, hence there exists a θ i ∈ H such that f i = Sθ i . 
As a consequence, with the L²(µ_Ξ) geometry, f_i^⊤K^{−1}f_i = (Sθ_i)^⊤(SS^⊤)^{−1}Sθ_i = ∥θ_i∥²₂. We use this to derive that

f_i^⊤T_λf_i = (1 − β)f_i^⊤f_i + βf_i^⊤Tf_i − λf_i^⊤K^{−1}f_i = 1 − β + βλ_i − λ∥θ_i∥²₂.

In other terms, the maximal eigenvalues of T_λ are found by maximizing βλ_i − λ∥θ_i∥².

Remark 14. Recently, HaoChen & Ma (2022) have taken this second perspective on inductive bias by looking at the "barrier" case where one can only match eigenfunctions that belong to the function space Ψ. In the kernel regime, this is deceptive since, for example, when considering the Gaussian kernel φ(x)^⊤φ(x′) = exp(−∥x − x′∥²), Ψ is made of analytic functions (Sun & Zhou, 2008), hence cannot parameterize any indicator function without being one everywhere; therefore their approach would fail to explain how the Gaussian kernel could learn fast under the cluster assumption.

A.7. Remark about VCReg

When L = 0, finding ψ corresponds to finding k functions (f_{θ_i})_i that are orthogonal in L²(µ_Ξ) and maximize 1 − λ∥θ∥² = 1 − λf_θ^⊤K^{−1}f_θ, before multiplying them by (1 − λ∥θ_i∥²)_+. Using the Courant–Fischer min-max principle, the functions (f_{θ_i})_i are given by the k biggest eigenfunctions of K.

A.8. Proof of Example 3

If µ_Ξ = δρ_X + (1 − δ)µ_⊥, then for any measurable function f,

∥f∥²_{L²(µ_Ξ)} = δ ∫_X f(x)² ρ_X(dx) + (1 − δ) ∫_X f(x)² µ_⊥(dx) ≥ δ∥f(X)∥²_{L²(ρ_X)}.

A.9. Proof of Example 4

This follows from the embeddings S_X and S_µ of H in L²(ρ_X) and L²(µ_Ξ) respectively. We have seen earlier that S_µ^⊤S_µ = Σ_{µ_Ξ} and S_X^⊤S_X = Σ_{ρ_X}. Let f ∈ Ψ; there exists θ ∈ H such that f = f_θ, hence, using the isometry between S and Σ^{1/2},

∥f_θ∥²_{L²(ρ_X)} = ∥S_Xθ∥²_{L²(ρ_X)} = ∥Σ_{ρ_X}^{1/2}θ∥²_H = ∥Σ_{ρ_X}^{1/2}Σ_{µ_Ξ}^{−1/2}Σ_{µ_Ξ}^{1/2}θ∥²_H ≤ ∥Σ_{µ_Ξ}^{−1/2}Σ_{ρ_X}Σ_{µ_Ξ}^{−1/2}∥_op ∥Σ_{µ_Ξ}^{1/2}θ∥²_H = ∥Σ_{µ_Ξ}^{−1/2}Σ_{ρ_X}Σ_{µ_Ξ}^{−1/2}∥_op ∥f_θ∥²_{L²(µ_Ξ)}.
We conclude using the fact that A ⪯ cB implies B^{−1/2}AB^{−1/2} ⪯ c·I.

A.10. Proof of Example 5

This follows from the definition of the different objects,

Π^{(ρ_X)}_{f_i}(f) = wf_i, with w = arg min_{w∈R} E_{X∼ρ_X}[∥f(X) − wf_i(X)∥²].

We develop this last objective as

E_{X∼ρ_X}[∥f(X) − wf_i(X)∥²] = E_{X∼ρ_X}[∥g(ψ(X)) − wf_i(X)∥²] = E_{Z∼ψ#ρ_X}[∥g(Z) − w^⊤Z∥²] = E_{Z∼µ_Ξ}[∥g(Z) − w^⊤Z∥²].

Hence the equality of the w's and of the projections.

A.11. Proof of Example 6

If µ_Ξ has k connected components, then the indicators of those components will be orthogonal in L²(µ_Ξ) while minimizing the invariance term E_X[E[∥φ(ξ) − φ(ξ′)∥² | X]]. As a consequence, f_* belongs to the span of the (f_i)_{i≤k}.

B. Control of the downstream convergence

This section is devoted to the proof of Theorem 1. In all the following, k_λ denotes the number of positive eigenvalues of T_λ (including multiplicity) as an operator on L²(µ_Ξ). We fix k ≤ k_λ, and denote by F the span of the (f_i)_{i∈[k]}. In the kernel regime, the space F can also be written as F = {w^⊤Θ_*φ | w ∈ R^k} for Θ_* the minimizer defined in Lemma 12. We denote by F̂ the space defined similarly from an estimate Θ̂ of Θ_*. The error on the downstream task can be decomposed into three quantities: the error on the downstream task linked with the capacity of F̂ (12); the error on the upstream task linked to the approximation error between F and F̂ (13); and the error due to the fact that the downstream task might not be effectively solved within F (14).

Lemma 15 (Decomposition intuition). Let F and F̂ be two closed convex sets of L²(ρ_X), and let Π_F denote the orthogonal projection on the space F according to the L²(ρ_X) geometry. For any function f : X → Y in F̂, the excess of risk (1) can be decomposed as

R(f) − R(f_*) ≤ ∥f − Π_F̂f_*∥²_{L²(ρ_X)} (12)
+ 2∥(I − Π_F̂)Π_Ff_*∥²_{L²(ρ_X)} (13)
+ ∥(I − Π_F)f_*∥²_{L²(ρ_X)}, (14)

Proof.
The proof of the lemma follows from the classical characterization of the mean square error and a triangular inequality. Introduce the following technical assumption.

Assumption 6. Assume (X, Y) → Y to belong to L²(ρ).

When ℓ(y, y′) = ∥y − y′∥², using the fact that (X, Y) → Y − E[Y | X] is orthogonal in L²(ρ) to any measurable function that does not depend on Y,

R(f) = E[∥f(X) − Y∥²] = E[∥f(X) − E[Y | X]∥²] + E[∥E[Y | X] − Y∥²].

As a consequence, f_*(x) = E[Y | X = x] and R(f) − R(f_*) = ∥f − f_*∥²_{L²(ρ_X)}. Let us decompose the excess of risk with the orthogonal projection on F̂; we have

R(f) − R(f_*) = ∥f − f_*∥²_{L²(ρ_X)} = ∥f − Π_F̂f_*∥²_{L²(ρ_X)} + ∥(I − Π_F̂)f_*∥²_{L²(ρ_X)}.

The second term is worked out as

∥(I − Π_F̂)f_*∥²_{L²(ρ_X)} = ∥(I − Π_F̂)Π_Ff_* + (I − Π_F̂)(I − Π_F)f_*∥²_{L²(ρ_X)}
≤ 2∥(I − Π_F̂)Π_Ff_*∥²_{L²(ρ_X)} + ∥(I − Π_F̂)(I − Π_F)f_*∥²_{L²(ρ_X)}
≤ 2∥(I − Π_F̂)Π_Ff_*∥²_{L²(ρ_X)} + ∥(I − Π_F)f_*∥²_{L²(ρ_X)},

where the last inequality is due to the fact that projections contract distances.

For linear probing (3), when the downstream task is learned with n data points and a noise level ε, (12) is expected to behave as ε²k/n (Mourtada & Rosasco, 2022). In this linear setting, (13) should be seen as a measure of the angle between F and F̂ seen through the eyes of f_* (Davis & Kahan, 1970; Kato, 1995).

B.1. Controlling (12)

The downstream task error relates to the generalization error of a mis-specified linear model. To bound it, we will use the convergence-rate analysis through concentration of integral operators of Smale & Zhou (2007) and Caponnetto & De Vito (2007). It requires slightly reworking the previous decomposition.

Lemma 16 (Warm-up).
LetF be the span of the (ψ i ) i∈ [k] , with S ψ : R k → L 2 defined as S ψ w = w ⊤ ψ, then ΠF f * = S ψ E[ψ(X)ψ(X) ⊤ ] −1 E[Y ψ(X)].(15) Based on data (X i , Y i ), one can define the empirical risk minimizer f n = S ψ w n , where w n is the minimizer of w n ∈ arg min w∈R k n i=1 w ⊤ ψ(X i ) − Y i 2 = [ 1 n n i=1 φ(X i )φ(X i ) ⊤ ] −1 1 n n j=1 Y i φ(X i ).(16) Proof. The two formula can be proven at once by remarking that if ΠF f * is defined as S ψ w for w minimizing E[ w ⊤ φ(X) − Y 2 ] = w ⊤ E[φ(X)φ(X) ⊤ ]w − 2w ⊤ E[Y φ(X)] + E[∥Y ∥ 2 ]. Minimizing this quadratic form leads to the first results. The second result is proven in the same way after substituting the distribution over (X, Y ) by the empirical one n −1 i∈[n] δ (Xi,Yi) . As a consequence of this warm-up lemma, let us introduce some notations, for ψ : X → R k and some data (X i ), define S ψ : R k → L 2 (ρ X ); w → w ⊤ ψ,Ŝ ψ : R k → ℓ 2 (n); w → (w ⊤ ψ(X i )) i∈[n] ,(17) where ℓ 2 (n) is endowed with normalized (i.e. probability-like) scalar product ⟨a, b⟩ = n −1 i∈[n] a i b i . Similarly to Lemma 11, one can show that the adjoint of S ψ andŜ ψ , and the covariance operators are S ψ : L 2 (ρ X ) → R k ; f → E ρ X [f (X)ψ(X)],Ŝ ψ : ℓ 2 (n) → R k ; (Y i ) i∈n → 1 n i∈[n] Y i ψ(X i ). Σ ψ = S ψ S ⊤ ψ = E ρ X [ψ(X)ψ(X) ⊤ ],Σ ψ =Ŝ ψŜ ⊤ ψ = 1 n [ψ(X)ψ(X) ⊤ ],(18) In this subsection, we will only consider S and Σ associated with ψ and we remove the indices for convenience. To simplify notation when f ∈ L 2 (ρ X ) we will writeŜ ⊤ f forŜ ⊤ (f (X i )) i∈[n] . Assumption 7 (Homoskedastic noise). There exists ε > 0 such that for ρ X -almost all x, the variance of (Y | X = x) is bounded by ε 2 . Lemma 17 (Bias-Variance decomposition). 
Based on data (X i , Y i ), one can define the regularized empirical risk minimizer f n = S ψ w n with a regularization parameter γ > 0 as w n ∈ arg min w∈R k n i=1 w ⊤ ψ(X i ) − Y i 2 + γ ∥w∥ 2 .(19) When doing so, under Assumption 7, the average excess of risk can be decomposed as, with M = sup ∥ψ(X)∥, E (Xi,Yi) [ f n − ΠF f * 2 L 2 (ρ X ) ] ≤ ε 2 n 1 + M 2 γn Tr (Σ + γ) −1 Σ + 2γ 1 + M 2 γn 2 ΠF f * , Σ(Σ + γ) −1 ΠF f * L 2 (ρ X ) + 2 E (Xi) S(Σ + γ) −1Ŝ⊤ (I − ΠF )f * 2 L 2 (ρ X ) .(20) Proof. Retaking the warm-up lemma, one can show that w n = (Σ + γ) −1Ŝ (Y i ) i∈[n] . As a consequence, using the usual bias-variance decomposition, and the fact that f * = E ρ [Y | X = ·], we develop E (Yi | X=Xi) [ f n − ΠF f * 2 ] = E (Yi | X=Xi) [ S(Σ + γ) −1Ŝ⊤ (Y i ) i∈[n] − ΠF f * 2 ] = E (Yi | X=Xi) S(Σ + γ) −1Ŝ⊤ (Y i − E [Y | X = X i ]) i∈[n] 2 + S(Σ + γ) −1Ŝ⊤ f * − ΠF f * 2 . The first term can be worked out with Mourtada & Rosasco (2022) techniques as E (Xi,Yi) S(Σ + γ) −1Ŝ⊤ (Y i − E [Y | X = X i ]) i∈[n] 2 ≤ ε 2 n 1 + R 2 γn Tr (Σ + γ) −1 Σ under the assumption that the variance of (Y | X) is bounded by ε 2 . We work out the second term with S(Σ + γ) −1Ŝ⊤ f * − ΠF f * ≤ S(Σ + γ) −1Ŝ⊤ (f * − ΠF f * ) + S(Σ + γ) −1Ŝ⊤ ΠF f * − ΠF f * . Once again, the last part can be worked out with techniques of Mourtada & Rosasco (2022) to get E (Xi,Yi) [ S(Σ + γ) −1Ŝ⊤ ΠF f * − ΠF f * 2 ] ≤ γ 1 + R 2 γn 2 ΠF f * , Σ(Σ + γ) −1 ΠF f * L 2 (ρ X ) . This provides the decomposition of the lemma. Let us work out the last term in (20). Lemma 18. For t = (Σ + γ) −1/2 (Σ −Σ)(Σ + γ) op and M such that ∥ψ(X)∥ ≤ M almost everywhere, S(Σ + γ) −1Ŝ⊤ (I − ΠF )f * L 2 (ρ X ) ≤ min 1 1 − t , 1 + t · M 2 + γ γ Σ −1/2 γŜ ⊤ (I − ΠF )f * .(21) Proof. Let us set f = (I − ΠF )f * and A γ = A + γI for simplicity. Remark that f is orthogonal to the image of S, hence S ⊤ f = 0. 
We decompose the last quantity witĥ Σ −1 γŜ ⊤ f = (Σ −1 γ− Σ −1 γ )Ŝ ⊤ f + (Σ γ ) −1Ŝ⊤ f =Σ −1 γ( Σ γ −Σ γ )Σ −1 γ S ⊤ f + Σ −1 γŜ ⊤ f =Σ −1 γ( Σ −Σ)Σ −1 γŜ ⊤ f + Σ −1 γŜ ⊤ f = Σ −1/2 γ Σ 1/2 γΣ −1 γ Σ 1/2 γ Σ −1/2 γ( Σ −Σ)Σ −1/2 γ + I Σ −1/2 γŜ ⊤ f Using the fact that S is isometric to Σ 1/2 which itself if smaller than Σ 1/2 γ (with the Loewner order), we have S(Σ + γ) −1Ŝ⊤ (I − ΠF )f * L 2 (ρ X ) ≤ 1 + Σ 1/2 γΣ −1 γ Σ 1/2 γ op Σ −1/2 γ( Σ −Σ)Σ −1/2 γ op Σ −1/2 γŜ ⊤ f We know that Σ 1/2 γΣ −1 γ Σ 1/2 γ op ≤ γ −1 (∥Σ∥ op + γ) ≤ γ −1 ( sup x∈supp ρ X ∥ψ(x)∥ 2 + γ) We also have that for A and self adjoint and any t > 0, the sequence of implications A −1/2 (A −Â)A −1/2 op ≤ t ⇔ −tI ⪯ A −1/2 ( − A)A −1/2 ⪯ tI ⇔ −tA ⪯ − A ⪯ tA ⇔ (1 − t)A ⪯ ⪯ (1 + t)A ⇔ (1 + t) −1 A −1 ⪯ −1 ⪯ (1 − t) −1 A −1 ⇔ (1 + t) −1 ⪯ A 1/2Â−1 A 1/2 ⪯ (1 − t) −1 . Combining the different results leads to the lemma. Probabilistic arguments will show that t, as well as (Σ + γ −1/2Ŝ⊤ (I − ΠF )f * , vanishes to zero in n −1/2 . We will use Bernstein concentration inequality. Lemma 19 (Bernstein concentration inequalities). Let denote by A a Hilbert space and by (Z i ) i∈[n] a sequence of independent random vectors on A such that E[Z i ] = 0, and such that there exists two positive constants M and σ such that for all m > 2 1 n i∈[n] E[∥Z i ∥ m ] ≤ m!σ 2 M m−2 /2 For any t > 0, P( 1 n n i=1 Z i ≥ t) ≤ 2 exp −nt 2 2σ 2 + 2tM . In particular when the (Z i ) are bounded by 3M , and σ 2 = n −1 i∈n E[Z 2 i ], the condition holds. When, instead, Z i are symmetric matrices in R k×k and ∥·∥ is the operator norm, the same bound holds with k exp(· · · ) instead of 2 exp(· · · ) on the right-hand side, where σ 2 = n −1 i∈[n] E[Z 2 i ] . Proof. See Corollary 1 in Pinelis & Sakhanenko (1986) for the first part, and Tropp (2015) for the matrix version. Lemma 20. 
For any t > 0, the vector part in last term of the bias decomposition (20) can be controlled with P Σ −1/2 γŜ ⊤ (I − ΠF )f * ≥ t ≤ 2 exp −nt 2 a(b + 2M γ −1/2 t/3)(22) where b = 2 Tr (Σ + γ) −1 Σ, M = sup ∥ψ(X)∥ and a = ∥f * ∥ L ∞ + M ∥f * ∥ L 2 . Moreover, this vector part is bounded by γ −1 a 2 M 2 . The matrix part in the last term of (20) is controlled with P Σ −1/2 γ (Σ − Σ)Σ −1/2 γ op ≥ t ≤ k exp −nt 2 2M 2 γ −1 (1 + t/3) (23) Moreover, this matrix part is bounded by γ −2 M 4 . Proof. Let us introduce Z i = (I − ΠF )f * (X i )(Σ + γ) −1/2 ψ(X i ) ∈ R k .(24) One can check that 1 n i∈[n] Z i = (Σ + γ) −1/2 1 n i∈[n] (I − ΠF )f * (X i )ψ(X i ) = (Σ + γ) −1/2Ŝ⊤ (I − ΠF )f * , as well as, since im S =F E[Z i ] = S ⊤ (I − ΠF )f * = 0. Moreover, ∥Z i ∥ = (Σ + γ) −1/2 ψ(X i ) (I − ΠF )f * (X i ) ≤ γ −1/2 M (∥f * ∥ L ∞ + M ∥f * ∥ L 2 ). where R = sup X ∥ψ(X)∥ and we have used the fact that (I − ΠF )f * (X i ) ≤ |f * (X i )| + ΠF f * (X i ) = |f * (X i )| + SS −1 ΠF f * , φ(X) ≤ |f * (X i )| + SS −1 op ΠF f * L 2 ∥φ(X i )∥ ≤ ∥f * ∥ L ∞ + M ∥f * ∥ L 2 . Finally, we have E[∥Z i ∥ 2 ] = E (Σ + γ) −1/2 ψ(X i ) 2 (I − ΠF )f * (X i ) 2 ≤ E (Σ + γ) −1/2 ψ(X i ) 2 (∥f * ∥ L ∞ + M ∥f * ∥ L 2 ) = Tr (Σ + γ) −1 Σ (∥f * ∥ L ∞ + M ∥f * ∥ L 2 ). Using Bernstein inequality leads to the control on the vector term. For the matrix term, let us introduce Z i = U i U ⊤ i − E[U i U ⊤ i ], U i = (Σ + γ) −1/2 φ(X i ). We have (Σ + γ) −1/2 (Σ − Σ)(Σ + γ) −1/2 = 1 n i∈[n] Z i , and sup ∥Z∥ ≤ sup ∥U ∥ 2 ≤ γ −1 M 2 . Finally, using the fact that ∥U i ∥ 2 ⪯ U i sup ∥U i ∥, with the variational definition of the mean, with the infimum taken with respect to the Loewner order E[Z 2 i ] = inf a E[(Z i − a) 2 ] ⪯ E[(U i U ⊤ i ) 2 ] ⪯ sup ∥U ∥ 2 E[U ⊤ i U i ] = sup ∥U ∥ 2 (Σ + γ) −1 Σ ⪯ sup ∥U ∥ 2 I Applying the matrix version of Bernstein inequality leads to the lemma. We now turn the deviation inequalities of the last lemma into a bound on the average. Lemma 21. 
Retaking the notation of the previous lemma,

E_{(X_i)}[∥S(Σ̂ + γ)^{−1}Ŝ^⊤(I − Π_F̂)f_*∥²_{L²(ρ_X)}] ≤ k exp(−3nγ/((3 + √2)M²)) (γ^{−4}M⁶a²(M² + 2γ))² + 16ab/n + 512a²M²/(9γn²).

Proof. In essence, we have two random variables: the matrix one, X = ∥Σ_γ^{−1/2}(Σ − Σ̂)Σ_γ^{−1/2}∥²_op, and the vector one, Y = ∥Σ_γ^{−1/2}Ŝ^⊤(I − Π_F̂)f_*∥². We proceed using the fact that, for X positive, E[X] = ∫_{t>0} P(X > t) dt, and that ab > t implies, for any s, that a > 1 + s or b > t/(1 + s):

E[min(1/(1 − X), 1 + (M² + γ)X/γ)² Y²] = ∫_{t∈(0, sup(1+γ^{−1}(M²+γ)X)²Y²)} P(min(1/(1 − X), 1 + (M² + γ)X/γ)² Y² > t) dt
≤ inf_s ∫ [P(min(1/(1 − X), 1 + (M² + γ)X/γ)² > 1 + s) + P(Y² > t/(1 + s))] dt.

Rather than solving this in closed form, we proceed with a cruder bound that consists in taking s = 1 without any optimization. It gives the simpler formula

E[min(1/(1 − X), 1 + (M² + γ)X/γ)² Y²] ≤ P(1/(1 − X)² > 2) sup((1 + γ^{−1}(M² + γ)X)² Y²) + 2E[Y²].

For Y we can use the same technique as before; using that exp(−(a + b)^{−1}) ≤ exp(−max(2a, 2b)^{−1}) ≤ exp(−(2a)^{−1}) + exp(−(2b)^{−1}), we get

E[Y²] = ∫_{t>0} P(Y² > t) dt ≤ ∫_{t>0} 2 exp(−nt/(a(b + 2Mγ^{−1/2}t^{1/2}/3))) dt ≤ 4 ∫_{t>0} [exp(−nt/(2ab)) + exp(−nt^{1/2}/(4aMγ^{−1/2}/3))] dt = 8abn^{−1} + 256a²M²γ^{−1}n^{−2}/9.

We conclude by combining with the previous lemma.

Let us now simplify the constants that appear in the bounds derived so far.

Lemma 22 (Simplifying constants). The constants in the previous bounds can be worked out as

Tr(Σ(Σ + γ)^{−1}) ≤ k,  M ≤ λ^{−1}k sup∥φ∥,  ⟨Π_F̂f_*, Σ(Σ + γ)^{−1}Π_F̂f_*⟩_{L²(ρ_X)} ≤ ∥f_*∥_{L²(ρ_X)}.

We also have

∥f_*∥_{L²(ρ_X)} ≤ ∥f_*∥_{L∞(ρ_X)} ≤ σ,  ε² ≤ σ²,  where σ² = sup_x E[∥Y∥² | X = x].

As a consequence, the constant a appearing earlier is smaller than (1 + M)σ.

Proof. The first bound is a direct application of the fact that Σ ⪯ Σ + γ, hence Tr((Σ + γ)^{−1}Σ) ≤ Tr(I) = k. The second bound is due to the fact that ψ = Θ̂φ, hence ∥ψ∥ ≤ ∥Θ̂∥_op∥φ∥ ≤ ∥Θ̂∥_F∥φ∥. In the meantime, since Θ̂ was regularized, λ∥Θ̂∥²_F ≤ L̂(Θ̂) + λ∥Θ̂∥²_F ≤ L̂(0) = k.
For the part in f_*, we have that

∥Σ^{1/2}(Σ + γ)^{−1/2}Π_F̂f_*∥ ≤ ∥Π_F̂f_*∥ ≤ ∥f_*∥.

Finally, the last equality is due to the fact that f_*(x) is the mean of Y conditionally on X = x,

∥f_*(X)∥ = ∥E[Y | X]∥ ≤ E[∥Y∥² | X]^{1/2} ≤ σ.

This ends the proof of the lemma.

Lemma 23. Under Assumption 7, when γ = M² log(n)^{1+δ} n^{−1} with δ > 0, there exists an N > 0 such that for any n > N, the excess of risk of the regularized empirical risk minimizer (19) reads

E_{(X_i,Y_i)}[R(f_n) − R(f_*)] ≤ 2k_eε²/n + 8M² log(n)^{1+δ}/n ∥f_*∥_{L²(ρ_X)} + 64ka/n + 2∥(I − Π_F̂)Π_Ff_*∥² + ∥(I − Π_F)f_*∥² (25)

where k_e = Tr(Σ(Σ + γI)^{−1}) ≤ k is the effective dimension, a = ∥(I − Π_F̂)f_*∥_{L∞} ≤ ∥f_*∥_{L∞} + M∥f∥_{L²}, and M = sup∥ψ∥ ≤ kλ^{−1} sup∥φ∥.

Proof. When γ = c log(n)^{1+δ} n^{−1}, the excess of risk reads

E_{(X_i,Y_i)}[R(f_n) − R(f_*)] ≤ k_eε²/n (1 + M²/(c log(n))) + 2c log(n)^{1+δ}/n (1 + M²/(c log(n)))² ∥f_*∥_{L²(ρ_X)} + 64ka/n + 114a²M²/(9cn log(n)) + O(exp(−log(n)^{1+δ/2})) + 2∥(I − Π_F̂)Π_Ff_*∥² + ∥(I − Π_F)f_*∥².

Taking c = M² leads to the lemma.

B.2. Controlling (13)

An ideal control of (13) has been derived in (2023). Yet, those proofs proceed with the estimation of the smallest eigenfunctions of (Σ_X + λ)^{−1/2}Σ(Σ_X + λ)^{−1/2}, rather than the biggest of Σ^{−1/2}(Σ_X − λ)Σ^{−1/2}. In this proof, we will rather utilize derivations based on empirical-process concentration, together with the following "transfer bound".

Lemma 24 (Transfer bound). For Θ̂ ∈ R^k ⊗ H and F̂ = {x → w^⊤Θ̂φ(x) | w ∈ R^k},

Σ_{i∈[k]} λ_i² ∥(Π^{(µ_Ξ)}_F − Π^{(µ_Ξ)}_F̂)f_i∥²_{L²(µ_Ξ)} − Σ_{k<i≤k_λ} λ_i² ∥Π^{(µ_Ξ)}_F̂f_i∥²_{L²(µ_Ξ)} ≤ L(Θ̂; λ) − L(Θ_*; λ), (26)

where Π^{(τ)}_F is the orthogonal projection on F in L²(τ).

Proof. For simplicity, let us drop the dependency on µ_Ξ in the proof. Let us introduce C = SΘ̂^⊤Θ̂S^⊤; C is a positive operator of rank at most k in L², so let us write it as C = Σ_{i∈[k]} µ_i g_i g_i^⊤ with µ_i ≥ 0. Then

L(Θ̂; λ) − k = Tr((C − T_λ)² − T_λ²) = Tr(C² − 2C^{1/2}T_λC^{1/2}).

Let us decompose T_λ = T_+ − T_−, where T_+ and T_− are positive.
Since T λ ⪯ T + , −C 1/2 T λ C 1/2 ⪰ C 1/2 T + C 1/2 , hence L(Θ; λ) − k ≥ Tr C 2 − 2C 1/2 T + C 1/2 = i≤k µ 2 i − 2µ i T 1/2 + g i 2 . Minimizing this quantity with respect to µ i , leads to L(Θ; λ) − L(Θ; λ) ≥ i≤k λ 2 i − i≤k T 1/2 + g i 4 . Let us know introduce (f i ) the eigenfunctions of T λ . With U = (⟨g i , f j ⟩ 2 ) ij ∈ R k×k λ and λ = (λ i ) ∈ R k λ , we have i≤k T 1/2 + g i 4 = i≤k (g ⊤ i T + g i ) 2 = i≤k   j≤k λ λ j ⟨g i , f j ⟩ 2   2 = j,m≤k λ λ j λ m i≤k ⟨g i , f j ⟩ 2 ⟨g i , f m ⟩ 2 = λ ⊤ U ⊤ U λ. Note that U is at most doubly stochastic since both (g i ) and (f i ) are orthonormal families, thus ∥U ∥ ≤ 1, and U ⊤ U ⪯ I. If one replace the f i by f i / ΠF in the definition of U that would becomeŨ = diag(( ΠF f i 2 ) i≤k λ ) −1 U ,Ũ is still right stochastic. Hence U ⊤ U ⪯ diag( ΠF f i 2 i≤k λ ) 2 diag( ΠF f i 2 i≤k λ ). It follows that i≤k T 1/2 + g i 4 ≤ λ ⊤ U ⊤ U λ = i≤k λ λ 2 i ΠF f i 2 . This allows to simplify the lower bound as L(Θ; λ) − L(Θ; λ) ≥ i≤k λ 2 i f ⊤ i f i − f ⊤ i ΠF f i − k<i≤k λ λ 2 i f ⊤ i ΠF f i = i≤k λ 2 i f i , (I − ΠF )f i − k<i≤k λ λ 2 i ΠF f i 2 = i≤k λ 2 i (Π F − ΠF )f i 2 − k<i≤k λ λ 2 i ΠF f i 2 . This ends the proof of our transfer bound. The left-hand side in Lemma 24 is to be linked with the desired control of (13). In order to deal more finely with distribution-shift, we introduce the following generic variant of Assumptions 1 and 2. Assumption 8 (Low expansion). Assume that for any function of the original space of functions f ∈ Ψ (4), ∥f ∥ L 2 (ρ X ) ≤ ζ ∥f ∥ L 2 (µ X ) , with ζ : R → R continuous, increasing and ζ(0) = 0. Definition 25 (Distribution ε-robustness). A close convex set of functions F will be said to be ε-robust to distribution shift conditionally to the function f if Π (ρ X ) F f − Π (µΞ) F f L 2 (ρ X ) ≤ ε ∥f ∥ L 2 (ρ X ) , where Π (τ ) F is the projection orthogonal on F in L 2 (τ ). Assumption 9. 
There exists a profile σ : R 2 → R increasing and bounded such that for any k ∈ N, Span {f i } i∈[k] is σ(k)-robust to f * . Lemma 26 (Decomposition). Under Assumptions 8 and 9, with F l the span of the (f i ) i∈[l] (I − Π (ρ X ) F )Π (ρ X ) F l f * L 2 (ρ X ) ≤ σ(l) + ζ   i≤l ⟨f * , f i ⟩ L 2 (µΞ) (Π (µΞ) F l − Π (µΞ) F )f i L 2 (µΞ)   .(27) Proof. Using the fact that I − Π is a projection when Π is a projection, and that projections contract distance, we get (I − Π (ρ X ) F )Π (ρ X ) F l f * L 2 (ρ X ) ≤ (I − Π (ρ X ) F )(Π (ρ X ) F l − Π (µΞ) F l )f * L 2 (ρ X ) + (I − Π (ρ X ) F )Π (µΞ) F l f * L 2 (ρ X ) ≤ (Π (ρ X ) F l − Π (µΞ) F l )f * L 2 (ρ X ) + (I − Π (ρ X ) F )Π (µΞ) F l f * L 2 (ρ X ) . Under Assumption 9, the first term in the right-hand side of the previous equation is bounded by σ(l). Regarding the second term, under Assumption 8, for f ∈ Ψ and f ′ ∈F ⊂ Ψ, we have (I − Π (ρ X ) F l )f L 2 (ρ X ) ≤ ∥f − f ′ ∥ L 2 (ρ X ) ≤ ζ ∥(f − f ′ ∥ L 2 (µΞ) . Taking the minimum on the right-hand side and using the fact that ζ is increasing leads to (I − Π (ρ X ) F l )f L 2 (ρ X ) ≤ ζ (I − Π (µΞ) F l )f L 2 (µΞ) . Applied to Π (µΞ) F l f * , this leads to (I − Π (µΞ) F )Π (ρ X ) F l f * L 2 (ρ X ) ≤ ζ (I − Π (µΞ) F )Π (µΞ) F l f * L 2 (µΞ) . We are done with all the quantities that relate to the distribution shift. Under Assumption 3, we have (I − Π (µΞ) F )Π (µΞ) F l f * L 2 (µΞ) = i≤l ⟨f * , f i ⟩ µΞ (I − Π (µΞ) F )f i L 2 (µΞ) ≤ i≤l ⟨f * , f i ⟩ µΞ (I − Π (µΞ) F )f i L 2 (µΞ) . Collecting the previous equations leads to the lemma. To get a finer control of (13), remark that the left-hand side of (26) has some additional constraints that can help us to tighten our bound. For simplicity, we will remove all the dependency to µ Ξ in the following. In essence, we want to lower bound the λ 2 i and to upper bound the (Π F − ΠF )f i . The next lemma adds a constraint the maximal error one can make on (13) under a constraint on L(Θ; λ). Lemma 27. 
When F is of dimension k andF is of dimension k ′ we have i≤k (Π F − ΠF )f i 2 = k − k ′ + i>k ΠF f i 2 ≤ k.(28) Proof. Let us consider two projection U and V onto the span of (u i ) i∈ [k] and (v i ) i∈ [k] with (u i ) i∈N and (v i ) i∈N two orthonormal basis of the ambient space. We have, with Hilbert-Schmidt norm everywhere, ∥U (I − V )∥ 2 = ∥U ∥ 2 − ∥U V ∥ 2 = k − ∥U V ∥ 2 = k − (U V ) ⊤ 2 = k − k ′ + k ′ − ∥V U ∥ 2 = k − k ′ + ∥V (I − U )∥ 2 . Based on invariant of the Hilbert-Schmidt norm to adjoint, and the fact that projection are self-adjoint, we have ∥U (I − V )∥ 2 = ∥(I − V )U ∥ 2 = k − k ′ + ∥V (I − U )∥ 2 = k − k ′ + ∥(I − U )V ∥ 2 . Finally, we also know that since projection contracts distances ∥(I − V )U ∥ 2 ≤ ∥U ∥ 2 = k. The claim of the lemma consists in writing explicitly (I − ΠF )Π F 2 = (Π F − ΠF )Π F 2 = i≤k (Π F − ΠF )f i 2 = k − k ′ + ΠF (I − Π F ) 2 = k − k ′ + i>k ΠF f i 2 ≤ k. This is lead to the statement of the lemma. Given a control on (2), finding an upper bound on (13) reduces to a purely algebraic one. In order to find the worse value that i≤k |⟨f * , f i ⟩| (Π (µΞ) F − ΠF )f i L 2 (µΞ) can take, let us introduce x i = (Π F − ΠF )f i , c i = |⟨f * , f i ⟩| .(29) The previous results lead to the following maximization problem in order to find the worse value of (13), max x i≤k c i x i (30) subject to i≤k λ 2 i x 2 i − k<i≤k λ λ 2 i x 2 i ≤ ε (Lemma 24) i≤k x 2 i = k − k ′ + k<i≤k λ x 2 i ≤ k (Lemma 27) B.3. Keeping it simple and concluding after controlling (14) Solving smartly the algebraic problem above to get the best bound on (13) requires distinguishing between many cases. While it might be relevant to distinguish those different cases and show different convergence regimes, this subsection proceed in a simpler way, although less tight. 
In particular, we can simplify the problem with respect to the (x i ) ik , using the fact that k ′ ≤ k (it is minimum between the number of positive eigenvalues ofT λ based on samples and k), it leads to x 2 k+1 = i≤k x 2 k and x 2 k+1+j = 0, (30) becomes max x i≤k c i x i (30) subject to i≤k (λ 2 i − λ 2 k+1 )x 2 i ≤ ε In general, one could refine this formulation by introducing a probability argument that tells us how much one can expect the error between ΠF and Π F to concentrates on the eigenspace linked to the smallest eigenvalue of T 2 λ . The problem shows two behaviors, if the c i decrease faster than the λ i than we want to charge the energy of (x i ) i≤k on the smallest indices. Otherwise, we want to charge the (x i ) i≤k on the biggest indices. To keep it simple, we will optimize L without any rank restriction first, which allow considering λ k λ +1 = 0, before thresholding the rank to get to a space of dimension k. Lemma 28. Under Assumptions 8 and 9, with F l the span of the first l eigenfunctions of T λ , (I − Π (ρ X ) F )f * 2 L 2 (ρ X ) ≤ inf l≤k (I − Π (ρ X ) F l )f * 2 L 2 (ρ X ) + 4σ(l) 2 + 4ζ 2 T λ −1 Π (µΞ) F l f * L 2 (µΞ) L(Θ; λ) − L(Θ; λ) 1/2 .(31)whereT λ = i∈[k] (λ 2 i − λ 2 k+1 ) 1/2 f i f ⊤ i . Moreover, when the search forF is done without rank restriction on Θ, before thresholding to get reduceF to a space of dimension k, under the strong Assumptions 1 and 2, as well as Assumption 3 (I − ΠF k )f * 2 ≤ |k − k λ | ∥f * ∥ 2 L 2 (ρ X ) + 2c r T −1 λ f * 2 L 2 (µΞ) L(Θ; λ) − L(Θ * ; λ) .(32) Proof. Keeping the algebraic notation above, this comes from a simple application of Cauchy-Schwarz, for (a i ) ∈ R k i≤l c i x i = i∈[l] c i a i a i x i ≤   i≤[l] c 2 i a 2 i   1/2   i∈[l] a 2 i x 2 i   1/2 . 
When applies to the quantities in (29) and a i = λ 2 i − λ 2 k+1 and l ≤ k, the previous lemma leads to (I − Π (ρ X ) F )Π (ρ X ) F l f * L 2 (ρ X ) ≤ σ(l) + ζ   i≤l ⟨f * , f i ⟩ L 2 (µΞ) (Π (µΞ) F l − Π (µΞ) F )f i L 2 (µΞ)   ≤ σ(l) + ζ   i≤l c i x i   ≤ σ(l) + ζ      i≤l c 2 i a 2 i   1/2   i∈[l] a 2 i x 2 i   1/2    ≤ σ(l) + ζ      i≤l c 2 i a 2 i   1/2 L(Θ; λ) − L(Θ; λ) 1/2    . We conclude by remarking that i≤l c 2 i a 2 i = T −1 λ Π F (µ Ξ ) l f * L 2 (µΞ) . For the second part, setF k the k first eigenfunctions to the all the one retrieve with the empirical minimization of L, and F to be the span of all the eigenfunctions linked with positive eigenvalues of T λ . Let us rework the decomposition of the excess of risk, we have (I − ΠF k )f * 2 = ΠF k λ (I − ΠF k )f * 2 + (I − ΠF k λ )(I − ΠF k )f * 2 = (ΠF k λ − ΠF k )f * 2 + (I − ΠF k λ )f * 2 ≤ (ΠF k λ − ΠF k )f * 2 + 2 (I − ΠF k λ )Π F f * 2 + ∥(I − Π F )f * ∥ 2 ≤ |k − k λ | ∥f * ∥ 2 + 2 (I − ΠF k λ )Π F f * 2 . The last bound begin due to Assumption 3, as well as the lax bounding that on the operator norm of two projections. When one could remove the k − k λ we let it as we expect the quantity to behave it this way, with a constant similar to ∥f * ∥ 2 /k λ instead of ∥f * ∥ 2 . We can now state the master theorem. Theorem 4. 
Under Assumptions 3, 7, 8 and 9, there exists a regularizer γ such that the regularized empirical risk minimizer verifies that: for any δ > 0, there exists an N δ > 0 such that for any n > N δ , the excess of risk of the regularized empirical risk (19) minimizer reads R(f ) − R(f * ) ≤ 2k e ε 2 n + 8M 2 log(n) 1+δ n ∥f * ∥ L 2 (ρ X ) + 64ka n + inf l≤k (Π F k λ − Π (ρ X ) F l )f * 2 L 2 (ρ X ) + 4σ(l) 2 + 4ζ 2 T λ −1 Π (µΞ) F l f * L 2 (µΞ) L k (Θ; λ) − L k (Θ; λ) 1/2 .(33) where F l the span of l-th first eigenfunction of T λ , k λ the number of strictly positive eigenfunctions of T λ , k e ≤ k is the effective dimension of ψ in L 2 (ρ X ), a = I − ΠF f * L ∞ ≤ ∥f * ∥ L ∞ + M ∥f ∥ L 2 , M = sup ∥ψ∥ ≤ kλ −1 sup ∥φ∥, and T λ = i∈[k] (λ 2 i − λ 2 k+1 ) 1/2 f i f ⊤ i . Moreover, under the sole Assumptions 1 and 2, we have the simpler bound R(f ) − R(f * ) ≤ 2k e ε 2 n + 8M 2 log(n) 1+δ n ∥f * ∥ L 2 (ρ X ) + 64ka n + max(k − k λ , 0) ∥f * ∥ 2 L 2 (ρ X ) + 2c r T −1 λ Π F λ f * 2 L 2 (µΞ) L k λ (Θ; λ) − L k λ (Θ * ; λ) + ∥(I − Π F λ )f * ∥ L 2 (µΞ) WhereΘ is understood as belonging to R k λ ⊗ H in this last expression and F λ the eigenspace linked with positive eigenvalues of T λ . B.4. Discussion B.4.1. FINITE NUMBER OF POSITIVE EIGENVALUES The following result relates the eigenvalues of T λ with those of K. It notably proves that k λ is finite when K is trace-class, which is one claim of Theorem 1. Lemma 29 (Relating capacity between K and T λ ). If (µ i ) are the eigenvalues of K, then the number of eigenvalues of T λ that are bigger than t ∈ R is smaller than the cardinality of {i | µ i > λ/(1 − t)} . Moreover, if there exists q > 0 such that Tr K 1/q < +∞, then there exists a c q such that if (µ i ) are the eigenvalues of K, we have µ i ≤ c q i −q . As a consequence, in this setting, for any t ∈ R, the number of eigenvalues of T λ that is bigger than t is smaller than (c q (1 − t)/λ) 1/q . Proof. 
Let us consider the set of eigenvectors (f_i) whose eigenvalues are bigger than t, and the span of this set, whose dimension we want to quantify. Every unit vector x in this span satisfies
t ≤ x^⊤ T_λ x ≤ x^⊤ T x − λ x^⊤ K^{−1} x ≤ 1 − λ x^⊤ K^{−1} x,
hence
x^⊤ K^{−1} x ≤ (1 − t)/λ.
This means that this span does not intersect the span of the φ_i, for φ_i the eigenvectors of K^{−1} whose eigenvalues are bigger than λ/(1 − t). In other terms, this linear space meets only at the origin a linear space of co-dimension d, where d is the cardinality mentioned in the lemma statement. Let us denote by U the space we are interested in, by V the space it intersects only at the origin, and by E the ambient space. Since U ∩ V = {0}, the quotient (U + V)/V is isomorphic to U, hence
dim(U) = dim((U + V)/V) ≤ dim(E/V) = codim(V) = d.
This concludes the proof of the first part of the lemma. The second claim follows from the fact that the µ_i^{1/q} are summable and decreasing, hence the sequence S_n = Σ_{i≤n} µ_i^{1/q} is a Cauchy sequence. As a consequence, there exists N ∈ ℕ such that for any s > N/2, we have s µ_{2s}^{1/q} ≤ S_{2s} − S_s ≤ 1/2. Hence, for all s ≥ N, we have µ_s ≤ s^{−q}, so µ_s/s^{−q} is bounded; denoting its maximum by c_q leads to the result. The final statement is a consequence of the fact that c_q i^{−q} > λ/(1 − t) implies i < (c_q(1 − t)/λ)^{1/q}.
Example 8. When considering the radial basis function kernel φ(x)^⊤φ(x′) = exp(−‖x − x′‖²), Ψ is the space of analytic functions (Sun & Zhou, 2008), which is known to be small compared to L² spaces (Kolmogorov & Tikhomirov, 1959). As a consequence, one can think of q as +∞ in the previous lemma. More generally, when φ is bounded, K is trace-class and one can take q = 1.
Proof. The capacity of K relates to the capacity of the image by K of the unit ball {f | ‖f‖_{L²(µ_Ξ)} ≤ 1}, which itself relates to the capacity of Ψ = im K^{1/2}. This explains why q can, in essence, be taken arbitrarily big (Bach, 2023).
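The eigenvalue count of Lemma 29 can be probed numerically; the following minimal sketch assumes an exact power-law spectrum µ_i = c_q i^{−q} (all values illustrative) and checks that the number of eigenvalues of K above the threshold λ/(1 − t) is at most (c_q(1 − t)/λ)^{1/q}:

```python
# count eigenvalues above the threshold lambda / (1 - t), as in Lemma 29
q, lam, t, c_q = 2.0, 1e-3, 0.5, 1.0  # illustrative values
mus = [c_q * i ** (-q) for i in range(1, 10_000)]
count = sum(mu > lam / (1 - t) for mu in mus)
bound = (c_q * (1 - t) / lam) ** (1 / q)
print(count, bound)  # count is at most the bound
```

With these values the threshold is 2·10⁻³, so the count is 22 while the bound is √500 ≈ 22.36.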
When φ is bounded, the computation
Tr(K) = Tr(SS^⊤) = Tr(S^⊤S) = Tr(E[φ(X)φ(X)^⊤]) = E[Tr(φ(X)φ(X)^⊤)] = E[φ(X)^⊤φ(X)] = E[‖φ(X)‖²] < +∞
proves that K is trace-class.
B.4.2. DERIVATION FOR VANISHING BIAS
In the main text, we have assumed that T_λ was the right operator to define the solution of representation learning (which explains Assumption 3). This might offend the purist, as it would be nicer to define a principled solution that does not depend on the choice of architecture (yet that might be easier to approximate with some architectures than others). This suggests studying the behavior of the last expression in Theorem 4 when λ goes to zero. We leave for future work a more precise study of the inductive bias in this vanishing setting: in essence, the choice of architecture Ψ perturbs T by λK^{−1} to make it T_λ, and ideally, we would like to quantify the speed at which T_λ converges to T, when seen through the eyes of f*, as we decrease the regularization parameter. In the kernel regime, this could be characterized by perturbation theory (Kato, 1995) and by refinements of the Davis-Kahan theorem (Davis & Kahan, 1970) taking into account Assumption 3. Moreover, when K and T commute, the interplay can be studied in a more direct fashion thanks to Proposition 4.
C. Control of the upstream excess of risk
In order to control the excess of risk, one can use techniques stemming from optimization as well as techniques stemming from classical statistical learning.
C.1. Rademacher complexity
First, let us remark that L is a quadratic function when parameterized with Λ = Θ^⊤Θ ∈ H ⊗ H.
Lemma 30. Let Θ ∈ R^k ⊗ H and denote Λ = Θ^⊤Θ ∈ H ⊗ H. Then
L(SΘ) = 2(β − 1) E_ξ[⟨Λ, φ(ξ)φ(ξ)^⊤⟩] − 2β E_X E_{ξ,ξ′}[⟨Λ, φ(ξ′)φ(ξ)^⊤⟩ | X] + E_{ξ,ξ′}[⟨Λ, φ(ξ)φ(ξ′)^⊤⟩²] + k. (34)
Moreover, the regularization reads λ‖Θ‖² = λ Tr Λ = λ⟨Λ, I⟩.
Proof. Consider ψ = Θφ; we have
L(ψ; β) = 2(β − 1) E_ξ[ψ(ξ)^⊤ψ(ξ)] − 2β E_X E_{ξ,ξ′}[ψ(ξ)^⊤ψ(ξ′) | X] + E_{ξ,ξ′}[(ψ(ξ′)^⊤ψ(ξ))²] + k.
= 2(β − 1) E_ξ[φ(ξ)^⊤Λφ(ξ)] − 2β E_X E_{ξ,ξ′}[φ(ξ)^⊤Λφ(ξ′) | X] + E_{ξ,ξ′}[(φ(ξ′)^⊤Λφ(ξ))²] + k
= 2(β − 1) E_ξ[Tr(Λφ(ξ)φ(ξ)^⊤)] − 2β E_X E_{ξ,ξ′}[Tr(Λφ(ξ′)φ(ξ)^⊤) | X] + E_{ξ,ξ′}[Tr(Λφ(ξ)φ(ξ′)^⊤)²] + k.
The lemma follows from the characterization of the Hilbert-Schmidt geometry through the trace, the fact that Λ is self-adjoint, and the fact that the regularization reads ‖Θ‖² = Tr(Θ^⊤Θ).
Let us recall three useful facts from the statistical learning literature.
Lemma 31. Let R(ζ) = E_Z[ℓ(ζ, Z)], let ζ* be the minimizer of R inside a domain for ζ, and let ζ_n be the minimizer of R_{(Z_i)}(ζ) = (1/n) Σ_{i∈[n]} ℓ(ζ, Z_i), based on exchangeable data (Z_i) such that E_{(Z_i)}[R_{(Z_i)}] = R. The average excess of risk of ζ_n is bounded through Rademacher complexity as
E_{(Z_i)}[R(ζ_n)] − R(ζ*) ≤ 4 E_{(Z_i),(σ_i)}[sup_ζ (1/n) Σ_{i=1}^n σ_i ℓ(ζ, Z_i)], (35)
where the σ_i are i.i.d. variables taking the values one and minus one with probability one half.
Proof. This is a classical result from learning theory (Bartlett & Mendelson, 2002). Its proof consists in introducing both the empirical risks of ζ_n and ζ*, and bounding the difference between the empirical and population risks of ζ_n by the supremum of this deviation over the entire domain of ζ. This is followed by the replacement of the population risk by the average empirical one, and by a symmetrization trick that introduces the variables (σ_i), based on the exchangeability of the (Z_i).
Lemma 32. For linear models, the Rademacher complexity can be bounded as
E_{(Z_i),(σ_i)}[sup_{‖ζ‖≤M} (1/n) Σ_{i=1}^n σ_i ⟨Z_i, ζ⟩] ≤ (M/√n) √(E[‖Z‖²]). (36)
Proof. This is a classical result on the Rademacher complexity of ball-constrained predictors (Bartlett & Mendelson, 2002).
Lemma 33. Moreover, when h : R → R is Lipschitz, the following contraction principle holds:
E[sup_f (1/n) Σ_{i=1}^n σ_i h(f(Z_i))] ≤ sup_x ‖dh(x)‖ · E[sup_f (1/n) Σ_{i=1}^n σ_i f(Z_i)].
Proof. This follows from the contraction of space capacity by Lipschitz functions (Vitushkin, 1954); see Meir & Zhang (2003) for a proof in the context of machine learning.
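The linear-class bound of Lemma 32 is easy to probe by Monte Carlo; the minimal sketch below (illustrative sizes, conditional on one fixed sample) estimates E_σ sup_{‖ζ‖≤M} (1/n) Σ σ_i⟨Z_i, ζ⟩ — which equals M E_σ‖(1/n) Σ σ_i Z_i‖ — and compares it with the right-hand side of (36) evaluated on the sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, M, trials = 50, 5, 1.0, 2000  # illustrative sizes
Z = rng.standard_normal((n, d))  # one fixed sample

# the supremum over the ball {||zeta|| <= M} of a linear form is M times a norm
estimates = []
for _ in range(trials):
    sigma = rng.choice([-1.0, 1.0], size=n)
    estimates.append(M * np.linalg.norm(sigma @ Z) / n)
estimate = float(np.mean(estimates))
bound = M / np.sqrt(n) * float(np.sqrt(np.mean(np.sum(Z ** 2, axis=1))))
print(estimate, bound)  # estimate <= bound, by Jensen's inequality
```

The gap between the two quantities is exactly the Jensen gap between E‖·‖ and √(E‖·‖²), which is mild in low dimension.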
We can now state the convergence property based on Rademacher complexity. Lemma 34. Let Θ n ∈ R k ⊗ H be the minimizer of the unbiased regularized empirical version of L based on a dataset D n . Assume that D n is built from n input samples (X i ) and m augmentation per samples (ξ ij ), then the average excess of risk is bounded by E Dn [L(SΘ n )] − L(SΘ) ≤ 8κ 2 sup ∥Λ∥ HS √ n m + 1 + β m + √ 2κ 2 sup ∥Λ∥ HS (m 2 + 1) m 2 ,(37) where κ is a bound on ∥φ(X)∥. Proof. Following the previous lemmas on Rademacher complexity we have E Dn [L(SΘ n ); λ] − L(SΘ; λ) ≤ 8 E Dn,σ   sup Λ 1 − β n i∈[n] σ i Λ, 1 m j∈[m] φ(ξ ij )φ(ξ ij ) ⊤   + 8 E Dn,σ   sup Λ β n i∈[n] σ i Λ, 2 m j∈[m/2];j+k−1=m φ(ξ ij )φ(ξ ik ) ⊤   + 4 E Dn,σ   2 n i∈[n/2];i+j−1=n σ i 1 m 2 k,l∈[m] Λ, φ(ξ ik )φ(ξ jk ) ⊤ 2   ≤ 8 sup ∥Λ∥ HS √ n   (1 − β) E X E 1 m m i=1 φ(ξ i )φ(ξ i ) ⊤ 2 HS X 1/2   + 8 sup ∥Λ∥ HS √ n   β E X   E   2 m m/2 i,j=1 φ(ξ i )φ(ξ j ) ⊤ 2 HS X     1/2    + 8 sup ∥Λ∥ HS √ n    √ 2 sup Λ, φ(ξ)φ(ξ ′ ) ⊤ E   1 m 2 m i,j=1 φ(ξ i )φ(ξ j ) ⊤ 2 HS   1/2    . To work out those terms, remark that if (Z i ) are i.i.d. variables, E[ 1 p i∈[p] Z i 2 ] = E[ 1 p i∈[p] Z i − E[Z] 2 ] + ∥E[Z]∥ 2 = 1 p E[∥Z − E[Z]∥ 2 ] + ∥E[Z]∥ 2 . While one could work out each term, the lemma consists in simply bounding φ by κ, hence all the mean and standard deviation one can obtain with expression of φ by κ. The expression in the main text is due to the following lemma. C.2. Convex optimization When a least-square problem benefits from additional structure, such as smoothness or strong convexity, results from convex optimization could lead to improvement over the usual convergence rates in n −1/2 . Recall basic results from convex optimization. Lemma 36. Let L(Θ) = E Z [ℓ(Θ, Z)] be a convex function optimized over a convex domain. 
Given n samples (Z i ), (unbiased) stochastic gradient descent with final averaging can achieve an excess of risk E (Zi) L(Θ) − L(Θ * ) ≤ √ 2M V n −1/2 (38) with M 2 = ∥Θ * − Θ 0 ∥ 2 and V 2 = E[∥∇ Θ ℓ(Θ, Z i )∥ 2 ]. Moreover, if L is α-smooth, then it can achieve E (Zi) L(Θ) − L(Θ * ) ≤ √ 2M σn −1/2 + αM 2 n −1 (39) where σ 2 = E[∥∇L − ∇ℓ∥ 2 ]. Finally, when L is α-strongly convex, it achieves E (Zi) L(Θ) − L(Θ * ) ≤ 2V 2 α(n + 1) . As a consequence, given n data samples, there exists an empirical estimate ofΘ that guarantee those generalization bounds. Proof. This lemma is a direct consequence of Theorems 6.1, 6.2 and 6.3 of Bubeck (2015). It should be noted that when parameterized with Λ = Θ ⊤ Θ, L is a quadratic form as stated by Lemma 30, yet it is minimized over a non-convex domain, the domain of symmetric operator of rank k. We will relax this constraint and consider the harder problem of optimizing over Λ in the set of self-adjoint positive operators. This is justified by the fact that Theorem 4 provides guarantee on the downstream task, even when one relaxes the rank constraint on Λ. To benefit from Lemma 36, one should consider an unbiased expression of L. Consider the minibatch scheme that consist in sampling two inputs X 1 , X 2 , and m augmentations ξ ij for each X i , formally X i ∼ µ ⊗2 X , ξ ij ∼ µ| ⊗m Xi .(41) Here µ X denotes the marginal of µ with respect to X, which is likely to match ρ X , and µ| X denotes the distribution of Ξ conditionally to X. Lemma 37. An unbiased formulation of L is based on ℓ defined as ∇ Λ ℓ(SΘ; λ) = 2(β − 1) m j∈[m] φ(ξ 1j )φ(ξ 1j ) ⊤ − 2β m(m − 1) 1≤j̸ =k≤m φ(ξ 1j )φ(ξ 1k ) ⊤ + 1 m 2 2 i,i ′ =1 m j,k=1 Λ, φ(ξ ij )φ(ξ i ′ k ) ⊤ φ(ξ ij )φ(ξ i ′ k ) ⊤ .(42) Moreover, when L is regularized, one has to add +λI to get a gradient on the regularized risk. Proof. This formula follows from Lemma 30. In order to bound the norm squared of the gradient, one can use the following lemma. Lemma 38. 
For ℓ given in (42), bounds on the gradient norm and its variance are ∥∇ Λ ℓ∥ ≤ 2κ 2 + κ 4 sup ∥Λ∥ , and E[∥∇ Λ ℓ − ∇L∥ 2 ] ≤ (σ 2 X + m −1 σ 2 ξ )(1 + sup ∥Λ∥ 2 ),(43) where σ X relates to the variance of E [ψ(ξ) | X] and σ ξ relates to the average variance of (ξ | X). Proof. Let us decompose ∇ℓ into three terms ∇ℓ = a + b + c as appearing in (42), we have ∥a∥ ≤ 2(1 − β) φ(ξ)φ(ξ) ⊤ ≤ 2(1 − β)κ 2 ∥b∥ ≤ 2β φ(ξ)φ(ξ ′ ) ⊤ 2 ≤ βκ 2 ∥c∥ ≤ Λ, φ(ξ)φ(ξ ′ ) ⊤ φ(ξ)φ(ξ ′ ) ⊤ ≤ sup ∥Λ∥ κ 4 . To bound the variance, one can proceed with E ∥∇ℓ − ∇L∥ 2 ≤ 3 E ∥a − E[a]∥ 2 + 3 E ∥b − E[b]∥ 2 + 3 E ∥c − E[c]∥ 2 . Let us begin with the part in a, E    1 m i∈[m] φ(ξ 1i )φ(ξ 1i ) ⊤ − E[φ(ξ)φ(ξ) ⊤ ] 2    = E    1 m i∈[m] φ(ξ 1i )φ(ξ 1i ) ⊤ − E[φ(ξ)φ(ξ) ⊤ X = X 1 ] 2    + E E[φ(ξ)φ(ξ) ⊤ X = X 1 ] − E[φ(ξ)φ(ξ) ⊤ ] 2 = 1 m E X E ξ φ(ξ)φ(ξ) ⊤ − E[φ(ξ)φ(ξ) ⊤ X] 2 X + E E[φ(ξ)φ(ξ) ⊤ X] − E[φ(ξ)φ(ξ) ⊤ ] 2 . Similarly, the part in b can be expressed as E[∥b − E[b]∥ 2 ] = 2β 2 m E X E ξ φ(ξ)φ(ξ ′ ) ⊤ − E[φ(ξ)φ(ξ ′ ) ⊤ X] 2 X + 2β E E[φ(ξ)φ(ξ ′ ) ⊤ X] − E[φ(ξ)φ(ξ ′ ) ⊤ ] 2 . Finally, E[∥c − E[c]∥ 2 ] = 1 m 2 E X E ξ Λ, φ(ξ)φ(ξ ′ ) ⊤ φ(ξ)φ(ξ ′ ) ⊤ − E[ Λ, φ(ξ)φ(ξ ′ ) ⊤ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ ] 2 X, X ′ + E E Λ, φ(ξ)φ(ξ ′ ) ⊤ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ − E[ Λ, φ(ξ)φ(ξ ′ ) ⊤ φ(ξ)φ(ξ ′ ) ⊤ ] 2 = 1 m 2 E X E ξ Λ, φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ − E[φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ ] 2 X, X ′ + E Λ, E φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ − E[φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ ] 2 . ≤ 1 m 2 ∥Λ∥ 2 E X E ξ φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ − E[φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ ] 2 X, X ′ + ∥Λ∥ 2 E E φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ − E[φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ ] 2 . 
As a consequence, we get E ∥∇ℓ − ∇L∥ 2 ≤ 3 2(1 − β) σ 2 ξ,1 m + σ 2 X,1 + 2β 2σ 2 ξ,2 m + σ 2 X,2 + sup ∥Λ∥ 2 σ 2 ξ,3 m 2 + σ 2 X,3 where σ 2 ξ,1 = E X E ξ φ(ξ)φ(ξ) ⊤ − E[φ(ξ)φ(ξ) ⊤ X] 2 X σ 2 X,1 = E E[φ(ξ)φ(ξ) ⊤ X] − E[φ(ξ)φ(ξ) ⊤ ] 2 σ 2 ξ,2 = E X E ξ φ(ξ)φ(ξ ′ ) ⊤ − E[φ(ξ)φ(ξ ′ ) ⊤ X] 2 X σ 2 X,2 = E E[φ(ξ)φ(ξ ′ ) ⊤ X] − E[φ(ξ)φ(ξ ′ ) ⊤ ] 2 σ 2 ξ,3 = E X E ξ φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ − E[φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ ] 2 X, X ′ σ 2 X,3 = E E φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ X, X ′ − E[φ(ξ)φ(ξ ′ ) ⊤ ⊗ φ(ξ)φ(ξ ′ ) ⊤ ] 2 . Using the fact that m 2 ≥ m and choosing the right σ X and σ ξ leads to the lemma. The following lemma states the convexity properties of L Lemma 39. As a function of Λ, the objective L is α-smooth with α = κ 4 , where κ is a bound on ∥φ∥. Moreover, when X is finite, it is α ′ -strongly, with α ′ being the square of eigen gap of K = SS ⊤ . Proof. This is a consequence of Lemma 30, L is a quadratic function, with the quadratic part being E[ Λ, φ(ξ)φ(ξ ′ ) ⊤ 2 ] = Λ, E[φ(ξ ′ )φ(ξ ′ ) ⊤ ] ⊗ E[φ(ξ)φ(ξ) ⊤ ]Λ = ⟨Λ, Σ ⊗ ΣΛ⟩ . In other terms, the hessian of L is Σ ⊗ Σ ∈ H ⊗2 ⊗ H ⊗2 . As a consequence, Σ ⊗ Σ ⪯ ∥Σ ⊗ Σ∥ op I = ∥Σ∥ 2 op I ⪯ κ 4 I. Similarly, Σ ⊗ Σ ⪰ Σ −1 −2 op I = γ 2 ξ I, where γ ξ is the eigen gap of Σ, hence of K. There are few remaining difficulties that must be addressed before concluding. First, although the identity is not Hilbert-Schmidt, it should be noted that the term in λ will only contract distances in the stochastic gradient descent. As a consequence, optimizing the regularized risk will only contract the descent trajectory (to prove it formally one could go back to the proofs of Bubeck (2015)). Finally, we have described a descent in the space of self-adjoint positive operators, without incorporating any constraints on the rank of Λ. Notice that based on Lemma 35, on can restrict the search of Λ to inside the domain ∥Λ∥ ≤ k λ /λ. 
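The averaged stochastic gradient descent guarantees of Lemma 36 can be illustrated on a toy strongly convex quadratic; the following minimal sketch (step size of order n^{−1/2} and Polyak averaging, all values illustrative) is not the empirical minimizer of the paper but shows the same mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 5000  # dimension and number of gradient samples (illustrative)
theta_star = rng.standard_normal(d)

theta = np.zeros(d)
avg = np.zeros(d)
for i in range(1, n + 1):
    # unbiased stochastic gradient of L(theta) = 0.5 * ||theta - theta_star||^2
    grad = (theta - theta_star) + rng.standard_normal(d)
    theta -= grad / np.sqrt(n)  # constant step of order n^{-1/2}
    avg += (theta - avg) / i    # running (Polyak) average

excess = 0.5 * float(np.sum((avg - theta_star) ** 2))
print(excess)  # small: the averaged iterate has excess risk of order n^{-1/2}
```

Averaging the trajectory kills the stationary noise of the constant-step iterates, which is the mechanism behind the n^{−1/2} rate of (38).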
Finally, if Λ minimizes the loss L, then one can show that thresholding its eigenvalues to make it of rank at most k can only increase the loss L by a bounded multiplicative factor. We note that, without explicit regularization, the previously described stochastic gradient descent algorithm with early stopping has a regularization effect that could be studied in the spectral filtering framework of Lin et al. (2020).
D. Examples
This section is devoted to illustrating what T and K are under simple distributions, thanks to harmonic analysis techniques.
D.1. Harmonic analysis on the sequence of bits, a.k.a. the Boolean hypercube
A fine-grained analysis of the role of classical augmentations can be derived in settings that allow precise derivations. We shall focus on invariant data distributions such as the uniform distribution, and on augmentations consisting of permutations or perturbations of coordinates that leave this distribution invariant. While such distributions may lack structure present in real data, they allow for a precise study of the effect of certain architectures and augmentations, which may also partly apply to more realistic data. The study involves the construction of appropriate L² bases that ease the study of the effect of both the kernel operator K and the smoothing operator T defined from augmentations. These are closely related to the study of invariant kernels (see, e.g., Bietti et al., 2021; Bietti, 2022; Mei et al., 2021; Misiakiewicz & Mei, 2022). We will focus here on data that are d-bit inputs on the Boolean cube X = {−1, +1}^d with uniform distribution. To be able to use harmonic analysis tools to their fullest, we assume that inputs are sampled from the uniform distribution on X. In this setting, the space of functions L²(X) = L²(X, R, µ_X) is defined through the usual scalar product, for f, g : X → R,
⟨f, g⟩ = E_{x∼µ_X}[f(x)g(x)] = (1/2^d) Σ_{x∈X} f(x)g(x).
D.2.
The role of augmentations
Let us now analyze the role of augmentations in the definition of T on the Boolean cube. For simplicity and ease of notation, indexing of the bits is taken mod d, e.g., x_{−1} = x_d.
D.2.1. STUDY THROUGH PARITY FUNCTIONS
Parity functions. A useful basis of this space is given by the parity functions, which can be seen as Fourier functions in this L²-space (O'Donnell, 2014). They are defined for each subset S ⊆ [d] as counting the parity of x within this set:
χ_S(x) = Π_{i∈S} x_i. (44)
Lemma 40. The parity functions χ_S form an orthonormal basis of L²(X).
Proof. It is straightforward to check that ⟨χ_S, χ_S⟩ = 1. If S ≠ S′, then w.l.o.g. there is an i ∈ S \ S′, and we have
⟨χ_S, χ_{S′}⟩ = E_x[x_i χ_{S\{i}}(x) χ_{S′}(x)] = E_{x_i}[x_i] E_{x_{−i}}[χ_{S\{i}}(x) χ_{S′}(x)] = 0.
This proves the orthogonality of the basis.
Let us begin with augmentations that are easy to study in the parity basis.
Proposition 41 (Random noise). Consider the flip of each bit of x with probability p, formally via the operation
B^p_y(x) = x ⊙ y, y ∼ Ber({−1, +1}, p)^{⊗d}, (45)
where the operation x ⊙ y applies pointwise multiplication and the distribution Ber({−1, +1}, p) returns the value −1 with probability p and +1 with probability 1 − p. Under the augmentations ξ = X ⊙ y, T is diagonalized in the parity basis with
T χ_S = (1 − 2p)^{2|S|} χ_S. (46)
In other terms, T applies a factor (1 − 2p)^{2|S|} that reduces the effect of higher-order Fourier functions.
Proof. Recall the formula g^⊤ T f = E_X E_{ξ,ξ′}[⟨f(ξ), g(ξ′)⟩ | X]. As a consequence, with y, y′ denoting the noise strings (each bit equal to −1 with probability p) and S △ S′ = (S ∪ S′) \ (S ∩ S′),
χ_S^⊤ T χ_{S′} = E_X[E_{y,y′}[χ_S(X ⊙ y) χ_{S′}(X ⊙ y′)]]
= E_X[E_{y,y′}[Π_{i∈S} X_i y_i · Π_{j∈S′} X_j y′_j]]
= E_X[E_y[Π_{i∈S△S′} X_i y_i]] · E_{y,y′}[Π_{i∈S∩S′} y_i y′_i]
= E[χ_{S△S′}(X)] · (1 − 2p)^{|S△S′|} (1 − 2p)^{2|S∩S′|}
= (1 − 2p)^{2|S|} δ_{S,S′}.
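This computation can be cross-checked by brute force; the minimal sketch below (toy values d = 4, p = 0.2) evaluates ⟨χ_S, Tχ_S⟩ = E_X[(Aχ_S(X))²], using the square root A with Af(x) = E_ξ[f(ξ) | X = x], so the factor (1 − 2p)^{|S|} applied by A appears squared in T:

```python
import itertools
import numpy as np

d, p, S = 4, 0.2, (0, 1, 3)  # toy values

def chi(S, x):
    out = 1
    for i in S:
        out *= x[i]
    return out

def A_chi(S, x):
    # A chi_S(x) = E_y[chi_S(x * y)] under independent bit flips
    total = 0.0
    for y in itertools.product([1, -1], repeat=d):
        prob = np.prod([p if yi == -1 else 1 - p for yi in y])
        total += prob * chi(S, [xi * yi for xi, yi in zip(x, y)])
    return total

# <chi_S, T chi_S> = E_X[(A chi_S(X))^2] since T = A^T A
val = np.mean([A_chi(S, x) ** 2
               for x in itertools.product([1, -1], repeat=d)])
print(val, (1 - 2 * p) ** (2 * len(S)))  # both equal 0.6^6
```

The exhaustive enumeration is exact here, so the two numbers agree up to floating-point error.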
Therefore, in the case of bit-flip augmentations, T is diagonalized in the parity basis. Proposition 42 (Cropping/Masking). Consider the cropping operation within a window of size w, formally defined as [M w a (x)] i = x i if i ∈ [a, a + w) Ber({−1, +1}, 0.5) otherwise , a ∼ U ([d]) ,(47) where [a, a + w) = {a, a + 1, . . . , a + w − 1}, a is drawn from the uniform distribution over [d], and the distribution Ber({−1, +1}, 0.5) returns a random bit with equal probability for +1 and −1 thus effectively masking the values outside of the window in [a, a + w). Under the augmentations ξ = M w a (X), T is diagonalized in the parity basis with T χ S = max {1 + w − diam(S), 0} 2 d 2 · χ S with diam(S) = min {v | v, a ∈ [d]; S ⊆ [a, a + v)} .(48) In other terms, the action of cropping effectively removes any dependence on the kernel with parity functions of high order whose support falls outside the windows of size w. Proof. In this setting, χ ⊤ S T χ S ′ = E X [E a,b [χ S (M w a (X))χ S ′ (M w b (X))]] = 1 d 2 d a,b=1 E X,ν,ν ′   i∈S∩[a,a+w) x i i ′ ∈S\[a,a+w) ν i ′ j∈S ′ ∩[b,b+w) x j j ′ ∈S ′ \[b,b+w) ν ′ j ′   = 1 d 2 d a,b=1 1 S⊆[a,a+w) 1 S ′ ⊆[b,b+w) E X [χ S (X)χ S ′ (X)] = 1 d 2 d a,b=1 1 S⊆[a,a+w) 1 S ′ ⊆[b,b+w) δ S,S ′ = 1 d d a=1 1 S⊆[a,a+w) 2 δ S,S ′ . The count of the sum relates to the diameter of S. Proposition 43 (2D Cropping). Consider that 2D setting X = {−1, +1} m×d where inputs are organized into an m × d grid. Consider the cropping operation to a window of size v × w, formally [M v×w a,b (x)] i+jm = x i+jm if i ∈ [a, a + v), j ∈ [b, b + w) Ber({−1, +1}, 0.5) otherwise , (a, b) ∼ U ([m] × [d]) .(49) Under the augmentation ξ = M v×w a,b (X), T is diagonalizable in the parity basis and T χ S = 1 m 2 d 2 (1 + v − diam e1 S) 2 + · (1 + w − diam e2 S) 2 + χ S ,(50) where diam e1 S is the diameter of S projected onto the first dimension. Proof. This follows from the proof of the 1D case. Proposition 44 (Flipping). 
Consider the operator which, with probability p, flips the indices into reverse order, formally
[R(x)]_i = x_{−i}. (51)
Under the augmentation ξ = R(X),
T = (1 − 2p + 2p²) I + 2p(1 − p) J, (52)
where J is the involution that maps any set S to its mirror S̄ = {−i | i ∈ S}. In this setting, T is diagonalized by the (χ_S + χ_{S̄})/√2 and (χ_S − χ_{S̄})/√2 for S ⊆ [d].
Proof. In this setting,
χ_S^⊤ T χ_{S′} = ((1 − p)² + p²) E_X[χ_S(X) χ_{S′}(X)] + 2p(1 − p) E_X[χ_{S̄}(X) χ_{S′}(X)] = (1 − 2p + 2p²) δ_{S,S′} + 2p(1 − p) δ_{S̄,S′},
which explains the lemma.
Remark 45. Up to now, we have studied all the operators in the space L²(X, R, µ_X), while the main text considered those operators in L²(X, R, µ_Ξ). This is justified by the fact that all the transformations studied earlier leave the uniform distribution invariant, hence L²(µ_X) = L²(µ_Ξ).
D.2.2. STUDY OF TRANSLATIONS THROUGH CYCLIC PARITIES
In order to study augmentations that consist of permutations, and more specifically translations, the parity basis is not adapted to diagonalize T. Instead, we define below a different basis that incorporates cyclic symmetries (Misiakiewicz & Mei, 2022). We note that a similar study may be carried out for other distributions, e.g., uniform on the sphere, on products of spheres, or on the torus (Bietti et al., 2021; Bietti, 2022; Favero et al., 2021; Mei et al., 2021).
Cyclic parity functions. The functions χ_S are polynomials that can be grouped by their degree ℓ = |S| into spaces V_{d,ℓ}, whose direct sum yields the full L²(X) space, with dim V_{d,ℓ} = |{S ⊆ [d] : |S| = ℓ}| = (d choose ℓ). Those different spaces can be further decomposed into orbits under the action of a group. In particular, for the group of permutations G = S_d, we define the action A : G × X → X, denoted A(a, x) = a · x, as (a · x)_i = x_{a^{−1}(i)}. To give a concrete example of the study of augmentations through harmonic analysis, let us focus more specifically on the action of translations, which form a subgroup of permutations.
For simplicity, we will denote this group [d], understood as Z/dZ, acting on X as (a · x)_i = x_{i−a}, where i − a is understood modulo d. Define the orbits of this action as {S + a | a ∈ [d]} for S ⊆ [d]. On those different orbits, one can define the following "cyclic parities" ψ_{m,S} : X → C:
ψ_{m,S} = (1/√k_S) Σ_{k∈[k_S]} e^{2iπkm/k_S} χ_{S+k} = (√k_S/d) Σ_{k∈[d]} e^{2iπk(md/k_S)/d} χ_{S+k}, where k_S = |orb(S)|, (54)
where m ∈ [k_S] and S is taken as a representative of an orbit.
Lemma 46. The cyclic parities (ψ_{m,S}), for m ∈ [k_S] and S in a set of representatives of the orbits of the translation action, form an orthogonal basis of L²(X, C, µ), where µ is the uniform measure on X. Moreover, they diagonalize the operators A : L² → L² defined as Af(x) = f(a · x) for any a ∈ [d].
Proof. The first part follows from the fact that L²(X) can be decomposed into the direct sum of the V_{d,ℓ} for ℓ ∈ [0, d], and that each subspace can be decomposed along the orbits of the translation action orb(S) = {S + a | a ∈ [d]} (note that translations do not change the cardinality of the sets S). Those latter spaces can be parameterized through the discrete Fourier transform, yielding the ψ_{m,S}. A natural way to "find" these bases is to try to diagonalize an operator T such that the matrix (χ_S^⊤ T χ_{S′})_{S,S′⊆[d]} is block diagonal, where each block corresponds to a circulant matrix on an orbit, which can be diagonalized with the discrete Fourier transform. This is especially the case for the operator of the lemma, for which Aχ_S = χ_{S+a}. The matrix entry above is only nonzero when orb(S) intersects orb(S′), which implies orb(S) = orb(S′), thereby yielding a block-diagonal structure. Indexing the elements of the i-th block by S_{i,k} = S_i + k for k ∈ [d], we have
χ_{S_{i,k}}^⊤ A χ_{S_{i,k′}} = 1_{S_{i,k} = S_{i,k′} + a} = 1_{S_i + k = S_i + k′ + a} = 1_{k − k′ = a},
which only depends on the value of k − k′. Therefore, each block above is a circulant matrix, which is diagonalized by the discrete Fourier transform.
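This last step can be checked concretely: any circulant matrix is diagonalized by the discrete Fourier transform, with eigenvalues given by the DFT of its symbol. A minimal sketch (random symbol, illustrative size):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 7
c = rng.standard_normal(d)

# circulant matrix C[i, j] = c[(i - j) % d]
C = np.array([[c[(i - j) % d] for j in range(d)] for i in range(d)])

# column m of the DFT matrix is an eigenvector with eigenvalue fft(c)[m]
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
eigvals = np.fft.fft(c)  # hat c(m) = sum_k c_k exp(-2i pi k m / d)
err = float(np.abs(C @ F - F * eigvals).max())
print(err)  # zero up to floating-point error
```

Broadcasting F * eigvals scales each column of F by the corresponding DFT coefficient, so the residual measures how far the DFT columns are from being eigenvectors.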
The eigenvectors of this matrix are v m = 1 √ k S k∈[k S ] e 2iπkm/k S e k , where k S = |orb(S)| , for m ∈ [k S ] and the corresponding eigenvalues read µ m = k∈[k S ] c k S −k exp 2iπkm k S , where c i = 1 i=a , Using the fact that we wrote those matrices for e i ≃ χ S+i yields the lemma. The study of the operator T can be simplified thanks to its square root A : L 2 (µ Ξ ) → L 2 (µ X ) formally defined by Af (x) = E ξ [f (ξ) | X = x](55) and verifying ⟨f, T g⟩ L 2 (µΞ) = E X E ξ,ξ ′ [f (ξ)g(ξ ′ )|X] = ⟨Af, Ag⟩ L 2 (µ X ) .(56) This decomposition will be particularly useful, when µ X is invariant under the action of permutations, which implies µ X = µ Ξ =: µ. Lemma 47. In the uniform Boolean setting, when augmentations are defined as ξ = a · X where a is a permutation sampled from the probability distribution p ∈ ∆ S d , T f (x) = a,b∈S d p(a)p(b)f ((a −1 b) · x). Proof. The square root of T is defined as Af (x) = a∈S d p(a)f (a · x). Let us focus on the case where p(b) = δ a=b , using the fact that µ X is the uniform measure, hence is left invariant by translation, we compute the adjoint of A with ⟨Af, g⟩ L 2 (µ X ) = 1 2 d x∈X Af (x)g(x) = 1 2 d x∈X p(a)f (a · x)g(x) = 1 2 d x∈X p(a)f (a · x)g(a −1 · a · x) = 1 2 d x∈X p(a)f (x)g(a −1 · x) = f, x → p(a)g(a −1 · x) L 2 (µ X ) . = f, A ⊤ g L 2 (µΞ) . In the general case, we get by linearity, A ⊤ f (x) = a∈S d p(a)f (a −1 · x). Computing T = A ⊤ A leads to the result. Remark that if we further assume that p is symmetric (i.e., p(a) = p(a −1 )), then we have A ⊤ = A, so that T = A 2 . This allows us to characterize more finely the effect of translation on the operator T . Proposition 48 (Translations). Consider the translation operator defined formally as [T a (x)] i = x i−a , a ∼ p ∈ ∆ [d](57) Under the augmentation ξ = T a (X), T is diagonalized in C by the cyclic parity functions (54). 
T ψ_{m,S} = (d²/k_S²) |p̂(md/k_S)|² ψ_{m,S}, (58)
where p̂ is the Fourier transform of p, defined for ω ∈ [d] by
p̂(ω) = Σ_{a∈[d]} p(a) exp(−2iπaω/d). (59)
Proof. In the case of translations, we have Af(x) = Σ_{a∈[d]} p(a) f(a · x) = Σ_{a∈[d]} p(a) A_a f(x), where A_k is the operator that associates f with x → f(k · x); it is a translation operator, and retaking the proof of Lemma 46, A_k ψ_{m,S} = e^{−2iπkm/k_S} ψ_{m,S}. This leads to
Aψ_{m,S} = Σ_{a∈[k_S]} p(a) exp(−2iπam/k_S) ψ_{m,S} = Σ_{a∈[d]} (d/k_S) · p(a) exp(−2iπam/k_S) ψ_{m,S} = (d/k_S) · p̂(md/k_S) ψ_{m,S}.
Since ⟨f, Tg⟩ = ⟨Af, Ag⟩ by (56), squaring the modulus of this factor leads to (58).
The coefficients ν_ℓ can be found by computing the scalar product between h and Q_ℓ in L²(τ):
ν_ℓ = ⟨h, Q_ℓ⟩_{L²(τ)}. (64)
Finally, using the fact that, in the uniform setting, Kf(x) = E_{x′∼µ_X}[k(x, x′) f(x′)] where k(x, x′) = φ(x)^⊤φ(x′), we have
Kχ_S(x) = E_Y[h(⟨x, Y⟩) χ_S(Y)] = Σ_ℓ ν_ℓ Σ_{S′⊆[d], |S′|=ℓ} χ_{S′}(x) E[χ_{S′}(Y) χ_S(Y)] = ν_{|S|} χ_S(x).
This ends the proof of this lemma. Lemma 49 can also be shown on the sphere; its proof showcases the Q_ℓ, which act as normalized Legendre (or Gegenbauer) polynomials. See, e.g.,
The features (60) are rich enough to describe the neural tangent kernels of simple architectures with fully connected or convolutional layers. First, we describe the general form of such NTKs below.
Proposition 51 (Linearization of simple networks). Define a simple neural architecture as
f(x) = √(∆/(Nωd)) Σ_{i∈[N]} Σ_{k∈[d/∆]} a_{ik} Σ_{s∈[ω]} σ(⟨w_i, x^{(q)}_{(k∆+s)}⟩), (65)
where x^{(q)}_{(k)} = (x_k, x_{k+1}, ..., x_{k+q−1}) is a local patch of size q (with indices defined modulo d), the w_i are weights initialized from a rotation-invariant distribution W, σ : R → R is an activation function, ω ∈ N is the size of the average pooling window, ∆ ∈ N is the pooling stride, and N is the number of channels.
The linearization of this network near initialization yields the kernel
k(x, x′) = φ(x)^⊤φ(x′) = (∆/(dω)) Σ_{k∈[d/∆]} Σ_{s,s′∈[ω]} h(⟨x^{(q)}_{(k∆+s)}, x′^{(q)}_{(k∆+s′)}⟩/q), (66)
where
h(⟨u, v⟩/q) = E_{w∼W}[σ(⟨u, w⟩/√q) σ(⟨v, w⟩/√q) + σ′(⟨u, w⟩/√q) σ′(⟨v, w⟩/√q) · ⟨u, v⟩/q]. (67)
Proof. Such a linearization can be found, e.g., in Proposition 3 of Misiakiewicz & Mei (2022).
Proposition 52 (Linearization of a fully connected network). A one-hidden-layer fully connected network f_FC(x) = (1/√N) Σ_{i∈[N]} a_i σ(w_i^⊤x) can be linearized as a dot-product kernel, with k_FC(x, y) = h(x^⊤y/d) for h defined in (67). Moreover, the resulting integral operator K_FC is diagonalized in the parity basis as K_FC χ_S = ν_h(d, |S|) χ_S, where the coefficients are given by ν_h(d, ℓ) = ⟨h, Q_ℓ⟩_{L²(τ)} as in (64). Note that the eigenvalues ν_h(d, ℓ) are non-increasing in ℓ, and for fixed ℓ and large d they satisfy ν_h(d, ℓ) = Θ_d(d^{−ℓ}). More generally, it can be shown that
lim_{d→∞} d^ℓ ν_h(d, ℓ) = (d^ℓ/dt^ℓ) h(t) |_{t=0}.
Proof. The first part is a direct consequence of the previous proposition with ω = 1 and q = ∆ = d. The second part is due to Lemma 50 and (63). For the statements on the eigenvalues, see Yang & Salman (2019).
Example 11 (Interplay between the kernel of a CNN and translation augmentations). Consider the setting of Example 10, with translations sampled from a localized window. For a single-layer CNN with patch width q, the eigenfunctions correspond to parity functions χ_S, or cyclic parities ψ_{m,S}, with diam(S) ≤ q, with corresponding eigenvalue ν_h(q, ℓ)(q + 1 − diam(S))/d. Here, the eigenfunctions ψ_{m,S} of T for S with diameter larger than q are completely eliminated, regardless of the regularization strength λ. For eigenfunctions ψ_{m,S} with diam(S) ≤ q, the CNN shrinks the contribution to |p̂(m)|² − λ(ν_h(q, ℓ)(q + 1 − diam(S))/d)^{−1}, which shrinks more when diam(S) is larger.
Figure 8.
Illustration of the interplay between T and K as a function of λ, where K is the NTK of a 2-layer ReLU network and T performs crops of window size 8 on 12-bit inputs. Here we plot the eigenvalues of three different parity functions in the eigenbasis of both operators. Parity functions with large diameters have smaller eigenvalues for T (here, the parity function with the largest diameter is χ_{1,6}(X) = X₁X₆). Eigenvalues of K, in contrast, bias towards parities supported over fewer bits. Therefore, small regularization biases towards parities with small diameter, whereas added regularization penalizes parities with high cardinality.

D.5. Remark on the Sphere setup

In experiments, we also consider a setup with uniform data on the sphere X = S^{d−1}, with augmentations consisting of permutations, and a dot-product kernel φ(x)^⊤φ(y) = h(x^⊤y). A natural choice of basis functions for L²(X) in this case are spherical harmonics (Efthimiou & Frye, 2014). These consist of homogeneous harmonic polynomials and, similarly to the parity case, can be grouped by degree, leading to orthogonal spaces V_{d,ℓ} of spherical harmonics of any degree ℓ ≥ 0, with

N(d, ℓ) := dim V_{d,ℓ} = ((2ℓ + d − 2)/ℓ) · C(ℓ + d − 3, d − 2).

It is well known that for dot-product kernels, K is diagonal in such a basis (Smola et al., 2000; Bach, 2017), with decaying eigenvalues that only depend on the degree ℓ. These are given, analogously to the hypercube setting, by ν_h(d, ℓ) = E_{t∼τ}[h(t) Q_{ℓ,d}(t)], where the Q_{ℓ,d} are now Legendre (Gegenbauer) polynomials of degree ℓ, orthogonal w.r.t. a different measure dτ(t) = (1 − t²)^{(d−3)/2} dt over [−1, 1]. Since the spaces V_{d,ℓ} are left stable by the operator T = A^⊤A, it is possible to show that there exists a choice of spherical harmonics that also diagonalizes T (see, e.g., Bietti et al., 2021, Lemma 12).
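As a quick numerical aside (a sketch added here, not taken from the paper), the dimension count N(d, ℓ) above can be cross-checked against the equivalent classical expression C(d+ℓ−1, ℓ) − C(d+ℓ−3, ℓ−2); for d = 3 both reduce to the familiar 2ℓ + 1.

```python
from math import comb

def dim_harmonics(d: int, ell: int) -> int:
    """N(d, l) = (2l + d - 2)/l * C(l + d - 3, d - 2) for l >= 1, and N(d, 0) = 1."""
    if ell == 0:
        return 1
    return (2 * ell + d - 2) * comb(ell + d - 3, d - 2) // ell

def dim_harmonics_alt(d: int, ell: int) -> int:
    """Equivalent classical form: C(d + l - 1, l) - C(d + l - 3, l - 2)."""
    low = comb(d + ell - 3, ell - 2) if ell >= 2 else 0
    return comb(d + ell - 1, ell) - low
```

For d = 3 this produces 1, 3, 5, 7, …, i.e., the dimensions 2ℓ + 1 of the classical spherical-harmonic spaces.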
We may then see the eigenvalues λ_{ℓ,j} of T in this basis as capturing the invariance of the corresponding harmonic Y_{ℓ,j}; in particular, Y_{ℓ,j} is invariant to all augmentations when λ_{ℓ,j} = 1, and non-invariant or only partially invariant when λ_{ℓ,j} < 1. Ordering the λ_{ℓ,j} at fixed ℓ by decreasing j, the interplay between T and K then resembles the one described, e.g., in Figure 3.

E. Experiments

E.1. Implementation details

Previously, we extensively studied the embedding of H in L² defined as S: H → L²; θ ↦ φ(·)^⊤θ. Given samples (ξ_ij)_{i≤n, j≤m}, all the action on H can be reduced to the span of the φ(ξ_ij) (which is known as the representer theorem), and S can be reduced to the embedding Ŝ: H → R^{nm}; θ ↦ ((1/nm) φ(ξ_ij)^⊤θ)_{ij}. This leads to the implementation

T̂_λ = β·I + (1 − β)T̂ − λK̂.

Figure 9. Extending Figure 4. The i-th row represents the i-th eigenfunction of T_λ (ordered by decreasing eigenvalues). Regularization λ increases over the columns as λ ∈ {0, .1, 1, 10, 100}. Small λ biases towards functions invariant to the translation augmentation chosen here, whereas large λ biases towards smoother functions on the sphere, corresponding to low-order spherical harmonics in this setting. The last two on the right are artifacts of the instability of the pseudo-inverse for K (leading to the implementation φ^⊤K^{−1}φ = 0, while we have defined φ^⊤K^{−1}φ = +∞ when Kφ = 0).

Here, T̂ ∈ R^{nm×nm} is the matrix defined as follows, where we index elements of R^{nm} by ij with i ∈ [n] and j ∈ [m]:

T̂ = I + Σ_{ijk} e_{ij} e_{ik}^⊤,

and K̂ is the Gram matrix defined as nm · e_{ij}^⊤ K̂ e_{kl} = k(ξ_ij, ξ_kl) = φ(ξ_ij)^⊤φ(ξ_kl). Note that the matrix T̂ − I can be seen as the adjacency matrix of the graph that connects augmentations if and only if they come from the same input. Equivalently, T̂ can be seen as a Laplacian matrix. An eigenvector of T̂_λ in R^{nm} is projected back onto L² thanks to SŜ^† = SŜ^⊤(ŜŜ^⊤)^{−1} = K_x^⊤K̂^{−1}, where nmK_x = (φ(x)^⊤φ(ξ_ij))_{ij} ∈ R^{nm}.

Figure 10.
VCReg with neural networks. Contour plots of the minimizer ψ: X → R of L for β = 1 (left) and β = 0 (right) with a two-layer fully connected neural network, when k = 1, X = R², X is distributed according to a half-moon structure, and ξ = X + ε for a small noise ε. Augmentations are represented as black dots, connected by a line when they come from the same input X.

E.2. Experiment details for Figure 5

We consider data uniformly distributed on the sphere S^{d−1} with d = 8, augmentations consisting of cyclic shifts of {−1, 0, 1}, and a dot-product kernel of the form k(x, y) = (1 + x^⊤y)κ(x^⊤y), with κ(u) = 1 − arccos(u)/π. The target functions f*_ℓ are given by

f*_1(x) = (1/3) Σ_{j=1}^{3} Q_{1,d}(x_j),    f*_3(x) = (1/d) Σ_{j=1}^{d} Q_{3,d}(x_j),

where the Q_{ℓ,d} are the Gegenbauer polynomials introduced in Appendix D.5. Note that f*_3 is a cyclic-invariant spherical harmonic of degree 3, while f*_1 is a non-invariant spherical harmonic of degree 1 (though it has some local shift stability). Labels on the downstream tasks are generated from the f*_ℓ without noise. Figure 5 shows the downstream relative excess risk ∥f_n − f*_ℓ∥²_{L²}/∥f*_ℓ∥²_{L²}, approximated over 1500 test datapoints, as a function of the regularization parameter λ used in pretraining. We use the same n = 300 samples for pretraining and downstream linear prediction. Pretraining uses all 3 augmentations for each sample, with a representation dimension k = 20. The downstream problem is solved with kernel ridge regression using the induced kernel from pretraining, and the ridge parameter is tuned on test samples to avoid dealing with model selection issues.

E.3. Experiment details for Figure 6

Figure 6 considers a classification problem involving four classes, with a pretraining task specifically constructed to design a representation ψ: X → R^k for k = 4 that solves this particular classification problem.
The dataset we consider is the half-moon dataset, where X = Z + 1_{⟨Z,e₁⟩>0} e₂ + U, with Z ∼ U(S₂) and U ∼ N(0, σ²I) for σ = 0.1. Augmentations apply Gaussian noise, ξ = X + V for V ∼ N(0, σ²I) with σ = 0.1.

Figure 11. Behavior of Figure 5 with a neural network. [Plot: downstream L² error against the number of stopped epochs, for ℓ = 2 (non-invariant) and ℓ = 3 (invariant).] The regularization parameter λ is replaced by early stopping of SGD. We consider a neural network with two hidden layers, both made of 200 neurons. Optimization was performed with gradient descent with a constant step size. Randomness due to weight initialization is averaged over 100 trials, the standard deviation being shown on the figure.

This setting corresponds to that with a Laplacian where L(ψ) ≃ ∥∇ψ∥²_{L²(ρ_X)}. As a consequence, the ideal ψ will correspond to the top eigenvalues of the Laplacian; i.e., the first two span the constant functions on both moons, the next two are waves with a single oscillation on a given moon, etc. In essence, one can view the harmonics on L²([0, 1]) as x ↦ cos(2πωx + χ) for χ ∈ {0, π/2} and ω ∈ N, deforming the segment [0, 1] to match one moon, and duplicating this basis on the other moon. In this setting, eigenfunctions are not analytic, since analytic functions cannot be dissociated on two different manifolds (e.g., a locally constant analytic function is globally constant). As a consequence, searching for the eigenfunctions with the radial basis function kernel (Ψ only contains analytic functions in this case (Sun & Zhou, 2008)) requires proper tuning of the regularization parameter as a function of the number of samples. This explains our choice of the exponential kernel in this experiment, which corresponds to φ(x)^⊤φ(y) = exp(−∥x − y∥/σ) and is associated with a looser Sobolev space that is still a reproducing kernel Hilbert space (in R², this is H¹).
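The half-moon inputs and their Gaussian augmentations described above can be generated in a few lines. The sketch below is an illustrative reconstruction (the function name, moon offset, and seed are our own choices, not from the paper):

```python
import numpy as np

def sample_halfmoons(n: int, m: int, sigma: float = 0.1, seed: int = 0):
    """Sample n inputs X from a two-moons distribution and m Gaussian
    augmentations xi_ij = X_i + V_ij per input."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi, size=n)               # angle on the half-circle
    z = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # Z on the unit circle
    flip = rng.integers(0, 2, size=n).astype(bool)        # which moon each point belongs to
    z[flip] *= -1.0                                       # reflect to get the lower arc
    z[flip] += np.array([1.0, 0.5])                       # offset the second moon
    x = z + sigma * rng.standard_normal((n, 2))           # noisy inputs X = Z + U
    xi = x[:, None, :] + sigma * rng.standard_normal((n, m, 2))  # augmentations
    return x, xi
```

The returned arrays have shapes (n, 2) for the inputs and (n, m, 2) for the augmentations, matching the (ξ_ij)_{i≤n, j≤m} indexing used in Appendix E.1.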
This improves the learning of the top eigenfunctions of T without varying λ, better illustrating the convergence rates of Theorems 1, 2 and 3. In our experiments, we fixed λ = 10^{−3} and the scale σ of the exponential kernel to be about one fifth of the problem diameter. We plot the eigenfunctions of T derived empirically with n_pre = 2000 samples in Figure 13. The classification task aims to learn the four classes described on the left of Figure 12. Class labels include some noise, as indicated by the level lines of the conditional probability of Y as a function of X shown in the middle of Figure 12. A training set example is shown on the right of this figure with n_down = 100. In the experiments we fix k = 5, which ensures that there is a strong correlation in performance between the pretraining and downstream tasks. The downstream task is optimized with a least-squares surrogate: we learn g: X → R⁴ that minimizes the least-squares error E[∥g(X) − e_Y∥²] before decoding it as f(X) = arg max_{i∈[4]} g_i(X) to get an estimate of the ideal mapping f*: X → Y. We report the downstream generalization error on both the least-squares (surrogate) loss and the 0-1 loss in Figure 14. This error is computed as the average over 100 trials on the pretraining task and 200 trials on the downstream task.

[Figure 12 panels: class regions; x ↦ P(Y = 1 | X = x); x ↦ P(Y = 2 | X = x); a training set.] Figure 12. Setting of Figure 6. The downstream task consists in learning four classes in X = R², which are represented on the left. Those classes are generated with noise. The level lines of the conditional distribution of Y given X are represented in the middle for the left moon; the right moon follows the same structure. A training set example is on the right.

[Figure 13 panels: Eigenvalue #0 of T_λ through Eigenvalue #4 of T_λ.] Figure 13.
Eigenvalues of T_λ estimated empirically with 2000 pretraining samples on the problem that yields the empirical rates displayed in Figure 6.

Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

Figure 1. Effect of augmentations and architecture. t-SNE of representations learned on MNIST with no augmentations (left) or with rotations and an MLP (middle) or a CNN (right). The representations depend on both the augmentations and the architecture.

(2022) provide theoretically friendly characterizations of many self-supervised learning settings, including closed-form solutions of representations in the kernel setting. For contrastive learning, SSL was first theoretically analyzed by Arora et al. (2019); Tosh et al. (2021a;b); Tian et al. (2021).

Figure 4. Interplay on the sphere. Level lines of the 7-th eigenfunction of T_λ for three different λ. Augmentations consist of translations of the x, y, z coordinates together with Gaussian perturbations. K is the integral operator associated with the radial basis function. Without regularization (left), the eigenfunction is highly localized at clusters corresponding to the action of the augmentations. Increasing the regularization biases towards smoother harmonic eigenfunctions of K (middle and right).

Figure 5. Trade-off on downstream errors. Effect of pretraining regularization λ on the empirical downstream error for two tasks on the sphere S⁷. The targets f*_ℓ are polynomials of degree ℓ ∈ {1, 3}.

Figure 6. Empirical downstream performance on a simple task (detailed in Appendix D) depends on the number of both downstream samples (x-axis) and pretraining samples (y-axis), in log-scale.
Along the left-hand side of the plot, convergence rates of n_pre^{−1/2} are observed with respect to the number of pretraining samples (Theorem 3), while along the top, convergence rates of n_down^{−1} are observed with respect to the number of downstream samples (Theorem 1). At the bottom, a saturation phenomenon is observed where added downstream samples do not result in noticeable benefits, as the excess risk stalls at R(Π_F f*) − R(f*) > 0.

References

Arora, S., Khandeparkar, H., Khodak, M., Plevrakis, O., and Saunshi, N. A theoretical analysis of contrastive unsupervised representation learning. In ICML, 2019.
Bach, F. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629-681, 2017.
Bach, F. Learning Theory from First Principles. To appear at MIT Press, 2023.
Balestriero, R. and LeCun, Y. Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods. In NeurIPS, 2022.
Bardes, A., Ponce, J., and LeCun, Y. VICReg: Variance-invariance-covariance regularization for self-supervised learning. In ICLR, 2022.
Bartlett, P. and Mendelson, S. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 2002.
Bartlett, P., Jordan, M., and McAuliffe, J. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 2006.
Bietti, A. Approximation and learning with deep convolutional models: a kernel perspective. In International Conference on Learning Representations, 2022.
Bietti, A. and Mairal, J. On the inductive bias of neural tangent kernels. Advances in Neural Information Processing Systems, 2019.
Bietti, A., Venturi, L., and Bruna, J. On the sample complexity of learning under invariance and geometric stability. In Advances in Neural Information Processing Systems, 2021.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.
D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. In Advances in neural information processing systems, 2020. Bubeck, S. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 2015. Bun, J., Bouchaud, J.-P., and Potters, M. Cleaning large correlation matrices: Tools from random matrix theory. Physics Reports, 2017. ICASSP, 2023. Caponnetto, A. and De Vito, E. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 2007. Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. In Advances in Neural Information Processing Systems, 2020. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. In ICML, 2020. Chi, Y., Lu, Y., and Chen, Y. Nonconvex optimization meets low-rank matrix factorization: An overview. IEEE Transactions on Signal Processing, 2019. Coifman, R. and Lafon, S. Diffusion maps. Applied and Computational Harmonic Analysis, 2006. Davis, C. and Kahan, W. The rotation of eigenvectors by a perturbation. SIAM Journal on Numerical Analysis, 1970. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, 2019.Efthimiou, C. and Frye, C. Spherical harmonics in p dimensions. World Scientific, 2014. the orthogonal projector on the image of B, and σ i (A) the i-th singular value of A (monotonically ordered with σ 1 (A) the biggest). 
The last inequality is due to the Courant-Fisher min-max principle, This inequality can be achieved with Π Bi the projection on the i-th eigenspace of A and ∥B i ∥ op = σ i (A). In other terms, B should match the first k positive eigenvalues of A. In the case where A has less than k positive eigenvalues, then B should match all the positive eigenvalues and be null on the range of A − . in their Appendix F, that would reuse the work of Schiebinger et al. (2015) considering the covariance operator with featureφ would leverage closed form solutions to both the population and empirical risk and use concentration inequalities on integral operators, as in Cabannes et al. (2021b); Pillaud-Vivien & Bach Lemma 35 . 35When minimizing a regularized risk, one can reduce the search of Θ under the constraint ∥Λ∥ HS ≤ λ −1 k.Proof. When regularizing we have ∥Λ∥ HS = Θ ⊤ Θ HS ≤ ∥Θ∥ op ∥Θ∥ HS ≤ ∥Θ∥ 2 HS , and for minimizer of the empirical or population riskλ ∥Θ∥ 2 ≤ L(SΘ) + λ ∥Θ∥ 2 ≤ L(0) = k,which explains the statement of the lemma.The attentive reader would remark that compared to the bound of HaoChen et al.(2021)we gain a factor k −1/2 . Indeed, this factor could be recovered in HaoChen et al. (2021) by using the techniques of Maurer (2016) rather than a trivial bound on Rademacher complexity of vector-valued function spaces in k max i∈[k]R (F i ) with HaoChen et al. (2021) notations. Smola et al. (2000); Bietti et al. (2021); Mei et al. (2021) for details. Note that for common kernel functions on the sphere, such as the ones appearing in the NTK, the ν k decay polynomially with k (Bach, 2017; Bietti & Mairal, 2019). Figure 14 . 14Averaged downstream error computed over 100 trials on the pretraining task and 200 trials on the downstream task, for both the least-squares loss (right) and the 0-1 loss (left). Maurer, A. A vector-contraction inequality for Rademachercomplexities. In Algorithmic Learning Theory, 2016.Mei, S., Misiakiewicz, T., and Montanari, A. 
Learning with invariances in random features and kernel models. In Conference on Learning Theory, 2021.Meir, R. and Zhang, T. Generalization error bounds for bayesian mixture algorithms. Journal of Machine Learning Research, 2003. Micchelli, C., Xu, Y., and Zhang, H. Universal kernels. Journal of Machine Learning Research, 2006. Misiakiewicz, T. and Mei, S. Learning with convolution and pooling operations in kernel methods. In Advances in Neural Information Processing Systems, 2022. Mourtada, J. and Rosasco, L. An elementary analysis of ridge regression with random design. Comptes Rendus. Mathématique, 2022. Mourtada, J., skevičius, T. V., and Zhivotovskiy, N. Distribution-free robust linear regression. Mathematical Statistics and Learning, 2022. O'Donnell, R. Analysis of boolean functions. Cambridge University Press, 2014. Ostrovskii, D. and Bach, F. Finite-sample analysis of mestimators using self-concordance. Electronic Journal of Statistics, 2018. Pillaud-Vivien, L. and Bach, F. Kernelized diffusion maps. ArXiv, 2023. Pinelis, I. and Sakhanenko, A. Remarks on inequalities for large deviation probabilities. Theory of Probability and Its Applications, 1986.Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. Rigollet, P. Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research, 2007. Saunshi, N., Ash, J., Goel, S., Misra, D., Zhang, C., Arora, S., Kakade, S., and Krishnamurthy, A. Understanding contrastive learning requires incorporating inductive biases. In ICML, 2022. Schiebinger, G., Wainwright, M., and Yu, B. The geometry of kernelized spectral clustering. The annals of Statistics, 2015. Scholkopf, B. and Smola, A. 
Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2001. Simard, P., Victorri, B., LeCun, Y., and Denker, J. Tangent prop -a formalism for specifying selected invariances in an adaptive network. In NeuRIPS, 1991. Simon, J., Knutins, M., Ziyin, L., Geisz, D., Fetterman, A., and Albrecht, J. On the stepwise nature of self-supervised learning. ArXiv, 2023. Smale, S. and Zhou, D.-X. Learning theory estimates via integral operators and their approximations. Constructive Approximation, 2007. Smola, A., Ovári, Z., and Williamson, R. C. Regularization with dot-product kernels. In Advances in neural information processing systems, 2000. Sun, H.-W. and Zhou, D.-X. Reproducing kernel hilbert spaces associated with analytic translation-invariant mercer kernels. Journal of Fourier Analysis and Applications, 2008.Tian, Y. Understanding the role of nonlinearity in training dynamics of contrastive learning. arXiv preprint arXiv:2206.01342, 2022.Tian, Y., Yu, L., Chen, X., and Ganguli, S. Understanding self-supervised learning with dual deep networks, 2021.Tosh, C., Krishnamurthy, A., and Hsu, D. Contrastive learning, multi-view redundancy, and linear models. In Algorithmic Learning Theory, 2021a.Tosh, C., Krishnamurthy, A., and Hsu, D. Contrastive estimation reveals topic posterior information to linear models. JMLR, 2021b.Tropp, J. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 2015.van Engelen, J. and Hoos, H. A survey of semi-supervised learning. Machine Learning, 2020.Vitushkin, A. On Hilbert's thirteenth problem. Proceedings of the USSR Academy of Sciences, 1954.Wen, Z. and Li, Y. Toward understanding the feature learning process of self-supervised contrastive learning. In International Conference on Machine Learning, 2021.Yang, G. and Salman, H. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019. 
¹Meta AI, New York, NY, USA. ²MIT Department of Electrical Engineering and Computer Science, Cambridge, MA, USA. Correspondence to: Vivien Cabannes <[email protected]>.

Footnotes: For convenience, several technicalities, such as measurability, have been deferred to the appendix. While we study here the bias of Tikhonov regularization for simplicity, similar studies can be done for early-stopped gradient descent or stochastic gradient descent when they are cast as spectral filters, as in Lin et al. (2020); see also the literature on optimization for matrix factorization problems (Chi et al., 2019), which has been applied to SSL by Simon et al. (2023). The pretraining and downstream tasks refer to the minimization of L and R, respectively. Note that without regularization, (1 − β)I + βT is not trace-class, so ke will not converge as k increases. Since it leaves the orbit of translations invariant. This proves the lemma.

We now show how different sampling distributions over translations induce varying smoothing effects in the operator T.

Example 9 (Smoothing effect of translations). To see the effect of augmentation strength, consider a distribution p over translations that takes the form p(a) = ωp₀(ωa), where p₀ is a localized window shape (e.g., uniform or Gaussian) that sums to 1. Here ω ≈ 1/∆ is inversely related to the window size ∆, which controls the "strength" or range of augmentations. Here, the squared Fourier coefficients |p̂₀(m)|² typically decay with the frequency m, which shows that T has a smoothing effect penalizing eigenfunctions ψ_{m,S} with larger m, i.e., those which oscillate more quickly. The above formula also highlights that increasing the augmentation strength ∆ will lead to faster decay with m, while leaving the translation-invariant eigenfunctions (m = 0) unaffected.

D.3. The role of architectures

A particularly useful feature space φ to define the linear class of functions Ψ is the set (60) of functions spanned by the parities χ_S, with coefficients given by any sequence (e_S) ∈ R^{2^d}.
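As a concrete sanity check on this feature space (an added sketch, not from the paper), the parity functions χ_S(x) = ∏_{i∈S} x_i can be enumerated on a small hypercube and verified to form an orthonormal family in L²(X) under the uniform distribution:

```python
import itertools
import numpy as np

d = 6
X = np.array(list(itertools.product([-1, 1], repeat=d)))  # all 2^d hypercube points

def parity(S):
    """chi_S(x) = prod_{i in S} x_i, evaluated at every hypercube point."""
    chi = np.ones(len(X))
    for i in S:
        chi = chi * X[:, i]
    return chi

subsets = [S for r in range(d + 1) for S in itertools.combinations(range(d), r)]
# Orthonormality under the uniform measure: E[chi_S chi_S'] = 1 iff S = S', else 0.
gram = np.array([[parity(A) @ parity(B) / len(X) for B in subsets] for A in subsets])
```

The 2^d × 2^d Gram matrix of the parities is the identity, confirming that the χ_S form an orthonormal basis of L²(X).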
Linear models of this form can be diagonalized in the parity basis, which allows one to effectively study the interplay between the role of augmentations and the role of the architecture.

Lemma 49. For any linear model defined through the features φ in (60), the integral operator K: L²(X) → L²(X) is diagonalized in the parity basis.

Proof. This follows from the orthonormality of the parities χ_S in L²(X).

Among those classes of functions are dot-product kernels, which verify k(x, y) = h(⟨x, y⟩) for some function h. Once again, those kernels are particularly well adapted to the Fourier geometry of the Boolean hypercube.

Lemma 50 (Spectral decomposition of dot-product kernels). Any dot-product kernel is diagonalizable in the parity basis. Specifically, there exist (ν_i)_{i∈[0,d]} ∈ R^{d+1} such that, when µ_X is the uniform distribution on the hypercube,

Kχ_S = ν_{|S|} χ_S.    (63)

Proof. One can check that x^⊤y = d − 2k, with k the number of bits that differ between x and y. Define Q_{ℓ,d}, the averaged polynomial of degree ℓ, through

Q_{ℓ,d}(⟨x, y⟩) ∝ Σ_{S⊂[d], |S|=ℓ} χ_S(x) χ_S(y)

for any Boolean strings x and y. The Q_{ℓ,d} are well defined since the right-hand side only depends on x and y through ⟨x, y⟩. Moreover, leveraging the orthogonality of the χ_S, one can show more exactly that the m ↦ C(d, ℓ)^{−1/2} Q_{ℓ,d}(m) form an orthonormal basis of the L² space endowed with τ, the pushforward measure of the uniform distribution on X through the mapping x ↦ ⟨x, y⟩ for any fixed y, and the dimensions match. As a consequence, there exist ν_ℓ such that h(⟨x, y⟩) expands over the Q_{ℓ,d}, which yields (63).

The SSL Interplay

Proposition 53 (Linearization of a convolutional network). A convolutional layer followed by a fully connected layer, i.e., (65) with ω = ∆ = 1, can be linearized with the h of (67) as in (66). In the Boolean setting, the resulting integral operator K_CNN is diagonalized in both the parity and the cyclic basis as

K_CNN ψ_{m,S} = ν_h(q, |S|) · ((q + 1 − diam(S))/d) · ψ_{m,S} when diam(S) ≤ q (and K_CNN ψ_{m,S} = 0 otherwise),

where the ν_h(q, ℓ) are defined by Proposition 52.

Proof. The first part corresponds to the case ω = ∆ = 1.
The second part is due to the expansion of h over the Q_ℓ basis, which leads to the stated eigenvalues (see Eq. (30) in Misiakiewicz & Mei (2022) for details). The fact that K leaves the spaces {S | |S| = a, diam(S) = b} invariant, since the eigenvalues only depend on |S| and diam(S), allows one to change from the parity basis to the cyclic basis.

When pooling is included in the kernel and ω > 1 in (65), the architecture enforces local translation invariance. As a simple example, consider the setting of global average pooling ω = d, where strict invariance to translations is enforced and parity functions are projected onto the sum of the elements of their orbit to form the eigenbasis. In this case, K is no longer diagonalized in the parity basis, but it is diagonal in the basis of cyclic parities.

D.4. Interplay between augmentations and architecture

In the uniform Boolean setting, the interplay between augmentations and architecture is made easy by the fact that many operators K and T commute.

Lemma 54. The operator K associated with a dot-product kernel in the uniform Boolean setting commutes with all the operators T that can be built from bitwise noise, cropping, translations, or index flips.

Proof. In the case of a dot-product kernel in the uniform setting, the spaces V_{d,ℓ} are eigenspaces of K. Those spaces are left invariant by all the T defined through the usual augmentations, since translations and index-flip operations preserve the cardinality of subsets. As a consequence, K and T can be diagonalized in the same basis, hence they commute.

As a consequence of the previous lemma, the integral operator K associated with the linear model of a fully connected layer commutes with all the operators T defined for the usual augmentations. It is also the case for the convolutional layer with T deriving from random noise, cropping, or translation.⁶
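Lemma 54 can also be verified numerically on a small hypercube: build the matrix of a dot-product kernel operator and the matrix of T = A^⊤A for uniformly sampled cyclic translations, and check that the two commute. The sketch below uses arbitrary illustrative choices (d = 6, h = exp) and is added here for illustration only:

```python
import itertools
import numpy as np

d = 6
X = np.array(list(itertools.product([-1, 1], repeat=d)))
N = len(X)
index = {tuple(row): i for i, row in enumerate(X)}

# Dot-product kernel operator: (K f)(x) = E_y[h(<x, y>/d) f(y)].
h = np.exp
K = h(X @ X.T / d) / N

# A averages a function over uniformly sampled cyclic shifts; T = A^T A.
def shift_matrix(a):
    P = np.zeros((N, N))
    for i, row in enumerate(X):
        P[i, index[tuple(np.roll(row, a))]] = 1.0
    return P

A = sum(shift_matrix(a) for a in range(d)) / d
T = A.T @ A
```

The commutator K T − T K vanishes (up to floating-point error), consistent with the fact that cyclic shifts preserve the cardinality of the support of each parity, hence each eigenspace of K.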
As a consequence, the interplay between the architecture and the augmentations can be studied easily thanks to Proposition 4.

Example 10 (Interplay between FC kernel and translation augmentations). Recall from Example 9 that when sampling translations from a localized window, the eigenvalues of T are of the form |p̂(m)|² and typically decay with the frequency index m in ψ_{m,S} = (1/√k_S) Σ_{k∈[k_S]} e^{2iπkm/k_S} χ_{S+k}, for any set S with no periodicity. In contrast, the eigenvalues ν_h(d, |S|) of K for eigenfunctions ψ_{m,S} decay as Θ_d(d^{−|S|}), independently of m. Regularization with parameter λ thus shrinks the eigenvalues to |p̂(m)|² − λν_h(d, |S|)^{−1} after pretraining. This most notably eliminates contributions from eigenfunctions ψ_{m,S} where m is small (i.e., near-invariant) but |S| is large. See Figures 3 and 5 for an illustration.
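The shrinkage in Example 10 is easy to tabulate numerically. The sketch below (window width, dimension, and λ are arbitrary illustrative choices) computes |p̂(m)|² for a uniform localized translation window via the FFT and applies the shrinkage |p̂(m)|² − λν^{−1}, with the decay ν(s) = d^{−s} used as an idealized stand-in for ν_h(d, s):

```python
import numpy as np

d = 32
window = 4                      # width of the localized translation window
p = np.zeros(d)
p[:window] = 1.0 / window       # uniform distribution over a small window of shifts

# Squared Fourier coefficients |p_hat(m)|^2: eigenvalues of T at frequency m.
p_hat2 = np.abs(np.fft.fft(p)) ** 2

def shrunk_eigenvalue(m: int, s: int, lam: float) -> float:
    """Eigenvalue attached to psi_{m,S} after regularization:
    |p_hat(m)|^2 - lam / nu(|S|), with the idealized decay nu(s) = d^{-s}."""
    return float(p_hat2[m] - lam * float(d) ** s)
```

Even at a fixed frequency m, larger |S| makes λ/ν(|S|) blow up, so the corresponding eigenfunction is suppressed first, matching the discussion above.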
BOUNDED DECOMPOSITION IN THE BRIESKORN LATTICE AND PFAFFIAN PICARD-FUCHS SYSTEMS FOR ABELIAN INTEGRALS

Sergei Yakovenko

Abstract. We suggest an algorithm for derivation of the Picard-Fuchs system of Pfaffian equations for Abelian integrals corresponding to semiquasihomogeneous Hamiltonians. It is based on an effective decomposition of polynomial forms in the Brieskorn lattice. The construction allows for an explicit upper bound on the norms of the polynomial coefficients, an important ingredient in studying zeros of these integrals.

DOI: 10.1016/s0007-4497(02)01126-0 · arXiv: math/0201114
14 Jan 2002

1. Introduction

Given a polynomial in two variables f ∈ R[x, y] and a polynomial 1-form ω on R², how many isolated ovals δ on the level curves f = const may satisfy the condition ∮_δ ω = 0? This is the long-standing infinitesimal Hilbert problem, see [Arn94]. The answer is to be given in terms of the degrees of f and ω. A recent approach to this problem, suggested in [NY99, NY01, Yak01], is based on the fact that periods of polynomial 1-forms restricted on level curves of polynomials satisfy a system of differential equations with rational coefficients, called the Picard-Fuchs system. Under certain restrictions on the monodromy group, the number of zeros of solutions of such systems can be estimated from above in terms of the magnitude of coefficients of this system, more precisely, the norms of its matrix residues. Thus it becomes important to derive the Picard-Fuchs system for Abelian integrals so explicitly as to allow for the required estimates for the residues. In [NY01] a Fuchsian system was derived in the hypergeometric form

(t · 1 + A) İ = B I,    İ = (d/dt) I(t),    (1.1)

where I(t) = (I_1(t), . . .
, I_l(t)) is a collection of integrals of some monomial forms over any oval of the level curve {f = t}, and A, B are two constant (l×l)-matrices of explicitly bounded norms, depending on f (1 always stands for the identity matrix of the appropriate size). The rational matrix function R(t) = (t · 1 + A)^{−1} B has only simple poles, and the norm of its matrix residues can be explicitly majorized provided that the eigenvalues of A remain well apart. This allows one to solve the infinitesimal Hilbert problem for all polynomials f whose critical values (after a suitable normalization) are sufficiently distant from each other. What remains is to study the case of confluent critical values (including those at infinity). In a general hypergeometric system (1.1), the residues may or may not blow up as some of the singular points tend to each other.

Date: January 2002. 1991 Mathematics Subject Classification: Primary 34C08; Secondary 34M50, 32S20, 32S40. The research was supported by the Israeli Science Foundation grant no. 18-00/1.

The particular feature of
We suggest an alternative, completely elementary construction that immediately yields all necessary bounds. This construction, exposed in §2 is based on "division by f ", a lemma distilled from the paper [Fra88] by J.-P. Françoise. The Pfaffian form of the Picard-Fuchs system is derived in §4. In the last section we mention some simple properties of the derived system and formulate a conjecture that it has only logarithmic singularities in the affine part. 2. Relative cohomology revisited 2.1. Relative cohomology, Brieskorn and Petrov modules. Denote by Λ k , k = 0, 1, . . . , n, the module of polynomial k-forms on the complex affine space C n for a fixed n 1. If f ∈ C[x 1 , . . . , x n ] ≃ Λ 0 is a polynomial, then the collection df ∧ Λ k−1 of k-forms divisible by df ∈ Λ 1 , is a C-linear subspace in Λ k , and the quotient Λ k f = Λ k /df ∧ Λ k−1 , k = 1, . . . , n, (2.1) is called the space of relative k-forms. Since the exterior derivative d preserves divisibility by df , the relative de Rham complex Λ • f , 0 −→ Λ 1 f d −→ Λ 2 f · · · d −→ Λ n−1 f d −→ Λ n f d −→ 0,(2. 2) naturally appears. A form ω ∈ Λ k is called relatively closed if dω = df ∧ η and relatively exact if ω = df ∧ ξ + dθ for appropriate η ∈ Λ k and ξ, θ ∈ Λ k−1 . The relative cohomology groups H k f = H k (Λ • f ), relatively closed k-forms modulo relatively exact ones, are important characteristics of the polynomial f . Together with the natural C-linear structure, the relative cohomology groups H k f possess the structure of a module over the ring C[f ] = f * C[x 1 , . . . , x n ]. This follows from the identity f · (df ∧ η + dθ) = df ∧ (f η − θ) + d(f θ). (2.3) meaning that relatively exact forms are preserved by multiplication by f . As is well-known, the highest module H n f , as well as all H k f with 0 < k < n − 1, is zero. 
Instead, we consider another important module, called Brieskorn module (lattice) [Bri70,DS01,Dou01], defined as the quotient B f = Λ n /df ∧ dΛ n−2 , (2.4) and the C[f ]-module P f , the quotient of all (n − 1)-forms by the closed (n − 1)forms, P f = Λ n−1 /(df ∧ Λ n−2 + dΛ n−2 ) ⊇ H n−1 f . (2.5) The latter is an extension of H n−1 f : the quotient P f /H n−1 f is naturally isomorphic to the finite-dimensional C-space Λ n f = Λ n /df ∧ Λ n−1 . In several sources, P f is referred to as the Petrov module. The exterior differential naturally projects as a bijective map d : P f → B f which obviously is not a C[f ]-module homomorphism. Clearly, a relatively exact (closed) form is exact (resp., closed) after being restricted on any nonsingular level set f −1 (t) ⊂ C n , t ∈ C, since df vanishes on all such sets. The inverse inclusion is considerably more delicate. Gavrilov studied the case n = 2 and proved that for a 1-form with exact restrictions on all level curves f −1 (t) ⊂ C 2 to be relatively exact, it is sufficient to require that the polynomial f has only isolated singularities and all level curves f −1 (t) be connected [Gav98,Gav99]. This result generalizes the earlier theorem by Ilyashenko [Ily69]. A multidimensional generalization in the same spirit was obtained by I. Pushkar ′ [Pus97]. The affirmative answer depends on the topology of a generic level set f −1 (t) (its connectedness for n = 2 or vanishing of the Betti numbers b k for k between 0 and n − 2, see [DS93,BD00]). Both the isolatedness and connectedness assumptions can be derived from a single assumption that the principal (quasi)homogeneous partf of the polynomial f has an isolated critical point at the origin: such polynomials are called semiquasihomogeneous [AGV85]. For two variables with equal weights it suffices to require thatf factors as a product of pairwise different linear homogeneous terms. 2.2. Computation of relative cohomology. 
Besides the above question on the relationship between the algebraically defined cohomology of the relative de Rham complex and analytically defined cohomology of (generic) fibers, the natural problem of computing H • f arises. This problem was addressed in the papers [BD00, DS01, DS93, Dou01, Gav99, Gav98] mentioned above. Using analytic tools or theory of perverse sheaves and Dmodules, they prove that under certain genericity-type assumptions on f , the highest relative cohomology module H n−1 f and the Petrov module P f are finitely generated over the ring C[f ]. For semiquasihomogeneous polynomials one can describe explicitly the collection of generators for B f , the polynomial forms ω 1 , . . . , ω l ∈ Λ n−1 such that any other form ω ∈ Λ n−1 can be represented as ω = l i=1 p i ω i + df ∧ η + dξ, p i = p i (f ) ∈ C[f ], η, ξ ∈ Λ n−2 , (2.6) with appropriate polynomial coefficients p i that are uniquely defined. The proofs of this and related results, obtained in either analytic or algebraic way, are sufficiently involved. In particular, it is very difficult if possible at all to get an information on (i) how the decomposition (2.6) depends on parameters, in particular, if f itself depends on parameters, and (ii) how to place explicit quantitative bounds on the coefficients p i (f ) in terms of the magnitude of coefficients of the form ω. For example, to extract such bounds from the more transparent analytic proof by Gavrilov, one should place a lower bound on the determinant of the period matrix of the forms ω i over a system of vanishing cycles on the level curves f −1 (t). The mere nonvanishing of this determinant is a delicate assertion whose proof in [Gav98] is incomplete (a simple elementary proof was supplied by Novikov [Nov01]). The explicit computation of this determinant for a specific choice of the generators ω i was achieved by A. Glutsuk [Glu00], but the answer is given by a very cumbersome expression. 
In the next section we suggest an elementary derivation of the formula (2.6) under the assumption that the polynomial f is semiquasihomogeneous. This derivation: 1. gives an independent elementary demonstration of the Gavrilov-Bonnet-Dimca theorem for the most important particular case of semiquasihomogeneous polynomials; 2. proves that the polynomial coefficients p i and the forms η, θ from the decomposition (2.6) depend polynomially on the coefficients of the non-principal part of f , provided that the principal quasihomogeneous part of f remains fixed; 3. yields the collection of the coefficients (p 1 , . . . , p l ) of (2.6) as a result of application of a certain linear operator to the form ω. The norm of this operator can be explicitly bounded in terms of f (and the chosen set of generators {ω i }) and the degree deg ω. Bounded decomposition in the Brieskorn and Petrov modules 3.1. Degrees, weights, norms. In this section we first consider quasihomogeneous polynomials from the ring C[x] = C[x 1 , . . . , x n ] with rational positive weights w i = deg x i normalized by the condition w 1 + · · · + w n = n to simplify the treatment of the most important symmetric case when w i = 1. The symbol deg f always means the quasihomogeneous degree. Remark 1. Later on we will introduce additional variables λ = (λ 1 , . . . , λ m ) considered as parameters, assign them appropriate weights and work in the extended ring C[x, λ] = C[x 1 , . . . , x n , λ 1 , . . . , λ m ]. Even in the symmetric case the weights of the parameters will in general be different from 1. The Euler field associated with the weights w 1 , . . . , w n is the derivation X = w i x i ∂/∂x i of C[x] . By construction, Xf = rf , r = deg f ∈ Q, for any quasihomogeneous polynomial f (the Euler identity). We put deg dx i = deg x i = w i . This extends the quasihomogeneous grading on all k-forms: in the symmetric case, the degree of a polynomial k-form will be k plus the maximal degree of its coefficients. 
Obviously, deg ω = deg dω for any form, provided that dω = 0. The Lie derivative Xω of a quasihomogeneous form ω of degree r by the Euler identity is rω. Note that deg ω > 0 for all k-forms with k 1. The norm of a polynomial in one or several variables is defined as the sum of absolute values of its (real or complex) coefficients. This norm is multiplicative. The norm of a k-form by definition is the sum of the norms of its polynomial coefficients; it satisfies the inequality ω ∧ η ω · η for any two forms ω, η. The exterior derivative operator is bounded in the sense of this norm if the degree is restricted: dω (max i w i ) deg ω · ω . In particular, in the symmetric case dω r ω , r = deg ω. Conversely, a primitive of an n-form µ can be always chosen bounded by the same norm µ . Unless explicitly stated differently, a monomial (monomial form, etc) has always the unit coefficient. 3.2. Parameters. We will systematically treat the case when all objects (forms, functions etc.) depend polynomially on finitely many additional parameters λ = (λ 1 , . . . , λ m ). We will denote by Λ k [λ], k = 0, . . . , n, the collection of k-forms whose coefficients polynomially depend on λ. For instance, the notation η ∈ Λ n−1 [λ] means that η = n i=1 a i (x, λ) dx 1 ∧ · · · ∧ dx i ∧ · · · ∧ dx n with polynomial coefficients a i ∈ C[x, λ]. In such case the norm of forms, functions etc. will be always considered relative to the ring C[x, λ], that is, as the sum i a i of absolute values of coefficients a i of the complete expansion in x, λ. If the parameters λ s are assigned weights, we take them into account when defining the degree of the form. To stress the fact that the norm is computed relative to the ring C[x, λ] and not to C[x] (i.e., that the situation is parametric), we will sometimes denote the norm by · λ . For an instance, 2λ 1 x 1 = 2|λ 1 | = 2 = 2λ 1 x 1 λ . 3.3. Division by a quasihomogeneous differential df . The division modulus. If f ∈ C[x 1 , . . . 
, x_n] is a quasihomogeneous polynomial having an isolated singularity at the origin, then the multiplicity l of this singularity can be easily found by the Bézout theorem, since no roots of the system of algebraic equations ∂f/∂x_i = 0, i = 1, ..., n, can escape to infinity. In the symmetric case l = (deg f − 1)^n. Choose any monomial basis φ_1, ..., φ_l of the local algebra C[[x_1, ..., x_n]]/⟨∂f⟩, where ⟨∂f⟩ = ⟨∂f/∂x_1, ..., ∂f/∂x_n⟩. Then the monomial n-forms µ_i = φ_i dx_1 ∧ ··· ∧ dx_n form a basis of Λ^n_f = Λ^n/df ∧ Λ^{n−1} over C: any n-form µ can be divided out as µ = Σ_{i=1}^l c_i µ_i + df ∧ η, c_i ∈ C, η ∈ Λ^{n−1}, (3.1) with appropriate constants c_1, ..., c_l ∈ C (the coefficients of the "remainder" Σ c_i µ_i) and a polynomial form η ∈ Λ^{n−1} (the "incomplete ratio"). Moreover, if µ is quasihomogeneous, then the decomposition (3.1) contains only terms with deg µ_i = deg µ and deg η = deg µ − deg f. This immediately follows from quasihomogeneity and the uniqueness of the coefficients c_i. From this observation we also conclude that all monomial forms of degree < deg f must be among the µ_i, and, moreover, any monomial form of degree greater than max_i deg µ_i is divisible without remainder by df. The choice of the monomial forms µ_i spanning the quotient is not unique, though the distribution of their degrees is. Denote by ρ = ρ(f) the maximal difference ρ(f) = max_i deg µ_i − min_i deg µ_i = max_i deg φ_i − min_i deg φ_i. (3.2) The following results are well-known. Proposition 1. 1. In the symmetric case ρ(f) < l = (r − 1)^n [AGV85, §5.5]. 2. In the bivariate case n = 2 the inequality ρ(f) < r = deg f holds if and only if f is a "simple singularity" of one of the following types: A_k: f = x_1^{k+1} + x_2^2, k ≥ 2; D_k: f = x_1^2 x_2 + x_2^{k−1}, k ≥ 4; E_6: f = x_1^3 + x_2^4; E_7: f = x_1^3 + x_1 x_2^3; E_8: f = x_1^3 + x_2^5; see e.g. [AGV85, §13, Theorem 2].
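As an aside (not from the paper), the monomial basis φ_1, ..., φ_l of the local algebra and the multiplicity l can be computed mechanically. The sketch below uses sympy for the E_6 polynomial f = x^3 + y^4 from Proposition 1; the monomial order and the degree window for candidate monomials are choices made for this illustration only.

```python
# Sketch: monomial basis of the local algebra C[[x, y]] / <f_x, f_y>
# for the E6 singularity f = x^3 + y^4 (multiplicity l = (3-1)*(4-1) = 6).
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
f = x**3 + y**4                      # E6 type from Proposition 1
fx, fy = f.diff(x), f.diff(y)

# Groebner basis of the gradient ideal; here it is simply (x^2, y^3).
G = groebner([fx, fy], x, y, order='grevlex')

# A monomial lies in the basis of the local algebra iff it is NOT
# reducible modulo G, i.e. its normal form is the monomial itself.
candidates = [x**i * y**j for i in range(4) for j in range(5)]
basis = [m for m in candidates
         if reduced(m, list(G), x, y, order='grevlex')[1] == m]

l = len(basis)                       # the multiplicity (Milnor number)
print(l, basis)
```

For this f the surviving monomials are 1, x, y, xy, y^2, xy^2, matching l = 6.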
From these observations it can be immediately seen that the division with remainder (3.1) is a bounded linear operation in the space of all n-forms of restricted degrees. Lemma 1. Assume that f ∈ Λ^0 is a quasihomogeneous polynomial having an isolated critical point of multiplicity l at the origin, and the monomial n-forms µ_1, ..., µ_l ∈ Λ^n form the basis of Λ^n_f. Then there exists a finite constant M < +∞, depending only on f and the choice of the basis {µ_i}, such that any n-form µ ∈ Λ^n can be divided with remainder by df as in (3.1) subject to the following constraints: deg η ≤ deg µ − deg f, ‖η‖ + Σ_i |c_i| ≤ M ‖µ‖. (3.3) Corollary 1. Assume that µ ∈ Λ^n[λ] depends polynomially on additional parameters λ. Then µ can be divided with remainder by df so that the remainder and the incomplete ratio depend polynomially on λ with the same division modulus: c_i = c_i(λ) ∈ C[λ], i = 1, ..., l, η ∈ Λ^{n−1}[λ], ‖η‖ + Σ_i ‖c_i‖ ≤ M(f) ‖µ‖, where ‖·‖ = ‖·‖_λ. Proof of the Corollary. Every monomial from the expansion of µ in x, λ can be divided out separately by df, which is independent of λ. Proof of the Lemma. Let M be the best constant such that (3.3) holds for all monomial n-forms with deg µ ≤ l. It is finite since there are only finitely many such forms. In particular, since any form of degree l is divisible by df by Proposition 1, the respective fraction η will be of norm at most M ‖µ‖. Writing an arbitrary monomial n-form of degree > l as a product of a monomial form of degree l times a monic monomial function x^α ∈ C[x], α ∈ Z^n_+, we construct explicit division formulas (without remainders) for all monomial forms of higher degrees. The division constant will be given by the same number M, since multiplication by a monic monomial preserves the norms of both µ and η. All the other assertions of the Lemma are well-known [AGV85]. 3.4. Computability of the division modulus.
Despite its general nature, the above proof is constructive, at least in the low dimensional cases n = 1, 2, allowing for an explicit computation of the division modulus in these cases. The one-dimensional case is trivial: for the monomial f (x) = x r the division modulus M (f ) is equal to r and it can be obviously recalculated for any other principal homogeneous part. The "special case" of a multivariate polynomial f (x) = x r 1 + · · · + x r n , see [GI01], is reducible to the one-dimensional situation. In this case l = (r − 1) n monomial forms x α dx 1 ∧ · · · ∧ dx n with 0 α i r − 1 form the basis, and the corresponding division modulus is again equal to r. This example admits an obvious generalization for quasihomogeneous "special polynomials" with different weights. For a bivariate truly homogeneous polynomial f (i.e., in the symmetric case, the most important for applications), the division modulus M for all higher degree forms (deg µ 2 deg f ) can be explicitly computed as the norm of the inverse Sylvester matrix for the partial derivatives ∂f ∂x1 and ∂f ∂x2 [NY01]. The "quasimonic" polynomials, introduced in that paper, are defined by the condition M (f ) = 1, which in many respects is a natural normalizing condition for multivariate polynomials. The choice of the basic forms even in the symmetric bivariate case depends on f : while it is generically possible to choose them as x α1 1 x α2 2 dx 1 ∧ dx 2 with 0 α 1,2 r − 1, for a badly chosen f some of these forms of degree greater than r = deg f can become linear dependent modulo df , requiring a different choice. In order to avoid making this choice, one may allow a redundant (i.e., linear dependent) collection of generating forms µ i . Choosing all monomial forms of degree 2r makes the corresponding division for low degree forms trivial, so that the division modulus M (f ) is determined only by division of forms of higher degree. 
Details and accurate estimates in the bivariate symmetric case can be found in [NY01]. To describe the division modulus M (f ) in the case of n 3 variables is a considerably more difficult problem, though it still can be reduced to analysis of finitely many monomial divisions. One can (at least, theoretically) express M (f ) via lower bounds for minors of certain explicitly formed matrices. 3.5. Division by f . We begin by establishing an analog of the Euler identity in the Brieskorn module. It plays the central role for explicitly constructing the decomposition (2.6). Lemma 2. Assume that f ∈ Λ 0 is a quasihomogeneous polynomial of degree r. Then any polynomial n-form divisible by df in Λ n , can itself be divided by f in the Brieskorn module B f . It also admits a polynomial primitive divisible by f . In other words, for any form η ∈ Λ n−1 there exist four forms µ ∈ Λ n , ω ∈ Λ n−1 and ξ, ξ ′ ∈ Λ n−2 such that df ∧ η = f µ + df ∧ dξ (3.4) = d(f ω) + df ∧ dξ ′ . (3.5) The degrees of all forms µ, ω, ξ, ξ ′ are all equal to deg η in case the latter is quasihomogeneous. The division operation is always well-posed in the sense that the decomposition (3.5) can be always chosen to meet the inequality ω + ξ ′ (n + 3) deg η · η (3.6) (a similar inequality can be proved also for the first decomposition (3.4)). Proof. Note that for any n-form µ ∈ Λ n and any vector field X on C n , (Xf )µ = (i X df ) µ = df ∧ i X µ, where i X is the inner antiderivative, since df ∧ µ = 0. We will need this formula for the case when X is the Euler vector field. To prove the first divisibility assertion (3.4), we have to show that the identity df ∧ η = f µ + df ∧ dξ (3.7) can be always resolved as a linear equation with respect to µ and ξ for any choice of η. Using the Euler identity for functions and the above remark, we represent f µ as a form divisible by df , f µ = r −1 (Xf ) µ = r −1 (i X df )µ = df ∧ r −1 i X µ. 
(3.8) The equation (3.7) will obviously be satisfied if η = r −1 i X µ + dξ, that is, when η is cohomologous to i X µ. This last condition is equivalent to the equality between the exterior derivatives dη = r −1 d i X µ = r −1 Xµ, since by the homotopy formula, d i X µ = Xµ − i X dµ = Xµ. Thus resolving the equation (3.7) is reduced to inverting the Lie derivative X on the linear space of n-forms. We claim that the linear map µ → Xµ of Λ n to itself, is surjective (and obviously degree-preserving), guaranteeing thus solvability of the last equation for any choice of η. Indeed, any monomial n-form µ α = x α dx 1 ∧ · · · ∧ dx n is an eigenvector of X with the strictly positive eigenvalue deg µ α n (recall that the weights w i are normalized so that the volume form dx 1 ∧ · · · ∧ dx n is of degree n). Thus X is surjective on Λ n (actually, bijective) and one can choose µ = rX −1 (dη). The norm of the inverse operator X −1 does not exceed (r/n) deg η in the symmetric case. The proof of (3.4) is complete. To prove the second assertion (3.5), we transform it using (3.8) as follows, df ∧ η = f dω + df ∧ (ω + dξ ′ ) = r −1 df ∧ i X dω + df ∧ (ω + dξ ′ ), which will be obviously satisfied if η = r −1 i X dω + ω + dξ ′ . (3.9) Taking the exterior derivative as before, we reduce this equation to the form dη = r −1 d i X dω + dω = r −1 Xµ + µ, µ = dω. Solvability of this equation with respect to µ (and hence to ω) for any left hand side dη follows from invertibility of the differential operator r −1 X + 1 on the linear space of polynomial n-forms (1 stands for the identity operator). Exactly as in the previous situation, all monomial n-forms are eigenvectors for (r −1 X + 1)| Λ n with the positive eigenvalues, all greater or equal to r −1 n + 1, hence r −1 X + 1 is invertible on Λ n and ω can be chosen as a primitive of (r −1 X + 1) −1 dη. To prove the inequality between the norms, notice that µ = dω satisfies the inequality µ dη deg η η . 
A primitive ω can always be taken of norm ‖ω‖ ≤ ‖dω‖. Together this yields ‖ω‖ ≤ deg η · ‖η‖. The norm ‖ξ′‖ can be found from (3.9). Clearly, ‖i_X µ‖ ≤ n ‖µ‖ because of the choice of the weights deg x_i, which satisfy the condition Σ w_i = n. Substituting this inequality into (3.9), we obtain ‖ξ′‖ ≤ ‖dξ′‖ ≤ ‖η‖ + n ‖dω‖ + ‖ω‖ ≤ (n + 2) deg η · ‖η‖, since deg ω = deg η ≥ 1. 3.6. Generating Petrov and Brieskorn modules: the algorithm. Division by the gradient ideal, together with the Euler identity as formulated in Lemma 2, allows for a constructive proof of the representation (2.6) for an arbitrary semiquasihomogeneous polynomial F. Let F = f + h ∈ C[x_1, ..., x_n] be a semiquasihomogeneous polynomial with the principal quasihomogeneous part f and the lower-degree part h. Denote as before by µ_1, ..., µ_l ∈ Λ^n the forms spanning Λ^n_f = Λ^n/df ∧ Λ^{n−1} (note that the quotient is computed using only the principal part f). We claim that: 1. any n-form µ ∈ Λ^n can be represented as µ = Σ_{i=1}^l q_i µ_i + dF ∧ dζ, q_i ∈ C[F], ζ ∈ Λ^{n−2}, (3.10) 2. any (n − 1)-form ω ∈ Λ^{n−1} can be represented as ω = Σ_{i=1}^l p_i ω_i + dF ∧ ξ + dξ′, p_i ∈ C[F], ξ, ξ′ ∈ Λ^{n−2}. (3.11) The construction of the decomposition (3.10) begins by division of µ by df as explained in Lemma 1: µ = Σ c_i µ_i + df ∧ η, c_i ∈ C, η ∈ Λ^{n−1}. If deg µ < r = deg f = deg F, then the incomplete ratio is in fact absent, η = 0, and we arrive at a particular case of (3.10) with q_i = c_i of degree 0 (constants). If deg µ is higher than r, we transform the term df ∧ η using Lemma 2 and then substitute f = F − h: µ − Σ c_i µ_i = f µ′ + df ∧ dζ = F µ′ + dF ∧ dζ − µ″, µ″ = hµ′ + dh ∧ dζ. Obviously, both µ′ and µ″ are of degree strictly smaller than deg µ, which allows one to continue the process inductively. Assuming that the representations (3.10) are known for both µ′ and µ″, we substitute them into the last identity and, after collecting terms, arrive at a representation for µ.
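The invertibility of the Euler derivative on Λ^n, used in the proof of Lemma 2, can be checked directly on monomials (a worked illustration, not part of the original text):

```latex
% Every monomial n-form is an eigenvector of the Lie derivative along
% the Euler field X = \sum_i w_i x_i\,\partial_{x_i}:
L_X\,\mu_\alpha = (\deg\mu_\alpha)\,\mu_\alpha,
\qquad
\mu_\alpha = x^\alpha\,dx_1\wedge\cdots\wedge dx_n,
\quad
\deg\mu_\alpha = \langle\alpha, w\rangle + \textstyle\sum_i w_i .
% Symmetric bivariate example (w_1 = w_2 = 1):
L_X\bigl(x^a y^b\,dx\wedge dy\bigr) = (a + b + 2)\,x^a y^b\,dx\wedge dy .
% All eigenvalues are \ge n, so X is invertible on \Lambda^n with
% \|X^{-1}\mu\| \le \tfrac{1}{n}\|\mu\| .
```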
In the symmetric case the inductive process cannot take more than deg µ − r steps. It is a direct analog of the process of division of univariate polynomials, see also [NY01]. To construct (3.11), we divide dω by df . If deg ω < r, then the incomplete ratio is absent and we obtain a special kind of (3.11) exactly as before. Otherwise in the division with remainder dω = l i=1 c i dω i + df ∧ η, c i ∈ C[λ], η ∈ Λ n−1 [λ], substitute df ∧ η = d(f ω ′ ) + df ∧ dξ and pass to the primitives. We obtain ω − c i ω i = f ω ′ + df ∧ ξ + dξ ′ = F ω ′ + dF ∧ ξ + dξ ′ − ω ′′ , ω ′′ = hω ′ + dh ∧ ξ. (3.12) For the same reasons as before, the degrees of ω ′ , ω ′′ are strictly smaller than deg ω, hence the process can be continued inductively. Remark 3. In a somewhat surprising way, it turned out impossible to transform directly the decomposition (3.10) for the form dω ∈ Λ n into (3.11) for ω. 3.7. Effective decomposition in the Petrov module. The construction above is so transparent that any qualitative as well as quantitative assertion concerning these expansions, can be immediately verified. We will show that 1. all terms of the decomposition (3.11) depend polynomially on the lower order terms of F , assuming that the principal part if fixed, and 2. the well-posedness of the construction is determined solely by the division modulus M (f ) of the principal homogeneous part. In order to formulate the result, consider a general semiquasihomogeneous polynomial with the prescribed principal quasihomogeneous part, and their quasihomogeneity will be understood in the sense that the formal variable F is assigned the weight deg F = r. F (x, λ) = f (x) + h(x, λ), h(x, λ) = deg fs<deg f λ s f s (x), Theorem 1. 
If the quasihomogeneous polynomial f ∈ C[x] has an isolated critical point at the origin and F ∈ C[x, λ] is a general semiquasihomogeneous polynomial (3.13), then any polynomial quasihomogeneous (n − 1)-form ω ∈ Λ^{n−1}[λ] of degree k can be represented as ω = Σ_{i=1}^l p_i ω_i + dF ∧ ξ + dξ′. (3.14) The coefficients p_i ∈ C[F, λ] and the (n − 2)-forms ξ, ξ′ ∈ Λ^{n−2}[λ] are all polynomial and quasihomogeneous jointly in F, λ (resp., in x, λ) of the degrees k − deg ω_i, k − r and k respectively. The norm of the coefficients relative to the ring C[F, λ_1, ..., λ_m] is explicitly bounded in terms of n, r, k and the division modulus M(f). In particular, for the symmetric case when deg x_1 = ··· = deg x_n = 1, Σ_{i=1}^l ‖p_i‖ ≤ k! r^{k(n+3)} M^k ‖ω‖, k = deg ω, M = M(f), ‖·‖ = ‖·‖_λ. (3.15) Remark 4. The fact that the form ω is quasihomogeneous is not important: any polynomial form is the sum of quasihomogeneous parts, each of them being divisible separately. Remark 5. Even in the symmetric case, the degrees of the parameters are different from 1: deg λ_s = r − deg f_s will take all natural values from 1 to r. Proof of the Theorem. The first assertion of the Theorem (on polynomiality and quasihomogeneity) follows from direct inspection of the algorithm described above, since all transformations on each inductive step (exterior differentiation, division by df, which is independent of λ, and the Euler identity in P_f) respect the quasihomogeneous grading. The only assertion that has to be proved is that on the norms. In order for an increasing sequence of real constants C_k > 0 to give upper bounds for the decomposition (3.14), Σ_{i=1}^l ‖p_i‖ ≤ C_k ‖ω‖ for all ω with deg ω ≤ k, they should satisfy a certain recurrent inequality, which we will now derive from the suggested algorithm.
Denote by p i ∈ C[F, λ] (resp., by p ′ i and p ′′ i ) the polynomial coefficients of the decomposition of the forms ω (resp., ω ′ and ω ′′ ) from the identity (3.12): since the degrees of both ω ′ , ω ′′ are less than k and the sequence C k is increasing, we have i p ′ i C k−1 ω ′ , i p ′′ i C k−1 ω ′′ . Multiplication by F corresponds to a shift of coefficients in the decomposition of ω ′ . Thus from (3.12) follows the inequality i p i i c i + i p ′ i + i p ′′ i i c i + C k−1 ( ω ′ + ω ′′ ). By Lemma 2, ω ′ (n + 3)k η . The norm of the inferior part h is by definition equal to the number of terms, that is, the number of monomials in n variables of degree r − 1. Therefore h r n and dh r n+1 . This implies an upper bound for ω ′′ : ω ′′ h ω ′ + dh ξ ( h + dh )( ω ′ + ξ ) 2r n+1 (n + 3)k η by Lemma 2. Finally, η + c i M ω by definition of the division modulus M = M (f ). Assembling all these bounds together, we conclude that p i M ω + C k−1 · 3r n+1 (n + 3)k ω . Thus the increasing sequence C k 1 will form upper bounds for the norms of the coefficients of decomposition for polynomial forms of degree k, provided that C k AkC k−1 , A 4r n+1 (n + 3)M r n+3 M (notice that r 2), which can be immediately satisfied if we put C k = k! r k(n+3) M k . This proves the inequality for the norms. Note that the bound established in this Theorem, is polynomial in M = M (f ) and (for a fixed r) factorial in k = deg ω, that is, only slightly overtaking the exponential growth. 3.8. Nonhomogeneous division. By a completely similar procedure one can describe the result of division by a nonhomogeneous differential dF as a sequence of divisions by the principal homogeneous part df . More precisely, if µ ∈ Λ n [λ] is a polynomial n-form polynomially depending on the parameters λ 1 , . . . , λ m and F = f + λ s f s is as in (3.13), then there exists a representation µ = l i=1 c i (λ)µ i + dF ∧ η, c 1 , . . . 
, c n ∈ C[λ], η ∈ Λ n−1 [λ],(3.µ = c i µ i + df ∧ η = c i µ i + dF ∧ η − µ ′ , µ ′ = dh ∧ η, where h = F − f , hence deg h < deg f = deg F and therefore deg µ ′ < deg µ. This means that the process of division can be continued inductively. Since µ ′ h η const r,n M (f ) µ , the norms of the remainder and the incomplete ratio are bounded in terms of M (f ) and the degrees. In the symmetric case the bound looks especially simple. Proposition 2. In the symmetric case of all weights equal to 1, the division of a form of degree k = deg µ is bounded as follows, η λ + l i=1 c i λ M k (F ) · µ , M k (F ) = kr n(k−r) (M (f )) k . Proof. In this case h r n , so that µ ′ M r n µ , and finally η + c i M µ (1 + K + · · · + K deg µ−r ), where K = M r n . Thus the norm of the nonhomogeneous division operator obviously does not exceed M k (kr n(k−r) ). This expression is exponential in k = deg µ and polynomial in M = M (f ). Picard-Fuchs system for Abelian integrals Consider a quasihomogeneous polynomial f ∈ Λ 0 of degree r = deg f having an isolated singularity of multiplicity l at the origin. As before, let µ 1 , . . . , µ l be generators of Λ n f over C and ω 1 , . . . , ω l their monomial primitives. Consider the general semiquasihomogeneous polynomial F = f + m 1 λ s f s ∈ C[x, λ] as in (3.13) with the fixed principal part f , whose coefficients λ 1 , . . . , λ m are the natural parameters. Consider in the parameter space C m the locus Σ such that for λ ∈ C m Σ the level set {x ∈ C n : F (x, λ) = 0} is a nonsingular algebraic hypersurface. Denote by Γ = Γ (λ), λ / ∈ Σ, any continuous family of (n − 1)-cycles on the zero level. The Abelian integrals I i (λ) = Γ (λ) ω i , i = 1 . . . , l (4.1) are well defined multivalued analytic functions on C m Σ. In this section we will derive a Pfaffian system of linear equations satisfied by these integrals. 
We will always assume that the weights of the parameters λ_s are chosen so that F becomes a quasihomogeneous polynomial in x, λ of degree r: deg λ_s = r − deg f_s. The enumeration of the monomials f_s begins with the free term f_1 ≡ 1 of degree 0, so that the respective coefficient λ_1 is necessarily of degree r. Recall that ρ(f) is the maximal difference (3.2) between the degrees of the forms µ_i. Theorem 2. There exist (l × l)-matrix polynomials C_0(λ), C_1(λ), ..., C_m(λ), C_0(λ) = λ_1 · 1 + C′(λ_2, ..., λ_m), (4.2) deg C_0 ≤ r + ρ(f), deg C_s ≤ deg f_s + ρ(f), (4.3) such that the integrals (4.1) satisfy the system of equations ∂(C_0 I)/∂λ_s = C_s I, s = 1, ..., m. (4.4) The norms of the matrix polynomials C_s are explicitly bounded in terms of the division modulus M(f) and the degrees n, r. Lemma 3. If ω ∈ Λ^{n−1} is a polynomial form with constant (independent of λ) coefficients, and η_s ∈ Λ^{n−1}[λ] is any form satisfying the identity f_s dω = −dF ∧ η_s (recall that f_s = ∂F/∂λ_s), then ∂/∂λ_s ∫_{Γ(λ)} ω = ∫_{Γ(λ)} η_s. (4.5) Proof. To derive this formal identity, we express λ_s = H(x) from the equation F(x, λ_s) = 0, assuming all other parameters fixed, and apply the Gelfand-Leray formula to H: for (4.5) to hold, it would be sufficient if η = η_s satisfies dω = dH ∧ η. It remains to observe that, by the implicit function theorem and the definition of the parameters, dF + (∂F/∂λ_s) dH = 0, ∂F/∂λ_s = f_s. Here and above d stands for the exterior derivative with respect to the "spatial" variables x_1, ..., x_n. The standard Gelfand-Leray derivative appears for the parameter occurring before the constant term f_1 ≡ 1 (modulo the sign). 4.2. Derivation of the system: beginning of the proof of Theorem 2. Divide each of the forms F µ_i ∈ Λ^n[λ], µ_i = dω_i, by dF, with the remainder coefficients and the incomplete ratios polynomially depending on λ as in Proposition 2: F µ_i = dF ∧ η_i + Σ_{j=1}^l c_ij µ_j, c_ij = c_ij(λ). (4.6) Clearly, the quasihomogeneous degree deg c_ij in C[λ] is equal to r + deg µ_i − deg µ_j ≤ ρ(f) + r (and c_ij ≡ 0 if the difference is negative). Let C_0 = C_0(λ) be the (l × l)-matrix polynomial with the entries c_ij(λ).
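For reference, the classical Gelfand-Leray formula invoked in the proof of Lemma 3 can be stated as follows (a standard fact, recalled here rather than quoted from the text):

```latex
% Gelfand–Leray derivative: if \omega \in \Lambda^{n-1} and
% d\omega = dH \wedge \eta near the level set \{H = t\}, then for a
% continuous family of (n-1)-cycles \delta(t) \subset \{H = t\}
\frac{d}{dt}\oint_{\delta(t)} \omega \;=\; \oint_{\delta(t)} \eta,
\qquad
\eta = \frac{d\omega}{dH} .
% The form \eta is defined modulo forms divisible by dH, which vanish
% after restriction to each level set and hence do not affect the integral.
```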
Since dF does not depend on λ 1 (the free term of F ), while the only term depending on λ 1 in F µ i is λ 1 µ i , the dependence of C 0 on λ 1 can be immediately described: the corresponding remainder coefficients c ij (λ 1 ) for the division of λ 1 µ i by dF form the scalar matrix λ 1 · 1 (the incomplete ratio is absent). Since c ij do not depend on x (being "constants depending on the parameters"), the identity (4.6) implies that d F ω i − j c ij ω j = −dF ∧ (−ω i − η i ), i = 1, . . . , l. Let ω ′ i,s = −f s (ω i + η i ), i = 1, . . . , l, s = 1, . . . , m. All these forms are polynomial and polynomially depending on parameters. Their degrees can be easily computed: deg η i = deg µ i = deg ω i , deg ω ′ i,s = deg f s +deg µ i . By the parametric Gelfand-Leray formula (Lemma 3), the partial derivatives of integrals of the forms F ω i − j c ij ω j over the cycle Γ (λ) ⊂ {F = 0} ⊂ C n are equal to the integrals of the forms ω ′ i,s . Since the terms F ω i vanish on Γ (λ) for all values of λ, we have ∂ ∂λ s j c ij (λ) I j (λ) = I ′ i,s (λ), I ′ i,s (λ) = Γ (λ) ω ′ i,s . The forms ω i were chosen to generate the Petrov module P F over C[F, λ], so each of the Abelian integrals ω ′ i,s can be expressed as a polynomial combination, I ′ i,s = l j=1 p ij,s I j , p ij,s ∈ C[F, λ], for all i, s. Denote by C s = C s (λ) the polynomial (l × l)-matrix function formed by the free terms of the polynomials p ij,s (·, λ): C s (λ) = p ij,s (F, λ)| F =0 l i,j=1 , s = 1, . . . , m. All other terms, being divisible by F , disappear after integration over the cycle on the level surface {F = 0}. Collecting the terms, we conclude that the partial derivatives of the column vector function I(λ) = (I 1 (λ), . . . , I l (λ)), I i = ω i , we have ∂(C 0 I) ∂λ s = C s I, s = 1, . . . , m. 4.3. Bounds for the norms: end of the proof of Theorem 2. The construction described above, does not yet imply the assertion on the norms of the matrix polynomials C 0 , . . . 
, C m for only one reason: multiplication by F = f + h, h = λ s f s , is not a bounded operator. While multiplication by h increases the norm at most by h λ = const n,r (not exceeding (r − 1) n in the symmetric case), the norm f cannot be bounded in terms of M (f ), as required in the Theorem (see Remark 2). To correct this drawback, exactly as in [NY01], the division line (4.6) should be first prepared using (3.8) as follows, F µ i = (f + h)µ i = df ∧ η ′ i + hµ i = dF ∧ η ′ i + µ ′ i , η ′ i = r −1 i X µ i , µ ′ i = hµ i − dh ∧ η ′ i , (4.7) where (we again make all estimates for the symmetric case only), η ′ i (n/r) µ i , µ ′ i h (1 + r)(n/r) µ i . Then forms µ ′ i should be divided by dF with remainder: since their norms are bounded by a constant depending only on n, r (the norms of the monomial forms µ i are equal to 1), the results of such division will be bounded by suitable powers of M (f ) by virtue of Proposition 2. Collecting the terms, we conclude that the coefficients c ij ∈ C[λ] of the corresponding remainders in (4.6) and the incomplete ratios η i ∈ Λ n−1 [λ] will be bounded by expressions polynomial in M (f ). The rest of the derivation remains unchanged and the estimates completely straightforward: the polynomial bounds for η i imply those of the polynomial coefficients p ij,s ∈ C[F, λ] by Theorem 1. This proves the last assertion of Theorem 2. Observations. Discussion The algorithm of derivation of the Picard-Fuchs system in the Pfaffian form is so transparent that many things become obvious. 5.1. Bounds. Though the matrix polynomials C s (λ) are not quasihomogeneous (their entries have different degrees), the determinant det C 0 (λ) is a quasihomogeneous polynomial from C[λ]. Its degree can be immediately computed as lr from the explicit representation (4.2). This same representation proves that this determinant, equal to λ n 1 + polynomial in(λ 2 , . . . , λ m ), does not vanish identically, so that the system (4.4) is indeed meromorphic. 
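Collecting the equations (4.4) over all parameters, the system can be written as a single Pfaffian equation; the gauge J = C_0 I below is introduced only for this restatement and is not taken from the text:

```latex
% Pfaffian form of (4.4): with \Omega = \sum_{s=1}^{m} C_s(\lambda)\,d\lambda_s,
d\bigl(C_0(\lambda)\,I(\lambda)\bigr) \;=\; \Omega\,I(\lambda) .
% Substituting J = C_0 I gives a linear meromorphic Pfaffian system
dJ \;=\; \Omega\,C_0^{-1}(\lambda)\,J ,
% whose polar locus is contained in \Sigma' = \{\det C_0(\lambda) = 0\};
% on the complement of \Sigma' the system is analytic, and its
% integrability is guaranteed by the fundamental system of solutions
% furnished by the Abelian integrals.
```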
Moreover, the norm of the inverse matrix C_0^{-1} can be explicitly majorized in terms of the distance to the critical locus. One possibility is to consider the sections λ_1 = 1 and apply the Cartan inequality as in [NY01], using the quasihomogeneity.

5.2. Spectrum. The spectrum of C_0(λ) can also be computed easily: it consists of all l critical values of the polynomial F(x, λ), at least when F(·, λ) is a Morse polynomial. To see this, it is sufficient to evaluate both parts of (4.6) at any of the l critical points a_1, . . . , a_l ∈ C^n. The column vectors v_i = (ϕ_1(a_i), . . . , ϕ_l(a_i))^T, i = 1, . . . , l, are the corresponding eigenvectors (recall that μ_i = ϕ_i dx_1 ∧ · · · ∧ dx_n).

5.3. Hypergeometric form. Restricting the Pfaffian system (4.4) to the one-dimensional complex lines λ_s = const, s = 2, . . . , m, parameterized by the value of t = λ_1, one obtains a parameterized family of Picard-Fuchs systems of ordinary differential equations. In this case only the matrix C_1 is relevant. By Theorem 2, it is quasihomogeneous of degree ρ(f) jointly in the variables λ_1, . . . , λ_m. If ρ(f) < r = deg λ_1, then C_1 cannot depend on λ_1, and hence the Picard-Fuchs system in this case will have the hypergeometric form (1.1). By Proposition 1, this happens only when f is a simple quasihomogeneous polynomial of one of the types listed there. For hyperelliptic polynomials (the singularity of type A_k) this was well known, see [NY01]. In turn, the hypergeometric form implies that all singular points of the Picard-Fuchs system are Fuchsian.

5.4. Logarithmic poles. For the full Pfaffian system (4.4) the polar locus, occurring where det C_0(λ) vanishes, is of multiplicity 1 (it is sufficient to produce just one value of the parameters λ such that F(·, λ) has simple critical points). Yet this is not the characteristic property.
A rational 1-form ω analytic outside a hypersurface Σ′ = {g = 0} ⊂ C^m, g being a polynomial without multiple factors, is said to have a logarithmic singularity on this hypersurface if both gω and dg ∧ ω extend as polynomial forms across Σ′ to C^m. This is only one of several close but non-equivalent definitions, probably the strongest possible. It ensures that the restriction of ω to any holomorphic curve γ cutting Σ′ at a point a has a Fuchsian singularity with a residue independent of the choice of γ, depending only on the point a. The basic question concerning the system (4.4) is whether this system itself, or a suitable gauge transformation of it by a rational matrix gauge function, is Fuchsian with bounded residues. If the answer is positive, this would mean a positive solution of the infinitesimal Hilbert problem. Using symbolic computation to implement the algorithm, we discovered that in the hyperelliptic case (the singularity of type A_k) the Picard-Fuchs system (4.4) indeed has only logarithmic poles up to degree k = 6 of the polynomial f = x_1^k + x_2^2. This naturally suggests the following conjecture.

Conjecture. All singularities of the Picard-Fuchs system (4.4) are only logarithmic poles on Σ′ = {det C_0 = 0}.

It would be interesting to verify this conjecture for other simple singularities listed in Proposition 1, perhaps first by symbolic computation. The next step could be to study the behavior of the residue of (4.4), the matrix function defined on the regular part of Σ′, checking whether it is bounded near singular points of the discriminant.

5.5. Singular perturbations. The polynomial dependence of the matrices C_s on the lower-degree coefficients of the polynomial F = f + · · · fails for the coefficients of the principal part. Though apparently rational, this dependence certainly must exhibit singularities when f degenerates into a quasihomogeneous form with non-isolated singularities.
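A minimal model of the definition just given (an illustration of ours, not taken from the text): the form ω = dg/g is logarithmic on Σ′ = {g = 0}, with residue 1.

```latex
% Model logarithmic singularity on \Sigma' = \{g = 0\}:
\[
  \omega = \frac{dg}{g}, \qquad
  g\,\omega = dg, \qquad
  dg \wedge \omega = \frac{dg \wedge dg}{g} = 0,
\]
% so both g\omega and dg\wedge\omega extend polynomially across \Sigma'.
% On a holomorphic curve \gamma cutting \Sigma' transversally at a point a,
% the pullback of g has a simple zero, g\circ\gamma = t\,u(t), u(0)\neq 0, hence
\[
  \gamma^{*}\omega = \frac{dt}{t} + \frac{du}{u}
                   = \frac{dt}{t} + \text{(holomorphic)},
  \qquad \operatorname{res}_{t=0}\,\gamma^{*}\omega = 1,
\]
% a Fuchsian point whose residue does not depend on the choice of \gamma.
```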
The Picard-Fuchs system in such cases may have singular points corresponding to atypical values of F. Their appearance must somehow be related to the fact that the division modulus explodes when such degeneracy occurs, thus creating a singularly perturbed system of linear differential equations. These phenomena seem to be worth detailed study.

If the form μ is quasihomogeneous, then deg η = deg μ − deg f, and c_i can be nonzero only if deg μ_i = deg μ. The constant M depends on the choice of the monomial basis {μ_i}. The optimal choice of such a basis (out of finitely many possibilities) results in the smallest value M = M(f), which depends only on f. We will always assume that the basis {μ_i} is chosen optimal in this sense.

Definition 1. The minimal constant M(f) corresponding to an optimal choice of the monomial basis of the quotient Λ^n_f is called the division modulus of the quasihomogeneous polynomial f ∈ Λ^0.

Remark 2. It is worth mentioning that the division modulus M(f) is not directly related to the norm ‖f‖, even in the symmetric bivariate case. If μ = df ∧ η, then ‖μ‖ ≤ ‖df‖ ‖η‖. On the other hand, ‖μ‖ ≥ M^{-1}‖η‖ by the definition of M(f). Therefore M(f) ≥ ‖df‖^{-1} ≥ const · ‖f‖^{-1}; that is, the division modulus of a polynomial f with small norm must be large. The converse is not true: a polynomial with a small division modulus can have a very large norm. Simple examples can be constructed in the form f(x) = c Π_i (x_1 − λ_i x_2) with sufficiently close values of the parameters λ_i ∈ [0, 1] and a suitably chosen normalizing constant c ∈ C.

f_1, . . . , f_m ∈ C[x_1, . . . , x_n] are all (monic) monomials of degree strictly less than r = deg f, arbitrarily ordered. We treat the coefficients λ_1, . . . , λ_m as the parameters of the problem, assigning to them the weights so that deg λ_s + deg f_s = deg f = r for all s. This choice makes the entire polynomial F quasihomogeneous of the same degree r in the ring C[x, λ] = C[x_1, . . . , x_n, λ_1, . .
. , λ_m]. Instead of the ring C[F], the coefficients p_i of the decomposition (3.11) will belong to the ring C[F, λ], polynomially depending on the parameters. If μ is quasihomogeneous, then so are the c_i and η, with deg c_i = deg μ − deg μ_i and deg η = deg μ − deg F. Moreover, the ratio (‖c_i‖_λ + ‖η‖_λ)/‖μ‖_λ is bounded in terms of deg μ and the division modulus M(f) of f only. Indeed, dividing μ by df yields degrees are quasihomogeneous), such that on C^m ∖ Σ

(∂/∂λ_s) C_0(λ)I = C_s(λ)I,   s = 1, . . . , m.   (4.3)

The norms ‖C_s‖_λ are bounded by a power of the division modulus M(f). In other words, the column vector function I(λ) on the complement of Σ satisfies the matrix Pfaffian equation

dI = ΩI,   Ω = C_0^{-1} · ( −dC_0 + Σ_{s=1}^{m} C_s dλ_s ),   (4.4)

with a rational matrix-valued 1-form Ω having poles only on the locus Σ′ = {det C_0 = 0} ⊂ C^m. Here d is the exterior derivative with respect to the variables λ_s only: for c(λ) ∈ C[λ], dc = Σ_s (∂c(λ)/∂λ_s) dλ_s. The proof is constructive; the description of the matrix polynomials C_s(λ) is given below.

4.1. Gelfand-Leray derivative with respect to parameters.

References

V. I. Arnol'd, S. M. Gusein-Zade, and A. N. Varchenko, Singularities of differentiable maps. Vol. I, Birkhäuser Boston Inc., Boston, Mass., 1985. MR 86f:58018
V. I. Arnol'd, Sur quelques problèmes de la théorie des systèmes dynamiques, Topol. Methods Nonlinear Anal. 4 (1994), no. 2, 209-225. MR 96i:58001
P. Bonnet and A. Dimca, Relative differential forms and complex polynomials, Bull. Sci. Math. 124 (2000), no. 7, 557-571. MR 1 793 909
E. Brieskorn, Die Monodromie der isolierten Singularitäten von Hyperflächen, Manuscripta Math. 2 (1970), 103-161. MR 42 #2509
A. Douai, Sur le système de Gauss-Manin d'un polynome modéré, Bull. Sci. Math. 125 (2001), no. 5, 395-405. MR 1 841 875
A. Dimca and M. Saito, On the cohomology of a general fiber of a polynomial map, Compositio Math. 85 (1993), no. 3, 299-309. MR 94b:32060
A. Dimca and M. Saito, Algebraic Gauss-Manin systems and Brieskorn modules, Amer. J. Math. 123 (2001), no. 1, 163-184. MR 1 827 281
J.-P. Françoise, Relative cohomology and volume forms, Singularities (Warsaw, 1985), PWN, Warsaw, 1988, pp. 207-222. MR 92h:32051
L. Gavrilov, Petrov modules and zeros of Abelian integrals, Bull. Sci. Math. 122 (1998), no. 8, 571-584. MR 99m:32043
L. Gavrilov, Abelian integrals related to Morse polynomials and perturbations of plane Hamiltonian vector fields, Ann. Inst. Fourier (Grenoble) 49 (1999), no. 2, 611-652. MR 1 697 374
A. Glutsuk and Yu. Ilyashenko, An estimate of the number of zeros of Abelian integrals for special Hamiltonians of arbitrary degree, ArXiv preprint math.DS/0112156 (2001), 1-58.
A. Glutsuk, An explicit formula for the determinant of the Abelian integral matrix, ArXiv preprint math.DS/0004040, April 2000.
Yu. S. Ilyashenko, Vozniknovenie predel'nykh tsiklov pri vozmushchenii uravneniya dw/dz = -R_z/R_w, gde R(z, w) - mnogochlen (Appearance of limit cycles by perturbation of the equation dw/dz = -R_z/R_w, where R(z, w) is a polynomial), Mat. Sbornik (New Series) 78 (120) (1969), no. 3, 360-373.
D. Novikov, Modules of Abelian integrals and Picard-Fuchs systems, ArXiv preprint math.DS/0110126, October 2001.
D. Novikov and S. Yakovenko, Tangential Hilbert problem for perturbations of hyperelliptic Hamiltonian systems, Electron. Res. Announc. Amer. Math. Soc. 5 (1999), 55-65 (electronic). MR 2000a:34065
[NY01] D. Novikov and S. Yakovenko, Redundant Picard-Fuchs system for Abelian integrals, J. Diff. Equations 177 (2001), no. 2, 267-306.
I. A. Pushkar', A multidimensional generalization of Il'yashenko's theorem on abelian integrals, Funktsional. Anal. i Prilozhen. 31 (1997), no. 2, 34-44, 95. MR 98k:58183
S. Yakovenko, Quantitative theory of ordinary differential equations and tangential Hilbert 16th problem, ArXiv preprint math.DS/0104140 (2001).

Department of Mathematics, Weizmann Institute of Science, P.O.B. 26, Rehovot 76100, Israel
E-mail address: [email protected]
WWW page: http://www.wisdom.weizmann.ac.il/~yakov/index.html
[]
[ "Using rigorous ray tracing to incorporate reflection into the parabolic approximation", "Using rigorous ray tracing to incorporate reflection into the parabolic approximation" ]
[ "Edward R Floyd [email protected] \nJamaica Village Road\n92118-3208CoronadoCA\n" ]
[ "Jamaica Village Road\n92118-3208CoronadoCA" ]
[ "PACS NOS. 43.30D, 43.30E, 43.30G" ]
We present a parabolic approximation that incorporates reflection. With this approximation, there is no need to solve the parabolic equation for a coupled pair of solutions consisting of the incident and reflected waves. Rather, this approximation uses a synthetic wave whose spectral components manifest the incident and reflected waves.
null
[ "https://export.arxiv.org/pdf/physics/9909001v1.pdf" ]
15,717,521
physics/9909001
f5db6a4270e63e9dd3f86af39751706d639eaba0
Using rigorous ray tracing to incorporate reflection into the parabolic approximation

Edward R. Floyd ([email protected])
260 Jamaica Village Road, Coronado, CA 92118-3208

2 September 1992 (arXiv:physics/9909001v1 [physics.ao-ph], 1 Sep 1999)

PACS Nos. 43.30D, 43.30E, 43.30G
Keywords: ocean acoustics, parabolic approximation, parabolic equation, backscatter, propagation

We present a parabolic approximation that incorporates reflection. With this approximation, there is no need to solve the parabolic equation for a coupled pair of solutions consisting of the incident and reflected waves. Rather, this approximation uses a synthetic wave whose spectral components manifest the incident and reflected waves.

The (Leontovich-Fock) parabolic approximation, which approximates the elliptic Helmholtz equation by a parabolic partial differential equation, was originally applied to electromagnetic propagation. [1] Tappert and Hardin introduced the parabolic approximation to acoustic propagation in the ocean to account for inseparable range effects in the sound speed profile. [2] In ocean acoustics, the parabolic equation is a useful computational tool for tackling inseparable indices of refraction for which the sound speed profile changes slowly with respect to range. One of the deficiencies of the standard parabolic approximation is that it neglects backscatter. Heretofore, to account for backscatter, one solved the parabolic equation for a coupled pair of solutions (the incident and reflected solutions). Attempts to account for backscatter include, among others, the work of Collins et al., which uses a two-way parabolic approximation. [3] Herein, we present a different approach to include backscatter. Based on rigorous ray tracing, we combine the incident and reflected waves into a modulated synthetic wave that progresses in the incident direction.
Rigorous ray tracing has been developed in a generalized Hamilton-Jacobi representation that accounts for terms ignored by classical ray tracing and other asymptotic methods. It has provided insight into propagation phenomena. Rigorous ray tracing has shown that the existence of a sound-speed gradient is sufficient to induce linear (material) dispersion and angular (geometric) dispersion even for isotropic frequency-independent sound-speed profiles, that rays are not generally orthogonal to wave fronts, that classical ray tracing does not predict all caustics, and that rigorous ray tracing may be solved in closed form whenever the corresponding wave equation may be solved in closed form. 4 Its quantum mechanical analogy, the trajectory representation, has shown how to derive the Helmholtz equation (the stationary Schrödinger equation) from the generalized Hamilton-Jacobi equation. 5 This allows us to construct the wave function or normal mode from Hamilton's characteristic function (a generator of the motion for the trajectory or ray path). These normal modes can be synthetic normal modes that contain the incident and reflected waves as spectral components. 6 We shall use such a normal mode to develop the parabolic equation that accounts for reflection. Our objective in this letter is to present a parabolic equation that accounts for reflection. It is beyond the scope of this letter to solve the resultant parabolic equation. The acoustical community is free to solve this equation by the methods of their choice. This work is presented in two dimensions, which is sufficient to illustrate how to incorporate reflection into the parabolic equation. We assume that the ocean to first order is a stratified medium whose index of refraction varies with depth due to temperature and pressure changes. The range dependence of the index of refraction is second order. This index of refraction is dependent upon two cartesian coordinates: (x, z) for range and depth respectively. 
The index of refraction varies much more rapidly in the z-direction than in the x-direction. We also assume that, for propagation of a wave train through the ocean medium, the reflected wave is much smaller than the incident wave, consistent with the concept of backscatter. Recently, the trajectory representation of quantum mechanics (the quantum analogue to rigorous ray tracing) showed how the reflected and incident waves can be combined to synthesize a wave whose front monotonically advances in the direction of incidence. [6] The synthetic wave is given by

α exp[i(kx − ωt)] (incident wave) + β exp[−i(kx + ωt)] (reflected wave)
= [α² + β² + 2αβ cos(2kx)]^{1/2} × exp{i arctan[((α − β)/(α + β)) tan(kx)] − iωt}   (wave front moves in the +x-direction),   (1)

where α is the amplitude of the incident wave, β is the amplitude of the reflected wave with |β| < |α|, k is the wavenumber, and ω is the angular frequency. This synthetic wave is a normal mode (this follows from the superposition principle). The synthetic wave is spatially modulated in both phase and amplitude, as shown by the right side of Eq. (1). For completeness, the right side of Eq. (1) was derived from the generator of the motion for the trajectory in Ref. 6, and the left side was subsequently developed by analysis. While the right side of Eq. (1) was first developed from Hamilton's characteristic function by the quantum analogy to rigorous ray tracing, [6] we subsequently learned how to do it in a wave representation. This is the contribution of rigorous ray tracing that we use here.

The wave equation in two dimensions is given by

∂²Ψ/∂x² + ∂²Ψ/∂z² = C⁻²(x, z) ∂²Ψ/∂t².

The speed of sound, C, is isotropic and only spatially dependent. The wave equation is separable in time, so that Ψ(x, z, t) = ψ(x, z) exp(iωt). Hence, the wave equation reduces to the two-dimensional Helmholtz equation

∂²ψ/∂x² + ∂²ψ/∂z² + κ²(x, z)ψ = 0,   (2)

where κ(x, z) = ω/C(x, z).
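Eq. (1) above is an exact algebraic identity between the two-wave sum and its modulated single-wave form, so it can be checked numerically. A small NumPy sketch (our check, not from the letter; the principal arctan branch matches provided 0 < β < α and kx stays inside (−π/2, π/2), which the chosen values respect):

```python
import numpy as np

alpha, beta = 1.0, 0.3           # incident and reflected amplitudes, |beta| < |alpha|
k, omega, t = 2.0, 5.0, 0.7      # wavenumber, angular frequency, fixed time
x = np.linspace(-0.7, 0.7, 201)  # keeps k*x inside (-pi/2, pi/2)

# left side of Eq. (1): incident wave + reflected wave
lhs = alpha*np.exp(1j*(k*x - omega*t)) + beta*np.exp(-1j*(k*x + omega*t))

# right side of Eq. (1): spatially modulated synthetic wave
amp = np.sqrt(alpha**2 + beta**2 + 2*alpha*beta*np.cos(2*k*x))
phase = np.arctan((alpha - beta)/(alpha + beta)*np.tan(k*x)) - omega*t
rhs = amp*np.exp(1j*phase)

assert np.max(np.abs(lhs - rhs)) < 1e-12
```

The agreement is at machine precision, confirming that the synthetic wave is just a rewriting of the superposed incident and reflected waves.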
For reference, the standard parabolic approximation substitutes

ψ(x, z) = θ_standard(x) φ_standard(x, z) = exp(ikx) φ_standard(x, z)   (3)

into Eq. (2) to produce, after a standard simplification, the standard parabolic equation [7]

∂²φ_standard/∂z² + i2k ∂φ_standard/∂x + (κ² − k²)φ_standard = 0,   (4)

which does not incorporate reflection. Let us incorporate reflection by considering ψ(x, z) = θ(x)φ(x, z), where

θ = [α² + β² + 2αβ cos(2kx)]^{1/2} exp{i arctan[((α − β)/(α + β)) tan(kx)]}.

There is flexibility in choosing the form of θ; different choices of θ lead to different parabolic equations. [7] We have chosen a θ that is the spatial component of the synthetic wave, Eq. (1). This θ includes reflection while progressing in the incident direction. In the standard parabolic equation, the corresponding θ_standard in Eq. (3) is given by θ_standard = exp(ikx), which only includes the incident wave. Substituting ψ = θφ into the Helmholtz equation leads to

∂²φ/∂z² + ∂²φ/∂x² + 2[(∂θ/∂x)/θ] ∂φ/∂x + (κ² − k²)φ = 0,

where ∂²θ/∂x² = −k²θ by the superposition principle or by direct substitution. We now examine (∂θ/∂x)/θ, which is given by

(∂θ/∂x)/θ = ik [(α² + β² − 2αβ cos(2kx))/(α² + β² + 2αβ cos(2kx))]^{1/2}
× exp{i arctan[((α + β)/(α − β)) tan(kx)] − i arctan[((α − β)/(α + β)) tan(kx)]}.   (5)

For small reflections, β ≪ α, Eq. (5) may be simplified to

(∂θ/∂x)/θ = ik[1 − (2β/α) cos(2kx)] exp[i(2β/α) sin(2kx)] + O[(β/α)²].

Now the transformed Helmholtz equation for small reflection becomes

∂²φ/∂z² + ∂²φ/∂x² + i2k[1 − (2β/α) cos(2kx)] exp[i(2β/α) sin(2kx)] ∂φ/∂x + (κ² − k²)φ = O[(β/α)²].
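The small-reflection simplification of Eq. (5) can likewise be tested numerically, using the fact (from Eq. (1)) that θ equals α e^{ikx} + β e^{−ikx}. A sketch of ours, not from the letter; since the neglected terms are O[(β/α)²], halving β/α should cut the discrepancy roughly fourfold:

```python
import numpy as np

k = 2.0
x = np.linspace(-0.7, 0.7, 101)

def exact(beta, alpha=1.0):
    # theta = alpha e^{ikx} + beta e^{-ikx}; logarithmic derivative theta'/theta
    num = 1j*k*(alpha*np.exp(1j*k*x) - beta*np.exp(-1j*k*x))
    den = alpha*np.exp(1j*k*x) + beta*np.exp(-1j*k*x)
    return num/den

def approx(beta, alpha=1.0):
    # first-order form: ik [1 - (2b/a) cos 2kx] exp[i (2b/a) sin 2kx]
    eps = 2*beta/alpha
    return 1j*k*(1 - eps*np.cos(2*k*x))*np.exp(1j*eps*np.sin(2*k*x))

e1 = np.max(np.abs(exact(0.02) - approx(0.02)))   # beta/alpha = 0.02
e2 = np.max(np.abs(exact(0.01) - approx(0.01)))   # beta/alpha = 0.01
assert e1 < 0.02
assert 3.0 < e1/e2 < 5.0   # error shrinks like (beta/alpha)^2
```

The observed quadratic scaling is consistent with the O[(β/α)²] remainder quoted after Eq. (5).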
The resulting parabolic wave equation with reflection to first order in (β/α) is given by ∂ 2 φ ∂z 2 + i2k[1 − (2β/α) cos(2kx)] exp[i(2β/α) sin(2kx)] ∂φ ∂x + (κ 2 − k 2 )φ = 0 or ∂ 2 φ ∂z 2 + i2k exp[i(2β/α) exp(i2kx)] ∂φ ∂x + (κ 2 − k 2 )φ = 0.(7) Equation (7) is the parabolic equation that incorporates reflection. The difference between Eq. (7) and the standard parabolic equation, Eq. (4), is the additional factor exp[i(2β/α) exp(i2kx)] in the ∂φ/∂x term in Eq. (7). Relative reflection as a function of the fraction β/α is thereby incorporated to first order into φ(x, z). In order to account for the effect of reflection, contemporary solutions to the parabolic approximation solve the parabolic equation for an interacting pair of solutions (incident and reflected) or decouple the pair by simplification. 3,7 Herein, we avoid the problem of coupled solutions. Our solution to Eq. (7) is a single synthetic wave that manifests both the incident and reflected wave throughout the domain. The initialization of φ at some initial range, x i , over the depth column, z, renders the value, φ(x i , z), over an open boundary thereby establishing the Dirichlet boundary conditions for a unique, stable solution for φ. 8 This initialization process is similar to that for the standard parabolic equation, but here we must also specify the fraction β/α. (As a starter, one could use Urick 9 and the references therein to predict β/α from reverberation and backscatter.) Solving Eq. (7) (which is beyond the scope of this letter) in practice, one must not only take the usual precautions associated with the standard parabolic approximation but also take into account that Eq. (7) is an approximation that ignores some second-order terms of (β/α). V A Fock, Electromagnetic Diffraction and Propagation Problems. Pergamon, New York11V. A. Fock, Electromagnetic Diffraction and Propagation Problems (Pergamon, New York, 1965) Chap- ter 11. 
[2] F. D. Tappert and R. H. Hardin, "Computer Simulation of Long Range Ocean Acoustical Propagation Using the Parabolic Equation Method", Proceedings 8th International Congress on Acoustics, Vol. II (Goldcrest, London, 1974), p. 452.
[3] M. D. Collins and R. B. Evans, J. Acoust. Soc. Am. 91, 1357 (1992); J. F. Lingevitch and M. D. Collins, J. Acoust. Soc. Am. 104, 783 (1999).
[4] E. R. Floyd, J. Acoust. Soc. Am. 60, 801 (1976); 75, 803 (1984); 79, 1741 (1986); 80, 877 (1986).
[5] E. R. Floyd, Found. Phys. Lett. 9, 489 (1996), quant-ph/9707051.
[6] E. R. Floyd, Phys. Essays 5, 130 (1992); 7, 135 (1994); An. Fond. L. de Broglie 20, 263 (1995).
[7] S. T. McDaniel, Am. J. Phys. 47, 63 (1979); J. Acoust. Soc. Am. 58, 1178 (1975).
[8] P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Part I (McGraw-Hill, New York, 1953), pp. 691, 706.
[9] R. J. Urick, Principles of Underwater Sound for Engineers (McGraw-Hill, New York, 1967), pp. 187-234.
[]
[ "On the Normalization and Hermiticity of Amplitudes in 4D Heterotic Superstrings", "On the Normalization and Hermiticity of Amplitudes in 4D Heterotic Superstrings" ]
[ "Andrea Pasquinucci \nThe Niels Bohr Institute\nUniversity of Copenhagen\nBlegdamsvej 17DK-2100CopenhagenDenmark\n", "Kaj Roland \nThe Niels Bohr Institute\nUniversity of Copenhagen\nBlegdamsvej 17DK-2100CopenhagenDenmark\n" ]
[ "The Niels Bohr Institute\nUniversity of Copenhagen\nBlegdamsvej 17DK-2100CopenhagenDenmark", "The Niels Bohr Institute\nUniversity of Copenhagen\nBlegdamsvej 17DK-2100CopenhagenDenmark" ]
[]
We consider how to normalize the scattering amplitudes of 4D heterotic superstrings in a Minkowski background. We fix the normalization of the vacuum amplitude (the string partition function) at each genus, and of every vertex operator describing a physical external string state in a way consistent with unitarity of the S-matrix. We also provide an explicit expression for the map relating the vertex operator of an incoming physical state to the vertex operator describing the same physical state, but outgoing. This map is related to hermitean conjugation and to the hermiticity properties of the scattering amplitudes.
10.1016/0550-3213(95)00530-7
[ "https://export.arxiv.org/pdf/hep-th/9508135v1.pdf" ]
15,759,874
hep-th/9508135
a0df5eba3054be7940e327ba79a5e55dea446efa
On the Normalization and Hermiticity of Amplitudes in 4D Heterotic Superstrings

Andrea Pasquinucci and Kaj Roland
The Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark

August, 1995 (arXiv:hep-th/9508135v1, 25 Aug 1995)

We consider how to normalize the scattering amplitudes of 4D heterotic superstrings in a Minkowski background. We fix the normalization of the vacuum amplitude (the string partition function) at each genus, and of every vertex operator describing a physical external string state, in a way consistent with unitarity of the S-matrix. We also provide an explicit expression for the map relating the vertex operator of an incoming physical state to the vertex operator describing the same physical state, but outgoing. This map is related to hermitean conjugation and to the hermiticity properties of the scattering amplitudes.

Introduction and Summary

String theory [1] remains the most promising candidate for a quantum theory of gravity. It has also proven itself useful as a tool for perturbative calculations in Yang-Mills theory [2]. Accordingly, it is of interest to be able to make detailed computations of string scattering amplitudes at any loop order. It is well known how to do this by means of the Polyakov path integral or, equivalently, by computing vacuum expectation values: There exists a "master formula" expressing the connected part of the scattering amplitude at each loop level as an integral over moduli space, where the integrand is obtained as a correlation function of vertex operators, with the appropriate insertions of world-sheet ghosts and Picture Changing Operators (PCOs) [3]. In order to obtain from this "master formula" an actual scattering amplitude (i.e.
a number) one would have to perform the integral over the moduli as well as the summation over spin structures, both of which are usually impossible by analytical means. In this paper we address two other important points that have to be understood in detail to be able to obtain explicit expressions for string scattering amplitudes. First of all, we need to know what is the correct normalization of the vacuum amplitude (the string partition function) at each and every genus, and also what is the normalization of all the vertex operators describing physical external string states. This is obviously an important issue since, for example, it is through the proper normalization of the vertex operators that there appears the relation between the string length scale parameter α ′ , the gravitational coupling constant κ and the gauge coupling constants. In a second quantized theory, like Quantum Field Theory, the proper normalizations are obtained automatically when computing the amplitudes using, for example, Dyson's formula. Instead, in the first quantized framework of string theory, one has to carefully fix all normalizations in a way consistent with unitarity of the S-matrix. Second, we need to understand what is the exact relation between the vertex operators that we use to describe ingoing and outgoing string states in the "master formula". We may formulate this more precisely: Consider some scattering process, λ N out +N in + . . . + λ N out +1 −→ λ N out + . . . + λ 1 ,(1) where each label λ represents a set of single-string state quantum numbers, such as momentum, helicity, charges etc. By definition the quantum mechanical scattering amplitude A f ←i for this process is given by the S matrix element A f ←i = λ 1 , . . . , λ N out ; in|S|λ N out +1 , . . . , λ N out +N in ; in , that involves only "in" states. Incoming strings are described by ket-states, outgoing ones by bra-states. 
In string theory we may compute the connected part of this transition amplitude by means of the "master formula", where each single-string state -whether appearing in eq. (2) as a bra or a ket-is represented by a vertex operator. The question is the following: If some vertex operator W |λ represents the single-string ket-state |λ; in , what is the vertex operator W λ| that represents the single-string bra-state λ; in|? Since λ; in| = (|λ; in ) † it is clear that this question is closely related to the hermiticity properties of the scattering amplitude: The correct choice of W λ| should lead to S-matrix elements consistent with unitarity. In particular it should lead to tree-level T -matrix elements that are real away from the momentum poles. In field theory, in the setting of the Lehmann-Symanzik-Zimmermann reduction formula for S-matrix elements, W λ| is just the hermitean conjugate of W |λ . In string theory, due primarily to the presence of PCOs in the "master formula", the relation between W λ| and W |λ turns out to be somewhat modified. In practice the two problems, finding the correct normalization of the vertex operators appearing in the "master formula", and deriving the exact relation between W λ| and W |λ , can be solved at the same time. We could imagine considering the connected part of the tree-level two-point amplitude (i.e. the inverse propagator) and impose that this should assume the canonical form known from field theory. But the "master formula" for connected string theory amplitudes is only well-defined on the mass shell and here the inverse propagator vanishes identically. Instead we consider another simple object, which is nonzero even on-shell and just as universal as the propagator. This is the amplitude for any given string state to emit or absorb a zero-momentum graviton without changing any of its own quantum numbers. 
More precisely we consider the term that describes the universal coupling of gravity to the 4-momentum of the propagating string state and the requirement that this term assumes its canonical form yields not only an expression for the normalization of the vertex operator in terms of the gravitational coupling κ (which in D = 4 dimensions is related to Newton's constant by κ 2 = 8πG N ), it also provides the desired map between the vertex operators W |λ and W λ| . The procedure that we adopt is a development of the method proposed in ref. [4], where it was suggested to normalize the vertex operator of any given string state by considering the elastic scattering of this string state, and some "reference" string state, for example a graviton, at very high center-of-mass energies, where the interactions are dominated by gravity, and require that the tree-level amplitude for this process reproduces the standard one dictated by the principle of equivalence. But whereas in ref. [4] the method of normalization was only applied to a few examples, in this paper we proceed to find the proper normalization for all vertex operators in the string theory. The relation between the vertex operators W |λ and W λ| and the associated question of unitarity of the S-matrix was also discussed in ref. [5]. We provide an explicit expression for W λ| , including an overall phase factor, which depends on the picture of the vertex operator, that was not manifest in ref. [5]. The paper is organized as follows: In section 1 we review the situation in quantum field theory, where the Lehmann-Symanzik-Zimmermann reduction formula involves different operators for incoming and outgoing particles, in analogy with the situation we encounter in string theory. In section 2 we present the "master formula" for string amplitudes and section 3 contains a discussion of the correct overall normalization of the vacuum amplitude. 
In section 4 we obtain an ansatz for the map between W |λ and W λ| , which is subsequently verified in section 5, where the normalization of the vertex operators is also derived. In section 6 we check that our ansatz is consistent with unitarity in the sense that it leads to real tree-level amplitudes away from the momentum poles. Section 7 contains an explicit example in the framework of four-dimensional heterotic string theories built using free world-sheet fermions. Finally we include two appendices containing various conventions and a third appendix devoted to the proof of the compatibility of the GSO projection and the map between W |λ and W λ| .

1. Field theory

We can formulate scattering amplitudes in 4D field theory in a form close to the one we use in string theory by means of the Lehmann-Symanzik-Zimmermann reduction formula for S-matrix elements [6]:

$$
\langle \lambda_1, \ldots, \lambda_{N_{\rm out}}; {\rm in}|\, S\, |\lambda_{N_{\rm out}+1}, \ldots, \lambda_{N_{\rm out}+N_{\rm in}}; {\rm in}\rangle = \mbox{disconnected terms} \;+\;
\prod_{j=1}^{N_{\rm out}+N_{\rm in}} \frac{i}{\sqrt{Z_j}} \int {\rm d}^4 x_j \;
\langle 0| T\, V_{\lambda_1 |}(x_1) \ldots V_{|\lambda_{N_{\rm out}+N_{\rm in}}}(x_{N_{\rm out}+N_{\rm in}}) |0\rangle . \tag{1.1}
$$

Here we have a Field Theory Vertex (FTV) V |λ (x) corresponding to the 1-particle ket-state |λ; in⟩, where the label λ incorporates the 4-momentum p as well as other quantum numbers, and similarly we have a FTV V λ| (x) corresponding to the 1-particle bra-state ⟨λ; in|. Since by definition of hermitean conjugation ⟨λ; in| = (|λ; in⟩)†, it is not surprising that V λ| (x) is just the hermitean conjugate of V |λ (x),

$$
V_{\lambda |}(x) = \left( V_{|\lambda}(x) \right)^\dagger . \tag{1.2}
$$

For example, for a particle described by a real scalar field φ, the 1-particle states are specified by their momentum only and the Field Theory Vertices are

$$
V_{|p}(x) = e^{ip\cdot x} \left( -\Box_x + m^2 \right) \phi(x)
\qquad\qquad
V_{p|}(x) = e^{-ip\cdot x} \left( -\Box_x + m^2 \right) \phi(x) , \tag{1.3}
$$

where in both cases p⁰ = +(p⃗² + m²)^{1/2} and □ = η^{μν}∂_μ∂_ν with η = diag(−1, 1, 1, 1).
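The amputation operator in eq. (1.3) acts trivially on the free on-shell mode of φ: in the mostly-plus metric, (−□ + m²) annihilates an on-shell plane wave, which is why the FTV only picks up the interacting part of the field. A small symbolic sketch of this fact (symbol names are ours, not from the paper):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
m = sp.symbols('m', positive=True)
p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)

# on-shell energy p0 = +sqrt(p_vec^2 + m^2), as stated below eq. (1.3)
p0 = sp.sqrt(p1**2 + p2**2 + p3**2 + m**2)

# plane wave e^{i p.x}; the mostly-plus metric gives p.x = -p0*t + p_vec.x_vec
wave = sp.exp(sp.I*(-p0*t + p1*x + p2*y + p3*z))

# box = eta^{mu nu} d_mu d_nu = -d_t^2 + d_x^2 + d_y^2 + d_z^2
box = (-sp.diff(wave, t, 2) + sp.diff(wave, x, 2)
       + sp.diff(wave, y, 2) + sp.diff(wave, z, 2))

result = sp.simplify(-box + m**2*wave)
print(result)   # 0: the amputation operator kills the free on-shell mode
```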
Another example is provided by an electron with momentum p⃗ and helicity η, where

$$
V_{|e^-, \vec p, \eta}(x) = -\bar\psi(x) \left( \overleftarrow{\partial\!\!\!/}_x - m \right) u(\vec p, \eta)\, e^{ip\cdot x}
\qquad\qquad
V_{e^-, \vec p, \eta |}(x) = -\bar u(\vec p, \eta) \left( -\partial\!\!\!/_x - m \right) \psi(x)\, e^{-ip\cdot x} . \tag{1.4}
$$

Here ψ̄ = ψ†(iγ⁰) (and similarly ū = u†(iγ⁰)) and {γ^μ, γ^ν} = 2η^{μν}. The spinor u(p⃗, η) of the incoming particle with momentum p⃗ and helicity η satisfies the Dirac equation (i∂̸ → ip̸): (ip̸ − m)u(p⃗, η) = 0, and is normalized according to

$$
u^\dagger(\vec p, \eta)\, u(\vec p, \eta') = 2 p^0\, \delta_{\eta, \eta'} . \tag{1.5}
$$

For particles of nonzero mass m this normalization is equivalent to the more standard one ū(p⃗, η)u(p⃗, η′) = 2m δ_{η,η′}, but unlike the standard normalization condition it can also be used for massless particles.

2. String amplitudes

In this paper we only consider 4D heterotic string models in a Minkowski background. We define the T-matrix element as the connected S-matrix element with certain normalization factors removed,

$$
\frac{\langle \lambda_1; \ldots; \lambda_{N_{\rm out}}; {\rm in}| S |\lambda_{N_{\rm out}+1}, \ldots, \lambda_{N_{\rm out}+N_{\rm in}}; {\rm in}\rangle_{\rm connected}}{\prod_{i=1}^{N_{\rm tot}} \left( \langle \lambda_i; {\rm in}|\lambda_i; {\rm in}\rangle \right)^{1/2}}
= i (2\pi)^4 \delta^4(p_1 + \ldots + p_{N_{\rm out}} - p_{N_{\rm out}+1} - \ldots - p_{N_{\rm tot}})
\prod_{i=1}^{N_{\rm tot}} (2 p^0_i V)^{-1/2} \times T(\lambda_1; \ldots; \lambda_{N_{\rm out}} | \lambda_{N_{\rm out}+1}; \ldots; \lambda_{N_{\rm out}+N_{\rm in}}) , \tag{2.1}
$$

where N_tot = N_in + N_out is the total number of external states, p_i is the momentum of the i'th string state, all of them having p⁰_i > 0, and V is the usual volume-of-the-world factor. We also introduce the dimensionless momentum k^μ ≡ √(α′/2) p^μ. The Minkowski metric is η = diag(−1, 1, 1, 1). For heterotic superstrings in the Neveu-Schwarz-Ramond formalism we have various free conformal fields: The space-time coordinates X^μ, their chiral world-sheet superpartners ψ^μ, the reparametrization ghosts b, c and b̄, c̄, and the superghosts β, γ. On top of this we have various internal degrees of freedom described by a conformal field theory (CFT) with left-moving (right-moving) central charge 22 (9). These may or may not be free.
The g-loop contribution to the T-matrix element is given by the Polyakov path integral, which is equivalent to the following operator formula:

$$
T_g(\lambda_1; \ldots; \lambda_{N_{\rm out}} | \lambda_{N_{\rm out}+1}; \ldots; \lambda_{N_{\rm out}+N_{\rm in}}) =
(-1)^{g-1}\, C_g \int \prod_{I=1}^{3g-3+N_{\rm tot}} {\rm d}^2 m_I
\prod_{l=1}^{g} \left[ \sum_{\vec\alpha_l, \vec\beta_l} C_{\vec\alpha_l \vec\beta_l} \right]
\left\langle \prod_{I=1}^{3g-3+N_{\rm tot}} (\eta_I | b) \prod_{i=1}^{N_{\rm tot}} |c(z_i)|^2
\prod_{A=1}^{2g-2+N_B+N_{FP}} \Pi(w_A)\;
V_{\lambda_1 |}(z_1, \bar z_1) \ldots V_{|\lambda_{N_{\rm tot}}}(z_{N_{\rm tot}}, \bar z_{N_{\rm tot}}) \right\rangle . \tag{2.2}
$$

Here C_g is a constant giving the proper normalization to the string partition function (the g-loop vacuum amplitude). It will be given explicitly in section 3, and (as we shall see) the sign (−1)^{g−1} ensures that C_g is a positive number. m_I is a modular parameter, η_I is the corresponding Beltrami differential, and our conventions for the overlap (η_I|b) with the antighost field b are defined in detail in ref. [7]. The integral is over one fundamental domain of N_tot-punctured genus g moduli space. For each loop, labelled by l = 1, . . . , g, we have a summation over sets of spin structures, collected in vectors α_l and β_l, and with a summation coefficient C_{α_l β_l}. By definition the correlator ⟨. . .⟩ includes the partition function. At tree level, where the non-zero mode partition function is equal to one, the notation ⟨. . .⟩ is also used. At loop level we choose the normalization for the partition function to be the one obtained by applying the sewing procedure. This guarantees sensible factorization properties in the corner of moduli space where the world-sheet degenerates into individual tori connected by long tubes, and implies that the spin-structure summation coefficient is just a product of one-loop summation coefficients, as in eq. (2.2). More details on our conventions for spin structures, partition functions and operator fields in the explicit setting of a heterotic string model built with free world-sheet fermions [8,9,10] can be found in Appendix A, see also refs. [11,12].
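The number of PCO insertions appearing in eq. (2.2), 2g − 2 + N_B + N_FP, is fixed by superghost-charge bookkeeping: each space-time boson sits in the q = −1 picture, each fermion in q = −1/2, each PCO contributes +1, and the total picture must add up to the genus-g background charge 2g − 2. A minimal sketch of this counting (the function name is ours):

```python
def n_pco(g, n_bosons, n_fermion_pairs):
    """PCOs needed so that the total superghost picture
    (-1 per boson, -1/2 per fermion, i.e. -1 per fermion pair,
    +1 per PCO) equals the genus-g background charge 2g - 2."""
    return (2*g - 2) + n_bosons + n_fermion_pairs

# four-graviton amplitude at genus g: 2g + 2 insertions
print([n_pco(g, 4, 0) for g in range(4)])     # [2, 4, 6, 8]

# tree level with fermions: N_B + N_FP - 2 insertions
print(n_pco(0, 2, 1))                         # 1
```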
In analogy with field theory we have introduced a vertex operator V |λ (z, z̄) for each ket string state |λ⟩ and similarly a vertex operator V λ| (z, z̄) corresponding to each bra string state ⟨λ|.¹ At the end of this section we will have more to say about the meaning of these operators. The ghost factors residing in the BRST invariant version of the vertex operator, given by

$$
W_{|\lambda}(z, \bar z) = c(z)\, \bar c(\bar z)\, V_{|\lambda}(z, \bar z)
\qquad \mbox{and} \qquad
W_{\lambda |}(z, \bar z) = c(z)\, \bar c(\bar z)\, V_{\lambda |}(z, \bar z) , \tag{2.3}
$$

have been factored out in eq. (2.2). We take all space-time bosonic vertex operators to be in the q = −1 superghost picture and all the space-time fermionic vertex operators to be in the q = −1/2 superghost picture. In an amplitude involving N_B space-time bosons and 2N_FP space-time fermions this implies that we have to insert 2g − 2 + N_B + N_FP PCOs Π at arbitrary points w_A on the Riemann surface. In practical calculations it can be convenient to insert one PCO at each of the vertex operators describing the space-time bosons so as to change these into the q = 0 picture. This leaves 2g − 2 + N_FP PCOs at arbitrary points. If we "bosonize" the superghosts in the usual way, β = ∂ξ e^{−φ} and γ = e^{+φ} η, the PCO is given explicitly by

$$
\Pi = 2 c\, \partial\xi + 2 e^{\phi}\, T^{[X,\psi]}_F - \frac{1}{2}\, \partial\!\left( e^{2\phi} \eta b \right) - \frac{1}{2}\, e^{2\phi} (\partial\eta) b , \tag{2.4}
$$

where we suppressed the superghost cocycle factor which ensures that e^φ anti-commutes with all other fermionic operators on the world-sheet, and

$$
T^{[X,\psi]}_F = -\frac{i}{2}\, \partial X \cdot \psi + (\mbox{internal part}) \tag{2.5}
$$

is the orbital part of the world-sheet supercurrent (i.e. the part not involving ghosts and superghosts). The "internal part" refers to the internal right-moving degrees of freedom of the CFT with central charge 9.

¹ Since all the states we consider are of the "in" variety, we drop the "in" label from now on.

As stated in the introduction our aim in this paper is twofold: First, since the T-matrix element as defined in eq.
(2.1) corresponds to the connected S-matrix element obtained using states with standard field theory normalization, we have to use vertex operators with a definite normalization in eq. (2.2). So we need to know what is the correct normalization of all vertex operators involved in the theory; and we also need to determine the value of the overall normalization constant C_g. Second, we need to understand what is the exact relation between the vertex operators W λ| (z, z̄) and W |λ (z, z̄). By definition the operator W |λ (z = 0), when acting on the conformal vacuum |0⟩, creates the string state |λ⟩, where (like in section 1) λ is a label incorporating the 4-momentum k (with k⁰ > 0), the helicity and the "particle type" (defined through the values of various charges and family labels).

3. Normalization of the vacuum amplitude

In section 2 we already made use of the basic fact that the problem of normalizing string amplitudes can be separated into two independent problems: One, to fix the normalization constant C_g of the vacuum amplitude at genus g. The other, to fix the normalization of each vertex operator in the theory. It is factorization that leads to this simple result. For example, to see that the normalization of the vertex operators cannot depend on the topology of the world-sheet, we can imagine inserting a vertex operator on a sphere connected by a long tube to some genus-g surface. Similarly, if we assume for the moment that the overall normalization of the amplitude depends on the number N of external states,² as well as on the genus g, through some coefficients C_{g,N}, we find by factorizing the N-point g-loop amplitude into an (N+1)-point g₁-loop amplitude times a 1-point g₂-loop amplitude times a propagator (where g₁ + g₂ = g), that

$$
C_{g_1+g_2, N} \propto C_{g_1, N+1}\, C_{g_2, 1} \tag{3.1}
$$

with a proportionality constant independent of g₁, g₂ and N. Setting g₁ = 0 one gets

$$
C_{g,N} \propto C_{0, N+1}\, C_{g,1} , \tag{3.2}
$$

so that the dependence on N can be studied at tree level.
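The factorization relations (3.1)–(3.2) can be probed symbolically: with an ansatz of the form C_{g,N} = C_g M^N, the ratio of the two sides of eq. (3.1) contains no trace of N, confirming that factorization fixes the N-dependence up to a single constant M. A small sympy sketch (symbol names are ours):

```python
import sympy as sp

M, K, N = sp.symbols('M K N', positive=True)
g1, g2 = sp.symbols('g1 g2', positive=True)

Cg = sp.Function('Cg')                   # genus-dependent constant C_g
C = lambda g, n: Cg(g) * M**n            # ansatz C_{g,N} = C_g * M**N

# eq. (3.1): C_{g1+g2, N} = K * C_{g1, N+1} * C_{g2, 1}
ratio = sp.simplify(C(g1 + g2, N) / (K * C(g1, N + 1) * C(g2, 1)))
print(ratio)     # N has dropped out; only the genus dependence and K*M**2 remain
```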
Again by factorization, at tree level one gets

$$
C_{0, N_1+N_2} \propto C_{0, N_1+1}\, C_{0, N_2+1} , \tag{3.3}
$$

and if we put N₂ = 2 this implies that the ratio C_{0,N+2}/C_{0,N+1} is independent of N or, in other words, that C_{0,N} ∝ (M)^N for some constant M. So we may write

$$
C_{g,N} = C_g\, (M)^N , \tag{3.4}
$$

and if we absorb a factor of M into the normalization of all vertex operators we are then left with an overall normalization constant C_g depending only on the genus. To determine the value of C_g we adopt the method proposed in refs. [4,13]: To consider the elastic scattering of two gravitons in the Regge regime of very high center-of-mass energy and small energy transfer, and impose that the leading part of the g-loop amplitude assumes the universal form needed for the eikonal resummation [14]. In order to get started we need the expression for the graviton vertex operator including the proper normalization, which was found in refs. [15,4]:

$$
V^{(-1)}_{|{\rm grav}}(z, \bar z) = i\, \frac{\kappa}{\pi}\, \bar\epsilon \cdot \bar\partial X(\bar z)\; \epsilon \cdot \psi(z)\; e^{-\phi(z)}\, e^{ik\cdot X(z,\bar z)} , \tag{3.5}
$$

where k² = 0 and we wrote the graviton polarization on the factorized form ǭ ⊗ ǫ with ǫ·k = ǭ·k = 0. Our conventions for the operator fields can be found in Appendix A. Like in eq. (2.4) we suppressed the cocycle factor which ensures that the superghost operator e^{−φ} = δ(γ) anticommutes with all other fermions on the world-sheet. By picture changing (3.5) we arrive at

$$
V^{(0)}_{|{\rm grav}}(z, \bar z) = \lim_{w \to z} \Pi(w)\, V^{(-1)}_{|{\rm grav}}(z, \bar z)
= \frac{\kappa}{\pi}\, \bar\epsilon \cdot \bar\partial X(\bar z) \left[ \epsilon \cdot \partial X(z) - i\, k \cdot \psi(z)\; \epsilon \cdot \psi(z) \right] e^{ik\cdot X(z,\bar z)} . \tag{3.6}
$$

The expressions for V^{(−1)}_{grav|} and V^{(0)}_{grav|} are identical to eqs. (3.5) and (3.6), as long as the polarizations ǫ, ǭ are taken to be real, and we ascribe to the outgoing graviton a momentum with k⁰ < 0. The calculation of the four-graviton g-loop amplitude in the Regge limit starting from eq. (2.2) is different from the one in ref. [4], which was performed using the manifestly world-sheet supersymmetric formulation of the heterotic string.
In fact it is much harder, because even after changing the graviton vertex operators into the (0) picture there remain 2g − 2 PCOs at arbitrary points. To obtain the universal form of the amplitude in the pinching limit relevant for the Regge regime, where the world-sheet degenerates into a ladder-like configuration consisting of two "fast legs" connected by g + 1 long tubes, one should insert g − 1 PCOs on each of the two "fast legs". (Other choices are of course possible but will lead to the presence of total derivatives that make the leading behaviour of the amplitude rather obscure.) Even subject to this constraint there still remain 2g − 2 PCO insertion points, the dependence on which only drops out at the very end of the calculation. In the end we recover the standard result [4] pertaining to D = 4 space-time dimensions,

$$
C_g = \left( \frac{2\kappa^2}{\alpha'} \right)^{g-1} \left( \frac{1}{2\pi} \right)^{5g-3} (\alpha')^{-2} \tag{3.7}
$$

and the sign factor (−1)^{g−1} explicitly displayed in eq. (2.2). The origin of this sign is not too hard to understand. It is needed to compensate the identical sign which appears when we disentangle the anticommuting superghost factors e^φ and the orbital supercurrents T^{[X,ψ]}_F in the product of the 2g − 2 PCOs:

$$
\prod_{\alpha=1}^{2g-2} e^{\phi(w_\alpha)}\, T^{[X,\psi]}_F(w_\alpha)
= (-1)^{g-1} \prod_{\alpha=1}^{2g-2} e^{\phi(w_\alpha)} \prod_{\alpha=1}^{2g-2} T^{[X,\psi]}_F(w_\alpha) . \tag{3.8}
$$

The other three terms present in the PCO (2.4) do not contribute to the leading behaviour of the amplitude in the Regge regime. A comment about the spin structure summation coefficient in eq. (2.2) might be in order at this point: We fix C_g by considering the four-graviton g-loop amplitude in the Regge regime. However, only the 2^g spin structures responsible for graviton exchange contribute to the leading, universal part of the amplitude. How do we know that the normalization we obtain is also correct for all the other spin structures?
The answer to this has already been given in section 2: The requirement that the amplitude factorizes properly in the limit where all loops are taken far apart implies that the spin structure summation coefficient should be a product of one-loop summation coefficients. These are in turn specified by the requirement that the one-loop partition function should be modular invariant, once a (physically sensible) choice of GSO projection has been made [8,9]. 3 The relation between W |λ and W λ| We now consider in detail the connection between the vertex operators describing incoming and outgoing string states. What we are looking for is the map which, given the vertex operator W |λ describing an incoming string state, gives us the vertex operator W λ| describing the same string state but outgoing. As we saw in section 1 this map is just given by hermitean conjugation in the framework of quantum field theory. In string theory this cannot be the whole story, because if the operator field W |λ creates the ket-state |λ in the usual sense, |λ = lim ζ,ζ→0 W |λ (ζ,ζ)|0 , then by definition the hermitean conjugate operator field creates the corresponding bra-state, λ| = lim ζ,ζ→0 0| W |λ (ζ,ζ) † . But in eq. (2.2) both V λ| and V |λ are vertex operators that create ket states when acting on the conformal ket vacuum. So we need to compose two-dimensional hermitean conjugation with some other transformation which also maps a vertex operator creating ket-states into a vertex operator creating bra-states. This transformation should be a symmetry of any 2-dimensional conformal field theory on the sphere. The obvious choice is the BPZ conjugation [16] (see also [17]). Therefore we now quickly review our conventions on hermitean conjugation and BPZ conjugation in conformal field theory. After that we will propose a map from W |λ to W λ| which is just an unknown phase factor times the combination of BPZ and hermitean conjugation. 
In the next section we will check that our guess indeed gives the right map, and in the process the phase factor will be determined. Two-dimensional hermitean conjugation In this section we review our conventions on hermitean conjugation, see also refs. [5,12]. We define the hermitean conjugate of all elementary operators in the conformal field theory by specifying the hermitean conjugate of the corresponding oscillators, with the further understanding that hermitean conjugation also complex conjugates all complex numbers and inverts the order of the operators. For example, if Φ ∆ (z) = n φ n z −n−∆ (4.1) is a primary chiral conformal field of conformal dimension ∆, then the hermitean conjugate of this field is (Φ ∆ (z)) † = 1 z * 2∆ Φ ∆ ( 1 z * ) ,(4.2) where z * denotes the complex conjugate of z (we think of z andz as independent complex variables, so that z * andz need not be equal) and Φ ∆ (z) = n φ † −n z −n−∆ (4.3) is a primary conformal field of the same dimension as Φ ∆ . We say that a field Φ ∆ is hermitean (anti-hermitean) when Φ ∆ = +Φ ∆ (−Φ ∆ ). The hermiticity properties are made more complicated by the presence of the reparametrization ghosts, because on the sphere the basic nonvanishing correlator is c −1c0c1 c −1 c 0 c 1 where (since c † n = c −n ) the operator involved is explicitly anti-hermitean. Therefore either one has to postulate an imaginary value for this correlator or one has to relinquish the property M |A|N = + N |A † |M * of matrix elements involving ghost degrees of freedom. We prefer the second option. We define |c −1 c 0 c 1 | 2 = c −1c0c1 c −1 c 0 c 1 = +1 (4.4) and this implies that M |A|N = − N |A † |M * (4.5) in the presence of ghosts. As a special case of this M |c 0c0 A|N = N |c 0c0 A † |M * (4.6) for any operator A not involving the modes b 0 orb 0 . A list of hermiticity properties for the fields relevant in four-dimensional heterotic string models constructed using free fermions can be found in Appendix B. 
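Eq. (4.2) can be verified directly on a truncated mode expansion: conjugate eq. (4.1) term by term (φ_n → φ†_n, z → z*) and compare with (1/z*)^{2∆} Φ̃(1/z*) built from the modes φ†_{−n} of eq. (4.3). A finite-mode sympy sketch, treating the modes as commuting placeholders (the truncation and symbol names are ours):

```python
import sympy as sp

Delta, nmax = 2, 3                 # sample integer weight and truncation
zs = sp.symbols('zs')              # plays the role of z^*
# modes phi_n^dagger of the conjugate field of eq. (4.3), as placeholders
phid = {n: sp.Symbol(f'phid_{n}') for n in range(-nmax, nmax + 1)}

# term-by-term conjugate of eq. (4.1): phi_n -> phi_n^dagger, z -> z^*
lhs = sum(phid[n] * zs**(-n - Delta) for n in range(-nmax, nmax + 1))

# right-hand side of eq. (4.2): (1/z^*)^{2 Delta} * Phi~(1/z^*)
tilde_at = sum(phid[-n] * (1/zs)**(-n - Delta) for n in range(-nmax, nmax + 1))
rhs = (1/zs)**(2*Delta) * tilde_at

print(sp.simplify(lhs - rhs))      # 0: the two expansions agree mode by mode
```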
BPZ invariance in conformal field theories Consider a conformal field theory on the cylinder. Introduce complex coordinates z = exp{i(σ + τ )} andz = exp{i(−σ + τ )} and rotate to Euclidean time τ → −iτ . Changing sign on τ and σ simultaneously gives rise to the Belavin-Polyakov-Zamolodchikov (BPZ) transformation z → 1/z [16,17]. This transformation defines a globally holomorphic diffeomorfism on the sphere. At the level of the operator fields, the transformation changes the coordinate system from (z) to (w) where w = 1/z: Φ(z = ζ) BPZ −→ Φ(w = ζ) . (4.7) For a primary conformal field of dimension ∆ Φ ∆ (w = ζ) = e −iǫπ∆ 1 ζ 2∆ Φ ∆ z = 1 ζ , (4.8) where for non-integer conformal dimensions we have to choose a specific phase for −1, 4 parametrized by an odd integer ǫ, when forming the transformation factors dz dw = e −iǫπ 1 w 2 and dw dz = e +iǫπ 1 z 2 . (4.9) The BPZ transformation does not reverse the order of operators and it leaves all complex numbers unchanged. It cannot itself be generated by any operator acting on ket states. Instead it defines a map from ket-states to bra-states as follows: |Φ ≡ lim ζ→0 Φ(z = ζ)|0 BPZ −→ Φ BPZ | ≡ lim ζ→0 0|Φ(w = ζ) . (4.10) The label "BPZ" on the state Φ BPZ | is necessary in order to avoid confusion with the bra state Φ| ≡ lim ζ→0 0| (Φ(z = ζ)) † defined by hermitean conjugation, because this will in general differ from Φ BPZ |. (Another possibility, preferred by many authors, is to take BPZ conjugation as the defining map from ket to bra and introduce instead a label Φ h.c. | on the state defined by hermitean conjugation.) Composing BPZ and hermitean conjugation The composition of BPZ and hermitean conjugation gives a map from ket to ket |Φ −→ Φ BPZ | † = |Φ BPZ (4.11) which acts on the primary conformal fields as follows Φ ∆,∆ (z = ζ,z =ζ) −→ Φ ∆,∆ (w = ζ,w =ζ) † (4.12) = e iǫπ(∆−∆) Φ ∆,∆ (z = ζ * ,z =ζ * ) . Notice that for fields with non-integer value of ∆ −∆, BPZ and hermitean conjugation do not commute. 
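Spelling out eq. (4.12) for a chiral primary (∆̄ = 0) makes the composition, and the origin of the phase, transparent; one applies first the BPZ transformation (4.8) and then the hermiticity rule (4.2):

```latex
\left[\Phi_{\Delta}(w=\zeta)\right]^{\dagger}
  = \left[ e^{-i\epsilon\pi\Delta}\,\zeta^{-2\Delta}\,
           \Phi_{\Delta}(z = 1/\zeta) \right]^{\dagger}
  % eq. (4.8): BPZ transformation to the w = 1/z coordinate
  = e^{+i\epsilon\pi\Delta}\,(\zeta^{*})^{-2\Delta}\,
    (\zeta^{*})^{+2\Delta}\,\widetilde{\Phi}_{\Delta}(\zeta^{*})
  % eq. (4.2) applied at z = 1/\zeta, so 1/z^{*} = \zeta^{*}
  = e^{i\epsilon\pi\Delta}\,\widetilde{\Phi}_{\Delta}(\zeta^{*}) .
```

Performing the two operations in the opposite order conjugates the phase, so the two orderings agree only when e^{2iǫπ∆} = 1, i.e. when ∆ (more generally ∆ − ∆̄) is an integer, as stated above.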
However, this is not a problem for vertex operators describing BRSTinvariant on-shell string states, which satisfy ∆ =∆ = 0. The transformation (4.12) is our educated guess for the map taking W |λ into W λ| , only we will allow the possibility that some phase factor χ may appear. In other words, our ansatz is that if some incoming string state with definite quantum numbers is created, in the superghost charge q picture, by the vertex operator W (q) |λ , |λ = lim z,z→0 W (q) |λ (z,z)|0 ,(4.13) then the vertex operator we have to use in the "master formula" (2.2) to obtain the Tmatrix element involving the outgoing state λ| is given by W (q) λ| (z = ζ,z =ζ) ≡ χ q W (q) |λ (w = ζ * ,w =ζ * ) † . (4.14) As was emphasized at the beginning of section 4, the operator W (q) λ| , like any vertex operator, creates a state by acting on the ket vacuum. From the definitions (4.14) and (4.10) we find this state to be lim ζ,ζ→0 W (q) λ| (z = ζ,z =ζ)|0 = χ q |λ BPZ . (4.15) In other words, we obtain the T -matrix element involving the bra-state λ| by inserting into the Polyakov path integral an operator creating the state χ q |λ BPZ . Notice that whereas the state |λ always has k 0 > 0, the state |λ BPZ has k 0 < 0. Since the combination of BPZ and hermitean conjugation maps L 0 → L 0 and Q BRST → −Q BRST ,(4.16) and since BPZ conjugation is a world-sheet symmetry on the sphere, it follows that if the state |λ is a physical on-shell state, L 0 |λ = Q BRST |λ = 0, then so is the state If we restrict ourselves to BRST invariant on-shell string states, both W |λ and W |λ are primary conformal fields of dimension zero, and eq. (4.14) becomes χ q |λ BPZ ,W (q) λ| (ζ,ζ) = χ q W (q) |λ (ζ,ζ) .(4.17) We will now proceed to verify our ansatz (4.17) by considering the amplitude for the string state |λ to emit (absorb) a very soft graviton. We will find that the phase factor χ q , as anticipated by our notation, depends only on the choice of picture. 
In particular, if we restrict ourselves to the pictures q = −1 and q = −1/2, the phase factor χ_q depends only on whether the string state is a space-time boson or a space-time fermion. At the same time we will be able to determine the correct overall normalization of the vertex operators to be used in the formula (2.2) for the T-matrix element.

5. Normalization of vertex operators

In this section we consider the computation of the tree amplitude for some given on-shell string state to absorb or emit a very soft graviton. We perform the analysis for a generic four-dimensional heterotic string theory where the graviton vertex operator has the form of eq. (3.5), but the argument can be readily applied to other string models. We first discuss the case of space-time bosonic states and then the case of the space-time fermionic states.

Normalization of space-time bosonic vertex operators

We first recall what is the situation in field theory. Consider a basis of propagating particle states, labelled by an index N. The tree-level T-matrix element for such a particle to emit (absorb) a graviton contains a universal term which, in the limit where the graviton momentum is zero, assumes the form

$$
-2\kappa\, \epsilon \cdot p\; \bar\epsilon \cdot p\; P_{MN}
= -\frac{4\kappa}{\alpha'}\, \epsilon \cdot k\; \bar\epsilon \cdot k\; P_{MN}
= -C_0 \left( \frac{\kappa}{\pi} \right)^3 \epsilon \cdot k\; \bar\epsilon \cdot k\; P_{MN} , \tag{5.1}
$$

where we wrote the graviton polarization on the factorized form ǫ ⊗ ǭ and C₀ is the overall normalization constant for string tree amplitudes, given by eq. (3.7). The behaviour (5.1) describes the canonical coupling of gravity to the p^μp^ν-part of the energy-momentum tensor of the propagating particle. The sign of the amplitude (5.1) obviously depends on the sign convention for the graviton field h_{μν}. Eq. (5.1) corresponds to the expansion

$$
g_{\mu\nu} = \eta_{\mu\nu} - 2\kappa \left( h_{\mu\nu} + \lambda\, \eta_{\mu\nu}\, h^{\sigma}{}_{\sigma} \right) + O(h^2) \tag{5.2}
$$

regardless of the coefficient λ chosen for the trace term. The sign chosen for the graviton vertex operator (3.5) is in agreement with this convention, as one may check by computing the 3-graviton tree amplitude from eqs. (3.5) and (3.6).
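The chain of equalities in eq. (5.1) mixes three ingredients: the gravitational coupling κ, the dimensionless momenta k = √(α′/2) p, and the tree-level constant C₀ obtained from eq. (3.7) at g = 0. The following sympy sketch checks that the three forms agree (scalar products ǫ·p, ǭ·p are represented by plain symbols):

```python
import sympy as sp

kappa, alphap = sp.symbols('kappa alphap', positive=True)
ep, ebp = sp.symbols('ep ebp', positive=True)   # eps.p and epsbar.p

# k = sqrt(alphap/2) * p, so eps.k = sqrt(alphap/2) * eps.p
ek, ebk = sp.sqrt(alphap/2)*ep, sp.sqrt(alphap/2)*ebp

# C_g from eq. (3.7) evaluated at g = 0
C0 = (2*kappa**2/alphap)**(-1) * (sp.S(1)/(2*sp.pi))**(-3) * alphap**(-2)

form1 = -2*kappa*ep*ebp
form2 = -4*kappa/alphap * ek*ebk
form3 = -C0*(kappa/sp.pi)**3 * ek*ebk
print(sp.simplify(form1 - form2), sp.simplify(form1 - form3))   # 0 0
```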
The T -matrix element for the process "N + graviton → M " is given by Here we may expand the fields c,c, ∂X and∂X in oscillators. Only modes with L 0 =L 0 = 0 can contribute to the "universal" part (5.1) of the amplitude. This is because this part of the amplitude, like that of a freely propagating string state, conserves L 0 (X µ ),L 0 (X µ ), L 0 (b, c) andL 0 (b,c). We may imagine the basis |N, k of string states to diagonalize all these operators. Then for n = 0 we may write e.g. T 0 (M, k|graviton; N, k) = −C 0 W (−1) M,k| (z 1 ,z 1 )W (0) |grav (z,z)W (−1) |N,k (z 2 ,z 2 ) ,(5.α µ n = − 1 n [L 0 (X µ ), α µ n ] (5.6) and this vanishes between the states M, k| and |N, k since by assumption they have the same value of L 0 (X µ ). We are thus left with T 0 (M, k|graviton; N, k) = −χ −1 C 0 κ π ǫ · kǭ · k M, k|c 0 c 0 |N, k + . . . ,(5.7) where "+ . . ." denotes possible other terms in the amplitude with a different kinematical structure than the universal part (5.1). By eq. |N,−k , 5 there is probably no fundamental reason to prefer any specific value of the overall complex phase factor, just as in field theory the phase of a complex field is an unphysical degree of freedom. If, on the other hand, W (−1) |N,k is proportional to W (−1) |N,−k it becomes natural to impose a reality condition, which we can take to be W (−1) |N,k = + W (−1) |N,−k or V (−1) |N,k = − V (−1) |N,−k (5.12) in agreement with the choice made for the graviton vertex operator (3.5). This implies that W (−1) N,k| = W (−1) |N,−k . Even in this case there remains a choice a sign for the vertex operator. This is completely dependent on convention, just like the sign of the graviton field in the expansion (5.2). Normalization of space-time fermionic vertex operators We now consider the case of space-time fermions. The field theory description is now more complicated than in the case of space-time bosons, since the graviton field should be described in terms of the vierbein, e µ m . 
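Before turning to the string computation, it may help to see the spinor conventions of section 1 at work numerically. From the algebra {γ^μ, γ^ν} = 2η^{μν} with η = diag(−1,1,1,1) and the normalization (1.5) one finds ū(p⃗,η) γ^ν u(p⃗,η) = −2ip^ν. The sketch below uses one explicit representation, γ^μ = iγ̃^μ with γ̃^μ the standard Dirac matrices, and takes ū = u†(iγ⁰); both the representation and this reading of the ū convention are our assumptions, made only for illustration:

```python
import numpy as np

# Pauli matrices and the standard (mostly-minus) Dirac matrices gamma~^mu
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
g0t = np.block([[I2, 0*I2], [0*I2, -I2]])
gts = [g0t] + [np.block([[0*I2, s], [-s, 0*I2]]) for s in (s1, s2, s3)]

# a representation of {gamma^mu, gamma^nu} = 2 eta, eta = diag(-1,1,1,1):
# gamma^mu = i * gamma~^mu (anti-hermitean gamma^0)
gammas = [1j*g for g in gts]

m, pz = 0.5, 1.3                     # sample mass and momentum along z
E = np.sqrt(pz**2 + m**2)
chi = np.array([1.0, 0.0])           # helicity +1/2 along the z axis
u = np.concatenate([np.sqrt(E + m)*chi, (pz/np.sqrt(E + m))*(s3 @ chi)])

# Dirac equation (i p-slash - m) u = 0, with p_mu = (-E, 0, 0, pz)
p_lo = [-E, 0.0, 0.0, pz]
pslash = sum(q*g for q, g in zip(p_lo, gammas))
print(np.linalg.norm((1j*pslash - m*np.eye(4)) @ u))   # ~ 0

# normalization of eq. (1.5): u^dagger u = 2 p^0
print(np.vdot(u, u).real, 2*E)

# with ubar = u^dagger (i gamma^0): ubar gamma^nu u = -2 i p^nu
ubar = u.conj() @ (1j*gammas[0])
p_up = [E, 0.0, 0.0, pz]
print([complex(ubar @ gammas[nu] @ u) for nu in range(4)])
print([-2j*q for q in p_up])
```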
The canonical coupling to gravity of a Dirac fermion, labelled by an index N , is given by the action d 4 x e ψ M {γ m e µ m ∂ µ + m} ψ N P M N ,(5.13) where we ignore the spin-connection terms which all involve derivatives of the vierbein and thus give rise to terms in the fermion-fermion-graviton amplitude proportional to the graviton momentum. When expanding e µ m around the flat background we can ignore the deviation of e = det{e µ m } from unity since this gives rise only to terms proportional to the trace of the graviton field. One obtains the following expression, analogous to eq. (5.1) for the universal part of the fermion-fermion-graviton T -matrix element at tree level: −iκu( p, η)γ ν p µ u( p, η)ǫ νǭµ P M N ,(5.14) where, by virtue of the Gordon identity u( p, η)γ ν u( p, η ′ ) = −2ip ν δ η,η ′ ,(5.15) we recover the bosonic result (5.1), as dictated by the principle of equivalence. 5 Notice that if W (−1) |N,k is proportional to exp(ik·X) then W (−1) |N,k is proportional to exp(−ik·X). In the string theory analysis we again consider a complete set of states |N, k , labelled by N , now built from the superghost vacuum |q = −1/2 , again satisfying b 0 =b 0 = 0 and having a definite momentum k. We may now proceed exactly as in section 5.1, only now we have to use the superghost charge (−1) version of the graviton vertex operator, given by eq. (3.5). In the limit of vanishing graviton momentum we obtain T 0 (M, k|graviton; N, k) = −C 0 W (−1/2) M,k| (z 1 ,z 1 )W (−1) |grav (z,z)W (−1/2) |N,k (z 2 ,z 2 ) = −χ −1/2 C 0 M, k|W (−1) |grav (1)|N, k . (5.16) As in the bosonic case only zero-mode operators contribute to the part of the amplitude in which we are interested, so that T 0 (M, k|graviton; N, k) = χ −1/2 C 0 κ πǭ · k ǫ ν M, k|c 0 c 0 ψ ν 0 δ(γ 0 )|N, k + . . . . 
(5.17) Here we may recognize the form (5.14) of the result obtained in field theory, since the zero mode ψ ν 0 of the operator field ψ ν furnishes a representation of the Clifford algebra, and so is completely analogous to the gamma matrix γ ν appearing in the expression (5.14). The matrix M, k|c 0 c 0 ψ ν 0 δ(γ 0 )|N, k transforms as a space-time vector and therefore has to be proportional to the momentum k ν . Since ψ ν 0 and δ(γ 0 ) anti-commute it is manifestly anti-hermitean (q.v. eq. (4. Unitarity requires that the tree-level T -matrix element is real except when the momentum flowing in some intermediate channel happens to be on the mass-shell corresponding to some physical state in the theory. In field theory the imaginary part appears as a result of the iǫ-prescription present in the propagator that happens to be on-shell. In string theory it appears as a result of some divergency in the integral over the Koba-Nielsen (KN) variables that has to be treated in a way consistent with the iǫ-prescription in field theory [18,19,20]. What we can rather easily show is that as long as the integrals over the KN variables are convergent the expressions (5.10) and (5.22) lead to a hermitean T -matrix at tree level. At genus zero the formula (2.2) can be rewritten as T 0 (λ 1 ; . . . ; λ N out |λ N out +1 ; . . . ; λ N out +N in ) = (6.1) − C 0 N tot i=4 d 2 z i c(z 1 )c(z 2 )c(z 3 )c(z 1 )c(z 2 )c(z 3 ) × N B +N F P −2 A=1 Π(w A ) V λ 1 | (z 1 ,z 1 ) . . . V |λ N tot (z N tot ,z N tot ) . The T -matrix is hermitean if and only if the quantity (6.1) equals T 0 (λ N out +N in ; . . . ; λ N out +1 |λ N out ; . . . ; λ 1 ) * = (6.2) + C 0 N tot i=4 d 2 z * i V |λ 1 (z 1 ,z 1 ) † . . . V λ N tot | (z N tot ,z N tot ) † × (Π(w N B +N F P −2 )) † . . . (Π(w 1 )) † (c(z 3 )) † . . . (c(z 1 )) † , where we used eq. (4.5). In terms of the vertex operators V (where the cc factor present in W has been removed, q.v. eq. 
(2.3)), the relations (5.10) and (5.22) acquire an extra minus sign (because cc̄ is an anti-hermitean operator):

$$
V^{(-1)}_{\lambda |}(z, \bar z) = -\, \widetilde V^{(-1)}_{|\lambda}(z, \bar z)
\qquad\qquad
V^{(-1/2)}_{\lambda |}(z, \bar z) = -iY\, \widetilde V^{(-1/2)}_{|\lambda}(z, \bar z) , \tag{6.3}
$$

which, by taking the hermitean conjugate, leads to the inverse relations

$$
V^{(-1)}_{|\lambda}(z, \bar z) = -\, \widetilde V^{(-1)}_{\lambda |}(z, \bar z)
\qquad\qquad
V^{(-1/2)}_{|\lambda}(z, \bar z) = -iY\, \widetilde V^{(-1/2)}_{\lambda |}(z, \bar z) . \tag{6.4}
$$

Since the operators V_{|λ} and V_{λ|} have conformal dimensions ∆ = ∆̄ = 1 we find, for i = 1, . . . , N_out:

$$
\left( V_{|\lambda_i}(z_i, \bar z_i) \right)^\dagger
= \left( \frac{1}{z^*_i} \right)^2 \left( \frac{1}{\bar z^*_i} \right)^2 \widetilde V_{|\lambda_i}\!\left( \frac{1}{z^*_i}, \frac{1}{\bar z^*_i} \right)
= (\mbox{phase factor}) \times \left( \frac{1}{z^*_i} \right)^2 \left( \frac{1}{\bar z^*_i} \right)^2 \times V_{\lambda_i |}\!\left( \frac{1}{z^*_i}, \frac{1}{\bar z^*_i} \right) , \tag{6.5}
$$

where the phase factor we pick up is minus one for space-time bosons and iY for space-time fermions. By eqs. (6.4) we pick up exactly the same phase factor from vertex operators of the type V_{λ|}. This amounts to an overall sign (−1)^{N_B + N_FP}, N_FP being the number of space-time fermion pairs and N_B the number of space-time bosons. This sign exactly cancels the sign produced by the N_B + N_FP − 2 PCOs, which are anti-hermitean. Finally, reordering the ghost factors in (6.2) in accordance with eq. (6.1), we obtain a minus sign cancelling the one that was introduced by using eq. (4.5). Since the transformation factor (z*_i)^{−2}(z̄*_i)^{−2} appearing in eq. (6.5) either cancels a similar one coming from the ghost operators (for i = 1, 2, 3), or is just the required jacobian to transform d²z_i into d²ζ_i where ζ_i = 1/z*_i (i ≥ 4), we finally recover eq. (6.1) multiplied by a phase factor that, at the end, is just plus one. This concludes the proof that our relation between W_{|λ} and W_{λ|} leads to a hermitean T-matrix at tree level.

7. An explicit example

In this section we provide an explicit example of the map (5.24) in the context of four-dimensional heterotic string models of the Kawai-Lewellen-Tye (KLT) type [8,9], where the internal degrees of freedom are described by 22 left-moving and 9 right-moving free complex fermions.
We bosonize all these fermions (as well as the four Majorana fermions ψ µ ), using the explicit prescription for bosonization in Minkowski space-time proposed in ref. [12]. In this formulation any state of the conformal field theory (excluding the reparametrization ghosts) can be obtained by means of non-zero mode creation operators from the generic ground state which is specified by the space-time momentum k, the "momentum" J = q = A 34 which is (minus) the "momentum" of the field φ ≡ Φ (34) that is introduced when "bosonizing" the superghosts. Since [J (L) 0 , Φ (K) ] = δ L K , the operator creating such a ground state from the conformal vacuum is S A (z,z) e ik·X(z,z) ,(7.1) where S A (z,z) ≡ 34 L=1 e A L Φ (L) (z,z) C (L) A L ,(7.2) is a spin field operator and C (L) is a cocycle factor, see ref. [12] for details. The range of values allowed for the A L depends on the details of the KLT model we happen to consider, see refs. [8,11]. We assume the level-matching condition L 0 −L 0 = 0 to be satisfied. The hermitean conjugate of the operator S A (z,z) can be computed using the hermiticity properties of the various fields, as outlined in Appendix B (see also ref. [12]). One finds S A (z,z) = σ (33) 1 C −1 B A S B (z,z) , (7.3) where (σ (33) 1 ) AB = 32 L=1 δ A L ,B L δ A 33 +B 33 ,0 δ A 34 ,B 34 (7.4) and C −1 is the inverse of the "charge conjugation matrix" C AB = 33 L=1 δ A L +B L ,0 δ A 34 ,B 34 e iπA·Y ·B (7.5) defined in terms of the 34 × 34 cocycle matrix Y KL (see refs. [12,11]). The example we want to study is that of a physical space-time fermion described by a ground state. To obtain a BRST-invariant state one has to consider a vertex operator which involves a linear combination of spin fields, V (−1/2) |V,k (z,z) = κ π V A (− 1 2 ) (k) S A (z,z) e ik·X(z,z) ,(7.6) where the spinor V A (− 1 2 ) (k) has superghost charge −1/2, i.e. 
is proportional to δ A 34 ,−1/2 , and satisfies a Dirac equation which can be obtained from the requirement that the 3/2order pole in the operator product expansion (OPE) of the supercurrent T [X,ψ] F with the operator (7.6) vanishes. If we define the gamma matrices by the OPE ψ µ (z)S A (w,w) OPE = 1 √ 2 (Γ µ ) B A S B (w,w) 1 √ z − w + . . . ,(7.7) the Dirac equation assumes the matrix form (V (− 1 2 ) (k)) T D (k) = 0 or (D (k)) T V (− 1 2 ) (k) = 0 ,(7.8) where the Dirac operator is D (k) = k µ Γ µ − M ,(7.9) M being a mass operator that we do not need to write down explicitly. When the vertex operator is written as in eq. (7.6) we are no longer free to choose the normalization of the spinor V (− 1 2 ) (k). It should be fixed in accordance with eq. (5.23). In the next subsection we will explicitly verify that the correct normalization is (V (− 1 2 ) (k)) † V (− 1 2 ) (k) = √ 2 |k 0 | ,(7.10) which is analogous in structure to eq. (1.5). By using eq. (7.3) in the expression (6.3) we find the "outgoing" vertex operator corresponding to (7.6) to be V (−1/2) V,k| (z,z) = −χ −1/2 κ π V A (− 1 2 ) (k) * σ (33) 1 C −1 B A S B (z,z) e −ik·X(z,z) ,(7.11) where χ −1/2 = iY . A Sample Computation. We will now explicitly compute the amplitude for a space-time fermion described by the vertex operator (7.6) to absorb a zero-momentum graviton. In particular we will obtain the relation (5.19) and show how the sign Y appearing in this formula is related to the choice of cocycles. Inserting eqs. (7.6), (7.11) and (3.5) into eq. (5.16) we obtain: T 0 (V, k|graviton; V, k) = −C 0 W (−1/2) V,k| (z 1 ,z 1 ) W (−1) |grav (z,z)W (−1/2) |V,k (z 2 ,z 2 ) (7.12) = iχ −1/2 C 0 κ π 3 V A (− 1 2 ) (k) * σ (33) 1 C −1 B A V C (− 1 2 ) (k) ǫ µ × S B (z 1 ,z 1 )ψ µ (z)e −Φ (34) (z) (C (34) ) −1 S C (z 2 ,z 2 ) × ǭ ·∂X(z)e −ik·X(z 1 ,z 1 ) e ik·X(z 2 ,z 2 ) c(z 1 )c(z)c(z 2 )c(z 1 )c(z)c(z 2 ) . 
By explicit computation one finds

⟨e^{−Φ^(34)(z)} (C^(34))^{−1} ψ^µ(z) S_B(z_1,z̄_1) S_C(z_2,z̄_2)⟩ = (1/√2) [Γ^µ C^(−1)]_{BC} (z_1 − z_2)/[(z − z_1)(z − z_2)] |z_1 − z_2|^{−2(2+m²)} ,   (7.13)

where m is the mass of the space-time fermion, k² + m² = 0, and we introduced another family of "charge conjugation matrices" by

C^(q)_{AB} = ∏_{L=1}^{33} δ_{A_L+B_L,0} · δ_{A_34+B_34+q+2,0} e^{iπ A·Y·B}   (7.14)

for any value of q ∈ Z. Similarly one finds

⟨ǭ·∂̄X(z̄) e^{−ik·X(z_1,z̄_1)} e^{ik·X(z_2,z̄_2)}⟩ = i ǭ·k (z̄_1 − z̄_2)/[(z̄ − z̄_1)(z̄ − z̄_2)] |z_1 − z_2|^{−2k²} ,
⟨|c(z_1) c(z) c(z_2)|²⟩ = |(z_1 − z)(z − z_2)(z_1 − z_2)|² .   (7.15)

Substituting (7.13) and (7.15) into eq. (7.12) we obtain

T_0(V, k|graviton; V, k) = χ_{−1/2} C_0 (κ/π)³ (1/√2) ǭ·k ǫ_µ V_(−1/2)(k)† σ^(33)_1 C^{−1} Γ^µ C^(−1) V_(−1/2)(k) .   (7.16)

One may show that

[Γ^µ C^(−1)]_{AB} = (−1)^{A_34+1/2} [C^(−1) Σ (Γ^µ)^T]_{AB} ,   (7.17)

where

Σ_{AB} ≡ ∏_{L=1}^{34} δ_{A_L,B_L} exp(iπ Σ_{L=1}^{33} Y_{34,L} B_L) ,   (7.18)

and the sign (−1)^{A_34+1/2} is effectively equal to one, since the matrices appearing in eq. (7.16) are sandwiched between spinors with superghost charge −1/2. For the same reason the inverse charge conjugation matrix C^{−1} is effectively equal to (C^(−1))^{−1}. Finally it is straightforward to verify that Γ^0, as defined by eq. (7.7), may also be written in the form

Γ^0 = iY_{34,33} Σ σ^(33)_1 ,   (7.19)

and since Σ and σ^(33)_1 … Inserting eqs. (7.17) and (7.19) into eq. (7.16) we obtain

T_0(V, k|graviton; V, k) = −i χ_{−1/2} Y_{34,33} C_0 (κ/π)³ (1/√2) ǭ·k ǫ_µ V_(−1/2)(k)† (Γ^0)^T (Γ^µ)^T V_(−1/2)(k) .   (7.20)

At this point we may use the Gordon-like identity

V_(−1/2)(k)† (Γ^0)^T (Γ^µ)^T V_(−1/2)(k) = −√2 k^µ .   (7.21)

This equation can be proven directly using the Dirac equation (7.8), but it is easier to note that Lorentz covariance forces the right-hand side to be proportional to k^µ and then fix the proportionality constant by setting µ = 0 and using equation (7.10).
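To spell out that last step, the following is a sketch of the fixing of the constant. We assume here the mostly-plus signature η = diag(−1,+1,+1,+1) implied by the mass-shell condition k² + m² = 0, the Clifford relation (Γ⁰)² = η⁰⁰ · 1 = −1 suggested by the OPE (7.7), and positive energy k⁰ = |k⁰| > 0; these conventions are our reading of the text, not stated explicitly at this point in the source.

```latex
% Lorentz covariance gives the ansatz
%   V_{(-\frac12)}(k)^\dagger (\Gamma^0)^T (\Gamma^\mu)^T V_{(-\frac12)}(k) = c\,k^\mu .
% Setting \mu = 0 and using (\Gamma^0)^T = -\Gamma^0 together with (7.10):
\begin{aligned}
V_{(-\frac12)}(k)^\dagger (\Gamma^0)^T (\Gamma^0)^T V_{(-\frac12)}(k)
  &= V_{(-\frac12)}(k)^\dagger\,\Gamma^0\Gamma^0\,V_{(-\frac12)}(k)\\
  &= \eta^{00}\,V_{(-\frac12)}(k)^\dagger V_{(-\frac12)}(k)
   = -\sqrt{2}\,|k^0| \;=\; c\,k^0 ,
\end{aligned}
% so c = -\sqrt{2} for k^0 = |k^0| > 0, reproducing eq. (7.21).
```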
Thus we finally obtain

T_0(V, k|graviton; V, k) = i χ_{−1/2} Y_{34,33} C_0 (κ/π)³ ǭ·k ǫ·k .   (7.22)

This agrees with the correct result (5.1) provided we choose

χ_{−1/2} = iY = iY_{34,33}   (7.23)

and shows that the sign Y appearing in eq. (5.18) should be identified with the component Y_{34,33} of the cocycle matrix.

Appendix A: Conventions for operators and partition functions

In this appendix we summarize our conventions for operator fields, partition functions and spin structures in the explicit setting of a Kawai-Lewellen-Tye (KLT) heterotic string model. For more details, see refs. [12] and [11].

Space-time coordinate field:

X^µ(z,z̄) = q^µ − ik^µ log z − ik^µ log z̄ + i Σ_{n≠0} (a^µ_n/n) z^{−n} + i Σ_{n≠0} (ā^µ_n/n) z̄^{−n}   (A.1)

X^µ(z,z̄) X^ν(w,w̄) ~ −η^{µν} log(z − w) + c.c. + …   (A.2)

⟨1⟩_{g−loop} = (det ∂̄_0)^{−D/2} (det 2π Im τ)^{−D/2} ,   (A.3)

where τ is the period matrix (as given in ref. [21]) and the explicit expression for det ∂̄_0 can be found for example in ref. [22]. It is normalized to give plus one in the limit where all loops are pinched.

Majorana fermion field:

ψ^µ(z) = Σ_n ψ^µ_n z^{−n−1/2} ,  {ψ^µ_n, ψ^ν_m} = η^{µν} δ_{n+m,0} ,   (A.4)

where the mode index n is integer (half odd integer) for Ramond (Neveu-Schwarz) boundary conditions. The anti-commutation relations are equivalent to the OPE

ψ^µ(z) ψ^ν(w) ~ η^{µν}/(z − w) + … .   (A.5)

When computing correlation functions we usually bosonize all fermion fields.

Bosonized complex fermion:

φ(z) φ(w) ~ log(z − w) + …   (A.6)

⟨∏_{i=1}^N e^{q_i φ(z_i)}⟩_{g−loop} = δ_{Σ_i q_i,0} (det ∂̄_0)^{−1/2} ∏_{i<j} (E(z_i, z_j))^{q_i q_j} Θ[α;β](Σ_i q_i ∫^{z_i} ω/2πi | τ) ,   (A.7)

where ω_µ is normalized to have period 2πi δ_{µ,ν} around the cycle a_ν (µ, ν = 1, . . . , g), E(z, w) is the prime form (with short-distance behaviour E(z, w) = (z − w) + O(z − w)²) and we define

Θ[α;β](z|τ) = Σ_{r∈Z^g} exp 2πi { (1/2) Σ_{µ,ν=1}^g (r_µ + 1/2 − α_µ) τ_{µν} (r_ν + 1/2 − α_ν) + Σ_{µ=1}^g (r_µ + 1/2 − α_µ)(z_µ + β_µ + 1/2) } .   (A.8)

Superghosts: Our conventions for mode expansions, OPEs and "bosonization" of the superghosts β and γ are the standard ones [23].
We always remain inside the "little" algebra, i.e. excluding the zero mode of η and ξ. Our convention for the partition function is N i=1 e q i φ(z i ) g−loop = (A.9) δ N i=1 q i −2g+2,0 (det∂ 0 ) 1/2 N i=1 (σ(z i )) −2q i i<j (E(z i , z j )) −q i q j × g µ=1 e −2πi(1/2+β µ )   Θ α β   − N j=1 q j z j z 0 ω 2πi + 2∆ z 0 |τ     −1 . This expression agrees with eq. (36) of ref. [24], except for the overall sign which differs in two regards: First there is the phase factor appearing at the beginning of the third line above, which is chosen in accordance with our definition of the spin structure summation coefficient given below. Second, the sign of the argument of the theta function is opposite to that of ref. [24], which amounts to a factor of minus one for odd spin structures. The sign we quote above for the argument of the theta function is the one that is obtained when the correlation function is carefully constructed by sewing [25]. Hence it is the sign consistent with factorization. Our conventions for the differential σ and the Riemann class ∆ z 0 are in accordance with ref. [21]. Reparametrization ghosts: Our conventions for reparametrization ghosts follow ref. [23]. The normalization of the partition function is the standard one, and the explicit expressions can be found in refs. [7,26]. By definition the correlator 3g−3+N tot I=1 (η I |b) N tot i=1 c(z i ) 2 (A.10) is positive definite. Spin structure summation coefficient: Finally the spin structure summation coefficient in eq. (2.2) is a product of one-loop summation coefficients which are given in accordance with ref. [11] by C α µ β µ = 1 i M i × (A.11) exp    −2πi   i (n µ i + δ i,0 )( j k ij m µ j + s i − k i0 ) + i m µ i s i + 1 2      , where µ = 1, . . . , g. The spin structure α L β L of the fermion labelled by L ∈ {1, . . . , 32} is related to the integers m µ i and n µ i through α L,µ = i m µ i (W i ) (L) (A.12) β L,µ = i n µ i (W i ) (L) . For more details, see ref. [11]. 
Appendix B: Hermiticity properties of operators In this appendix we summarize the hermiticity properties of various primary operators and provide some of the details in the derivation of eq. (7.3). The cocycle operators C (L) appearing in the expression (7.2) and defined in detail in refs. [11,12] satisfy Here C (L) gh is the cocycle operator involving the number operators of the reparametrization ghosts and of the (η, ξ) system (excluding the (η, ξ) zero modes) and is given explicitly by Φ ∆ Φ ∆ X µ X µ ∂X µ −∂X µ e ik·X e −ik·X ψ µ ψ µ Φ (L)Φ (33) Φ (33) Φ (34) Φ (34) − 2 log e A L Φ (L) L=33,34 e A L Φ (L) L=33,34 β −β γ γ η −η ξ −ξ ∂ξ ∂ξ b b c cC (L) † = C (L) −1 CC (L) gh = exp −iπY 34,L N (η,ξ) − N (b,c) + Nb ,c . (B.2) Using the hermiticity properties listed above we find that the hermitean conjugate of the spin field (7.2) is given by S A = 32 L=1 C (L) gh 2A L exp −2πi 32 L=1 (Y 34,L A 34 + Y 33,L A 33 ) J (L) 0 × S (34) A 34 S (33) A 33 S (32) −A 32 . . . S (1) −A 1 . (B.3) Here the factors appearing on the right hand side of the equality sign in the first line of eq. (B.3) can be ignored: Acting on any string state in the theory they give plus one. It is sufficient to verify this on some generic ground state |B , created by the spin field operator S B , since non-zero mode creation operators contribute with integer values to J (L) 0 . One has to recall that A 33 and A 34 are always either both integer or both half-integer, that the number operators appearing in C (L) gh always take integer values, and finally one has to make use of the fact (see ref. [11]) that for a consistent choice of cocycles the following conditions hold If we finally reorder the individual spin fields in eq. (B.3) we obtain S A = 32 L=1 δ A L +B L ,0 34 L=33 δ A L ,B L S(1)B 1 . . . S (34) B 34 e iπB·Y ·B = σ (33) 1 C −1 AB S B , (B.5) where C −1 AB = 33 L=1 δ A L +B L ,0 δ A 34 ,B 34 e iπB·Y ·B (B.6) is the explicit expression for the matrix whose inverse is given by eq. (7.5). 
Appendix C: Compatibility of the GSO projection and the map between W |λ and W λ| In this appendix we show explicitly that the map (4.17) from W (q) |λ to W (q) λ| is compatible with the GSO projection in the setting of a four-dimensional KLT heterotic string model [8,9]. In other words we want to show that given a vertex operator W (q) |λ creating a state |λ satisfying the GSO conditions, the state χ q |λ BPZ , which is created by W (q) λ| , also satisfies the GSO conditions. We first recall what is the form of the GSO projection conditions. We consider as usual all world-sheet fermions to be bosonized. Then the GSO conditions involve only the where our notation is that of ref. [11], except for the labelling of the complex fermions which is chosen in accordance with ref. [12] We can actually do something more general, and for this we take the sum of eqs. (C.1) and (C.5). We will show that this sum is zero modulus one, this obviously implies that if eq. (C.1) is satisfied then also eq. (C.5) is, and viceversa. In summing the two equations we make use of the fact that the s i are half-integers [8], that eq. (C.4) implies N Let us now return to the identity (C.6) that we were supposed to prove. Substituting eq. (C.7) and recalling that M j k ij MOD 1 = 0 [8], we find that the term j k ij (m j + m j ) cancels out. Next we recall that [8] 2(k 0i + k i0 ) g surface. It is clear that the vertex operator cannot know about the distant handles. This is true even for vertex operators describing space-time fermions, even though these involve spin fields which are non-local operators on the world-sheet, because space-time fermions always come in pairs and we may imagine isolating both of the corresponding vertex operators (and the branch cut connecting them) on a sphere far away from all handles. regardless of what value we choose for the phase χ q . It is less clear that the map (4.14) is also consistent with the GSO projection, i.e. 
that |λ satisfies the GSO projection conditions if and only if |λ BPZ does, because the two states will in general reside in different sectors of the string theory. An explicit proof in the framework of a Kawai-Lewellen-Tye (KLT) type heterotic string model is given in Appendix C. bosonic particle states with momentum p, labelled by an index N , in terms of which the propagator assumes the diagonal form P M N /(p 2 + m 2 N ) where P M N = +δ M,N for physical states and P M N = −δ M,N for possible negative norm states. For example, for a photon with space-time vector index M = µ we have P M N = η µν . and comparing with eq. (5.1) in the case where the state |M is itself a graviton. Consider now computing the universal part (5.1) of the graviton absorption amplitude at genus zero in string theory. We consider a complete set of space-time bosonic string states |N, k , labelled by N , built from the superghost vacuum |q = −1 , satisfying b 0 = b 0 = 0 and having definite momentum k. We may think of N as specifying physical quantities such as helicity, charges and family labels. have to use the graviton vertex operator in the superghost charge (0) picture, given by eq. (3.6), and the states |N, k and M, k| are now assumed to be physical, k| are primary conformal fields of dimension zero.By projective invariance on the sphere we can fix z 1 = ∞, z = 1 and z 2 k| vertex operator is assumed to have conformal dimension zero we can evaluate it in the coordinate system (w), where w = 1/z, without introducing any transformation factor. In so doing we just undo the BPZ transformation in the definition eq. ǭ ·∂X(1) ǫ · ∂X(1)|N, k .(5.5) (4.6) the matrix M, k|c 0 c 0 |N, k is manifestly hermitean and by an appropriate choice of basis it may be diagonalized such that M, k|c 0 c 0 |N, k = N bos |M,k * N bos |N,k P M N , (5.8) where either P M N = 0 (so that the state does not propagate) or |P M N | = δ M,N . 
Our conventions (4.4) imply that P M N = +δ M,N for all physical external states but −δ M,N for negative norm states (such as the "timelike" photon). The factor N bos |N,k specifies the overall normalization of the state |N, k . By inserting eq. (5.8) into eq. (5.7) we obtain finally the correct result (5.1) if we take the phase factor introduced in eq. (4.14) to be χ −1 = 1 and choose the normalization constant to be the same for all states, N bos |M,k = N bos |N,k proper normalization of the state |N, k is given by M, k|c 0 c 0 |N, definition |N, k = lim ζ,ζ→0 W |N,k (ζ,ζ)|0 , eq. (5.11) specifies the normalization of the vertex operator up to a complex phase factor. If the vertex operator W 6)) and by choosing an appropriate basis it can be diagonalized such that M, k|c 0 c 0 ψ ν 0 δ(γ 0 )|N, k = iY k ν N ferm |M,k * N ferm |N,k P M N . (5.18) In section 7 we will explicitly derive this formula in the context of a KLT heterotic string model. It is quite analogous to eq. (5.8). The factor of i reflects the fact that the matrix on the left-hand side is anti-hermitean and (as we shall see in section 7) the constant factor Y = ±1 depends on the conventions chosen for the spin fields. Finally, P M N = +δ M,N for physical states, as always. Like in the bosonic case the factor N ferm |N,k specifies the normalization of the string state |N, k . If we insert eq. (5.18) into eq. (5.17) we finally obtain T 0 (M, k|graviton; N, k) (5.19) = Y i χ −1/2 C 0 N ferm |M,k * N ferm |N,k κ π ǭ · k ǫ · k P M N + . . . , the normalization of the states in the same universal way as for the bosons, N ferm |M,k = N ferm |N,k , and N ferm |N,k = N bos |N,proper normalization of the string state is given by M, k|c 0 c 0 ψ ν 0 δ(γ 0 )|N, k = iY k PCO (2.4) is an anti-hermitean operator which satisfies Bose statistics, eqs. (5.10) and (5.22) can be generalized to the superghost charge q picture as follows W of half-integer q (i.e. 
pictures describing space-time fermions) the phase factor (−1) q+1 involves a choice of sign, which is parametrized by Y according to eq. (5.20), i.e. (−1) 1/2 = iY . 6. Space-Time hermiticity An important check on the correctness of our expressions (5.10) and (5.22) for W (q) N,k|is provided by the requirement that the T -matrix element obtained from eq. (2.2) has the right hermiticity properties. leads to a hermitean T -matrix at tree level away from the resonances. (L) 0 = 0A L of the 33 bosons Φ (L) introduced by the bosonization, and the superghost charge J(34) 0 (Γ 0 ) T = −Γ 0 . ( 5 . 518) should be identified with the component Y 34,33 of the cocycle matrix. At the same time we have verified the correctness of the normalization (7.10) for the spinor V − 1 2 (k). e A L Φ (L) L=1,...,32 e −A L Φ (L) L=1,...,32 C (L) † = C (L) exp −2πi 32 K=1 Y LK J (K) 0for L = 33, 34 . Y 33,L B L + ǫB 33 − Y 34,33 B 34 = integer φ 34 [B] ≡ 32 L=1 Y 34,L B L + Y 34,33 B 33 − ǫB 34 = integer φ 33 [B] ground state |B existing in the theory, regardless of the value chosen for the parameter ǫ = ±1. resulting bosons, and it is sufficient to consider a generic ground state as created by the operator (7.1). If this state satisfies the GSO condition, so do all the states obtained from it by means of non-zero mode creation operators.The GSO projection assumes the form (see ref.[11])W i · N [[α]] − s i (N m j − s i − k 0i + W i · [[α]] = and the present paper, i.e. the left-moving fermions are labelled by L = 1, . . . , 22, the internal right-moving ones by L = 23, . . . , 31, the space-time related ones by L = 32 (the transverse) and L = 33 (the longitudinal), and the superghosts by L = 34. Let us briefly recall that the sector (i.e. the set of boundary conditions for the fermions enumerated by L = 1, . . . , 32) is specified by the 32-component vector α = i m i W i , (C.2) where the integer m i takes values 0, 1, . . . 
, M i − 1, M i being the smallest integer such that M j W j (j not summed) is a vector of integer numbers. The number operators N α L ]] = A L − [[1 − α L ]] + 1 2 for L = 1, . . . , 33 (C.3) N (34) [[α 32 ]] = A 34 − [[α 32 ]] − 1 2 . As we have seen in section 7, given the state |A with J A L = −A L for L = 1, . . . , 32 +A L for L = 33, 34 . (C.4) This behaviour follows directly from the hermiticity properties of the various fields, as summarized in Appendix B. In eq. (7.3) it is encoded in the presence of the charge conjugation matrix C which changes sign on all the A L except A 34 and the factor σ on A 33 only. Thus we have to check if the GSO projection conditions (C.1) are invariant under the transformation A L → A L given by (C.4). The situation is somewhat complicated by the fact that in general the states |A and |A BPZ do not reside in the same sector. Let us denote by a tilde ( ) the quantities pertaining to the state |A BPZ . We want to show then that if the state |A , residing in the sector α, satisfies eq. (C.1), then the state |A BPZ , residing in the sector α, satisfiesW i · N [[ α]] − s i ( N m j − s i − k 0i + W i · [[ [ 1 32 A= 132[ α 32 ]]and that the number operators always have integer eigenvalues. Thus by summing we obtainW i · N [[α]] + N [[ α]] − j k ij (m j + m j ) − 2k 0i + W i · ([[α]] + [[ α]])to verify this identity we need to find the relation between α and α. We claimthat m j = 0 if m j = 0 M j − m j otherwise , (C.7) which is consistent with 0 ≤ m j , m j ≤ M j − 1. The proof is simple: Let L ∈ {1, . . . , 32}.Since the number operators N(L) [[α L ]] take integer values, it follows from eq. (C.3) that the allowed values for A L in the sector α = j m j W j are A L = 1 2 − j m j (W j ) (L) + (integer) . (C.8) Since (W j ) (L) = w j,(L) /M j where w j,(L) is an integer satisfying 0 ≤ w j,(L) ≤ M j − in the sector α we have for L = 1, . . . , L . 
(C.9) and (C.10) we find the obvious solution m j = −m j (modM j ) which is equivalent to (C.7). It is worth noticing that if we sum the two equations [[α L ]] + [[ α L ]] . (C.12) Thus, [[α L ]] = [[ α L ]] only if the fermion labelled by L satisfies either Neveu-Schwarz boundary conditions ([[α L ]] = [[ α L ]] = 1/2) or Ramond boundary conditions ([[α L ]] = [[ α L ]] = 0). = 2W i · W 0 (C.13) where W 0 is the vector with all entries equal to 1/2. Substituting all this into eq. (C.6) and using eq. (C.3) we find that eq. (C.6) holds if and only ifW i · ([[α]] + [[ α]] − [[1 − α]] − [[1 − α]]) rather obviously, [[1 − α]] is a vector whose L'th component is [[1 − α L ]]. The equation (C.14) is indeed satisfied since [[α L ]] + [[ α L ]] − [[1 − α L ]] − [[1 − α L ]] = 0 .(C.15) Indeed, if [[α L ]] = 0 then also [[ α L ]] = [[1 − α L ]] = [[1 − α L ]] = 0 and eq. (C.15) is trivially satisfied. Otherwise [[1 − α L ]] = 1 − [[α L ]], [[1 − α L ]] = 1 − [[ α L ]] and by eq. (C.12) [[α L ]] + [[ α L ]] = 1 and again eq. (C.15) holds. Thus eq. (C.6) is satisfied, and we have shown that if the vertex operator W (q) |λ creates a state in the GSO projected spectrum, i.e. a state that satisfies eq. (C.1), then so does the vertex operator W (q) λ| , and viceversa. Table B1 : B1Hermiticity properties of various primary conformal fields. The hermitean conjugate field Φ ∆ of a primary conformal field Φ ∆ of dimension ∆ is defined by eq. (4.2) or eq. (4.3). In table B.1 we list various operator fields and their hermitean conjugates. In this section only we drop the label tot on N tot . Strictly speaking modular invariance of the one-loop partition function does not specify the summation coefficient for those spin structures where one (or more) of the free fermions on the world-sheet develop a zero mode, because these spin structures give zero contribution to the partition function. 
In order to check that no extra phase factors appear in these cases one may for example consider the factorization of a two-loop vacuum amplitude into one-loop tadpoles [9]. We carried out this check explicitly in the framework of Kawai-Lewellen-Tye [8] heterotic string models. In their original paper [16], Belavin, Polyakov and Zamolodchikov avoided this problem by considering instead the conformal transformation z → −1/z, but we prefer to consider z → 1/z, in accordance with most subsequent authors.

References

[1] M.B. Green, J.H. Schwarz and E. Witten, Superstring Theory, Cambridge University Press, 1987.
[2] Z. Bern and D. Kosower, Nucl. Phys. B379 (1992) 451.
[3] For a review, see E. D'Hoker and D.H. Phong, Rev. Mod. Phys. 60 (1988) 917.
[4] G. Cristofano, R. Marotta and K. Roland, Nucl. Phys. B392 (1993) 345.
[5] H. Sonoda, Nucl. Phys. B326 (1989) 135.
[6] C. Itzykson and J.-B. Zuber, Quantum Field Theory, McGraw-Hill, New York, 1980.
[7] K. Roland, Phys. Lett. B312 (1993) 441.
[8] H. Kawai, D.C. Lewellen and S.-H.H. Tye, Nucl. Phys. B288 (1987) 1.
[9] I. Antoniadis, C. Bachas, C. Kounnas and P. Windey, Phys. Lett. 171B (1986) 51; I. Antoniadis, C. Bachas and C. Kounnas, Nucl. Phys. B289 (1987) 87; I. Antoniadis and C. Bachas, Nucl. Phys. B298 (1988) 586.
[10] R. Bluhm, L. Dolan and P. Goddard, Nucl. Phys. B309 (1988) 330.
[11] A. Pasquinucci and K. Roland, "On the computation of one-loop amplitudes with external fermions in 4d heterotic superstrings", Nucl. Phys. B440 (1995) 441, hep-th/9411015.
[12] A. Pasquinucci and K. Roland, "Bosonization of world-sheet fermions in Minkowski space-time", Phys. Lett. B351 (1995) 131, hep-th/9503040.
[13] G. Cristofano, M. Fabbrichesi and K. Roland, Phys. Lett. B244 (1990) 397; A. Bellini, G. Cristofano, M. Fabbrichesi and K. Roland, Nucl. Phys. B356 (1991) 69.
[14] D. Amati, M. Ciafaloni and G. Veneziano, Phys. Lett. B197 (1987) 81.
[15] S. Weinberg, Phys. Lett. B156 (1985) 309.
[16] A.A. Belavin, A.M. Polyakov and A.B. Zamolodchikov, Nucl. Phys. B241 (1984) 333.
[17] B. Zwiebach, Nucl. Phys. B390 (1993) 33, hep-th/9206084.
[18] K. Aoki, E. D'Hoker and D.H. Phong, Nucl. Phys. B342 (1990) 149; E. D'Hoker and D.H. Phong, Phys. Rev. Lett. 70 (1993) 3692; Theor. Math. Phys. 98 (1994) 306, hep-th/9404128; Nucl. Phys. B440 (1995) 24, hep-th/9410152.
[19] A. Berera, Nucl. Phys. B411 (1994) 157.
[20] J.L. Montag and W.I. Weisberger, Nucl. Phys. B363 (1991) 527.
[21] P. Di Vecchia, M.L. Frau, K. Hornfeck, A. Lerda, F. Pezzella and S. Sciuto, Nucl. Phys. B322 (1989) 317.
[22] P. Di Vecchia, in DST Workshop on Particle Physics - Superstring Theory, eds. H.S. Mani and R. Ramachandran, World Scientific, p. 41.
[23] D. Friedan, E. Martinec and S. Shenker, Nucl. Phys. B271 (1986) 93.
[24] E. Verlinde and H. Verlinde, Phys. Lett. B192 (1987) 95.
[25] P. Di Vecchia, invited talk at the Workshop on String Quantum Gravity and Physics at the Planck Scale, Erice, June 1992, Int. Journ. Mod. Phys. A.
[26] P. Di Vecchia, M. Frau, K. Hornfeck, A. Lerda, F. Pezzella and S. Sciuto, Nucl. Phys. B333 (1990) 635.
arXiv:2305.05044 (pdf: https://export.arxiv.org/pdf/2305.05044v1.pdf); Semantic Scholar corpus ID 258564843.
Some properties of affine C-semigroups J I García-García D Marín-Aragón A Sánchez-Loureiro A Vigneron-Tenorio Some properties of affine C-semigroups C-semigroupFrobenius elementfundamental gapirreducible semi- grouppseudo-Frobenius elementpseudo-symmetric semigroupspecial gapsym- metric semigroup 2020 Mathematics Subject Classification: 20M14 (Primary)68R05 (Secondary) Numerical semigroups have been extensively studied throughout the literature, and many of their invariants have been characterized. In this work, we generalize some of the most important results about symmetry, pseudo-symmetry, or fundamental gaps, to affine C-semigroups. In addition, we give algorithms to compute the tree of irreducible Csemigroups and C-semigroups with a given Frobenius vector.have been devoted to study different properties of C-semigroups in general and generalized numerical semigroups in particular. For example,in [6], the authors show that any N p -semigroup has a unique minimal system of generators and provide an algorithm to compute its set of gaps from a set of generators of the N p -semigroup. In[12], an extension of Wilf's conjecture for numerical semigroups is given to C-semigroups, and in [3], another one is introduced for N p -semigroups. This paper also studies the irreducibleness of the N p -semigroups. More recent papers about N p -semigroups are [1], [4], [5],[7], and[17]. For any C-semigroup, in [9], the authors mainly provide an algorithm to check if an affine semigroup given by a generating set is a C-semigroup and to compute its gap set.The main goal of this work is to generalize several results of numerical semigroups to C-semigroups. A C-semigroup is C-reducible (simplifying reducible) when it can be expressed as an intersection of two C-semigroups containing it properly (see[13]); S is C-irreducible (simplifying irreducible) in another case. 
In this work, we also characterize irreducible C-semigroups from their genus and from their generalized Frobenius numbers. We also study when a subset of a cone C is the gap set of a C-semigroup or determines it. These results are complemented by some algorithms for checking the corresponding properties.Moreover, some algorithms for computing some objects related to Csemigroups are provided. In particular, it is defined a tree whose vertex set is the set of all irreducible C-semigroups with a fixed Frobenius vector. An algorithm to compute this tree is also introduced. For any integer cone C and any non-null element f ∈ C, we give a procedure to obtain all C-semigroups with Frobenius element equal to f .The results of this work are illustrated with several examples. For this purpose, we have implemented all the algorithms shown in this work in our library CommutativeMonoids dedicated to the study of numerical and affine semigroups (see[11]) developed by the authors in Python [14] and C++. A notebook containing all the examples of this work can be found at the following link https://github.com/D-marina/CommutativeMonoids/ blob/master/CClassCSemigroups/SomePropertiesCSemigroup.ipynb.The content of this work is organized as follows: Section 1 is devoted to provide the reader with the necessary background for the correct understanding of the work. In Section 2, we introduce the concept of symmetric and pseudo-symmetric C-semigroups, and some characterizations of these concepts are given. We turn our attention in Section 3 to the irreducible C-semigroups, we prove that we can build a tree with all these semigroups with a fixed Frobenius vector, and we show an algorithm for computing them. Similarly, an algorithm for computing all the C-semigroups with a fixed Frobenius vector is given in Section 5. Finally, in Section 4, we study the fundamental gaps of a C-semigroup, and for any set X ⊂ C, we give conditions to determine if C \ X is a C-semigroup. 
Introduction

A C-semigroup is a non-empty subset of N^p (for some non-zero natural number p), containing 0 and closed under addition, such that C \ S is finite, where C ⊂ N^p denotes the integer cone generated by S. These semigroups are the natural generalization of numerical semigroups to higher dimensions. Moreover, some objects related to numerical semigroups can be generalized to C-semigroups. For example, the elements of C \ S are called gaps of S, and the cardinality of the gap set is called the genus of S. We denote this set by H(S) and its cardinality by g(S). There are other objects whose generalization requires fixing a total order on N^p. For example, the Frobenius number of a numerical semigroup is the maximum integer that does not belong to it, but its generalization to C-semigroups is not unique unless we fix a total order. So, once a total order ≺ on N^p is fixed, the Frobenius element of S is max_≺(C \ S). Even though C-semigroups frequently appear in semigroup theory, it was not until the publication of [10] that they became an object of study in their own right. That paper defines generalized numerical semigroups as the C-semigroups where the cone C is N^p. Since 2016, several works

Preliminaries

In this work, Q, Q_≥, and N denote the sets of rational numbers, non-negative rational numbers, and non-negative integers, respectively. For any n ∈ N, [n] denotes the set {1, . . . , n}. A non-degenerate rational cone in Q^p_≥ is the convex hull of finitely many half-lines in Q^p_≥ emanating from the origin. These cones can also be determined from their supporting hyperplanes. The integer points of a rational cone form an integer cone in N^p. It is well known that an integer cone C ⊂ N^p is finitely generated if and only if a rational point exists in each of its extremal rays. Moreover, any subsemigroup of C is finitely generated if and only if the subsemigroup contains an element in each extremal ray of C.
Both results are proved in [2, Chapter 2], where an in-depth study of cones can also be found. We assume that every integer cone considered in this work is finitely generated. Throughout this work, we use some particular gaps in H(S) whose definitions are the same as for numerical semigroups [15]:

• x ∈ H(S) is a fundamental gap if 2x, 3x ∈ S. The set of these elements is denoted by FG(S).
• x ∈ H(S) is a pseudo-Frobenius element if x + (S \ {0}) ⊂ S; the set of pseudo-Frobenius elements of S is denoted by PF(S), and its cardinality is known as the type of S, t(S).
• x ∈ H(S) is a special gap of S if x ∈ PF(S) and 2x ∈ S. We denote by SG(S) the set of special gaps of S.

In this work, we consider different orders on some sets. For a non-empty set L ⊂ N^p and x, y ∈ N^p, consider the partial order x ≤_L y if y − x ∈ L. Besides, we fix a total order on N^p determined by a monomial order. A monomial order is a total order on the set of all (monic) monomials in a given polynomial ring (see [8]). From the properties of a monomial order, the (induced) total order ⪯ on N^p satisfies:

• if a ⪯ b and c ∈ N^p, then a + c ⪯ b + c;
• if c ∈ N^p \ {0}, then 0 ≺ c.

From the fixed total order on N^p, the Frobenius vector of S, F(S), is the maximal element of H(S) with respect to ⪯, and we set n(S) as the cardinality of N(S) = {x ∈ S | x ⪯ F(S)}. The following lemma generalizes Proposition 2.26 in [15] to C-semigroups.

Lemma 1. Let S be a C-semigroup. Then g(S) ≤ t(S) n(S).

Proof. Just as for numerical semigroups, for any x ∈ H(S) there exists (f, s) ∈ PF(S) × S such that f = x + s; put f_x = min{f ∈ PF(S) | f − x ∈ S}. Hence, the map H(S) → PF(S) × N(S) defined by x → (f_x, f_x − x) is injective, and thus g(S) ≤ t(S) n(S).

2 Symmetric and pseudo-symmetric C-semigroups

Fix S ⊂ N^p a C-semigroup with genus g. In this section, we characterize the symmetric and pseudo-symmetric C-semigroups using their genus.
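The invariants PF(S) and SG(S) defined above, and the symmetric/pseudo-symmetric classification studied in this section, are easy to experiment with when p = 1 (ordinary numerical semigroups). The sketch below is ours, not the authors' CommutativeMonoids library; it works directly from the gap set, and the classification uses the definition PF(S) = {F} (symmetric) or PF(S) = {F, F/2} (pseudo-symmetric).

```python
def pseudo_frobenius(gaps):
    """PF(S) for S = N \\ gaps (case p = 1): gaps x with x + s in S for
    every nonzero s in S.  Only s <= F matter: if s > F then x + s > F
    is automatically in S."""
    gaps = set(gaps)
    F = max(gaps)                                   # Frobenius element
    small = [s for s in range(1, F + 1) if s not in gaps]
    return sorted(x for x in gaps
                  if all(x + s not in gaps for s in small))

def special_gaps(gaps):
    """SG(S): pseudo-Frobenius elements x with 2x in S."""
    gaps = set(gaps)
    return [x for x in pseudo_frobenius(gaps) if 2 * x not in gaps]

def classify(gaps):
    """Symmetric iff PF(S) = {F}; pseudo-symmetric iff PF(S) = {F, F/2}."""
    gaps = set(gaps)
    F = max(gaps)
    pf = set(pseudo_frobenius(gaps))
    if pf == {F}:
        return "symmetric"
    if F % 2 == 0 and pf == {F // 2, F}:
        return "pseudo-symmetric"
    return "neither"

print(pseudo_frobenius({1, 2, 4, 7}), classify({1, 2, 4, 7}))  # [7] symmetric
print(classify({1, 2}))                                        # pseudo-symmetric
```

The two printed examples are S = ⟨3, 5⟩ (symmetric, with PF(S) = {7}) and S = ⟨3, 4, 5⟩ (pseudo-symmetric, with PF(S) = {1, 2} and F = 2).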
We say that S is C-irreducible when PF(S) is equal to {F(S)} or {F(S), F(S)/2} (see [13]). If PF(S) = {F(S)}, we say that S is symmetric, and pseudo-symmetric when PF(S) = {F(S), F(S)/2}. For any element n ∈ C, let I_S(n) be the set {s ∈ S | s ≤_C n}.

Remark 2. Note that, for any s ∈ S, s ∈ I_S(F(S)) if and only if F(S) − s ∈ H(S). Thus, g ≥ |I_S(F(S))|.

We have the following characterizations of symmetric and pseudo-symmetric C-semigroups.

Proof (of Proposition 3). Assume that S is symmetric. Thus, F(S) is the unique pseudo-Frobenius element of S. Furthermore, for any x ∈ H(S), there exists s ∈ S such that x + s = F(S), that is, s ∈ I_S(F(S)), and then |I_S(F(S))| ≥ g. Since g ≥ |I_S(F(S))|, we conclude that g = |I_S(F(S))|. Conversely, note that I_S(F(S)) = {s ∈ S | F(S) − s ∈ H(S)}, and suppose that g = |I_S(F(S))|. Hence, every x ∈ H(S) \ {F(S)} satisfies F(S) − x ∈ S, and then x is not a pseudo-Frobenius element of S.

Proposition 4. Let S be a C-semigroup with genus g. Then S is pseudo-symmetric if and only if g = 1 + |I_S(F(S))| and F(S)/2 ∈ N^p.

Proof. Assume that S is pseudo-symmetric; thus PF(S) = {F(S), F(S)/2}, and g > |I_S(F(S))|. For all x ∈ H(S) \ {F(S)/2}, there exists some s ∈ S such that x + s = F(S) or x + s = F(S)/2. If the first option is satisfied, s ∈ I_S(F(S)). In the other case, x + s + F(S)/2 = F(S), and then s + F(S)/2 also belongs to I_S(F(S)). Besides, F(S)/2 + s ≠ F(S) for every s ∈ S. Hence, |I_S(F(S))| ≥ g − 1. Conversely, suppose that g = |I_S(F(S))| + 1 and F(S)/2 ∈ N^p. Hence, there exists only one x ∈ H(S) \ {F(S)} with x + s ≠ F(S) for all s ∈ S, so PF(S) = {F(S), x}. If x ≠ F(S)/2, then there is s ∈ S such that F(S)/2 + s = F(S), and F(S)/2 ∈ S, which is not possible. So x = F(S)/2.

Consider the Apéry set of a C-semigroup S relative to b ∈ S \ {0}: Ap(S, b) = {a ∈ S | a − b ∈ H(S)}. The following proposition shows the relationship between the pseudo-Frobenius elements of S and its Apéry set.
Then PF(S) = {a − b | a ∈ maximals_{≤_S} Ap(S, b)}.

From this result, we can generalize Corollaries 4.12 and 4.19 in [15]. The Frobenius number of a numerical semigroup is the maximum non-negative integer that is not an element of the semigroup. We define the (generalized) Frobenius number of a C-semigroup S as F(S) = |I_S(F(S))| + g(S). We can easily rewrite Propositions 3 and 4 in terms of this definition. These corollaries, specialized to numerical semigroups or N^p-semigroups, are equivalent to Corollary 4.5 in [15] and to Theorems 5.6 and 5.7 in [5], respectively. We illustrate the previous results with one easy example.

3 Trees of irreducible C-semigroups

This section describes a tree whose vertex set is the set of all irreducible C-semigroups with a fixed Frobenius vector. Again, consider C ⊂ N^p an integer cone and f ∈ C \ {0}. Consider a monomial order ⪯ on N^p and decompose the set I_C(f) as I_C(f) = I_1(f) ⊔ I_2(f) with I_1(f) = {x ∈ I_C(f) | 0 ≠ x ≺ f/2} and I_2(f) = {x ∈ I_C(f) | x ⪰ f/2} (when f/2 ∉ N^p, consider ⪯ as the monomial order extended to Q^p_≥). We define the C-semigroup S(f) as C \ {f} \ I_1(f). This semigroup will be the root of our tree of irreducible C-semigroups; the root depends on the fixed monomial order, as the following example shows.

Example 11. Let C ⊂ N^2 be the integer cone with extremal rays (1, 0) and (1, 2), and f = (4, 2). Then f/2 = (2, 1) and I_C(f) = {(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2), (3, 0), (3, 1), (3, 2), (4, 2)}. Let ≺_1 and ≺_2 be the orders defined by the matrices (1 1 / 1 0) and (1 1 / 0 1), respectively. In the first case, I_1(f)_{≺_1} = {(1, 0), (1, 1), (1, 2), (2, 0), (2, 1)} and S(f)_{≺_1} = ⟨(3, 0), (4, 0), (5, 0), (3, 1), (4, 1), (5, 1), (2, 2), (3, 2), (2, 3), (3, 3), (4, 3), (2, 4), (3, 4), (3, 5), (3, 6)⟩. In the other one, I_1(f)_{≺_2} = {(1, …

The set S(f) satisfies interesting properties, collected in the following lemma.

Lemma 12.
The C-semigroup S(f) is irreducible. Moreover, f is the Frobenius vector of S(f) for any monomial order, and S(f) is the unique irreducible C-semigroup all of whose gaps belong to I_1(f) ∪ {f}.

Proof. By the definition of S(f), f is the unique maximum of H(S(f)) with respect to ≤_C, so it is also the unique maximum of H(S(f)) with respect to ≤_{N^p}. This fact implies that f is the Frobenius vector of S(f) for any monomial order. Note that the set of gaps of S(f) is H(S(f)) = I_1(f) ∪ {f}, and I_{S(f)}(f) = I_2(f) \ {f}. Besides, for any x ∈ H(S(f)), the element f − x belongs to I_{S(f)}(f); otherwise, f = (f − x) + x ≺ f/2 + f/2 = f. Furthermore, since x ∈ I_{S(f)}(f) if and only if f − x ∈ H(S(f)), the cardinality of H(S(f)) is equal to 1 + |I_{S(f)}(f)| when f ∈ 2N^p, and equal to |I_{S(f)}(f)| otherwise. By Proposition 4 or Proposition 3 (respectively), S(f) is an irreducible C-semigroup. The uniqueness of S(f) is given by its definition.

The following proposition gives us an irreducible C-semigroup from an existing one, such that both have the same Frobenius vector.

Proposition 13. Let S be an irreducible C-semigroup with Frobenius vector f, and let x ∈ I_S(f) be one of its minimal generators such that:
1. 2x − f ∉ S;
2. 3x ≠ 2f;
3. 4x ≠ 3f.
Then S′ = (S \ {x}) ∪ {f − x} is an irreducible C-semigroup with Frobenius vector f.

Proof. Note that F(S′) = f. We prove that S′ is closed under addition. Since x = (f − x) + (2x − f), 2x − f cannot belong to S′; that is, this condition is necessary. Trivially, given two elements of S \ {x}, their sum belongs to the same set.

From this point forward, we use the notation m(S) for the minimum element (with respect to the order ⪯) of the minimal generating set of S. This element is often referred to as the multiplicity of S.

Besides, f − x + s ∈ S \ {x} for any s ∈ S \ {x}. Otherwise, f − x + s = x or f − x + s ∈ H(S) for some s ∈ S \ {x}.
If f − x + s = x, then s = 2x − f ∉ S. If f − x + s ∈ H(S), then there exists s′ ∈ S such that f − x + s + s′ ∈ PF(S). When f − x + s + s′ = f/2, we have 2(s + s′) = 2x − f ∉ S, and when f − x + s + s′ = f, x = s + s′. Neither conclusion is possible. To finish this proof, we show that 2(f − x) ∈ S \ {x}. Assume that 2(f − x) ∉ S \ {x}; then 2(f − x) = x, or 2(f − x) + s ∈ PF(S) for some s ∈ S. Since 3x ≠ 2f, 2(f − x) ≠ x. The irreducibility of S implies that 2(f − x) + s ∈ {f, f/2}, but 2(f − x) + s is not equal to f because 2x − f ∉ S.

We denote by I(f) the set of irreducible C-semigroups with Frobenius vector f. Given S ∈ I(f), consider S_0 = S and, for n ≥ 1, S_n = (S_{n−1} \ {m(S_{n−1})}) ∪ {f − m(S_{n−1})} when m(S_{n−1}) ∈ I_1(f), and S_n = S_{n−1} otherwise. Note that S_n = S_{n−1} if S_n = S(f).

Theorem 14. Let ⪯ be a monomial order on N^p, C ⊂ N^p an integer cone, and f ∈ C a non-zero element. The digraph G is a rooted tree with root S(f).

Proof. Let S be an element of I(f). If m(S) ∉ I_1(f), then S = S(f). Assume that m(S) ∈ I_1(f). In that case, m(S) ≺ f/2; that is, 2m(S) − f ∉ S, 3m(S) ≠ 2f, and 4m(S) ≠ 3f. By Proposition 13, S_1 = (S \ {m(S)}) ∪ {f − m(S)} is irreducible. That means (S, S_1) ∈ E. Following this construction, G is a tree whose root is S(f).

From the previous construction and results we obtain an algorithm to compute the tree of all irreducible C-semigroups with a given Frobenius vector and a fixed monomial order (Algorithm 1).

Algorithm 1: Computing the tree of irreducible C-semigroups with a given Frobenius vector.
Input: A monomial order ⪯ on N^p, an integer cone C, and f ∈ C.
Output: The tree of irreducible C-semigroups with Frobenius vector f.
begin
  X ← {S(f)}; Y ← ∅;
  while X ≠ ∅ do
    S ← First(X);
    A ← {x ∈ S | x ∈ I_2(f) ∩ Λ_S, 2x − f ∉ S, 3x ≠ 2f, 4x ≠ 3f, f − x ≺ m(S)};
    if A = ∅ then
      Y ← Y ∪ {S};
    else
      for x ∈ A do
        H ← (H(S) \ {f − x}) ∪ {x};
        S′ ← the C-semigroup with H(S′) = H;
        X ← X ∪ {S′};
    X ← X \ {S};
  return Y

The following example shows how to apply Algorithm 1 to the semigroups of Example 11. After repeating this procedure, the tree in Figure 1 is obtained. Since the definition of S(f) depends on the monomial order, we get a new tree if we change it. For example, when we use the order ≺_2, Figure 2 appears.

4 Fundamental gaps of C-semigroups

In this section, we generalize to C-semigroups several results related to the fundamental gaps of a numerical semigroup (see [15, Chapter 4]). The first results allow us to check when C \ X is a C-semigroup for any finite subset X ⊂ C. Denote by D(X) the set {a ∈ C | na ∈ X for some n ∈ N}.

Proposition 16. Let C ⊂ N^p be an integer cone and X a finite subset of C \ {0}. Then C \ X is a C-semigroup if and only if x − s ∈ X for every (x, s) ∈ (X, C \ X) with s ≤_C x.

Proof. Let S be the set C \ X, and assume that S is a C-semigroup. Take (x, s) ∈ (X, S) with s ≤_C x. Since s ≤_C x, we have that x − s ∈ C. If x − s ∉ X, then x = s + s′ for some s′ ∈ S, and S is not a semigroup. So x − s ∈ X for any (x, s) ∈ (X, C \ X) with s ≤_C x. Conversely, since x − s belongs to X for every (x, s) ∈ (X, S) with s ≤_C x, S is an additive submonoid of N^p with finite complement in C; that is, S is a C-semigroup.

From the above proposition, if C \ X is a C-semigroup then X = D(X); for example, if we consider C the cone generated by {(1, 0), (1, 1), (1, 2)} and X = {(2, 0), (2, 1)}, then C \ X is not a semigroup because D(X) = {(2, 0), (2, 1), (1, 0)}. We now provide an algorithm to determine whether C \ X is a C-semigroup (Algorithm 2).

Algorithm 2: Checking whether C \ X is a C-semigroup.
Input: An integer cone C ⊂ N^p, and X a finite subset of C \ {0}.
Output: True if C \ X is a C-semigroup, and False otherwise.
begin
  if X ≠ D(X) then return False;
  while X ≠ ∅ do
    x ← First(X);
    A ← {s ∈ C \ X | s ≤_C x};
    while A ≠ ∅ do
      s ← First(A);
      if x − s ∉ X then return False;
      A ← A \ {s};
    X ← X \ {x};
  return True.

Since, for each x ∈ X, the set {s ∈ C \ X | s ≤_C x} can be very large, the condition x − s ∉ X has to be checked many times in Algorithm 2, and many iterations are required in the worst cases. To improve the computational resolution of this problem, we provide an alternative algorithm (Algorithm 3) obtained from the following lemma and [12, Lemma 3].

Lemma 17. Fix a total order ⪯ on N^p, and let X = {x_1 ≺ x_2 ≺ · · · ≺ x_t} be a subset of an integer cone S_0 = C ⊂ N^p. Assume that S_t = C \ X is a C-semigroup. Then S_i = S_{i−1} \ {x_i} is a C-semigroup, and x_i is a minimal generator of S_{i−1}, for every i ∈ [t].

Proof. Note that x_i is the Frobenius vector of S_i with respect to ⪯. Hence S_{i−1} = S_i ∪ {x_i} is a C-semigroup and x_i is a minimal generator of S_{i−1}, for every i ∈ [t].

Algorithm 3: Checking whether C \ X is a C-semigroup.
Input: A total order ⪯ on N^p, the minimal generating set Λ_C of the integer cone C ⊂ N^p, and X = {x_1 ≺ · · · ≺ x_t} ⊂ C \ {0}.
Output: The minimal generating set of C \ X if it is a C-semigroup, and the empty set otherwise.
begin
  if X ⊂ Λ_C then return the minimal generating set of C \ X;
  if X ≠ D(X) then return {};
  Λ ← Λ_C;
  for 1 ≤ i ≤ t do
    if x_i ∉ Λ then return {};
    Λ ← the minimal generating set of Λ \ {x_i};
    X ← X \ {x_i};
  if X ⊂ Λ then return the minimal generating set of Λ \ X

We illustrate this algorithm with the following example.

Fix S ⊂ N^p a C-semigroup minimally generated by Λ = {s_1, . . . , s_q, s_{q+1}, . . . , s_t}, and consider Λ_C = {a_1, . . . , a_q, a_{q+1}, . . . , a_m} the minimal generating set of C, with s_i, a_i ∈ τ_i for i = 1, . . . , q (we assume that the integer cone C has q extremal rays {τ_1, . . . , τ_q}).
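For p = 1, the closure test of Proposition 16 reduces to a few lines, and, reused as a filter, it also gives a naive analogue of the enumeration of Section 5 (Algorithm 4). The sketch below is ours, not the authors' implementation; it checks whether N \ X is a numerical semigroup and then brute-forces all gap sets with a prescribed Frobenius number.

```python
from itertools import combinations

def is_semigroup_complement(X):
    """Proposition 16 for C = N: S = N \\ X is a numerical semigroup iff
    x - s lies in X for every x in X and s in S with s <= x."""
    X = set(X)
    if 0 in X:
        return False              # 0 must belong to any semigroup
    for x in X:
        for s in range(1, x):
            if s not in X and x - s not in X:
                return False      # x = s + (x - s) with both summands in S
    return True

def semigroups_with_frobenius(F):
    """Brute-force analogue of Algorithm 4 for p = 1: the gap sets of all
    numerical semigroups whose Frobenius number is F."""
    found = []
    inner = range(1, F)
    for k in range(F):
        for extra in combinations(inner, k):
            G = sorted(set(extra) | {F})
            if is_semigroup_complement(G):
                found.append(G)
    return found

print(len(semigroups_with_frobenius(4)))  # 2
print(len(semigroups_with_frobenius(5)))  # 5
```

For instance, the only gap sets with Frobenius number 4 are {1, 2, 4} and {1, 2, 3, 4}; the genuine Algorithm 4 avoids this exponential search by removing minimal generators instead.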
Note that the elements x of SG(S) are those elements of H(S) such that S ∪ {x} is again a C-semigroup. These gaps play an important role in decomposing a C-semigroup into irreducible C-semigroups ([9]). Similarly to numerical semigroups, given two C-semigroups S and T with S ⊊ T, any x ∈ max_{≤_C}(T \ S) belongs to SG(S); that is to say, S ∪ {x} is a C-semigroup. Note that if x ∈ max_{≤_C}(T \ S), then 2x ∈ S. From this fact, we can prove the following proposition.

Proposition 19. Let S be a C-semigroup and G a subset of H(S). Then S ∈ max_⊆ {T a C-semigroup | G ⊆ H(T)} if and only if SG(S) ⊆ G.

Proof. We know that x ∈ SG(S) if and only if S ∪ {x} is a C-semigroup. So, if S ∈ max_⊆ {T a C-semigroup | G ⊆ H(T)}, then x ∈ G; otherwise, S ⊊ S ∪ {x} and S is not maximal. Assume that S is not maximal but SG(S) ⊆ G; then there exists a C-semigroup T such that S ⊊ T and G ⊆ H(T). Let x ∈ max_{≤_C}(T \ S); thus x ∈ SG(S) ∩ T, which is not possible (SG(S) ⊆ G ⊆ H(T)).

There is another interesting subset related to the set of gaps of S. A subset X of H(S) is said to determine H(S) if S = max_⊆ {T a C-semigroup | X ⊆ H(T)}. These subsets were introduced in [16] for numerical semigroups.

Proposition 20. Let X be a finite subset of an integer cone C ⊂ N^p. Then X determines the set of gaps of a C-semigroup if and only if C \ D(X) is a C-semigroup.

Proof. Fix Λ_C = {a_1, . . . , a_q, a_{q+1}, . . . , a_m} the minimal generating set of C ⊂ N^p. Assume that X determines H(S) for a C-semigroup S, so X ⊂ D(X) ⊂ H(S) and S ⊂ C \ D(X). Let S′ be the non-empty set {0} ∪ ⋃_{i=1}^{q} (h_i a_i + C), where h_i = min{n ∈ N | (n a_i + C) ∩ X = ∅}. Note that S′ is a C-semigroup. Let a and b be two elements of S′, so a = h_i a_i + Σ_{k=1}^{m} α_k a_k and b = h_j a_j + Σ_{k=1}^{m} β_k a_k for some α_k, β_k ∈ N with i, j ∈ [q] and k ∈ [m]. Hence a + b = h_i a_i + (h_j a_j + Σ_{k=1}^{m} (α_k + β_k) a_k) ∈ h_i a_i + C. Furthermore, C \ S′ is finite.
Take any a ∈ Λ_C; then a = Σ_{i=1}^{q} α_i a_i for some α_1, . . . , α_q ∈ Q_≥, and hence ka = Σ_{i=1}^{q} β_i a_i for some β_1, . . . , β_q, k ∈ N. We can assume that β_i ≥ h_i. In that case, C \ S′ is a subset of the finite set {Σ_{i=1}^{q} γ_i a_i | 0 ≤ γ_i ≤ β_i}. We obtain that S′ is a finitely generated C-semigroup; let Λ_{S′} be its minimal generating set. The set X is a subset of H(S′) by construction. For every a ∈ C \ D(X) we can define S_a as the semigroup generated by {a} ∪ Λ_{S′}. Since X ⊂ H(S_a) and X determines H(S), we have that S_a ⊂ S. Hence C \ D(X) ⊂ S, and then C \ D(X) is a C-semigroup. Conversely, any C-semigroup T such that X ⊂ H(T) satisfies D(X) ⊂ H(T). Thus X determines the set of gaps of the C-semigroup C \ D(X).

The sets determining the set of gaps of a C-semigroup are related to its set of fundamental gaps.

Proof (of Lemma 21). By Proposition 20, if X determines H(S), then H(S) = D(X). Thus, for all x ∈ H(S), hx ∈ X for some h ∈ N. In particular, for every fundamental gap of S, the integer h has to be one; hence x ∈ X. Conversely, since X ⊂ H(S), we know that D(X) ⊆ H(S). Let x ∈ H(S) and consider h = max{k ∈ N | kx ∈ H(S)}. In that case, hx ∈ H(S), and 2hx, 3hx ∈ S. Therefore hx ∈ FG(S) ⊆ X, x ∈ D(X), and H(S) ⊆ D(X).

Analogously to the case of numerical semigroups, FG(S) is the smallest subset of H(S) determining H(S). Also, the relationship between the special and fundamental gaps of a C-semigroup is equivalent to their relationship for numerical semigroups.

Proof (of Lemma 22). Trivially, for any x ∈ SG(S), 2x, 3x ∈ S, and then SG(S) ⊆ FG(S). Assume that, for some x ∈ SG(S), there exists y ∈ FG(S) with x ≤_S y and x ≠ y. So x + s = y for some s ∈ S \ {0}. Since x is a pseudo-Frobenius element of S, y ∈ S, which is impossible. Hence x ∈ max_{≤_S} FG(S).

A C-irreducible semigroup can also be characterized by its fundamental gaps using the above lemma.
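Lemma 22 (SG(S) = max_{≤_S} FG(S)) is easy to test numerically when p = 1. The helper names in the sketch below are ours, not the paper's; the special gaps are computed independently from their definition and compared with the maximal fundamental gaps.

```python
def fundamental_gaps(gaps):
    """FG(S): gaps x of S = N \\ gaps with 2x in S and 3x in S."""
    gaps = set(gaps)
    return sorted(x for x in gaps
                  if 2 * x not in gaps and 3 * x not in gaps)

def max_leq_S(gaps, subset):
    """Maximal elements of `subset` for the order x <=_S y iff y - x in S."""
    gaps = set(gaps)
    in_S = lambda n: n >= 0 and n not in gaps
    return sorted(x for x in subset
                  if not any(y != x and in_S(y - x) for y in subset))

def special_gaps(gaps):
    """SG(S) computed from the definition, to compare with Lemma 22."""
    gaps = set(gaps)
    F = max(gaps)
    small = [s for s in range(1, F + 1) if s not in gaps]
    pf = [x for x in gaps if all(x + s not in gaps for s in small)]
    return sorted(x for x in pf if 2 * x not in gaps)

gaps = {1, 2, 4, 7}                                 # S = <3, 5>
fg = fundamental_gaps(gaps)
print(fg, max_leq_S(gaps, fg), special_gaps(gaps))  # [4, 7] [7] [7]
```

For S = ⟨3, 5⟩ the fundamental gaps are {4, 7}, and the only maximal one with respect to ≤_S is 7, matching SG(S) = {7}.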
5 Computing all the C-semigroups with a given Frobenius vector

Let C ⊂ N^p be an integer cone, ⪯ a monomial order on N^p, and S a C-semigroup with Frobenius vector F(S) ∈ C \ {0}. Note that F(S) is a minimal generator of S ∪ {F(S)}. Conversely to Lemma 17, we can consider the following sequence of C-semigroups for some t ∈ N: S_t = S, S_{i−1} = S_i ∪ {F(S_i)} for all i = 1, . . . , t, and S_0 = C. Such a sequence can be constructed for any C-semigroup with Frobenius vector F(S). So, from a minimal system of generators of C, we obtain new C-semigroups just by removing a minimal generator s fulfilling s ⪯ F. Performing this process as many times as possible, we obtain all the C-semigroups with Frobenius vector F. Note that this process is finite due to the finiteness of the set {s ∈ C | s ⪯ F}. This idea allows us to provide an algorithm for computing all the C-semigroups with a fixed Frobenius vector (Algorithm 4). Moreover, this algorithm can be modified to obtain all the C-semigroups with Frobenius vector less than or equal to a fixed one. For any set A of ordered pairs, π_1(A) denotes the set of first projections of its elements. An example of the output is shown in Table 1.

Funding. The first, second, and last authors were partially supported by Junta de Andalucía research group FQM-343, by Consejería de Universidad, Investigación e Innovación de la Junta de Andalucía (project ProyExcel 00868), and by Proyecto de investigación del Plan Propio - UCA 2022-2023 (PR2022-004).

tuto Universitario para el Desarrollo Social Sostenible), Universidad de Cádiz, E-11406 Jerez de la Frontera (Cádiz, Spain). E-mail: [email protected].

Table 1: All C-semigroups with C = ⟨(1, 0), (1, 1), (1, 2)⟩ and Frobenius vector equal to (2, 1); • ≡ gap, ≡ minimal generator, • ≡ element in S.

Proposition 3. Let S be a C-semigroup with genus g. Then S is symmetric if and only if g = |I_S(F(S))|.

Proposition 5 ([13, Proposition 16]). Let S be a C-semigroup and b ∈ S \ {0}.

Corollary 6.
Let S be a C-semigroup and b ∈ S \ {0}. The semigroup S is symmetric if and only if maximals_{≤_S} Ap(S, b) = {F(S) + b}.

Corollary 7. Let S be a C-semigroup and b ∈ S \ {0}. The semigroup S is pseudo-symmetric if and only if maximals_{≤_S} Ap(S, b) = {F(S) + b, F(S)/2 + b}.

Corollary 8. Let S be a C-semigroup with genus g. Then S is symmetric if and only if 2g = F(S).

Corollary 9. Let S be a C-semigroup with genus g. Then S is pseudo-symmetric if and only if 2g = 1 + F(S) and F(S)/2 ∈ N^p.

Example 10. Let C ⊂ N^2 be the cone with extremal rays … -symmetric. Note that PF(S_1) = SG(S_1) = H(S_1) = {(5, 2)}, but H(S_2) = {(4, 1), (5, 1), (8, 2)}, PF(S_2) = {(4, 1), (8, 2)}, and SG(S_2) = {(8, 2)}.

Hence, 2(f − x) + s = f/2. Since 4x ≠ 3f, s ≠ 0, and from 2x − f ∉ S, we obtain 2(f − x) + s ≠ f/2. We conclude that 2(f − x) ∈ S \ {x}. Since |I_S(f)| = |I_{S′}(f)| and |H(S)| = |H(S′)|, S′ is irreducible by Propositions 3 and 4.

Since I_1(f) is a finite set, the set {S_0, S_1, . . .} is also finite. Let G = (V, E) be the digraph given by the vertex set V = I(f) and edge set E = {(A, B) ∈ V × V | m(A) ≺ f/2 and B = (A \ {m(A)}) ∪ {f − m(A)}}.

Example 15. Let S(f)_{≺_1} be the semigroup spanned by ⟨(3, 0), (4, 0), (5, 0), (1, 1), (3, 2), (2, 3), (2, 4), (3, 6)⟩, …

Figure 1: Tree of irreducible C-semigroups with ≺_1.
Figure 2: Tree of irreducible C-semigroups with ≺_2.

Example 18. Let C be the cone generated by Λ_C = {(1, …}. Since X ⊂ Λ_C and X = D(X), if we apply Algorithm 3, we obtain that:

Lemma 21. Let S be a C-semigroup and X be a subset of H(S). Then X determines H(S) if and only if FG(S) ⊆ X.

Lemma 22. Let S be a C-semigroup. Then SG(S) = max_{≤_S} FG(S).

Corollary 23. S is a C-irreducible semigroup if and only if the cardinality of max_{≤_S} FG(S) is equal to one.

The next example illustrates many of the results appearing in this section.

Example 24.
Let C be the cone with extremal rays τ_1 = (1, 0) and τ_2 = (1, 1), and X = {(1, 1), (3, 0), (3, 1), (3, 2), (5, 1), (5, 2)}. Since D(X) = {(1, 0), (1, 1), (3, 0), (3, 1), (3, 2), (5, 1), (5, 2)}, one checks that x − s ∈ D(X) for every pair (x, s) ∈ (D(X), C \ D(X)) with s ≤_C x; for instance, for ((3, 2), (2, 2)), ((5, 2), (2, 2)), and ((5, 1), (4, 0)). Therefore, by Proposition 16, C \ D(X) is a C-semigroup, and, by Proposition 20, X determines the set of gaps of a C-semigroup. If we call this semigroup S, we have that H(S) = D(X). It is not difficult to check that S = ⟨(2, 0), (5, 0), (2, 1), (2, 2), (3, 3)⟩ and that, in this case, FG(S) = X. Moreover, we can compute the set of pseudo-Frobenius elements of S, and we get PF(S) = {(5, 1), (5, 2)}, so SG(S) = {(5, 1), (5, 2)}. On the other hand, FG(S) = {(1, 1), (3, 0), (3, 1), (3, 2), (5, 1), (5, 2)} and max_{≤_S} FG(S) = {(5, 1), (5, 2)}, as we knew by Lemma 22.

Example 25. Let C be the cone generated by {(1, 0), (1, 1), (1, 2)} and F = (2, 1). Then, applying Algorithm 4, we get that the set of all C-semigroups with Frobenius vector (2, 1) is {{(2, …

Every monomial order can be represented via matrices. For a nonsingular integer (p × p)-matrix M with rows M_1, . . . , M_p, the M-ordering ≺ is defined by a ≺ b if and only if there exists an integer i ∈ [p − 1] such that M_1 a = M_1 b, . . . , M_i a = M_i b and M_{i+1} a < M_{i+1} b.

Algorithm 4: Computing C-semigroups with a given Frobenius vector.
Input: A total order ⪯ on N^p, the minimal generating set Λ_C of the integer cone C ⊂ N^p, and F ∈ C \ {0}.
Output: The set of C-semigroups with Frobenius vector equal to F.
begin
J. I. García-García. Departamento de Matemáticas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cádiz, E-11510 Puerto Real (Cádiz, Spain). E-mail: [email protected].
D. Marín-Aragón. Departamento de Matemáticas, Universidad de Cádiz, E-11510 Puerto Real (Cádiz, Spain). E-mail: [email protected].
A. Sánchez-Loureiro. Departamento de Matemáticas, Universidad de Cádiz, E-11510 Puerto Real (Cádiz, Spain). E-mail: [email protected].
A. Vigneron-Tenorio. Departamento de Matemáticas/INDESS, Universidad de Cádiz (Cádiz, Spain).

References

[1] Bernardini, M.; Tenório, W.; Tizziotti, G., The corner element of generalized numerical semigroups, Results Math. 77 (2022), no. 4, Paper No. 141, 20 pp.
[2] Bruns, W.; Gubeladze, J., Polytopes, rings, and K-theory, Springer Monographs in Mathematics, Springer, Dordrecht, 2009.
[3] Cisto, C.; DiPasquale, M.; Failla, G.; Flores, Z.; Peterson, C.; Utano, R., A generalization of Wilf's conjecture for generalized numerical semigroups, Semigroup Forum 101 (2020), no. 2, 303-325.
[4] Cisto, C.; Delgado, M.; García-Sánchez, P. A., Algorithms for generalized numerical semigroups, J. Algebra Appl. 20 (2021), no. 5, Paper No. 2150079, 24 pp.
[5] Cisto, C.; Failla, G.; Peterson, C.; Utano, R., Irreducible generalized numerical semigroups and uniqueness of the Frobenius element, Semigroup Forum 99 (2019), no. 2, 481-495.
[6] Cisto, C.; Failla, G.; Utano, R., On the generators of a generalized numerical semigroup, An. St. Univ. Ovidius Constanta 27 (2019), no. 1, 49-59.
[7] Cisto, C.; Failla, G.; Tenório, W., On almost-symmetry in generalized numerical semigroups, Comm. Algebra 49 (2021), no. 6, 2337-2355.
[8] Cox, D. A.; Little, J.; O'Shea, D., Ideals, varieties, and algorithms. An introduction to computational algebraic geometry and commutative algebra, Undergraduate Texts in Mathematics, Springer, Cham, 2015.
[9] Díaz-Ramírez, J. D.; García-García, J. I.; Marín-Aragón, D.; Vigneron-Tenorio, A., Characterizing affine C-semigroups, Ricerche di Matematica 71 (2022), 283-296.
[10] Failla, G.; Peterson, C.; Utano, R., Algorithms and basic asymptotics for generalized numerical semigroups in N^p, Semigroup Forum 92 (2016), 460-473.
[11] García-García, J.
I.; Marín-Aragón, D.; Sánchez-R.-Navarro, A.; Vigneron-Tenorio, A., CharacterizingAffineCSemigroup, a Python library for computations in C-semigroups, available at https://github.com/D-marina/CommutativeMonoids.
[12] García-García, J. I.; Marín-Aragón, D.; Vigneron-Tenorio, A., An extension of Wilf's conjecture to affine semigroups, Semigroup Forum 96 (2018), no. 2, 396-408.
[13] García-García, J. I.; Ojeda, I.; Rosales, J. C.; Vigneron-Tenorio, A., On pseudo-Frobenius elements of submonoids of N^q, Collectanea Mathematica 71 (2020), 189-204.
[14] Python Software Foundation, Python Language Reference, version 3.5, available at http://www.python.org.
[15] Rosales, J. C.; García-Sánchez, P. A., Numerical semigroups, Developments in Mathematics, 20, Springer, New York, 2009.
[16] Rosales, J. C.; García-Sánchez, P. A.; García-García, J. I.; Jiménez Madrid, J. A., Fundamental gaps in numerical semigroups with respect to their multiplicity, J. Pure Appl. Algebra 189 (2004), no. 1-3, 301-313.
[17] Singhal, D.; Lin, Y., Frobenius allowable gaps of generalized numerical semigroups, Electron. J. Combin. 29 (2022), no. 4, Paper No. 4.12.
[ "https://github.com/D-marina/CommutativeMonoids/" ]
[]
[ "D Bedrane ", "A Houël \nOrsay Physics, ZA13710Saint-Charles, FuveauFrance\n", "A Delobbe \nOrsay Physics, ZA13710Saint-Charles, FuveauFrance\n", "M Lagaize ", "Ph Dumas ", "S Veesler ", "E Salançon ", "\nCoaxial Ion Source Coaxial Ion Source : pressure dependence of gas flow and field ion emission\nCNRS\n1) CINaM, Aix-Marseille UnivUMR7325France\n" ]
[ "Orsay Physics, ZA13710Saint-Charles, FuveauFrance", "Orsay Physics, ZA13710Saint-Charles, FuveauFrance", "Coaxial Ion Source Coaxial Ion Source : pressure dependence of gas flow and field ion emission\nCNRS\n1) CINaM, Aix-Marseille UnivUMR7325France" ]
[]
We investigated the pressure dependence of gas flow and field ion intensity of a coaxial ion source operating at room temperature over a wide pressure range, testing various gases and ionisation voltages. Flow conductance measurements taking into account the different gases' viscosity and molecular mass consistently exhibit a generic pattern. Three different flow regimes appear with increasing upstream pressure. Since the coaxial ion source supplies the gas locally, very near the apex of the tip where ionisation occurs, large ionisation currents can be obtained without degrading the propagation conditions of the beam. Compared with field ionisation in a partial pressure chamber, using the coaxial ion source increases the ion current a hundredfold for the same residual low pressure. We also show that the gas flow regime does not impact ionisation yield. Although a fuller characterisation remains to be performed, brightness reaches 3 × 10^11 A/m^2/sr at 12 kV extracting voltage.
a) https://www.cinam.univ-mrs.fr/
arXiv:2305.05297v1 [cond-mat.mtrl-sci] 9 May 2023
null
[ "https://export.arxiv.org/pdf/2305.05297v1.pdf" ]
258,564,890
2305.05297
1815de64aea3038ea5970df836ed9f362313c786
Coaxial Ion Source: pressure dependence of gas flow and field ion emission

D. Bedrane, A. Houël (Orsay Physics, ZA Saint-Charles, 13710 Fuveau, France), A. Delobbe (Orsay Physics, ZA Saint-Charles, 13710 Fuveau, France), M. Lagaize, Ph. Dumas, S. Veesler, E. Salançon
1) CINaM, Aix-Marseille Univ, CNRS, UMR 7325, France a)

(Dated: 10 May 2023)

We investigated the pressure dependence of gas flow and field ion intensity of a coaxial ion source operating at room temperature over a wide pressure range, testing various gases and ionisation voltages. Flow conductance measurements taking into account the different gases' viscosity and molecular mass consistently exhibit a generic pattern: three different flow regimes appear with increasing upstream pressure. Since the coaxial ion source supplies the gas locally, very near the apex of the tip where ionisation occurs, large ionisation currents can be obtained without degrading the propagation conditions of the beam. Compared with field ionisation in a partial-pressure chamber, using the coaxial ion source increases the ion current a hundredfold for the same residual low pressure. We also show that the gas flow regime does not impact the ionisation yield. Although a fuller characterisation remains to be performed, brightness reaches 3 × 10^11 A/m²/sr at 12 kV extracting voltage.

a) https://www.cinam.univ-mrs.fr/

arXiv:2305.05297v1 [cond-mat.mtrl-sci] 9 May 2023

I. INTRODUCTION

The search for ion sources has a long history, marked by both stalemates and rapid progress. A range of approaches has met differing degrees of success. The development of helium ion microscopy 1 is a striking example of decades of cumulative patience and perseverance on the part of many. It also shows that the quality of the source itself is paramount to the overall performance of a system. Yet while this achievement is undoubtedly a milestone, it is certainly not the end of the story of bright ion sources.
Indeed, the range of possible applications implies a range of desired source properties. For Focused Ion Beam (FIB) applications, in addition to Liquid Metal Ion Sources (LMIS), Gas Field Ion Sources (GFIS) or plasma ion sources are also technical alternatives 2. Brightness, reliability, the selection of possible ions, ease of use, sample contamination, costs, etc. are among the important parameters to be taken into account.

Here, we report on our progress in developing our GFIS [3][4][5][6]. Operating at room temperature (the aim being to keep it simple) for different noble gases, it is hereafter referred to as the Coaxial Ion Source (CIS). In his 2005 review article "Quest for high brightness, monochromatic noble gas ion sources", Tondare 2 classified this source, together with Konishi, Takizawa and Tsumori's helium-cooled source 7, under the needle-in-capillary type GFIS. As we will demonstrate, one main advantage of locally injecting the gases is that high current intensities are obtained while maintaining the pressure low enough for unimpeded beam propagation. In addition, the flow regimes of the different gases are plotted in reduced units, enabling us to reveal their generic pattern. We also describe the mapping of the field ion current under varying pressure and high voltage. Brightness is estimated, and the promising results (compared with LMIS) of this noble gas ion source operating at room temperature encourage us to pursue the development of our CIS.

II. MATERIALS AND METHODS

The heart of the experimental device is the coaxial ion source (CIS) [3][4][5][6]. It constitutes a deliberate leak between a high-pressure vacuum chamber and a low-pressure vacuum chamber. When a sufficiently high voltage is applied to the inserted needle tip, a portion of the molecules entering the low-pressure chamber through the coaxial ion source is ionised.
This high voltage is controlled by a computer program which records the pressure and the tip emission high voltage and current.

A. Coaxial ion source

The coaxial ion source consists of a stainless-steel capillary tube guiding a tungsten wire shaped into a nanotip, as presented in Figure 1. A tungsten wire (diameter d = 125 µm) is first inserted into a stainless steel tube (length L = (6.0 ± 0.2) mm, diameter D = 170 µm) maintained in a copper sleeve. The tip is prepared by electrochemical etching 8; then, to minimise corrosion 4,9, palladium is electrochemically deposited [10][11][12]. The degree to which the apex of the tungsten tip protrudes from the capillary tube (see Figure 1-d-) is controlled by retracting the wire. The length of tip chosen here to emerge from the tube is comparable with the wire diameter, ≈ 130 µm. The stainless-steel capillary tube and the tungsten wire are appropriately glued to the copper sleeve (see Figure 1-a-) with epoxy, to ensure sealing while preserving gas flow along the CIS. Epoxy is also used to glue the ceramic tube that insulates the CIS from the electrical ground and conveys the gas that will be injected coaxially, near the apex of the tip, from the high-pressure chamber (see Figure 1-b-).

B. Vacuum chambers

As described above, the CIS is a deliberate leak between a high-pressure chamber and a low-pressure chamber (see Figure 2). The high-pressure and low-pressure vacuum chambers can independently be evacuated down to, respectively, high or ultra-high vacuum conditions after baking, via a turbo-molecular pump (TwisTorr 305, Agilent). As we will see below, to calculate the molecular conductance of the CIS, we need to know the pumping speed. While intrinsic pumping speeds are provided by the manufacturer of the turbo-molecular pump, the effective pumping speeds of our experimental setup will be measured (see Section III B and Appendix C). Gases come from Linde minican bottles and are up to 99.5% pure.
A leak valve provides the gas inlet from the bottle into the high-pressure chamber. For our experiments, the high pressure P_hp is maintained in the range 10^2 < P_hp < 10^5 Pa. In a typical experiment, P_hp is set at the desired value, while the low-pressure chamber alone is evacuated to balance the incoming gas. This creates a low-pressure equilibrium P_lp in the low-pressure chamber. The pressure in the high-pressure chamber is measured with a membrane-based gauge (ED 510/421.411/025 from Haenni). A full-range gauge from Varian is used in the low-pressure chamber; a gas-dependent gauge factor is required. The gauge factors are given in Table I. The low-pressure chamber is also equipped with a quadrupole mass spectrometer (QMG-064 from Balzers) whose bandwidth reaches 20 Hz.

In the low-pressure chamber, the CIS is mounted on a rotary manipulator allowing the emission angle of the beam to be measured (see Appendix B). For flexibility, a hollow stainless steel coil connects the CIS and the high-pressure chamber. The overall flow conductance of the deliberate leak is limited by the flow conductance of the CIS itself. The low-pressure chamber is also equipped with an adjustable extraction grid, a Faraday cup and a multichannel plate/fluorescent screen assembly used for projection microscopy 5,6.

C. Electrical measurements

The low-pressure UHV chamber is equipped with several high-voltage feedthroughs connected to the CIS, the extraction grid, the Faraday cup, the electrodes of the multichannel plate and the fluorescent screen. For most experiments, the CIS is virtually grounded through a transconductance amplifier (Model 427, Keithley) which provides the total ionisation current from the tip. The high voltage for extraction is supplied to an extraction grid (electron microscopy copper grid: square-pattern 150 mesh TEM support grids from Agar Scientific) about 5 mm away from the apex of the tip.
The power supply is a high-voltage 30 kV, reversible, low-ripple module (MPS30 from Spellman). For stabilised pressure conditions, the high voltage is ramped up and down like a staircase. The voltage steps are computer-controlled (see Section III C). Each step up corresponds to 1 kV and lasts t ≈ 5 s. The rise time of the Keithley 427 is set at 0.01 s and the output is sampled by the acquisition program and averaged (see Section III C). The control and acquisition program is written in LabVIEW 2011 (version 11.0.1) interfacing a National Instruments analog and digital input/output card (USB-6009). After acquisition, data are processed and plotted using Igor Pro 9.

D. Beam characterisation

While this article details the measurement of CIS ion intensity over a wide range of pressures and high voltages, the full 2D mapping of brightness when pressure and high voltage vary is beyond its scope. However, although brightness is of the utmost importance as far as source characterisation is concerned, it can be estimated from measurements in certain configurations. Determining brightness also requires knowing the size of the virtual source and the emission solid angle. The size of the virtual source is derived from a projection microscopy image (see Appendix B), using a procedure described elsewhere 6. Briefly, we project onto the fluorescent screen a mask (an opaque, hollowed, thin object), such as a lacey carbon film. If the source is spatially extended, despite the rapidly varying transmission contrast of the mask, contrast variations in projected image intensity will be blurred. Based on these intensity profiles and the degree of magnification, an upper limit to the size of the virtual source can be obtained. The fluorescent screen, 4 cm in diameter and 40 cm away from the tip, receives only part of the ion beam. To estimate the emission angle, the intensity projected on the fluorescent screen was observed while tilting the rotary manipulator mentioned above.

III. RESULTS

A. Pressure measurements

In a log-log plot, Figure 3 displays P_lp for 5 different gases when P_hp is set within a three-decade range. P_lp increases with P_hp.
We observe that the order of magnitude of P_lp/P_hp is ≈ 10^−6, resulting in pressures in the low-pressure chamber compatible with unimpeded ion propagation.

B. Effective pumping speed measurements

The effective pumping speed 13 S_e needs to be known in order to plot the conductance. It is measured by monitoring the (exponential) decay of P_lp after closing the valve between the high- and low-pressure chambers, upstream of the CIS itself. Straightforwardly, neglecting the final limit pressure in the low-pressure chamber: P_lp = P_lp0 exp(−(S_e/V) t), where P_lp0 is the initial pressure and V is the volume of the low-pressure chamber. Starting from a typical P_lp0 = 10^−2 Pa, P_lp decreases in a few seconds, which requires a sufficient bandwidth for pressure measurements. Based on these pressure measurements, performed with the quadrupole mass spectrometer, S_e is calculated for four different gases from the exponential fit of these data (see Appendix C). The results are summarised in Table I.

C. Current measurements

For selected and stabilised pressure conditions, I(V) curves can be recorded. Figure 4 shows both the current I and the voltage V as a function of time t during a 3-minute scan for a CIS using Argon in the high-pressure chamber (P_hp = 3000 Pa). It can be seen that, as expected for a field ionisation process, current increases more rapidly than voltage. The current spikes at the beginning of each plateau are only due to capacitive coupling of the setup, and are taken into account by the software in the post-processing procedures. A stable source being required throughout the entire experiment, the current is measured during a voltage round trip to check that the apex of the tip has not evolved under the high-voltage conditions.

IV. DISCUSSION

A.
Gas flow regimes

Gas flow regimes are described in the literature 3,14,15 through the conductance C, defined as the ratio of the molecular flux Q to the difference in pressure (P_hp − P_lp):

C = Q/(P_hp − P_lp) ≈ S_e P_lp/P_hp

where Q is the product of the effective pumping speed S_e and the low pressure, and where we consider P_hp >> P_lp. Figure 5-a- shows the conductance of a CIS structure plotted versus the pressure in the high-pressure chamber. For each of the four gases plotted, the figure exhibits an S-shaped curve. Two plateaus, separated by a regime where the conductance C increases with P_hp, can be seen. This behaviour points to a succession of three different gas flow regimes as P_hp increases:

• At low pressure, the mean free path being larger than the characteristic length scale of the CIS structure, the gas flows in a molecular regime. This is the first conductance plateau.

• When pressure increases, viscosity starts to play a dominant role and the gas flow is now in a viscous (or laminar) regime. In this regime (here 10^3 ≲ P_hp ≲ 10^4 Pa), the conductance C increases linearly with P_hp.

• Finally, conductance is limited by the cross-section of the CIS structure to what is known as a choked-flow regime, and reaches the second plateau.

These results refine and extend the findings and description previously reported by Descoins et al. 3. Numerous authors [13][14][15][16][17] have derived analytical and/or semi-empirical expressions of the conductance C in various flow regimes and channel geometries, from the simplest cylindrical tube to more complex channel geometries. In these expressions, conductance appears as a separable function. When an analytical expression can be written, for each flow regime, the conductance C_regime can be written as a product of two functions.
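As a minimal numerical sketch of this definition (the numbers below are illustrative, of the orders of magnitude quoted in the text, not the measured dataset):

```python
# Conductance of the deliberate leak, C = Q/(P_hp - P_lp) ≈ S_e * P_lp / P_hp.
# Illustrative values: Ar effective pumping speed 88e-3 m^3/s (Table I),
# P_hp = 3000 Pa and P_lp/P_hp of order 1e-6, as reported in the text.

def conductance(S_e, P_hp, P_lp):
    """Flow conductance in m^3/s from effective pumping speed and pressures."""
    Q = S_e * P_lp            # molecular throughput, Pa·m^3/s
    return Q / (P_hp - P_lp)  # exact definition; ≈ S_e*P_lp/P_hp for P_hp >> P_lp

S_e = 88e-3   # m^3/s
P_hp = 3000.0 # Pa
P_lp = 3e-3   # Pa
C = conductance(S_e, P_hp, P_lp)
print(f"C = {C:.2e} m^3/s")
```

With P_hp >> P_lp, the exact definition and the approximation S_e P_lp/P_hp agree to within a relative error of order P_lp/P_hp.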
One, C_regime,geometry, depends only on the geometrical parameters and the other, C_regime,gas, depends only on the gas parameters:

C_regime = [C_regime,geometry] × [C_regime,gas]

As suggested by Descoins et al. 3, flow behaviour through a coaxial CIS structure can be described using a rectangular aperture of area a × b (where a is the perimeter of the annular aperture between the tungsten wire and the stainless steel capillary and b << a is the width of this annular aperture). In this rectangular geometry, the expressions of [C_regime,geometry] × [C_regime,gas] for the different regimes are:

• molecular-flow regime:

[ab²/(8L) × (1 + 2 ln(2a/b))] × [√(8kT/πm)]   (1)

• viscous-flow regime:

[πab³/(96L)] × [(1/λ)√(8kT/πm)]   (2)

• choked-flow regime:

[C₀√(π/8) ab] × [Γ√(8kT/πm)]   (3)

In the above equations, the first term of the product within brackets is C_regime,geometry while the second term is C_regime,gas. √(8kT/πm) is the mean velocity of the kinetic theory of gases. λ is the mean free path of gas molecules in the high-pressure chamber, which depends on the pressure and on the viscosity η of the gas. Γ also depends on the gas parameters alone, being 0.725 for monoatomic gases and 0.685 for diatomic gases 14. Finally, the orifice coefficient C₀ is a geometry-dependent, empirical correction not expected to vary significantly for our different CIS structures. Its order of magnitude is 0.1 3,18.

The separability of C as a product of functions depending either on geometrical parameters alone or on gas parameters alone allows us to plot the same data to directly compare different gases. While all the experiments are performed at room temperature, the above analytical expressions call for us to plot (see Figure 5-b-) C* = C√(m/m_N2) versus 1/λ, rather than C versus P_hp (as previously done in Figure 5-a-). Our choice for the expression of the reduced conductance C* for the vertical axis is based on the 1/√m dependence of the conductance in the molecular- and choked-flow regimes. Nitrogen being commonly studied 19, for ease of comparison with the literature, we have also normalised by m_N2; C* and C are thus in the same units, m³ s⁻¹. For the horizontal axis, guided by the expression of the conductance in the viscous-flow regime, the appropriate parameter becomes the inverse of the mean free path, 1/λ = (P_hp/η)√(2m/(πkT)). The S-shaped curves plotted with these axes can be seen in Figure 5-b-.
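The three regime expressions can be evaluated directly. The sketch below is hedged: the geometry is taken from the s8 structure quoted in Appendix A, the gas is Ar at room temperature, and C₀ = 0.1 is only an order of magnitude. It reproduces the two plateaus and the viscous branch that bridges them:

```python
import math

# Evaluate Eqs. (1)-(3) for the rectangular-aperture model of the CIS.
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 293.0                  # room temperature, K
m = 39.948 * 1.66054e-27   # Ar molecular mass, kg
a, b, L = 463e-6, 22.5e-6, 6e-3   # perimeter, width, length of the channel (m)
C0, Gamma = 0.1, 0.725     # orifice coefficient (order of magnitude), monoatomic gas

v_mean = math.sqrt(8 * kB * T / (math.pi * m))   # mean thermal speed, m/s

def C_molecular():
    # Eq. (1): lower conductance plateau
    return a * b**2 / (8 * L) * (1 + 2 * math.log(2 * a / b)) * v_mean

def C_viscous(lam):
    # Eq. (2): grows with pressure, since 1/lam is proportional to P_hp
    return math.pi * a * b**3 / (96 * L) * (1 / lam) * v_mean

def C_choked():
    # Eq. (3): upper conductance plateau
    return C0 * math.sqrt(math.pi / 8) * a * b * Gamma * v_mean

print(f"molecular plateau: {C_molecular():.2e} m^3/s")
print(f"choked plateau:    {C_choked():.2e} m^3/s")
```

Equating Eqs. (2) and (3) at the cross-over mean free path recovers the relation used later in Appendix A, Eq. (A1).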
They are now all grouped in a single, generic curve, showing that most of the underlying physics has been taken into account. While deliberately not plotted in Figure 5-a-, Xe data are now plotted in Figure 5-b-. Indeed, the effective pumping speed S_e, needed to calculate C, proved impossible to determine using the mass spectrometer, due to Xe's high mass. S_e for Xe, reported in Table I, was not measured but arbitrarily chosen to superimpose the Xe S-shaped curve on the generic curve.

Our results remain quantitatively consistent when the geometrical parameters of the CIS are varied: an example is given in Appendix A, where similar behaviour is observed. All S-shaped curves are also grouped in another, geometry-dependent generic curve. Although this consistency reflects that most of the underlying physics has been taken into account, we are aware that the reality is more complex: across the 6 mm-long CIS structure, different flow regimes coexist. The discrepancies in the generic curve that can be observed in Figure 5-b- probably testify to these less obvious behaviours. Moreover, the tungsten wire and the stainless steel capillary, while likely to be colinear, may not be coaxial. A deeper analysis of these issues is beyond the scope of this article; our target is the first-order pressure dependencies of gas flow and field ion emission, which we will now discuss.

B. Field ion emission

Current versus voltage

From intensity data recorded versus time, like those plotted in Figure 4, we average the values obtained on the two plateaus for the same voltage while increasing and decreasing the voltage. The capacitive component of the current being opposite for increasing and decreasing voltage steps, this procedure cancels the capacitive current and gives the field ion emission current alone. Figure 6 plots, in log-log scales, the current intensity measured versus applied voltage for Argon at two different pressures. Two radically different regimes can be seen, as observed since the early days of field ionisation 20 under partial pressure conditions.

• In the low-voltage regime, below ≈ 5 kV, ionisation is limited by the electric field. A slight increase in applied voltage, and thus in electric field, considerably enhances the likelihood of ionisation, thereby increasing the current. When plotted in log-log scales, a phenomenological adjustment by a straight line results in huge typical slopes of the order of several dozens 20-22.

• In the high-voltage regime, above ≈ 5 kV, the ionisation probability of a molecule near the apex reaches 100%.
The ionisation rate is limited by gas supply, which however also increases with voltage, as observed through the current increase. Straight-line adjustment of this regime results in typical slopes of the order of a few units. In the case of Figure 6, the slope is α ≈ 3.3. We also note, in Figure 6, that the two curves are shifted by a decade, corresponding to the ratio of the two pressures P_lp chosen for display.

Current versus pressure

Figure 7 shows the patterns of dependence of intensity (left vertical axis) on pressure in the low-pressure chamber for three different experiments. The linear dependence of the current with respect to the pressure P_lp can be observed; dashed lines, with a slope of 1, are a guide for the eye. The green and black current curves, indicating the highest currents, correspond to measurements from the CIS geometry at two different voltages, respectively 7.0 kV and 12.0 kV. Based on these data, we can write the empirical dependence of the current I on the measured pressure in the low-pressure chamber P_lp and the applied voltage V:

I = K × P_lp × V^α

where K ≈ 1000 pA/Pa/kV^α and α ≈ 3.3 when I, P_lp and V are respectively in pA, Pa and kV; K also depends on the CIS structure under study. Notably, this reflects the efficiency of bringing the injected gas into the ionisation zone.

Ionisation yield

We can estimate the ionisation yield, which is the ratio of the ion flux to the molecular flux Q, using the gas flow analysis described above. I/(qQ) (q is the elementary charge) is plotted on the right vertical axis of Figure 7. As expected from our observations, I/(qQ) is almost constant over three decades in pressure. The order of magnitude of this ratio is 2 × 10^−6 for 12.0 kV. That means there are two Ar ions for one million Ar molecules entering the chamber through the CIS structure. This value is both large and improvable, as will be seen later.

Advantages of the CIS

The third curve (red triangles) in Figure 7 is a current measurement performed with a similar tip in a partial pressure experiment. In such experiments, gas is provided through a static partial pressure P_lp, rather than dynamically through the CIS structure. For the same pressure P_lp, the observed currents are orders of magnitude lower than those obtained when gas is injected through the CIS structure.
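The empirical law can be sketched as below. K and α are the order-of-magnitude values quoted above; since K depends on the particular CIS structure, this is a sketch, not a calibration:

```python
# Empirical current law: I = K * P_lp * V**alpha,
# with I in pA, P_lp in Pa and V in kV (K ≈ 1000 pA/Pa/kV^alpha, alpha ≈ 3.3).

K = 1000.0      # pA / Pa / kV^alpha (structure-dependent)
alpha = 3.3

def current_pA(P_lp_Pa, V_kV):
    """Field ion current (pA) from low-chamber pressure and extraction voltage."""
    return K * P_lp_Pa * V_kV**alpha

# At P_lp = 1e-2 Pa and V = 12 kV the model gives tens of nA,
# consistent with the order of magnitude read off Figure 7.
I = current_pA(1e-2, 12.0)
print(f"I ≈ {I/1000:.0f} nA")
```

The linearity in P_lp means doubling the low-chamber pressure doubles the predicted current, which is the unity slope seen in the log-log plot.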
For P_lp = 10^−2 Pa, the current values read for the black (CIS structure) and red (partial pressure) dashed lines are respectively 45,000 pA and 150 pA, resulting in a current ratio of ca. 300. This reflects the fact that the pressure P_tip in the vicinity of the tip is ca. 300 times higher than P_lp. By injecting gases locally, the CIS geometry thus allows much higher ion currents to be obtained while preserving the vacuum in the low-pressure chamber.

This major enhancement, which could be improved if needed, is supported by the literature. Also using a setup allowing He to be injected in the vicinity of the tip, Konishi et al. 7 reported an enhancement of the same order of magnitude: P_tip/P_lp ≈ 110. More indirectly, from experiments performed in the partial pressure configuration, Jousten et al. 22 derived a semi-empirical expression of ionisation current versus pressure. According to their expression, obtaining the same ionisation current as that afforded by the CIS geometry would require a pressure in the low-pressure chamber a hundredfold higher. Although slightly quantitatively different, these findings confirm the advantage of using the CIS geometry: a gain of at least two orders of magnitude is obtained when locally injecting the gas with a CIS structure of this geometry.

As mentioned above, the ionisation yield is I/(qQ) ≈ 2 × 10^−6 ion/molecule for 12.0 kV. This value is both large and improvable: the advantages of the CIS geometry allow a hundredfold increase in ionisation current I for the same P_lp and, if needed, the yield could be improved using other geometries. Most of the molecules are too far from the tip apex for ionisation. Indeed, the yield reflects the ratio ≈ (100 nm/100 µm)² of the area capturing electrons to the aperture of the stainless steel capillary. Appropriate engineering solutions would probably increase both the ionisation yield and the local pressure enhancement.
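The two order-of-magnitude checks in this paragraph are simple arithmetic; spelled out, with the read-off currents as illustrative inputs:

```python
# Cross-checks for the yield and the local pressure enhancement discussed above.

# Geometric estimate of the ionisation yield: area capturing molecules near the
# apex ~ (100 nm)^2 against the capillary aperture ~ (100 µm)^2.
yield_geometric = (100e-9 / 100e-6) ** 2        # = 1e-6, same order as 2e-6 measured

# Currents read off Figure 7 at P_lp = 1e-2 Pa: CIS vs. static partial pressure.
I_cis, I_partial = 45000.0, 150.0               # pA
enhancement = I_cis / I_partial                 # ~ 300, i.e. P_tip / P_lp

print(f"geometric yield ~ {yield_geometric:.0e} ion/molecule")
print(f"pressure enhancement P_tip/P_lp ~ {enhancement:.0f}")
```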
Brightness

Although not yet measured in our context, we can reasonably expect the CIS intrinsic energy-spread ∆E_0 to be smaller than for LMIS. Moreover, the low residual pressure inherent to needle-in-capillary type GFIS is an advantage in terms of avoiding collision energy-spread. The order of magnitude of the brightness obtained, B ≈ 3 × 10^11 A/m²/sr, is very promising, although the two key parameters, brightness and energy spread, have yet to be fully characterised. Such a characterisation is beyond the scope of this work and is planned for later, when the tip is mounted in an ion column.

V. CONCLUSION

We investigated the gas flow regimes of a coaxial ion source under a range of pressures. Flow conductance measurements taking into account the viscosity and molecular mass of the different gases consistently exhibit a generic pattern: three different flow regimes are observed with increasing upstream pressure, resulting in an S-shaped curve. Even when the geometry of the structure is different, the pattern remains consistent with the new geometrical parameters (see Appendix A for more information).

The ionisation current was measured here with respect to pressure and extraction voltage. Current is proportional to pressure in the low-pressure chamber and increases with voltage according to a power law in the regime with an ionisation probability of 100%. Similar experiments in a partial pressure configuration show that the CIS geometry enables the local pressure at the tip to reach values at least two orders of magnitude higher than the pressure in the chamber. Allowing much higher currents for the same pressure is therefore an advantage of using the CIS as a source for ion optics. At the same time, it minimises the impediments to beam propagation, known to increase energy spread. Optimising the source geometry may even increase this gain.
Moreover, a promising estimation of brightness is obtained for our CIS, which compares favourably to the brightness reported for liquid metal ion sources. Further characterisations will obviously follow, but, should its brightness be confirmed and its stability be demonstrated to be sufficient for practical applications, this room-temperature CIS provides a valuable option as a simple, high-brightness, monochromatic noble gas ion source.

ACKNOWLEDGMENTS

The support of Orsay Physics Company and the Region Sud, providing a joint grant for one of the authors (D.B.) and financial support, is gratefully acknowledged. The authors would like to thank Marjorie Sweetko for improving the English of this article, Hubert Klein for his help with the graphics, and Stéphane Varta, from CEA Cadarache, for lending the mass spectrometer.

Appendix A

However, calculating (b²/λ_co), we obtain significantly different values, as can be seen in Table II. This suggests our geometry is not appropriately described by a perfect annulus approximated by a rectangular aperture of width b. The tungsten wire can lie along the inner stainless steel capillary tube. In such a geometry, the characteristic length of the conductance channel, which we will call b_eff, will be larger than b; b_eff is the appropriate length scale for comparison with the mean free path. While we have seen above that the opening area is a good, constant parameter, if we stick with the simple equations of the rectangular geometry, a_eff will be somewhat smaller than a. We thus calculate b_eff by equalising the expressions of conductance in the viscous-flow regime (see Equation 2) and in the choked-flow regime (see Equation 3). We obtain:

(b_eff²/λ_co) = 48√(1/2π) C₀ΓL   (A1)

We note that the values of b_eff reported in Table II exceed 2 × b, which would be the upper limit in a simple geometrical approach.
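Plugging the quoted cross-over into Eq. (A1) gives the order of b_eff. This is a consistency sketch: C₀ = 0.1 is only an order of magnitude, and (1/λ_co) = 2.5 × 10⁶ m⁻¹ is the value quoted for structure s8:

```python
import math

# Effective channel width from Eq. (A1):
#   b_eff^2 / lambda_co = 48 * sqrt(1/(2*pi)) * C0 * Gamma * L
C0, Gamma, L = 0.1, 0.725, 6e-3   # orifice coeff., monoatomic gas, channel length (m)
inv_lam_co = 2.5e6                # measured cross-over for s8, m^-1
lam_co = 1.0 / inv_lam_co

b_eff = math.sqrt(48 * math.sqrt(1 / (2 * math.pi)) * C0 * Gamma * L * lam_co)
b = 22.5e-6                       # nominal width of s8, m

print(f"b_eff ~ {b_eff*1e6:.0f} µm, versus 2*b = {2*b*1e6:.0f} µm")
```

With these numbers b_eff comes out in the tens of µm, larger than 2b, as stated in the text.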
However, b_eff is the appropriate parameter to calculate the so-called Knudsen number K = λ/b_eff commonly used in the literature [13][14][15][16][17].

Appendix B: Emission angle of the ion beam

As described in Section II, the CIS structure is mounted in an ion projection microscope. This experimental setup can be used to measure the size of the virtual source. The CIS structure faces the extractor grid, which can be approached to increase the magnification of the projection image formed on a 4 cm diameter fluorescent screen 40 cm away. To perform this experiment, the CIS structure is positively electrically supplied, and the extractor is electrically grounded. The entrance of the MCP/fluorescent screen assembly is also grounded so that the ion beam is field-free throughout. Then, to maintain a constant electric field at the tip when the extractor is approached, the voltage is decreased to keep the ion emission current constant.

Both the CIS structure and the extractor grid are installed on the same rotary manipulator in front of the screen to allow the entire emission beam to be probed, since the projection size limits the detection angle to 5°. To observe the entire emission beam, we turn the structure, recording the image on the screen. The recorded light intensity on the image is proportional to the ion intensity detected on the screen. Thus, the light intensity value can be plotted versus the rotation angle. It is presented in Figure 9 for a CIS structure supplied at V = 12 kV and a pressure in the low-pressure chamber of about P_lp = 2.7 × 10^−3 Pa. This angle gives Ω = 0.024 sr, which is the solid angle used herein for brightness estimation.

Appendix C: Effective pumping speed

To complement the effective pumping speed measurements given in Section III B, Figure 10 shows the pressure decrease in the low-pressure chamber, P_lp/P_lp0, versus time recorded with the mass spectrometer for 4 gases.
For easier comparison of the four curves, the pressure P_lp is divided by the highest pressure P_lp0 measured at t = 0 s for each gas (P_lp0 ≈ 10^−3 Pa). Recalling that P_lp/P_lp0 = exp(−S_e t/V), to find the effective pumping speed we need to know the volume V of the low-pressure chamber; here, V = (67 ± 5) L. The effective pumping speeds found are presented in Table I and were used in the treatment of the data presented in Appendix A. For two different structures and five different gases, these values permit a consistent analysis.

FIG. 9. Divergence angle of the beam. The mean light intensity is recorded on the 4 cm screen while the structure is rotated at an angle θ.

FIG. 10. Effective pumping speed measurement. The (exponential) decrease in pressure P_lp in the low-pressure chamber is recorded over time t using the mass spectrometer.

FIG. 1. Schematic of coaxial ion source: -a- Cross-sectional view of copper sleeve connecting the ceramic tube (left) and the stainless steel tube (right) into which the tungsten wire is inserted; -b- General view of CIS structure; -c- Geometric modelling parameters of CIS cross-section; -d- Scanning electron microscopy image of CIS low-pressure outlet.
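The Appendix C extraction of S_e from the exponential decay P_lp/P_lp0 = exp(−S_e t/V) can be sketched as a log-linear least-squares fit. The pressure points below are synthetic, generated from an assumed Ar-like S_e, not the measured traces; V = 67 L is the chamber volume quoted above:

```python
import math

V = 67e-3          # low-pressure chamber volume, m^3
S_e_true = 88e-3   # assumed effective pumping speed (Ar-like), m^3/s

# Synthetic normalised decay P_lp(t)/P_lp0 sampled every 0.1 s for 2 s.
t = [0.1 * i for i in range(20)]
p = [math.exp(-S_e_true * ti / V) for ti in t]

# Least-squares slope of ln(p) versus t gives -S_e/V.
n = len(t)
mt = sum(t) / n
my = sum(math.log(pv) for pv in p) / n
slope = sum((ti - mt) * (math.log(pv) - my) for ti, pv in zip(t, p)) \
        / sum((ti - mt) ** 2 for ti in t)
S_e_fit = -slope * V

print(f"S_e ≈ {S_e_fit*1e3:.0f} x 10^-3 m^3/s")
```

On noiseless data the fit recovers the assumed value exactly; on real traces the same slope estimate averages out gauge noise.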
Each voltage step corresponds to ∆V = 1kV and lasts 5 seconds, except for the longer plateau at maximum voltage (16kV). -a-). Our choice for the expression of the reduced conductance C * for the vertical axis is based on the 1/ √ m dependence of the conductance for the molecular-and choked-flow regimes. Nitrogen being commonly studied 19 , for ease of comparison with the literature, we have also normalised by m N 2 . C * and C are thus in the same units in m 3 .s −1 . For the horizontal axis, guided by the expression of the conductance in the viscous-flow regime, taking into account the expression of C * , the appropriate parameter becomes the inverse of the mean free path 1/λ = (P hp /η) 2m/πkT . FIG. 5 . 5-a-Conductance vs. pressure in the high-pressure chamber; -b-Reduced conductance C * vs. 1/λ , the inverse of the mean free path of the high-pressure chamber for 5 gases. Figure 6 6plots in log-log scales current intensity measured versus applied voltage for Argon at two different pressures. Two radically different regimes can be seen, as observed since the early days of field ionisation 20 under partial pressure conditions.• In the low-voltage regime, below ≈ 5kV, ionisation is limited by the electric field. A slight increase in applied voltage, and thus in electric field, considerably enhances the likelihood of ionisation, thereby increasing the current. When plotted in log-log scales, a phenomenological adjustment by a straight line results in huge typical slopes of the order of several dozens 20-22 . FIG. 6. Ionisation intensity vs. extraction voltage for two different pressures: P hp = 500 Pa (blue) and P hp = 3000 Pa (green). Corresponding pressures in the low-pressure chamber are respectively P l p = 1.2 × 10 −4 Pa (blue) and P l p = 1.3 × 10 −3 Pa (green). Figure 7 7shows the patterns of dependence of intensity (left vertical axis) on pressure in the low-pressure chamber for three different experiments. 
Dashed lines are a guide for the eyes, with a slope = 1. The linear dependence of the current with respect to pressure in the low-pressure Coaxial Ion Source chamber P l p can be observed. The green and black current curves, indicating the highest currents, FIG. 7 . 7Intensity I (vertical left axis) vs. P l p for three different experiments: for a given CIS geometry, at V = 12.0 kV (solid black circles) and at V = 7.0 kV (solid green squares) and, finally, for a partial pressure experiment at V = 12.2 kV (solid red triangles). The vertical right axis shows the ionisation yield I/(qQ) for the CIS geometry at V = 12.0 kV only (open black circles). Unity slope dashed lines are a guide for the eyes. Horizontal dotted line shows that I/(qQ) is almost constant. Figure 6 ) 6-25 , we define brightness B = I/(Ω.s source ), where Ω is the opening solid angle of the emitted ion beam and s source is the size of the virtual source. As mentioned in Section II, to obtain the brightness B for selected pressure and high-voltage conditions, we measured the solid angle and the emission area from projection microscopy experiments. At V = 12 kV and P l p = 2 × 10 −2 Pa, a ≈ 5 • half opening angle of the emitted beam resulting in Ω = 0.024 sr. (see Appendix B). From a shadow experiment 6 , we also estimated the emission area of the virtual source s source ≈ π(2nm) 2 . This value is probably overestimated due to mechanical vibrations from our experimental setup. We hence obtain a brightness B ≈ 3 × 10 11 A/m 2 /sr. Although the CIS is still in development, its room-temperature brightness compares favourably with the brightness of well-established Liquid Metal Ion Sources (LMIS). Typically operated at extraction voltages V ≈ 30 kV, LMIS sources exhibit typical brightnesses 2,26 of B ≈ 3 × 10 10 A/m 2 /sr, one order of magnitude lower. However, while we studied the dependence of current on high voltages (see , the dependence of brightness on high voltages remains to be studied. 
In the meantime, for ease of comparison, we maintain the original definition of brightness 23-25 rather than using the reduced brightness B_r = B/V. For applications such as focused ion beams, the energy spread of the ion beam ∆E_0 is another important parameter. A source-intrinsic narrow energy distribution is always preferable. Typical intrinsic energy-spread values can be found in the literature 27 . For LMIS sources, principally used in focused ion microscopes, the order of magnitude is ∆E_0 ≈ 10 eV 26 , while for GFIS sources, such as our CIS, it is much lower: ∆E_0 ≈ 1 eV 27 .

FIG. 8. Reduced conductance C* vs. 1/λ (the inverse of the mean free path of the high-pressure chamber) for 5 gases. -a- For CIS geometry S7 = a7 × b7 = 432 × 12.5 µm²; -b- for CIS geometry S8 = a8 × b8 = 463 × 22.5 µm².

TABLE II. The inverse mean free path at the viscous-to-choked-flow conductance cross-over (1/λ_co) for structures s7 and s8; the width b of the structure; (b²/λ_co); and the effective characteristic length (b…

…and for structure s8 at (1/λ_co)_s8 = (2.5 ± 1.0) × 10⁶ m⁻¹. However, calculating (b²/λ_co), …

TABLE I. Effective pumping speeds and gauge factors for the gases. Effective pumping speeds were measured as explained in Section III B, except for Xe, which is too heavy; the Xe effective pumping speed is determined from the reduced conductance curves (see Figure 5-b).

gas                              N2      He      H2      Ar      Xe
pumping speed (×10⁻³ m³·s⁻¹)     98      206     177     88      (65)
gauge factor                     1       0.18    0.46    1.29    2.87

The data that support the findings of this study are available from the corresponding author upon reasonable request.

REFERENCES

1. N. P. Economou, J. A. Notte, and W. B. Thompson, "The history and development of the helium ion microscope," Scanning 34, 83-89 (2012).
2. V. Tondare, "Quest for high brightness, monochromatic noble gas ion sources," Journal of Vacuum Science & Technology A 23, 1498 (2005).
3. M. Descoins, Z. Hammadi, and R. Morin, "Local supply of gas in vacuum: Application to a field ion source," Journal of Vacuum Science & Technology A 26, 1331 (2008).
4. Z. Hammadi, M. Descoins, E. Salançon, and R. Morin, "Proton and light ion nanobeams from field ionization of water," Applied Physics Letters 101, 243110 (2012).
5. E. Salançon, Z. Hammadi, and R. Morin, "A new approach to gas field ion sources," Ultramicroscopy 95, 183-188 (2003).
6. L. Lapena, D. Bedrane, A. Degiovanni, and E. Salançon, "Bright sources under the projection microscope: using an insulating crystal on a conductor as electron source," The European Physical Journal: Applied Physics 97 (2022).
7. M. Konishi, M. Takizawa, and T. Tsumori, Journal of Vacuum Science and Technology B 6 (1988).
8. E. W. Müller and T. T. Tsong, "Field ion microscopy principles and applications," American Elsevier Pub. Co (1969).
9. A. R. Anway, "Field ionization of water," The Journal of Chemical Physics 50, 2012-2021 (1969).
10. H.-S. Kuo, I.-S. Hwang, T.-Y. Fu, J.-Y. Wu, C.-C. Chang, and T. T. Tsong, "Preparation and characterization of single-atom tips," Nano Letters 4, 2379-2382 (2004).
11. H.-S. Kuo, I.-S. Hwang, T.-Y. Fu, Y.-C. Lin, C.-C. Chang, and T. Tsong, "Preparation of single-atom tips and their field emission behaviors," e-Journal of Surface Science and Nanotechnology 4, 233-238 (2006).
12. "…atom electron source by noble-metal surface diffusion," Journal of Vacuum Science & Technology B 31, 02B105 (2013).
13. W. Steckelmacher, "A review of the molecular flow conductance for systems of tubes and components and the measurement of pumping speed," Vacuum 16, 561-584 (1966).
14. L. Holland, S. W., and J. Yarwood, "Vacuum manual," (1974).
15. J. F. O'Hanlon, "Gas flow," in A User's Guide to Vacuum Technology (John Wiley & Sons, Ltd, 2003), pp. 25-56.
16. M. Knudsen, "The laws of molecular and viscous flow of gases through tubes" ["Die Gesetze der Molekularströmung und der inneren Reibungsströmung der Gase durch Röhren"], Ann. Physik 28, 75-130 (1909); translated from the German journal article.
17. W. Steckelmacher, "Knudsen flow 75 years on: the current state of the art for flow of rarefied gases in tubes and systems," Rep. Prog. Phys. 49, 1083 (1986).
18. D. J. Santeler, "Exit loss in viscous tube flow," Journal of Vacuum Science & Technology A 4, 348-352 (1986).
19. J. F. O'Hanlon, "An analysis of equations for flow in thin, rectangular channels," Journal of Vacuum Science & Technology A 5, 98-100 (1987).
20. E. Müller and K. Bahadur, "Field ionization of gases at a metal surface and the resolution of the field ion microscope," Physical Review 102, 624 (1956).
21. R. Gomer, "Field emission and field ionization," American Institute of Physics (1993).
22. K. Jousten, K. Bohringer, and S. Kalbitzer, "Current-voltage characteristics of a gas field ion source," Appl. Phys. B 46, 313-323 (1988).
23. B. von Borries and E. Ruska, "Versuche, Rechnungen und Ergebnisse zur Frage des Auflösungsvermögens beim Übermikroskop" ["Experiments, calculations and results on the question of the resolving power of the ultramicroscope"], Z. techn. Phys. 20, 225 (1939).
24. G. Sciaini, "Recent advances in ultrafast structural techniques," Applied Sciences 9, 1427 (2019).
25. P. Sudraud, V. D. W. J., C. C., and C. R., "Contribution of field effects to the achievement of higher brightness ion sources," Surface Science 70, 392-402 (1978).
26. G. Mair, "Electrohydrodynamic instabilities and the energy spread of ions drawn from liquid metals," J. Phys. D: Appl. Phys. 29, 2186-2192 (1996).
27. T. T. Tsong and E. W. Müller, "Measurement of the energy distribution in field ionization," The Journal of Chemical Physics 41, 3279 (1964).
28. W. Lai, C. Lin, W. Chang, P. Li, T. Fu, C. Chang, T. Tsong, and I. Hwang, "Xenon gas field ion source from a single-atom tip," Nanotechnology 28, 255301 (2017).
29. A. Polikarpov and I. Graur, "Unsteady rarefied gas flow through a slit," Vacuum 101, 79-85 (2014).
30. S. Tison, "Experimental data and theoretical modeling of gas flows through metal capillary leaks," Vacuum 44, 1171-1175 (1993).
31. F. Sharipov and V. Seleznev, "Data on internal rarefied gas flows," Journal of Physical and Chemical Reference Data 27, 657 (1998).
32. E. W. Muller, S. Nakamura, O. Nishikawa, and S. B. McLane, "Gas surface interactions and field ion microscopy of non refractory metals," Journal of Applied Physics 36, 2496 (1965).
33. M. Sato, "Ion current characteristics of an Ar field ionization source," Surface Science Letters 285, L525-L527 (1993).

Appendix A: Effect of CIS geometrical parameters on conductance

Pressure measurements P_lp vs. P_hp were performed for 5 different gases and two different types of CIS structure. The length of the stainless-steel capillary is L = 6 mm and the diameter of the tungsten wire is d = 125 µm for each CIS structure. The diameter of the stainless-steel capillary changes, giving the area of the opened section: S = π (D+d)/2 × (D−d)/2 = a × b (see Figure 1). For the CIS structure named s7, D = 150 µm; we then have S7 = a7 × b7 = 432 × 12.5 µm². For the CIS structure s8, presented in the article, D = 170 µm and S8 = a8 × b8 = 463 × 22.5 µm². The reduced conductance versus the inverse of the mean free path of the high-pressure chamber is plotted in Figure 8 for each gas and each structure.
The generic S-shaped curve described in Section IV A appears for both experiments. This confirms that the value of C0 remains constant. From our data, we derive C0 ≈ 0.1, which is close to the expected value 3,18 . We take our consistency tests farther, looking at different regimes. We now consider the transition between the choked-flow and the viscous-flow regimes. From Equations 2 and 3, the crossover obtained for (1/λ_co) should correspond to the same (b²/λ_co) value for the two structures. The observed transition from viscous- to choked-flow occurs for structure s7 at (1/λ_co)_s7 = (4 ± 1) × …, giving a ratio of 0.52 ± 0.08. Using Equation 3 of the conductance for a rectangular aperture in the choked-flow regime, this ratio can be calculated: a7 b7/(a8 b8) ≈ 0.52.
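The geometric consistency check above can be reproduced directly from the section dimensions quoted in this appendix:

```python
# Opened-section dimensions quoted above (in micrometres)
a7, b7 = 432, 12.5   # structure s7 (D = 150 um)
a8, b8 = 463, 22.5   # structure s8 (D = 170 um)

# Equation 3 gives a choked-flow conductance proportional to the area a*b,
# so the ratio of the two conductances reduces to a ratio of areas.
ratio = (a7 * b7) / (a8 * b8)
print(f"a7*b7/(a8*b8) = {ratio:.3f}")   # 0.518, i.e. ~0.52 as quoted
```

The computed 0.518 falls well inside the measured 0.52 ± 0.08.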
arXiv:2302.07438 · DOI: 10.1088/1674-1056/accb43 · https://export.arxiv.org/pdf/2302.07438v1.pdf
Variational quantum simulation of the quantum critical regime

Zhi-Quan Shi, Xu-Dan Xie, and Dan-Bo Zhang

School of Physics and Telecommunication Engineering, Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, South China Normal University, Guangzhou 510006, China
Frontier Research Institute for Physics, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, Guangzhou 510006, China

(Dated: February 16, 2023)

The quantum critical regime marks a zone in the phase diagram where quantum fluctuation around the critical point plays a significant role at finite temperatures. While it is of great physical interest, simulation of the quantum critical regime can be difficult on a classical computer due to its intrinsic complexity. In this paper, we propose a variational approach, which minimizes the variational free energy, to simulate and locate the quantum critical regime on a quantum computer. The variational quantum algorithm adopts an ansatz obtained by performing a unitary operator on a product of single-qubit mixed states, in which the entropy can be analytically obtained from the initial state, and thus the free energy can be accessed conveniently. With numerical simulation, we show, using the one-dimensional Kitaev model as a demonstration, that the quantum critical regime can be identified by accurately evaluating the temperature crossover line. Moreover, the dependence of both the correlation length and the phase coherence time on the temperature is evaluated for the thermal states.
Our work suggests a practical way as well as a first step for investigating quantum critical systems at finite temperatures on quantum devices with few qubits.

I. INTRODUCTION

The quantum phase transition of quantum many-body systems marks a sharp transition between two phases and plays a central role in physics. Although occurring at zero temperature, the critical quantum fluctuation at the quantum phase transition point has far-reaching effects on the whole phase diagram, especially on a zone of the quantum critical regime above the critical point that spans a range of temperatures [1]. The quantum critical regime is believed to be key for understanding a broad range of physics, such as high-Tc superconductivity [2] and nuclear matter under finite temperature and finite density [3,4]. However, the intrinsic complexity of the quantum critical regime, where both quantum fluctuation and thermal fluctuation interplay, makes its simulation with classical computers hard due to the sign problem [5].

In recent years, rapid advances in quantum technologies, including both quantum hardware and quantum algorithms, enable us to simulate quantum many-body systems. It is natural to exploit current quantum processors to simulate zero-temperature quantum systems, which typically involve pure states, to investigate both static and dynamical properties of quantum systems [6-10]. Instead, simulation of finite-temperature quantum systems at equilibrium requires the preparation of a kind of mixed states called thermal states [11-22], which can be obtained either as a subsystem of a pure state [12,15,21] or as a mixture of pure states with a classical probabilistic distribution [16,17]. There are basically two approaches for thermal state preparation on a quantum computer.
One is to filter out the thermal state at a given temperature from a completely mixed state (infinite-temperature state) by effectively implementing an imaginary time evolution [12,21,23]. The other approach refers to variational construction of the thermal state with a parameterized quantum circuit [15-20], where the optimization can be obtained by minimizing the variational free energy with a hybrid quantum-classical procedure. The variational method relies on fewer quantum resources and is suitable for simulating finite-temperature quantum systems on current or near-term quantum processors [24]. However, simulation of the quantum critical regime, which demands accurate control of both the quantum fluctuation and the thermal fluctuation and their interplay, stands out as an important goal which is less investigated. While Ref. [21] has proposed to investigate the quantum critical regime with a continuous-variable assisted quantum algorithm [25,26], it should be run on a hybrid-variable quantum computer which still awaits development. In this regard, a practical way for simulating the quantum critical regime can refer to variational quantum algorithms [27,28].

In this paper, we propose a variational approach to simulate the quantum critical regime, which is demonstrated with the one-dimensional Kitaev model under the periodic condition. The variational quantum algorithm adopts an ansatz for which the variational free energy can be obtained conveniently: the entropy is encoded in the initial state as a product of one-qubit mixed states, and the parameterized unitary operator will not change the entropy. By numerical simulation, we show that such a variational quantum algorithm can prepare thermal states faithfully across the phase diagram of the Kitaev model. Remarkably, we reveal that the temperature crossover, which is important as it can locate the quantum critical regime, can be obtained accurately.

* [email protected]
We also measure both the static and dynamic correlation functions on the optimized variational thermal states, based on which the correlation length and the phase coherence time are fitted. It is shown that both the correlation length and the phase coherence time in the critical regime, for a range of intermediate temperatures, are proportional to the inverse of the temperature. Our work suggests that investigating the quantum critical regime with few qubits can be feasible on current quantum processors.

II. QUANTUM CRITICAL REGIME AND VARIATIONAL QUANTUM ALGORITHM

In this section, we first introduce some background on the quantum critical regime with the one-dimensional Kitaev model [29]. Then we introduce a variational quantum computing approach for simulating thermal states of the Kitaev ring (the one-dimensional Kitaev model under the periodic boundary condition), as well as for locating the quantum critical regime and investigating its properties.

A. Quantum critical regime of the Kitaev model

A quantum phase transition is defined at zero temperature, where the phase of state dramatically changes when tuning a parameter of the system across a point. The critical point associated with the quantum phase transition, although it occurs at zero temperature, has far-reaching effects on the phases of state at finite temperature [1]. By comparing two important energy scales of the system, namely the gap ∆ and the temperature T, the phase diagram can be divided into regimes ∆ ≫ T and ∆ ≪ T, as shown in Fig. 1. The regime of ∆ ≫ T represents the low-T regime, where dynamics and transport can be described in a semi-classical way. The regime of ∆ ≪ T, where both quantum fluctuation and classical fluctuation interplay, marks a quantum critical regime, which is separated from the semi-classical regimes by the temperature crossover lines. The quantum critical regime has universal properties; for instance, the correlation length and the phase coherence time have scaling behaviors with the temperature [1].
The quantum critical regime plays a key role in understanding a broad range of physics such as high-Tc superconductivity and nuclear matter. The difficulty of studying quantum critical systems lies in the intrinsic complexity associated with the thermal states that describe phases of state in the quantum critical regime [5]. Quantum simulation can directly prepare those thermal states and measure the physical properties. In this regard, it provides a bottom-up approach to investigating the quantum critical regime. However, as investigated in our previous work [21], a basic goal of simulating the quantum critical regime by locating the temperature crossover line can be tricky as it is model-dependent. For instance, it typically requires a system size of more than 100 sites for the quantum Ising model, while, on the other hand, it demands only a few sites for the Kitaev ring [21]. Thus, we choose the Kitaev ring as the model Hamiltonian for simulating the quantum critical regime. The Hamiltonian of the Kitaev ring model reads

H_K = −J Σ_{i=1}^{N} [c†_i c_{i+1} + c†_i c†_{i+1} + h.c.] − u Σ_{i=1}^{N} c†_i c_i,  (1)

where the fermion operators obey c_{N+1} = c_1 as imposed by the periodic condition. The Kitaev model has a quantum phase transition at u/(2J) = 1. To simulate the Kitaev ring model, the fermion operators should be mapped to qubit operators by the Jordan-Wigner transformation,

c_i = (Π_{j=1}^{i−1} σ^z_j) σ^−_i,  c†_i = σ^+_i Π_{j=1}^{i−1} σ^z_j,  c†_i c_i = (σ^z_i − 1)/2.

Now the spin Hamiltonian reads (omitting a constant)

H = −J Σ_{i=1}^{N−1} σ^x_i σ^x_{i+1} − J σ^y_1 P σ^y_N − λ Σ_{i=1}^{N} σ^z_i,  (2)

where λ = u/2 and P = Π_{i=2}^{N−1} σ^z_i is a string operator. Hereafter we set J = 1. The model in Eq. (2) is in fact close to the transverse-field Ising model (TFIM), except that the boundary term σ^y_1 P σ^y_N is different; e.g., the term should be σ^x_1 σ^x_N for the TFIM. We now turn to discuss the Kitaev ring at finite temperature.
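As a concrete illustration (ours, not part of the original text), the spin Hamiltonian of Eq. (2) can be assembled as a dense matrix for a few sites; the helper names below are our own:

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    out = op if site == 0 else I2
    for i in range(1, n):
        out = np.kron(out, op if i == site else I2)
    return out

def kitaev_ring(n, lam, J=1.0):
    """H = -J sum sx_i sx_{i+1} - J sy_1 P sy_N - lam sum sz_i, with P = prod_{i=2}^{N-1} sz_i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H -= J * op_at(sx, i, n) @ op_at(sx, i + 1, n)
    boundary = op_at(sy, 0, n) @ op_at(sy, n - 1, n)
    for i in range(1, n - 1):                      # string operator P
        boundary = boundary @ op_at(sz, i, n)
    H -= J * boundary
    for i in range(n):
        H -= lam * op_at(sz, i, n)
    return H

H = kitaev_ring(4, lam=0.5)
assert np.allclose(H, H.conj().T)   # Hermitian, as any valid Hamiltonian must be
```

For N = 4 this is a 16 × 16 matrix, small enough to diagonalize exactly and use as a reference for the variational results discussed below.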
The quantum system of the Kitaev ring at equilibrium under an inverse temperature β = 1/T can be described by a thermal state (also known as a Gibbs state) [30], ρ(β) = e^{−βH}/Z(β), where Z(β) = Tr e^{−βH} is the partition function. The free energy is related to the partition function as F(β) = −β^{−1} ln Z(β). The free energy at parameter λ is denoted as F(β, λ). The thermodynamic properties can be derived from the free energy or by measuring observables on the thermal states. One conventional approach to locate the temperature crossover lines is to calculate the magnetic susceptibility and identify the temperature crossover point T*(λ) for a given λ as the temperature where the susceptibility is maximum [1,21]. From statistical physics, it is known that the magnetic susceptibility can be expressed as

χ(β, λ) = ∂²F(β, λ)/∂λ².  (3)

Thus, the temperature crossover point for a given λ is

T* = argmax_T χ(1/T, λ).  (4)

It should be pointed out that the quantum critical regime would not include the zone T > J, which is dominated by the lattice cutoff and whose properties are not universal [1]. Another important aspect of the quantum critical regime is the scaling behavior of the correlation length ξ and the phase coherence time τ [1]. For the spin chain of H, it is known that both ξ and τ are proportional to the inverse temperature β in the quantum critical regime. The correlation length and the phase coherence time should be evaluated from the static correlation function and the dynamical correlation function, respectively. The static correlation function is defined as

R(n) = Σ_{i=1}^{N} Tr[ρ(β) σ^x_i σ^x_{i+n}].  (5)

At nonzero temperature, R(n) should be exponentially decreasing with the spatial separation n. The correlation length is defined as the characteristic length in R(n) ∝ e^{−n/ξ}. The dynamical correlation function can be chosen as

C(t) = Σ_{i=1}^{N} |Tr[ρ(β) σ^x_i(t) σ^x_i]|,  (6)

where σ^x_i(t) = e^{iHt} σ^x_i e^{−iHt}.
Note that in the summation each dynamical correlation function takes the absolute value, as it is a complex number. Similarly, C(t) at finite temperature is exponentially decreasing with t, and the phase coherence time is defined through C(t) ∝ e^{−t/τ}.

B. Variational quantum algorithm

We now propose a variational quantum computing approach for simulating the quantum critical regime of the Kitaev ring. This includes two goals: locating the quantum critical regime and investigating the scaling behavior. Reaching those goals relies on preparing thermal states accurately on a quantum computer, based on which physical quantities can be evaluated reliably.

Complexity of thermal states

Let us first illustrate the complexity of preparing thermal states. It is inspiring to decompose the thermal state as

ρ(β) = Σ_{i=1}^{2^N} p_i(β) |ψ_i⟩⟨ψ_i|,  p_i(β) = e^{−βE_i}/Z(β),  (7)

where H|ψ_i⟩ = E_i|ψ_i⟩. The thermal state thus is a mixture of eigenstates {|ψ_i⟩} with classical probabilities {p_i}. At low temperatures, only the ground state has a large weighting and the task almost reduces to preparing the ground state. However, for higher temperatures, such as T ∼ ∆, the low-lying eigenstates will have large probabilities, and one must accurately prepare those low-lying eigenstates |ψ_i⟩ with the corresponding weights p_i at the same time. Such a task is harder than preparing the ground state. However, for very high temperatures the task of preparing the thermal state becomes easy. To see this, we consider the infinite-temperature limit β = 0. The thermal state is a completely mixed state ρ(β = 0) = I/2^N. As Uρ(β = 0)U† = ρ(β = 0), where U is a unitary transformation, ρ(β = 0) is in fact an equal mixing of an arbitrary set of complete basis states {U|ψ⟩}. Another aspect revealing the simplicity of ρ(β = 0) is that it equals a product of single-qubit mixed states, ⊗_{i=1}^{N} I/2. In other words, there is no correlation at β = 0 and the temperature is local [31].
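The decomposition of Eq. (7) and the β → 0 limit discussed above can be verified numerically for any small system; a sketch using a random Hermitian matrix as a stand-in Hamiltonian (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 2**3                               # a 3-qubit toy system
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                 # a random Hermitian "Hamiltonian"

def thermal_state(H, beta):
    """rho(beta) = sum_i p_i |psi_i><psi_i| with p_i = exp(-beta E_i)/Z, as in Eq. (7)."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * E)
    p = w / w.sum()
    return (V * p) @ V.conj().T, E, p    # V * p scales columns: V diag(p) V^dagger

# beta -> 0: the completely mixed state I/2^N, as argued above
rho0, _, _ = thermal_state(H, beta=0.0)
assert np.allclose(rho0, np.eye(dim) / dim)

# Thermodynamic consistency: F = -ln(Z)/beta equals E - S/beta on the thermal state
beta = 1.3
rho, E, p = thermal_state(H, beta)
F_partition = -np.log(np.exp(-beta * E).sum()) / beta
energy = np.real(np.trace(rho @ H))
entropy = -(p * np.log(p)).sum()         # von Neumann entropy = Shannon entropy of p
assert np.isclose(F_partition, energy - entropy / beta)
```

The same construction with H replaced by the Kitaev-ring matrix gives exact reference thermal states against which the variational preparation can be benchmarked.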
In fact, it can be proved theoretically that the correlation length scales as ξ(β) = β^{2/3} for local Hamiltonians at high temperatures [32]. Since preparing states with a longer correlation length requires a deeper quantum circuit [32,33], such a scaling indicates that the complexity of preparing a thermal state reduces when increasing the temperature in the high-temperature regime. The above discussion suggests that preparing thermal states at intermediate temperatures will be harder than low-temperature and high-temperature ones. This is just the case for locating the temperature crossover lines, which connect the regimes of ∆ ≫ T and ∆ ≪ T. In this regard, it can be challenging to simulate the quantum critical regime with quantum computing.

Variational preparation of thermal states

The variational quantum computing approach can meet this challenge by taking advantage of a specific ansatz that may require only a short-depth quantum circuit and thus can be suitable on near-term quantum processors. The variational principle for the quantum system at temperature T is that the free energy should be minimized for the thermal state [30]. One can prepare a variational thermal state ρ(ω; β) with a parameter set ω on a quantum computer. The ρ(ω; β) can be generated either as a subsystem of a pure state or as a mixture of pure states. The parameter set ω should be optimized by minimizing the variational free energy expressed as

F(ω; β) = E(ω) − T S(ω),  (8)

where E(ω) = Tr[ρ(ω; β)H] is the average energy and S(ω) = −Tr[ρ(ω; β) log ρ(ω; β)] is the von Neumann entropy. The energy E(ω) can be evaluated by decomposing the Hamiltonian as a linear combination of local observables and measuring each separately. However, estimating the von Neumann entropy is a difficult task in general since it is not associated with a Hermitian observable. Indeed, some quantum protocols can measure the Rényi entropy, which is a function of ρ^n (n is a positive integer) [34-36].
However, measuring the von Neumann entropy is more challenging, as it involves $\log\rho$, and there is still a lack of efficient protocols for generic quantum states, apart from some proposals whose approximations are valid only under specific conditions [19,37,38]. One solution is to use a specific ansatz for which the von Neumann entropy can be calculated directly without measurements. This is possible with an ansatz in which the classical probabilities and the eigenstates are parameterized separately [22,39,40]; e.g., the variational thermal state can take the form (with a parameter set ω = (θ, φ))
$$\rho(\omega;\beta) = \sum_{i=1}^{2^N} p_i(\theta)\,U(\phi)|i\rangle\langle i|U^\dagger(\phi) = U(\phi)\rho_0(\theta)U^\dagger(\phi), \tag{9}$$
where $\rho_0(\theta) = \sum_{i=1}^{2^N} p_i(\theta)|i\rangle\langle i|$. The unitary evolution U(φ) does not change the entropy of the initial state ρ0(θ). Thus the entropy can be obtained from the classical probabilities $\{p_i(\theta)\}$,
$$S(\theta) = -\sum_{i=1}^{2^N} p_i(\theta)\log p_i(\theta). \tag{10}$$
Following Refs. [22,39], we choose the initial state as a product state, $\rho_0(\theta) = \otimes_{i=1}^{N}\rho_i(\theta_i)$, where
$$\rho_i(\theta_i) = \sin^2\theta_i\,|0_i\rangle\langle 0_i| + \cos^2\theta_i\,|1_i\rangle\langle 1_i|. \tag{11}$$
Such a choice of initial state, together with Eq. (9), is known as the product-spectrum ansatz [39]. Calculation of the entropy simplifies, as it is a sum of the entropies of the individual qubits,
$$S(\theta) = \sum_{i=1}^{N}\left[-\sin^2\theta_i\log\sin^2\theta_i - \cos^2\theta_i\log\cos^2\theta_i\right]. \tag{12}$$

FIG. 2. Illustration of the variational quantum algorithm. The variational Gibbs state is prepared by applying a parameterized unitary operator to an initial state ⊗iρi(θi). The parameterized circuit consists of p blocks. By measuring the energy E on a quantum computer and calculating the entropy S from θ, the variational free energy F is obtained. With a hybrid quantum-classical optimization, the parameters (θ, α, η) are iteratively updated to minimize the variational free energy.
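The product-spectrum pieces of Eqs. (10)-(12) can be sketched in plain NumPy (illustrative only, written with density matrices rather than a quantum-computing framework): the initial state is a product of single-qubit mixed states, and its entropy is a closed-form function of the angles θ, so no entropy measurement is needed.

```python
import numpy as np

def single_qubit_state(theta):
    """rho_i = sin^2(theta)|0><0| + cos^2(theta)|1><1|   (Eq. 11)."""
    return np.diag([np.sin(theta) ** 2, np.cos(theta) ** 2])

def product_state(thetas):
    """rho_0(theta) = tensor product of the single-qubit states."""
    rho = np.array([[1.0]])
    for th in thetas:
        rho = np.kron(rho, single_qubit_state(th))
    return rho

def ansatz_entropy(thetas):
    """Closed-form entropy of the product state (Eq. 12)."""
    s2 = np.clip(np.sin(np.asarray(thetas)) ** 2, 1e-300, 1.0)
    c2 = np.clip(np.cos(np.asarray(thetas)) ** 2, 1e-300, 1.0)
    return -(s2 * np.log(s2) + c2 * np.log(c2)).sum()

thetas = np.array([np.pi / 4, np.pi / 3])
rho0 = product_state(thetas)
```

The clipping guards the $0\log 0$ limit at $\theta_i = 0$ or $\pi/2$; for $\theta_i = \pi/4$ each qubit contributes the maximal single-qubit entropy $\log 2$.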
While using only N parameters to characterize the classical probabilities, the product-spectrum ansatz has been shown to faithfully represent thermal states of a broad range of physical systems [22,39]. Note that the single-qubit mixed state ρi(θi) can be obtained by tracing out one qubit of the two-qubit pure state $\sin\theta_i|00\rangle + \cos\theta_i|11\rangle$; this takes 2N qubits overall to prepare the thermal state. One may also take ρ0(θ) as a mixture of computational basis states $|s_1 s_2 \ldots s_N\rangle$, where $s_i = 0, 1$. If N is small, ρ0(θ) can be generated by preparing an initial state $|s_1 s_2 \ldots s_N\rangle$ with probability $\prod_{i=1}^{N} f_{s_i}^2(\theta_i)$, where $f_0 = \sin$ and $f_1 = \cos$. This probabilistic approach uses only N qubits but involves an ensemble of quantum circuits whose number grows exponentially with N. The unitary operator U(φ) can be constructed from different types of parameterized quantum circuits. Here we use the following structure, involving p blocks of alternating Hamiltonian evolutions (with a parameter set φ = (α, η)),
$$U(\phi) \equiv U(\alpha,\eta) = \prod_{l=1}^{p} e^{-iH_2(\eta_l)}\,e^{-iH_1(\alpha_l)}, \tag{13}$$
where $H_1(\alpha_l) = \sum_{i=1}^{N}\alpha_{l,i}\sigma^z_i$ and $H_2(\eta_l) = \sum_{i=1}^{N-1}\eta_{l,i}\sigma^x_i\sigma^x_{i+1} + \eta_{l,N}\sigma^y_1 P \sigma^y_N$. As the terms in $H_1(\alpha_l)$ commute with each other, $e^{-iH_1(\alpha_l)}$ can be directly decomposed into a series of quantum gates; the same applies to $e^{-iH_2(\eta_l)}$. The choice of U(φ) is physically motivated, as $H_1(\alpha_l)$ and $H_2(\eta_l)$ are inherited from the Hamiltonian H. While similar to the Hamiltonian variational ansatz (HVA) [42] (also known as the quantum alternating operator ansatz [43]), it assigns each term its own variational parameter rather than allocating the same parameter to all terms. We may call the unitary in Eq. (13) a multi-angle HVA [41]. The original HVA was proposed for preparing ground states; promoting the HVA to the multi-angle HVA is necessary because preparing thermal states is more difficult and demands more representational power from the ansatz. With the initial state ρ0(θ) in Eq.
(11) and the unitary operator U(α, η) in Eq. (13), the variational state can be written as ρ(θ, α, η; β). To optimize the parameters (θ, α, η), one minimizes the variational free energy F(θ, α, η; β) using a hybrid quantum-classical procedure. The parameters θ of the initial state ρ0(θ) and the parameters (α, η) of the quantum circuit are updated after evaluating the free energy in each iteration. An illustration of the variational quantum algorithm for preparing the thermal state, as well as of the hybrid quantum-classical optimization, is given in Fig. 2.

Evaluation of physical quantities

With the optimized variational free energy, the susceptibility in Eq. (3) can be evaluated by a second-order difference scheme,
$$\chi(\beta,\lambda) \approx \frac{F(\beta,\lambda+\delta\lambda) + F(\beta,\lambda-\delta\lambda) - 2F(\beta,\lambda)}{(\delta\lambda)^2}, \tag{14}$$
where δλ is a small number and the variational parameters in F are left implicit. For each λ, a series of χ(β, λ) values is calculated, and the temperature crossover point is identified as the temperature at which χ(β, λ) is maximal, as in Eq. (4). By sweeping through all λ, the temperature crossover line T(λ) is identified. With the optimized variational thermal state, the spatial and dynamical correlation functions can be measured as in Eq. (5) and Eq. (6), respectively. Evaluating the spatial correlation function R(n) relies on a joint measurement of two qubits. By measuring R(n) at different spacings n, the correlation length ξ is obtained by fitting $R(n) \propto e^{-n/\xi}$. Measuring the dynamical correlation function C(t) is also a standard technique [44,45]; the corresponding quantum circuit is given in Fig. 3. The value of C(t) is obtained by measuring $\sigma^x + i\sigma^y$ on the ancillary qubit. Similarly, by measuring C(t) at different times t, the phase coherence time τ is obtained by fitting $C(t) \propto e^{-t/\tau}$. It should be emphasized that studying the scaling behavior should refer to the thermodynamic limit, where the system size is infinite.
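The post-processing of Eqs. (4) and (14) can be sketched as below (illustrative only: `free_energy` is a stand-in callable, whereas in the paper it is the optimized variational free energy, and the toy model used for the check is our own construction with its curvature peaked at T = 0.5 by design).

```python
import numpy as np

def susceptibility(free_energy, beta, lam, dlam=1e-3):
    """Second-order finite difference in lambda, Eq. (14)."""
    return (free_energy(beta, lam + dlam)
            + free_energy(beta, lam - dlam)
            - 2.0 * free_energy(beta, lam)) / dlam ** 2

def crossover_temperature(free_energy, lam, temperatures):
    """Crossover point = temperature at which chi(T) peaks, Eq. (4)."""
    chis = [susceptibility(free_energy, 1.0 / T, lam) for T in temperatures]
    return temperatures[int(np.argmax(chis))]

# Analytic toy free energy whose d^2F/dlambda^2 is peaked at T = 0.5.
def toy_F(beta, lam):
    T = 1.0 / beta
    return lam ** 2 / (1.0 + (T - 0.5) ** 2)

Ts = np.linspace(0.1, 1.5, 57)
T_star = crossover_temperature(toy_F, lam=1.0, temperatures=Ts)
```

Repeating `crossover_temperature` over a grid of λ traces out the crossover line T(λ).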
As quantum simulation can only be performed on finite-size systems, one may conduct finite-size scaling [46]. In this work, however, we only make some preliminary investigations of the scaling behavior, without sophisticated finite-size scaling analysis, due to limited simulation capacity.

III. RESULTS

In this section, we present simulation results. The simulation is performed with the open-source package ProjectQ [47] on classical computers. We use BFGS, a gradient-based method, for the optimization. We also adopt a strategy to boost the optimization by exploiting the continuous dependence of the optimized variational parameters on the temperature [48,49]. Concretely, we first obtain optimized variational parameters for high-temperature thermal states, as these are comparatively easy to solve. The optimized parameters are then used as the initial parameters for the thermal state at the next, lower temperature. This procedure is repeated as the temperature is reduced toward zero. With this strategy, the thermal states across the whole phase diagram can be obtained more efficiently. We test the variational algorithm for preparing the thermal states of the Kitaev ring by comparing the optimized free energy with the exact one. First, the accuracy of the solved thermal states at different system sizes is investigated with increasing circuit depth, characterized by the number of blocks p in the unitary operator of Eq. (13). As seen in Fig. 4, with increasing p the optimized free energy converges toward the exact one. Moreover, a larger system requires a larger p to obtain an accurate free energy. The demand for more quantum resources when simulating larger quantum systems at finite temperature is expected, but the exact scaling of quantum resources with system size is still an open question.
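The warm-start strategy described above can be sketched as follows (an assumed interface, not the paper's code; a toy finite-difference gradient descent stands in for the BFGS optimizer, and the toy free energy, whose minimum drifts with T, is our own example):

```python
import numpy as np

def descend(f, x, steps=200, lr=0.1, h=1e-5):
    """Minimize f by finite-difference gradient descent (toy optimizer)."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                         for e in np.eye(x.size)])
        x -= lr * grad
    return x

def sweep_temperatures(free_energy, x0, temps_high_to_low):
    """Sweep T downward, warm-starting each solve from the previous one."""
    results, x = {}, np.asarray(x0, dtype=float)
    for T in temps_high_to_low:
        x = descend(lambda p: free_energy(p, T), x)   # warm start
        results[T] = x.copy()
    return results

# Toy free energy with minimum at x* = (T, -T), drifting smoothly with T.
toy_F = lambda p, T: (p[0] - T) ** 2 + (p[1] + T) ** 2
res = sweep_temperatures(toy_F, x0=[2.0, 2.0],
                         temps_high_to_low=[2.0, 1.0, 0.5])
```

Because the minimizer moves continuously with T, each warm start lands close to the next optimum, which is the point of the continuation strategy.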
For the transverse-field Ising model, it has been argued and numerically verified that the critical point requires a depth O(N) to prepare the ground state with the HVA [33]. For the multi-angle HVA, how the required p grows with the system size N still awaits investigation. Second, the accuracy of the free energy at different temperatures is shown in Fig. 5. We choose p = 5 for all cases. For different sizes (N = 3, 4, 5) and different h (h = 0.9, 1.1), the accuracy is good over the whole temperature range. We then turn to the first goal of simulating the quantum critical regime: identifying the temperature crossover line. We choose the Kitaev model at N = 3 as a minimal model to demonstrate the temperature crossover. The first step is to calculate the susceptibility χ(β, λ) according to the difference scheme in Eq. (14), with δλ = 0.001. As shown in Fig. 6a, χ(β, λ) is calculated for each given λ at different temperatures. The χ ∼ T curve for each λ is peaked at a temperature, which is identified as the temperature crossover point. Based on Fig. 6a, the temperature crossover line can be obtained. As shown in Fig. 6b, the temperature crossover line obtained with the VQA (red dots) fits well with the exact one (red line). The second goal is to investigate the scaling behavior in the quantum critical regime. At this stage, the simulation is limited to a very small size; we choose N = 6. This allows the spacing n in the spatial correlation function R(n) to take n = 1, 2, 3, which is used for fitting the function $R(n) = a e^{-n/\xi}$ with three data points. For a finite-size system, the largest time t in the dynamical correlation function C(t) should be chosen so that C(t) does not oscillate. As t can be continuous, a large number of data points is available to fit $C(t) = b e^{-t/\tau}$. With this in mind, R(n) and C(t) are measured for different T and λ, as shown in Fig. 7a and Fig. 7c, respectively. Then, the correlation length and the phase coherence time are fitted from R(n) and C(t), respectively. In Fig. 7b and Fig.
7d, the relations ξ ∼ T and τ ∼ T are presented. It can be seen that both are proportional to $T^{-1}$ in an intermediate temperature regime. There are some mismatches at low temperatures. Remarkably, at λ = 1 the correlation length is expected to diverge, while the numerical simulation shows it to be almost flat; this can be attributed to the finite-size effect. On the other hand, the dramatic increase (decrease) of ξ for h = 0.95 is qualitatively consistent with theory. This corresponds to the semi-classical regimes, where ξ should grow exponentially with $T^{-1}$ for λ < 1. Similarly, the fact that the phase coherence time ceases to increase with decreasing temperature may be due to the finite size. A deviation from the relations $\xi \propto T^{-1}$ and $\tau \propto T^{-1}$ is also observed at high temperature. This can be explained by noting that the high-temperature regime T > J = 1 is governed by the lattice cutoff, so no universal behavior can be expected. The above discussion suggests that the scaling behavior can be captured in an intermediate temperature regime.

IV. CONCLUSIONS

In summary, we have proposed a variational quantum computing approach for simulating the quantum critical regime, using the Kitaev ring as a prototype model, by investigating the temperature crossover and the scaling behavior. The variational quantum algorithm adopts an ansatz for which the free energy can be obtained without the difficulty of measuring the entropy. By numerical simulation, we have shown that the variational quantum algorithm can identify the temperature crossover accurately. Moreover, we have shown that both the correlation length and the phase coherence time are proportional to the inverse temperature in an intermediate temperature regime. Our work paves the way for simulating finite-temperature critical systems on a quantum computer.

FIG. 1. Illustration of the quantum critical regime. At zero temperature T = 0, there is a phase transition point λ = λc.
By comparing the energy gap ∆ with the temperature T, the whole phase diagram can be divided into the quantum critical regime and the semi-classical regimes, separated by the temperature crossover lines (red lines).

FIG. 3. Quantum circuit for computing the dynamical correlation function Tr[ρ(β)σxi(t)σxi]. The first qubit, initialized in |0⟩, is an ancillary qubit.

FIG. 4. Numerical simulation results (scatters) for the free energy compared with exact ones (lines) as the number of blocks p in the quantum circuit increases.

FIG. 5. Numerical simulation results (dots) for the free energy compared with exact ones (lines) with increasing temperature. The number of blocks is p = 5.

FIG. 6. Locating the quantum critical regime by identifying the temperature crossover line. (a) For a given λ, the susceptibility χ is evaluated at varied temperatures (only several λ are shown); (b) the temperature crossover point for each λ corresponds to the temperature at which χ is peaked in (a). In all panels, scatters are simulation results and lines are exact results.

FIG. 7. Dependence of the correlation length and the phase coherence time on temperature. (a) and (c) show the spatial correlation function R versus the spatial separation n and the dynamical correlation function C versus the time t, respectively; only results for T = 0.1, 0.5, 1.5 are shown. (b) shows the dependence of the correlation length ξ on T and (d) shows the dependence of the phase coherence time τ on T. In all panels, simulation results (scatters) are compared with exact results (lines).

References

[1] S. Sachdev, Quantum Phase Transitions (Cambridge University Press, Cambridge, England, 1999).
[2] P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. 78, 17 (2006).
[3] H. Meyer-Ortmanns, Rev. Mod. Phys. 68, 473 (1996).
[4] M. Stephanov, K. Rajagopal, and E. Shuryak, Phys. Rev. Lett. 81, 4816 (1998).
[5] M. Troyer and U.-J. Wiese, Phys. Rev. Lett. 94, 170201 (2005).
[6] R. Barends, L. Lamata, J. Kelly, L. García-Álvarez, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, I. C. Hoi, C. Neill, P. J. J. O'Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, E. Solano, and J. M. Martinis, Nature Communications 6, 7654 (2015).
[7] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, Nature 551, 579 (2017).
[8] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Nature 549, 242 (2017).
[9] J. Zhang, G. Pagano, P. W. Hess, A. Kyprianidis, P. Becker, H. Kaplan, A. V. Gorshkov, Z. X. Gong, and C. Monroe, Nature 551, 601 (2017).
[10] B. Yang, H. Sun, R. Ott, H.-Y. Wang, T. V. Zache, J. C. Halimeh, Z.-S. Yuan, P. Hauke, and J.-W. Pan, Nature 587, 392 (2020).
[11] B. M. Terhal and D. P. DiVincenzo, Phys. Rev. A 61, 022301 (2000).
[12] D. Poulin and P. Wocjan, Phys. Rev. Lett. 103, 220502 (2009).
[13] K. Temme, T. J. Osborne, K. G. Vollbrecht, D. Poulin, and F. Verstraete, Nature 471, 87 (2011).
[14] A. Riera, C. Gogolin, and J. Eisert, Phys. Rev. Lett. 108, 080402 (2012).
[15] J. Wu and T. H. Hsieh, Phys. Rev. Lett. 123, 220502 (2019).
[16] G. Verdon, J. Marks, S. Nanda, S. Leichenauer, and J. Hidary, Quantum Hamiltonian-based models and the variational quantum thermalizer algorithm, arXiv:1910.02071 (2019).
[17] J.-G. Liu, L. Mao, P. Zhang, and L. Wang, Machine Learning: Science and Technology 2, 025011 (2021).
[18] A. N. Chowdhury, G. H. Low, and N. Wiebe, A variational quantum algorithm for preparing quantum Gibbs states, arXiv:2002.00055 (2020).
[19] Y. Wang, G. Li, and X. Wang, Phys. Rev. Appl.
16, 054035 (2021).
[20] D. Zhu, S. Johri, N. M. Linke, K. A. Landsman, C. H. Alderete, N. H. Nguyen, A. Y. Matsuura, T. H. Hsieh, and C. Monroe, Proceedings of the National Academy of Sciences 117, 25402 (2020).
[21] D.-B. Zhang, G.-Q. Zhang, Z.-Y. Xue, S.-L. Zhu, and Z. Wang, Phys. Rev. Lett. 127, 020502 (2021).
[22] X.-D. Xie, X. Guo, H. Xing, Z.-Y. Xue, D.-B. Zhang, and S.-L. Zhu (QuNu Collaboration), Phys. Rev. D 106, 054509 (2022).
[23] S. McArdle, T. Jones, S. Endo, Y. Li, S. C. Benjamin, and X. Yuan, npj Quantum Information 5, 75 (2019).
[24] J. Preskill, Quantum 2, 79 (2018).
[25] H. K. Lau, R. Pooser, G. Siopsis, and C. Weedbrook, Phys. Rev. Lett. 118, 080501 (2017).
[26] D.-B. Zhang, S.-L. Zhu, and Z. Wang, Phys. Rev. Lett. 124, 010506 (2020).
[27] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, Nature Reviews Physics 3, 625 (2021).
[28] S. McArdle, S. Endo, A. Aspuru-Guzik, S. C. Benjamin, and X. Yuan, Rev. Mod. Phys. 92, 015003 (2020).
[29] A. Y. Kitaev, Physics-Uspekhi 44, 131 (2001).
[30] M. Kardar, Statistical Physics of Particles (Cambridge University Press, 2007).
[31] M. Kliesch, C. Gogolin, M. Kastoryano, A. Riera, and J. Eisert, Phys. Rev. X 4, 031019 (2014).
[32] T. Kuwahara, A. M. Alhambra, and A. Anshu, Phys. Rev. X 11, 011047 (2021).
[33] W. W. Ho and T. H. Hsieh, SciPost Phys. 6, 029 (2019).
[34] I. Klich, G. Refael, and A. Silva, Phys. Rev. A 74, 032306 (2006).
[35] R. Islam, R. Ma, P. M. Preiss, M. Eric Tai, A. Lukin, M. Rispoli, and M. Greiner, Nature 528, 77 (2015).
[36] T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon, P. Zoller, R. Blatt, and C. F. Roos, Science 364, 260 (2019).
[37] K. M. Audenaert, J. Phys. A: Math. Theor. 40, 8127 (2007).
[38] J. Acharya, I. Issa, N. V. Shende, and A. B. Wagner, in 2019 IEEE International Symposium on Information Theory (ISIT) (IEEE, 2019), pp. 3012-3016.
[39] J. Martyn and B. Swingle, Phys. Rev. A 100, 032107 (2019).
[40] J.-G. Liu, L.
Mao, P. Zhang, and L. Wang, Mach. Learn.: Sci. Technol. 2, 025011 (2021).
[41] B.-L. Chen and D.-B. Zhang, Chinese Physics Letters 40, 010303 (2023).
[42] R. Wiersema, C. Zhou, Y. de Sereville, J. F. Carrasquilla, Y. B. Kim, and H. Yuen, PRX Quantum 1, 020319 (2020).
[43] S. Hadfield, Z. Wang, B. O'Gorman, E. G. Rieffel, D. Venturelli, and R. Biswas, Algorithms 12, 34 (2017).
[44] J. S. Pedernales, R. Di Candia, I. L. Egusquiza, J. Casanova, and E. Solano, Phys. Rev. Lett. 113, 020505 (2014).
[45] T. Li, X. Guo, W. K. Lai, X. Liu, E. Wang, H. Xing, D.-B. Zhang, and S.-L. Zhu (QuNu Collaboration), Phys. Rev. D 105, L111502 (2022).
[46] M. E. Fisher and M. N. Barber, Phys. Rev. Lett. 28, 1516 (1972).
[47] D. S. Steiger, T. Häner, and M. Troyer, Quantum 2, 49 (2018).
[48] D.-B. Zhang and T. Yin, Phys. Rev. A 101, 032311 (2020).
[49] Z.-H. Yuan, T. Yin, and D.-B. Zhang, Phys. Rev. A 103, 012413 (2021).
[arXiv:2112.12903; doi: 10.1002/adma.202201000]
Visualizing Atomically-Layered Magnetism in CrSBr

Daniel J. Rizzo, Alexander S. McLeod, Caitlin Carnahan, Evan J. Telford, Avalon H. Dismukes, Ren A. Wiscons, Yinan Dong, Colin Nuckolls, Cory R. Dean, Abhay N. Pasupathy, Xavier Roy, Di Xiao, and D. N. Basov

Department of Physics, Columbia University, New York, NY 10027, USA; Department of Chemistry, Columbia University, New York, NY 10027, USA; School of Physics and Astronomy and Department of Chemistry, University of Minnesota, Minneapolis, MN 55455, USA; Amherst College, Amherst, MA 01002, USA; Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Material Science and Engineering, University of Washington, Seattle, WA 98195, USA

*Correspondence to: [email protected]

Two-dimensional (2D) materials can host stable, long-range magnetic phases in the presence of underlying magnetic anisotropy. The ability to realize the full potential of 2D magnets necessitates systematic investigation of the role of individual atomic layers and nanoscale inhomogeneity (i.e., strain) on the emergence and stability of both intra- and interlayer magnetic phases. Here, we report multifaceted spatially dependent magnetism in few-layer CrSBr using magnetic force microscopy (MFM) and Monte Carlo-based magnetic simulations.
We perform nanoscale visualization of the magnetic sheet susceptibility from raw MFM data and force-distance curves, revealing a characteristic onset of both intra- and interlayer magnetic correlations as a function of temperature and layer thickness. We demonstrate that the presence of a single uncompensated layer in odd-layer terraces significantly reduces the stability of the low-temperature antiferromagnetic (AFM) phase and gives rise to multiple coexisting magnetic ground states at temperatures close to the bulk Néel temperature (TN). Furthermore, the AFM phase can be reliably suppressed using modest fields (~300 Oe) from the MFM probe, which thus behaves as a nanoscale magnetic switch. Our prototypical study of few-layer CrSBr demonstrates the critical role of layer parity in field-tunable 2D magnetism and provides vital design criteria for future nanoscale magnetic devices. Moreover, we provide a roadmap for using MFM for nanomagnetometry of 2D materials, despite the ubiquitous absence of bulk zero-field magnetism in magnetized sheets.

Introduction

The advent of two-dimensional (2D) materials made accessible through exfoliation of layered van der Waals (vdW) crystals has generated an explosion of research into atomically thin sheets possessing a wide range of physical properties 1,2. Until recently 3-5, the ability to synthesize and characterize 2D materials with long-range magnetic order was notably absent. Such materials hold great promise for potential applications in spintronics 6-11, topological superconductivity 12,13, vdW heterostructures 14,15, and memory storage 16,17. Hypothetical 2D magnets defy the Mermin-Wagner theorem 18, which states that long-range magnetic order cannot exist at finite temperature within the 2D isotropic Heisenberg model 19.
Hence, magnetic anisotropy is a necessary prerequisite to realizing stable 2D magnetism, and has been demonstrated to exist in several layered materials over the last few years 3,4,20-26. Among these, the chromium trihalides CrX3 (X = Cl, Br, I) have been the most extensively studied, all of which display in-plane ferromagnetic (FM) order and either FM or antiferromagnetic (AFM) interlayer coupling depending on the choice of halide and crystal thickness 27. Despite their extensive characterization, chromium trihalides suffer from a lack of air and moisture stability and possess relatively low transition temperatures, limiting their applications. On the other hand, CrSBr has emerged 28-31 as an air-stable layered magnetic material possessing A-type AFM ordering at comparatively high temperatures (TN ≈ 132 K). A variety of analytical tools have been used to characterize CrSBr, including magnetometry 32,33, magneto-transport 32,33, Raman spectroscopy 32-34, photoluminescence (PL) 33-35, calorimetry 34, and second-harmonic generation (SHG) 34. Using these approaches, it has been suggested that intralayer FM correlations associated with spins on Cr orbitals give rise to short-range b-axis-polarized FM domains in each layer below a critical temperature of Tc ≈ 160 K (i.e., well above TN). This phase is dubbed the intermediate FM (iFM) phase 34, and eventually gives way to the low-temperature AFM phase as interlayer correlations emerge for T < TN. Reasoning based on c-axis spin-wave confinement suggests that the difference between Tc and TN decreases as the crystal thickness approaches the few-layer limit, and ultimately vanishes in the case of a single FM monolayer 34. Theoretical modeling indicates that CrSBr has a fully spin-polarized frontier band structure 28-30, giving rise to strong coupling between excitonic transitions and magnetic order 35.
Finally, the low temperature AFM phase of CrSBr is sufficiently robust to induce proximal magnetism in bilayer graphene which subsequently displays spin-polarized conductivity and a spin-dependent Seebeck effect 36 . Despite these promising initial investigations, vital characterization of the nanoscale spatial-and layer-dependence of emergent magnetic ordering in CrSBr near its transition temperatures remains largely unexplored. There are a variety of experimental techniques that have been used to map real-space magnetic features in 2D magnets, including scanning magneto-optic Kerr microscopy 3,4,10,22,23 , spin-polarized scanning tunneling microscopy (sp-STM) 37 , scanning single-spin magnetometry 38 , and magnetic force microscopy (MFM) 39,40 . While measurement of the magneto-optic Kerr effect (MOKE) can provide quantitative information about the magnetic properties of a sample (such as transition temperatures and magnetization), minimum resolvable feature sizes are diffraction limited. On the other hand, sp-STM can provide atomically resolved images of the local spin density of states, but does not directly measure magnetic fields and has a limited penetration depth in multilayer samples. While scanning single-spin magnetometry is a highly sensitive, quantitative probe of magnetism down to the monolayer limit, it is technically challenging and time consuming. MFM represents an optimal compromise among these approaches, providing a suitably high lateral spatial resolution (≥ 20 nm) to resolve a wide range of magnetic textures [41][42][43][44] while also enabling quantitative estimates of intrinsic magnetic properties of materials with judicious modeling of probe and sample properties [45][46][47][48] . While MFM is most often used to resolve magnetic fields originating from FM materials, it can also be used to resolve magnetic fields associated with induced, stray or residual fields in AFM materials [49][50][51][52] . 
Despite the manifest utility of MFM for investigating magnetic behavior, its application to 2D materials has so far been minimal, being applied only to relatively thick layered magnetic materials that are effectively in their bulk magnetic state. In our study, we employed a home-built cryogenic MFM to characterize the temperature evolution of layer-dependent magnetic contrast in few-layer CrSBr ranging in thickness from two to six atomic layers (2L - 6L). MFM directly detects nano-resolved variations ("contrast") in magnetic force acting between a magnetic sample and the tip of a magnetized cantilevered probe by recording frequency shifts Δf ∝ ∂Fz/∂z of the cantilever resonance versus probe position (with z the surface-normal direction; further discussion in the Supporting Information, SI). These measurements were conducted at temperatures ranging from well above Tc to well below TN using magnetic probes nominally magnetized to either the in-plane easy axis or the out-of-plane hard axis (b- and c-axes in Fig. 1A, respectively), affording sensitivity to fields along these axes that arise from sample magnetization.

Nanoscale Magnetic Imaging of CrSBr

Single crystals of CrSBr were grown by chemical vapor transport 33,54 and structurally characterized by single crystal X-ray diffraction (SCXRD) (Fig. 1A). 33 Few-layer flakes were isolated on SiO2/Si chips using standard mechanical exfoliation techniques 2 . Their layer thicknesses were determined by topographic imaging with an atomic force microscope (AFM) (Fig. 1B). For MFM experiments, a flake with the largest distribution of layer-thicknesses was chosen to provide simultaneous collection of the MFM response from several different terraces. MFM probes with a nominal remanent magnetism of 300 emu/cm 3 were first coerced using NdFeB magnets with poles aligned to either the out-of-plane or in-plane direction (i.e., the CrSBr c-axis and b-axis, respectively).
By using MFM probes that are nominally aligned to either the hard out-of-plane or easy in-plane axes of CrSBr, we gain a multifaceted view of its magnetic response. On the one hand, the hard c-axis polarized probe should be sensitive to stray fields associated with the low-temperature AFM phase as well as induced magnetism caused by residual fields from the MFM probe (i.e., c-axis susceptibility). On the other hand, the easy b-axis polarized probe should also be sensitive to stray fields and b-axis susceptibility, while providing additional sensitivity to any residual zero-field magnetism in the low temperature AFM phase. When the latter arises from a sheet of magnetic dipoles aligned to the in-plane b-axis, no magnetic force is expected directly above the interior of the sheet. The MFM maps of Fig. 2A are sorted by layer number and shown in Fig. 2B. A careful examination of these four characteristic maps yields several notable observations. In the high temperature PM phase, the MFM contrast is completely uniform across the entire field of view, indicative of a non-magnetic state. For T ≈ Tc, a highly inhomogeneous magnetic contrast emerges with diffuse boundaries between bright (small or negligible force) and dark (large attractive force) regions. We also observe an average contrast that increases with layer number. As the temperature decreases to T ≈ TN, the dark contrast phase begins to shrink and is largely sequestered to odd-layer regions. Meanwhile, a bright contrast phase is observed on all layer-thicknesses that completely envelops even-layered terraces. In contrast to the behavior of the magnetic contrast near Tc, the boundaries between bright and dark regions are quite sharp and depend on the scanning direction (Fig. S3), indicating a nontrivial influence of the magnetic tip on the stability of the attractive, dark contrast phase. Notably, bright contrast regions at T ≈ Tc are correlated spatially with dark contrast regions at T ≈ TN, indicating a variation of the magnetic transition temperatures with position that supersedes layer number (Fig.
S4). Given that strain has been shown to significantly influence the stability of magnetic phases in CrSBr 53 , we speculate that this inhomogeneous spatial modulation of transition temperatures arises due to local strain induced during the exfoliation process. The MFM map at T < TN shows spatial dependence to the contrast in the AFM phase of CrSBr that is more subtle than that of the iFM phase. The most obvious features are observed at step edges, where the polarity of the MFM contrast switches on the +b side of a given terrace compared to the -b side (Fig. S5). This polarization behaves differently for odd layers compared to even layers (i.e., fringing fields on the -b side of even (odd) terraces are repulsive (attractive) while fields on the +b side are attractive (repulsive)). Such features are most prominent for step edges running perpendicular to the b-axis. In addition, step edges that are two layers thick (e.g., the step edge between 3L and 5L) have essentially no contrast compared to single-layer steps. We interpret this contrast as arising from the presence of stray fields in the top-most layer of CrSBr that is aligned either parallel (even layer) or antiparallel (odd layer) to the in-plane easy axis (b-axis) and is intuitively suppressed by cancellation of fields for integral "bilayers". The observed directional, layer parity, and step-height dependence of the MFM contrast on CrSBr step edges is consistent with the expectations from an interlayer-AFM ground state. Simultaneously, we observe a clear dependence of the interior MFM contrast on layer number in the low temperature AFM state. Notably, the addition of an odd layer increments the detectable magnetic force whereas the addition of an even layer has little effect. Hence, 3L and 4L terraces appear to have similar relative contrast, as do 5L and 6L regions. 
As mentioned, since a uniformly magnetized plane is not expected to give rise to detectable magnetic force directly above the sheet (i.e., within the interior of a layer), one would intuitively expect little to no dependence of the MFM contrast with layer number. The overall increase of the MFM signal with layer number contradicts this naïve expectation. Hence, a magnetic response arising solely from zero-field magnetization is insufficient to explain our observations in the low temperature AFM phase of CrSBr as detected by a b-axis magnetic probe. To clarify these findings, we repeated our temperature-dependent study using an MFM probe polarized along the c-axis (Figs. 2C, D). Furthermore, the temperature-dependences of the MFM contrast for all layer thicknesses greater than 2L follow a trend that resembles the b-axis and c-axis susceptibilities measured previously for bulk CrSBr using volume-averaged magnetometry 32,33 . Taken together, the totality of the MFM data suggests that a clear understanding of the magnetic behavior of few-layer CrSBr demands an account of both zero-field magnetism and dynamically induced magnetism created by fields from our magnetic probe (i.e., susceptibility).

Extracting the Layer Susceptibility of CrSBr

We performed a series of experiments that explore the influence of stray magnetic fields from the MFM probe on the magnetic response of few-layer CrSBr in order to address the role of susceptibility in the observed MFM contrast. We use a c-axis polarized probe to minimize the influence of zero-field magnetism on the measured signal. This was achieved by collecting a grid of magnetic force-distance ("approach") curves over regions of the sample possessing both dark and bright contrast phases (i.e., both PM and iFM phases at T < Tc or both iFM and AFM phases for T ≈ TN) (Figs. 3A, F).
Qualitatively, one might expect nano-scale regions of the sample with a high (positive) magnetic susceptibility to produce an attractive magnetic response increasing strongly with decreasing probe-sample distance, manifesting in MFM approach curves by Δf(h) < 0 decreasing rapidly with decreasing h. Quantitatively, we adapt a pseudo-pole description 45 of the magnetic interaction between our realistic probe geometry and the planar few-layer sample to reproduce our approach curves. This method allows quantitatively robust extraction of the local c-axis magnetic sheet susceptibility (χc2D) as a single fitting parameter from the associated approach curve. Here, the sheet susceptibility χc2D is derived from an area-normalized magnetization and thus has dimensions of length (see Supplementary discussion for complete description). We find that the spatial dependence of χc2D largely accounts for the layer-dependence of the MFM contrast. The contrasting behavior of the two approach curves indicates that the iFM phase has a higher value of χc2D than the AFM phase (consistent with the behavior of the bulk susceptibility 32,33 ). Note that a subset of the approach curves collected over the dark regions in Fig. 3G show a rapid switch at some fixed height above the sample surface (Fig. 3F, purple curve). Such curves initially behave like the AFM approach curve at large tip-sample separations, then undergo a discrete jump. After the jump, an approach profile characteristic of the iFM phase is observed. The difference between pre- and post-switching behavior can be seen clearly when extracting the best-fit values of χc2D for the purple approach curve in Fig. 3F, isolating for regions before and after the switch (Fig. S6B). The values of χc2D for the pre- and post-jump regions of the purple curve are 8 nm and 17 nm, respectively (Fig. S6B). These values are consistent with the values of χc2D derived for the red (5 nm) and blue (14 nm) curves, respectively (Fig. 3F).
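The single-parameter extraction described above can be mimicked with a toy fit. The sketch below generates a synthetic approach curve from a hypothetical model in which the reflectance of the sheet saturates at momenta above 1/λ (so that the length λ controls the curve's shape, not just its amplitude), then recovers λ by a simple least-squares scan. The saturating reflectance, the coarse grid search (standing in for the Levenberg-Marquardt routine described in the SI), and all parameter values are illustrative assumptions, not the pseudo-pole model actually used in this work.

```python
import math

R0 = 20.0        # probe radius of curvature, nm (illustrative)
PREF = 1.0       # fixed probe-dependent pre-factor (absorbs f0, k, magnetization, ...)

def simpson(f, a, b, n=800):
    """Composite Simpson quadrature over [a, b] with n (even) intervals."""
    step = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * step)
    return total * step / 3.0

def model_delta_f(h, lam):
    """Toy Delta-f(h; lambda): reflectance R(q) = q*lam / (1 + q*lam) saturates
    above q ~ 1/lam, so lambda controls the *shape* of the approach curve."""
    qmax = 25.0 / (R0 + h)   # integrand negligible beyond this momentum
    def g(q):
        return (q * lam / (1.0 + q * lam)) * math.exp(-2.0 * q * (R0 + h))
    return -PREF * simpson(g, 0.0, qmax)

# Synthetic "measured" approach curve with a known ground-truth lambda (nm).
LAM_TRUE = 15.0
heights = [20.0, 40.0, 60.0, 100.0, 160.0]
data = [model_delta_f(h, LAM_TRUE) for h in heights]

def sse(lam):
    """Sum of squared residuals between model and synthetic data."""
    return sum((model_delta_f(h, lam) - d) ** 2 for h, d in zip(heights, data))

# Coarse grid scan over candidate lambda values in place of Levenberg-Marquardt.
best_lam = min((0.5 * k for k in range(2, 81)), key=sse)
```

With noise-free synthetic data the scan lands exactly on the ground-truth λ; with real curves the residual surface sets the precision of the extracted sheet susceptibility.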
This suggests that certain regions of the CrSBr flake that are initially in the AFM state can be coerced back into the iFM state through application of small out-of-plane fields ranging from ~300-460 Oe (i.e., the residual magnetic field coming from the MFM probe, see Fig. S6). The field- and spatial-dependence of the AFM-iFM magnetic switching is revealed by the height-dependent MFM maps derived from approach curves (Fig. 3J). These show relatively uniform MFM contrast at large tip heights (h = 200 nm). Regions with large χc2D (that do not switch) show MFM contrast that increases much more rapidly with a decrease in tip height than those with small χc2D. At the same time, a dark contrast phase suddenly emerges at h = 150 nm that steadily grows larger in lateral dimension as the tip height is reduced to h = 50 nm. This process reflects the growth of the iFM phase out of an AFM ground state due to the increasing residual field of the MFM probe impinging on the CrSBr surface. Moreover, the threshold field for inducing this switch has considerable spatial dependence (Figs. 3I, S6C-E) which is likely the cause of the dependence of the dark contrast phase on the MFM scanning direction (Fig. S3). The differences in the stability of these two ground states are apparently small enough at this threshold temperature to induce switching between these states with relatively small coercive fields (as small as ~300 Oe) (Fig. S6). The AFM phase appears to be particularly unstable on odd-layer terraces, which constitute the majority of the "switching" areas observed on the sample. Thus, the stability of the AFM phase is reduced in the presence of one uncompensated AFM layer. The coexistence of multiple magnetic phases at temperatures close to bulk Tc and TN can be rationalized as a spatial dependence to these transition temperatures in the few-layer limit. To understand this, we study the evolution of the magnetic phase fractions as a function of layer number.
Using the MFM maps in Fig. S2, we compute the phase fraction as a function of temperature for the PM, iFM, and AFM phases for each layer thickness. The interpolated temperature at which the PM and iFM phase fractions are equivalent is treated as Tc, while the temperature at which the iFM and AFM fractions are equivalent is treated as TN. Fig. S7 shows the plot of Tc and TN as a function of layer number, revealing that Tc tends to increase with layer number while TN decreases, consistent with the prediction of ref. 34 . In addition, TN oscillates with layer number, being higher for even layers compared to odd layers. Therefore, in addition to the overall suppression of TN as more monolayers are added, the AFM phase is periodically destabilized for odd-layer terraces. We also note that there is an additional spatial dependence to the onset of the iFM and AFM phases within a given layer thickness that is consistent across multiple heating and cooling cycles (Fig. S4). Hence, layer-thickness alone cannot account for the observed spatial dependence of Tc and TN. Therefore, we observe multiple spatial dependences to the relative stability of the iFM versus AFM phase at T ≈ TN and the iFM versus PM phase at T < Tc. Layer-dependence to these magnetic transitions is primarily dictated by the presence (or absence) of one uncompensated layer in odd- (or even-) layered terraces, while a concomitant layer-independent spatial modulation to these transitions is also observed (possibly arising from strain).

Theoretical Modeling

In our Monte Carlo simulations (Section 4 of the SI), the magnetic susceptibility along an axis n is estimated from magnetization fluctuations as χn = ∂⟨Mn⟩/∂Bn = (β/N)[⟨Mn²⟩ − ⟨Mn⟩²], where β = 1/kBT, N = Lx × Ly × Lz is the total number of spins, and Mn is the total magnetization along the n-axis. The sheet susceptibility measured on the out-of-plane axis, χz, is shown in Fig. 4D, and mirrors the experimental c-axis polarized MFM signal shown in Fig. 4B. Again, both plots generally increase with layer number for all temperatures, and reach a maximum at the crossover temperature at which interlayer AFM correlations set in.
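As an illustration of how the fluctuation relation introduced above can be evaluated in practice, the toy Monte Carlo sketch below estimates χ from magnetization fluctuations of a small bilayer with FM intralayer and AFM interlayer couplings. This is a minimal sketch, not the simulation used in this work: the lattice size, coupling values, temperature, and the reduction of the anisotropic Heisenberg model to easy-axis Ising spins are all simplifying assumptions.

```python
import math
import random

# Easy-axis Ising spins s = +/-1 stand in for the full anisotropic Heisenberg model.
# J < 0 is ferromagnetic exchange (as in the SI convention); values are toy numbers.
J_INTRA = -1.0   # ferromagnetic in-plane exchange
J_INTER = 0.2    # antiferromagnetic interlayer coupling

def lattice_energy(s, Lx, Ly, Lz):
    """Total exchange energy H = sum over bonds of J * s_i * s_j (periodic in-plane)."""
    E = 0.0
    for z in range(Lz):
        for y in range(Ly):
            for x in range(Lx):
                E += J_INTRA * s[z][y][x] * s[z][y][(x + 1) % Lx]
                E += J_INTRA * s[z][y][x] * s[z][(y + 1) % Ly][x]
                if z + 1 < Lz:
                    E += J_INTER * s[z][y][x] * s[z + 1][y][x]
    return E

def local_field(s, x, y, z, Lx, Ly, Lz):
    """Exchange field at one site; flipping that spin changes the energy by -2*s*field."""
    f = J_INTRA * (s[z][y][(x + 1) % Lx] + s[z][y][(x - 1) % Lx]
                   + s[z][(y + 1) % Ly][x] + s[z][(y - 1) % Ly][x])
    if z + 1 < Lz:
        f += J_INTER * s[z + 1][y][x]
    if z > 0:
        f += J_INTER * s[z - 1][y][x]
    return f

def susceptibility(Lx=4, Ly=4, Lz=2, T=2.0, sweeps=1000, seed=1):
    """chi = (1/(N*T)) * (<M^2> - <M>^2) from Metropolis sampling of the toy model."""
    rng = random.Random(seed)
    N = Lx * Ly * Lz
    s = [[[rng.choice((-1, 1)) for _ in range(Lx)] for _ in range(Ly)]
         for _ in range(Lz)]
    m1 = m2 = 0.0
    kept = 0
    for sweep in range(sweeps):
        for _ in range(N):
            x, y, z = rng.randrange(Lx), rng.randrange(Ly), rng.randrange(Lz)
            dE = -2.0 * s[z][y][x] * local_field(s, x, y, z, Lx, Ly, Lz)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[z][y][x] *= -1
        if sweep >= sweeps // 2:          # discard the first half as burn-in
            M = sum(sum(sum(row) for row in layer) for layer in s)
            m1 += M
            m2 += M * M
            kept += 1
    m1 /= kept
    m2 /= kept
    return (m2 - m1 * m1) / (N * T)
```

With J_INTER > 0 the layer-alternating (AFM-stacked) configuration is the toy model's ground state, and the variance of the total magnetization directly yields the susceptibility estimator quoted in the text.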
In the low-temperature limit, χz exhibits a well-spaced dependence on layer number akin to the c-axis polarized MFM signal shown in Fig. 4B. Furthermore, χz develops a subtle grouping behavior near the crossover point at which interlayer correlations begin to dominate. Here, the susceptibility curves of 3L and 4L nearly overlap, as do the curves of 5L and 6L, a subtle feature that is also observed in the c-axis polarized MFM contrast just below TN. We note that a slight peak is observed in Fig. 4B. Therefore, we interpret our MFM data as acting primarily as a reporter of the local sheet susceptibility in CrSBr.

Outlook

We have performed a systematic temperature-dependent study of the magnetic properties of few-layer CrSBr using MFM. By employing probe tips magnetically polarized to both hard and easy axes of CrSBr, we were able to deconstruct the roles of stray, induced, and zero-field magnetism on the MFM contrast and correlate them to underlying magnetic ground states. This enables us to resolve the influence of single atomic layers on the stability of intrinsic and transient magnetic phases, revealing a systematic suppression of the AFM ground state in odd layers and in the presence of modest magnetic fields. Our approach further reveals highly inhomogeneous magnetic couplings that do not correlate with layer thickness and likely arise from subtle structural distortions and strain that arise during the growth and fabrication process. Our results show that MFM is a quantitative tool for performing nano-magnetometry on layered magnetic materials down to the 2D limit. This study provides a template for extracting figures of merit for future atomically-thin magnets and demonstrates the necessity of these spatially-resolved diagnostic tools for assessing the next generation of 2D magnetic materials.
Our susceptibility-sensitive analytic approach permits direct visualization of magnetic phase growth in layered antiferromagnets, despite such systems possessing small or vanishing bulk magnetic fields. Furthermore, we provide a straightforward method for probing the behavior of highly coercive magnetic ground states by exploiting the residual field from MFM tips, revealing a promising route to tailoring nanoscale magnetic switches through control of layer parity. The insights gained in this study both significantly expand our understanding of the behavior of few-layer CrSBr and provide a conceptual foundation for the quantitative use of MFM in atomically thin magnets.

Methods

Material Growth: CrSBr was synthesized using chemical vapor transport from Cr and S2Br2 precursors following established protocols 33,54 . The MFM contrast baseline in Figs. 2, 3, S1, S2, S3, S4, S5 and S6 is set as the largest value (i.e., the least negative value) within a given map. For MFM approach curve data, heights above the CrSBr surface were defined relative to the point of "hard contact" within a given approach curve (i.e., where Δf bottoms out).

Magnetic Force Monte-Carlo Simulations

Competing Interests

The authors declare no competing financial interests.

Data Availability

All data presented in the manuscript are available upon request.

Supporting Information Available

Supporting Information contains auxiliary MFM data collected with both b- and c-axis polarized tips, additional computations of the off-axis sheet susceptibility, derivation of approach curve modeling, and a detailed description of Monte Carlo simulations.

(purple) regions of the CrSBr near the Néel temperature (T = 127.5 K) as indicated by the crosshairs in (G). The black dashed and dotted curves correspond to the best-fit model pseudo-pole approach curve for the AFM and iFM phases, respectively.
The purple approach curve appears to follow a similar trajectory to that of the red curve (AFM phase) for large tip-sample distances followed by a discrete jump in the MFM contrast as the tip is brought closer to the surface and subsequently resembles the blue curve (iFM phase).

Figure S7. Layer-dependence of TN and Tc.
Figure S8. Theoretical off-axis sheet susceptibility.
Supplementary Discussion
Section 1. Magnetic transfer function of a magnetized probe
Section 2. Transfer function of a realistic probe geometry: the magnetized hyperboloid
Section 3: Force between a magnetic probe and a magnetized material
Section 4: Monte Carlo simulations of magnetism in CrSBr
References

(C) Plot of the spatial dependence of the MFM contrast in 3L to 4L (red curve), 4L to 5L (purple curve) and 5L to 6L (blue curve) step edges for both b-axis (solid curves) and c-axis (dashed curve) polarized probes. In all cases, a large MFM force is observed at the step edge, though the sign changes depending on the direction of the probe polarization. (D) Same as (C) but for a 3L to 5L region. The green curves show the spatial dependence of the MFM contrast when the 3L and 5L regions are laterally separated by a small 4L region, giving rise to large MFM contrast with opposite polarity on the two steps. The orange curves show the MFM contrast when the 3L and 5L regions are separated by a "double step", showing no significant contrast above the bulk values due to cancellation of stray fields from the two top-most layers. (E) Same as (C) but for a 5L to 6L region. Here, the cyan linecut runs parallel to the b-axis and intersects a 5L to 6L "step up" and a 6L to 5L "step down" showing dark contrast with opposite polarity.

Figure S6. Switching height map for T ≈ TN approach curves. (A) Tip-height dependent MFM maps extracted from approach curves collected with a c-axis polarized probe at T = 127.5 K (reproduced from Fig. 3J of the main manuscript).
Some areas present contrast characteristic of the AFM phase at large tip-heights, but undergo a discrete switching event at some height after which the dark contrast, iFM phase is observed. As a result, the region of the maps showing the iFM phase grows in size as it is stabilized by the tip B-field at smaller tip heights. (B) Solid purple curve: A characteristic "switching" experimental approach curve taken in the location indicated by the cross in (A). Dashed red curve: The best-fit model approach curve to the "pre-switch" region of the purple curve, yielding a value of the sheet susceptibility characteristic of the AFM phase. Dashed blue curve: The best-fit model approach curve to the "post-switch" region of the purple curve, yielding a value of the sheet susceptibility characteristic of the iFM phase. Solid green curve: Calculated B-field generated by a pseudo-pole tip with a radius of 20 nm, tip half angle of 25°, and remanent magnetism of 300 emu/cm 3 . (C) Spatial map of the switching height for regions that are observed to undergo AFM-to-iFM transitions with decreasing tip height. (D) Spatial map of the MFM contrast measured at the moment the associated region undergoes an AFM-to-iFM switch. It is evident that the iFM phase is more readily induced in odd-layer numbers (3L and 5L) compared to even layer numbers (2L and 4L). However, there is also a spatial dependence to the minimum field required to induce the iFM phase that does not depend on layer number alone (i.e., there is a broad range of switching heights and contrasts within the 3L region alone). (E) The calculated B-field shown in (B) impinging on the surface at the switching height plotted in (C).

limited interval −1 < u < ∞. Integrating Eq. (2) over this valid range of momenta yields φ(z) ∝ log(z/R) for z ≪ R. The associated magnetic field is described by Bz ∝ −∂φ/∂z ∝ 1/z, a so-called "pseudo-pole" field 1 .
As described in the next section, this case is directly relevant to a physical model for the realistically extended magnetic force microscopy probe used in the present experiments.

Section 2. Transfer function of a realistic probe geometry: the magnetized hyperboloid

A semi-infinite magnetic probe of conical geometry with an opening half-angle of θ can be approximated by a hyperboloid of revolution with a radius of curvature R at the apex. The transfer function for such a hyperboloid with metallic coating (thickness t) magnetized in the z-direction (with magnetic moment per unit volume M) can be deduced approximately as follows. We parameterize the inner and outer surfaces of the probe's magnetic coating by functions z±(r), which denote lower and upper height coordinates corresponding to a single radial coordinate r, where r = 0 defines the probe (z-)axis. For simplicity, we will assume z+(r) ≈ z−(r) + t. Integrating the dipole distribution over this hyperboloidal profile, with the change of variable u ≡ z′/r in order to apply Eq. (2.12.10.7) of ref. 2 and defining a ≡ 1/cos θ, yields the transfer function of Eq. (5). We can draw several qualitative observations from Eq. (5). For a narrow hyperboloid the cotangent in parentheses dominates and, below an exponential cutoff in momentum q, we find Φ(q) ∝ q^−1 characteristic of a magnetic monopole, as one might obtain from a column of vertically stacked dipoles. The squared sine pre-factor reflects that the volume of the probe vanishes as θ → 0 and so also does its magnetic field. For an oblate hyperboloid θ → π/2, the cosine in parentheses dominates and carries the "pseudo-polar" contribution Φ(q) ∝ q^−2 as one might obtain from a column of vertically stacked monopoles 1 . By dimensional analysis, this feature is generic to a distribution of axial dipoles whose density grows in proportion to their distance, yielding a magnetic potential that is logarithmic in the proximal probe-sample distance h, and a magnetic field scaling unconventionally as h^−1.
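The "pseudo-pole" scaling invoked here can be checked with a few lines of arithmetic: a uniform vertical column of point monopoles produces an on-axis field that falls off as 1/z (rather than the 1/z² of a single monopole) for distances z much smaller than the column length. The sketch below is an illustrative numerical check in arbitrary units, not a model of the actual probe geometry.

```python
# On-axis field of a vertical column of unit point monopoles placed at heights
# s = 0.5, 1.5, ..., L - 0.5 above the column base (arbitrary units). Each
# monopole contributes 1/(z + s)^2 at a point a distance z below the base.

def column_field(z, L=10000):
    return sum(1.0 / (z + k + 0.5) ** 2 for k in range(L))

def single_monopole_field(z):
    return 1.0 / z ** 2

# For z << L the column field scales as ~1/z, dropping by ~10x per decade,
# while a single monopole drops by 100x per decade.
ratio_column = column_field(10) / column_field(100)
ratio_monopole = single_monopole_field(10) / single_monopole_field(100)
```

The discrete sum converges to the 1/z law once z exceeds the monopole spacing, mirroring how the extended probe coating softens the familiar point-pole distance dependence.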
The cosine vanishes in the limit θ = π/2, which matches expectation that the external field will vanish from a homogeneous planar surface of magnetic dipoles. Whereas the radius of curvature R (the probe "sharpness") is here held fixed irrespective of θ, the momentum cutoff nevertheless shows a surprising θ-dependence: As the oblate hyperboloid tends towards a plane at θ → π/2, the exponential cutoff softens at high momentum, a synthetic "sharpening" effect that counteracts the reduced transfer at high q from the "pseudo-pole". A compromise is reached between increasing momentum transfer (softening momentum cutoff) and the overall falling field strength (∝ cos θ) upon increasing half-angle up to (and not beyond) θ ≈ 1 rad, or about θ ≈ 60°. For such choice of conical angle, the oblate probe might in principle supply best-resolved spatial sensitivity to local magnetic fields. A conventional probe geometry such as that used in the present experiments is reasonably described by θ ≈ 30°, for which the q^−1 and q^−2 contributions to the probe's magnetic potential should be considered equally relevant for any quantitative analysis.

Section 3: Force between a magnetic probe and a magnetized material

We now consider the z-magnetized MFM probe as such an axisymmetric distribution of magnetization, as comprised by the magnetic coating over the probe surface. In the case where the probe's field magnetizes a planar sample (whose surface we now place at z = 0) with permeability μ, the induced magnetic field will produce a force on the probe. We can compute this force between the probe and sample by considering the total magnetic energy ℰB.
In the following, H = Hp + Hs is the total magnetic field summing contributions from the probe and sample, respectively, whereas B = H = −∇(φp + φs) for probe- and sample-derived magnetic potentials φp, φs, and h denotes the closest probe-sample separation distance:

(6) ℰB = (1/4π) ∫ H⋅B dV ⇒ F = −∂ℰB/∂h ∴ F = −(1/4π) ∂h ∫ μ |∇φ|² dV ≈ −(1/2π) ∂h ∫ μ ∇φp⋅∇φs dV ≈ (1/2π) ∂h (∫ (φs μ ∇²φp + φs ∇μ⋅∇φp) dV − ∮ φs μ ∇φp⋅dA)

The approximation involves discarding the h-independent contribution to energy associated with φp and excluding terms scaling as |∇φs|² through the assumption that |φs| ≪ |φp|. The last step applies integration by parts. The second term in Eq. (6) vanishes when the domain(s) of volume integration have piecewise homogeneous permeability. On the other hand, taking the volume integral over all space, the surface integral (element oriented "outward") vanishes at infinity, leaving ∇²φp = −4π ∇⋅M nonzero only within the probe's volume. Meanwhile, taking the region(s) where μ ≠ 1 to be "small" compared to variations in φp, φs, as when the sample is a thin magnetic layer, the second term in the volume integral likewise tends toward vanishing. Then, applying integration by parts twice in sequence over the half-space volume z ≥ 0 (including the probe) where μ = 1, we obtain:

(7) F ≈ −(1/2π) (∫Ω ∇φs⋅∇φp dV + ∫z=0 φs ẑ⋅∇φp dA) ≈ (1/2π) ∫z=0 ẑ⋅(φp ∇φs − φs ∇φp) dA ≈ (1/2π) ∫z=0 (φp ∂zφs − φs ∂zφp) dA

Here we have applied the defining feature of the response field: ∇⋅Hs = −∇²φs = 0. Thus, Eq. (7) evaluates the probe-sample force strictly through a surface integral at z = 0 in terms of the "driving" field and the response field "reflected" from the sample, and their surface-normal derivatives. The assumed axisymmetry allows an angular spectrum representation of fields in the probe-sample gap, including the vicinity of z = 0, via their Hankel transforms Φp,s(q):

(8) φp,s(r, z) = ∫0∞ dq q Φp,s(q) J0(qr) e∓qz (upper sign for the probe field, lower for the sample response)

Whereas Φp(q) is simply the probe transfer function, the assumed planar sample geometry also implies that Φs(q) = −R(q) Φp(q), for some momentum- and permeability-dependent "magnetic reflection coefficient" R. Combining Eqs.
(7) and (8) and utilizing the areal integral relation below yields:

(9) ∫z=0 J0(q′r) J0(qr) dA = (2π/q) δ(q − q′) ∴ F(h) ≈ −2π ∫0∞ dq q³ R(q) Φ(q)² e^−2qh

We can observe some basic phenomenology from Eq. (9) by assuming for demonstration a constant (momentum-independent) magnetic reflectance. The transfer function of an n-pole probe Φ(q) ∝ q^−(n+2) admits analytic evaluation of Eq. (9), yielding a force ∝ (2h)^2n, as one might expect from two n-poles interacting over a distance 2h. For the "pseudo-polar" case n = 0, the force is again logarithmic in the normalized distance 2h/R, with R the characteristic size of the probe. Note that for positive "reflectance" R(q), the probe-sample force F < 0 is attractive. The correspondence E = −∇φ ⇔ H = −∇φ between electrostatics and magnetostatics allows us to repurpose formulas for the electrostatic reflectance among layered media to obtain the corresponding magnetostatic reflectance simply by interchanging ε with μ. To wit, the magnetostatic reflectance of a single layer of non-unity permeability μ and finite thickness d is given by:

(10) R(q) = (r1 + r2 e^−2qd) / (1 + r1 r2 e^−2qd) with r1 = (μ − 1)/(μ + 1) and r2 = −r1

This form is easily obtained by the transfer matrix method, as applied commonly to electrostatics of layered media. Eq. (10) has the general characteristic of "transmitting" fields of momenta q ≪ d^−1 as R ∝ q and tends asymptotically to r1 for q ≫ d^−1. Therefore an n-pole probe will transition between two distinct power laws as h approaches d, whereas d sensitively controls the onset of this transition and thereby the overall shape of the force-distance curve, as well as the overall magnitude of the magnetic force. A magnetic probe interacting with a CrSBr layer only a few nanometers or less in thickness corresponds to the limit that q ≪ d^−1, in which case Eq. (10) simplifies to:

(11) R(q) ≈ −qλ with λ ≡ −(μ² − 1) d (2μ)^−1 where μ = 1 + 4πχ

It is clear that the factor λ indeed represents an emergent length scale and is referred to as the sheet susceptibility (χc2D) in the main manuscript.
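As a numerical sanity check on this limit, the sketch below evaluates the single-layer reflectance of Eq. (10) (with r2 = −r1) and compares it against the linearized thin-film form R(q) ≈ q(μ² − 1)d/(2μ) for qd ≪ 1, together with the saturation R → r1 for qd ≫ 1. The chosen μ, d, and q values are arbitrary illustrations.

```python
import math

def reflectance(q, mu, d):
    """Single-layer magnetostatic reflection coefficient, Eq. (10), with r2 = -r1."""
    r1 = (mu - 1.0) / (mu + 1.0)
    r2 = -r1
    e = math.exp(-2.0 * q * d)
    return (r1 + r2 * e) / (1.0 + r1 * r2 * e)

def thin_film_limit(q, mu, d):
    """Leading-order expansion of Eq. (10) for q*d << 1: R ~ q times a length scale."""
    return q * (mu ** 2 - 1.0) * d / (2.0 * mu)

mu, d = 1.5, 1.0
R_small = reflectance(1e-3, mu, d)   # deep thin-film regime, q*d << 1
R_big = reflectance(1e3, mu, d)      # thick / bulk-like regime, q*d >> 1
```

The two regimes bracket the crossover that an approach curve probes as the tip height sweeps the dominant momentum through 1/d.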
A thin magnetic film will polarize in response to fields confined below this length scale, while transmitting fields that are comparatively delocalized. Thus, for a prescribed magnetic probe geometry and momentum transfer function, the dimensionless ratio λ/R determines the shape of the force-distance curve with respect to the dimensionless distance h/R, with a secondary influence on the overall amplitude of the force, whereas factors like the probe magnetization M exclusively scale its amplitude. We consider Eq. (11) to faithfully describe the magnetic response of our few-layer CrSBr flakes. The quantity measured in our magnetic force microscopy experiments is the frequency shift Δf = f(h) − f0 of the probe's cantilever resonance relative to its unloaded frequency f0. For probe tapping amplitudes small relative to the length scale of magnetic features, this shift is proportional to the h-derivative of the magnetic force (Eq. (7)) 3 :

(12) Δf(h) ≈ −(f0/2k) ∂F/∂h

Here k is the spring constant of the probe's cantilever. Note that while the probe-sample force for a "pseudo-polar" probe is logarithmic in h with manifest dependence on the probe size R, Eq. (12) nevertheless reveals for this case that Δf ∝ 1/h, manifestly independent of R; in other words, Δf remains a local probe of magnetism. Therefore, the "magnetic approach curves" Δf(h) supplied by our magnetic force microscopy measurements encode local information on λ, and in principle thereby χ, through Eq. (11). We utilize the combination of Eqs. (11) and (12) together with the realistic probe transfer function Φ(q) presented in the previous section to synthesize the experimental curves shown in Figs. 3, S6. Fixed-order Gauss-Legendre quadrature is used to integrate Eq. (12) and to evaluate Δf at each coordinate h. The curves Δf(h) so obtained are then scaled to the dimensional probe-sample distances recorded in the experiment by supposing a reasonable probe radius of curvature R ≈ 20 nm.
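To illustrate how Eqs. (9), (11) and (12) combine into an approach curve, the sketch below integrates the force expression numerically for an idealized pseudo-pole probe, Φ(q) = q^−2 e^(−qR0), and a thin-film reflectance of magnitude R(q) = qλ. With these choices the integral is analytic, F(h) = −πλ/(R0 + h), which the quadrature should reproduce, and the resulting Δf(h) is negative (attractive) and grows in magnitude as the probe approaches. R0, λ, f0, and k are arbitrary illustrative values, and a simple Simpson rule stands in for the fixed-order Gauss-Legendre quadrature mentioned below.

```python
import math

R0, LAM = 20.0, 10.0   # probe size and sheet-susceptibility length, nm (illustrative)
F0, K = 75e3, 3.0      # cantilever frequency (Hz) and spring constant (N/m), illustrative

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature over [a, b] with n (even) intervals."""
    step = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * step)
    return total * step / 3.0

def force(h):
    """F(h) = -2*pi * int q^3 R(q) Phi(q)^2 e^(-2qh) dq with R = q*LAM and
    Phi = q^-2 e^(-q*R0); the integrand collapses to LAM * e^(-2q(R0+h))."""
    qmax = 20.0 / (R0 + h)   # integrand negligible beyond this momentum
    def g(q):
        return LAM * math.exp(-2.0 * q * (R0 + h))
    return -2.0 * math.pi * simpson(g, 0.0, qmax)

def delta_f(h, dh=1e-3):
    """Frequency shift, Eq. (12): df = -(f0/2k) dF/dh, via a central difference."""
    return -(F0 / (2.0 * K)) * (force(h + dh) - force(h - dh)) / (2.0 * dh)

def analytic(h):
    """Closed form of the same force integral: F(h) = -pi*LAM/(R0 + h)."""
    return -math.pi * LAM / (R0 + h)
```

The exponential cutoff supplied by the probe size R0 plays exactly the role attributed to it in the text: it regularizes the quadrature without any ad hoc "distance of closest approach".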
The introduction of an ad hoc "distance of closest approach" between the probe and sample, so commonplace among quantitative MFM analyses, is completely unnecessary in our formalism. The exponential momentum cutoff characteristic of the hyperboloid probe transfer function (Eq. (5)) naturally supplies such a mathematical feature with no need to impose any additional tuning parameter. We introduce an overall pre-factor to bring the computed curves Δf(h) into physical units, thus absorbing unknown factors like the thickness of the probe's magnetic layer, its precise magnetization density, and the cantilever spring constant, all of which should vary little over the course of our experiments. Curves of best fit in comparison to our experimental data are obtained by iterating evaluation of Eq. (12) with respect to C using a nonlinear least-squares routine (the Levenberg-Marquardt algorithm), while fixing the probe-dependent pre-factor to a constant value when fitting across a group of approach curves collected from an individual co-localized map. The value of the pre-factor for which curves of best fit minimize the average residual with respect to an entire group of experimental curves is the one we select for ultimate fits among that group. Empirically, we find that such a choice is unique and varies little between groups of fits. This outcome is expected, since the overall multiplication of force curves by the pre-factor is an effect linearly independent from the C-dependence, which rather controls their shape.

Section 4: Monte Carlo simulations of magnetism in CrSBr

In our model, monolayers of CrSBr are characterized by rectangular lattice structures of N_x × N_y spins, with N_z monolayers stacked and allowed to interact via antiferromagnetic interlayer couplings. The total number of spins is given by N = N_x × N_y × N_z. The Hamiltonian of this model may be expressed as H = H_intra + H_inter, where the first and second terms represent intralayer and interlayer interactions respectively.
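A concrete numerical sketch of such a stacked-lattice energy H = H_intra + H_inter is given below. The neighbor assignments and all coupling values here are illustrative placeholders, not the fitted CrSBr parameters detailed in the following paragraphs:

```python
import numpy as np

def total_energy(S, J1=-1.0, J2=-0.5, J3=-0.1, Ax=0.1, Az=-0.1, Jp1=0.012):
    """Energy of a classical Heisenberg model on an Nz x Nx x Ny lattice
    of unit spins, S.shape == (Nz, Nx, Ny, 3), periodic in-plane.
    J < 0 is ferromagnetic exchange; Ax > 0 promotes an easy x-axis."""
    E = 0.0
    # intralayer exchange: nearest (J1), diagonal (J2), and second
    # neighbors along each axis (J3) -- a simplified assignment for
    # illustration only
    for (J, shifts) in [(J1, [(1, 0), (0, 1)]),
                        (J2, [(1, 1), (1, -1)]),
                        (J3, [(2, 0), (0, 2)])]:
        for dx, dy in shifts:
            E += J * np.sum(S * np.roll(np.roll(S, dx, axis=1), dy, axis=2))
    # single-ion anisotropy terms
    E += -Ax * np.sum(S[..., 0]**2) - Az * np.sum(S[..., 2]**2)
    # interlayer coupling between vertically adjacent spins (Jp1 > 0: AFM)
    E += Jp1 * np.sum(S[:-1] * S[1:])
    return E

# sanity check: with Jp1 > 0, interlayer-AFM alignment is lower in energy
Nz, Nx, Ny = 2, 4, 4
S = np.zeros((Nz, Nx, Ny, 3)); S[..., 0] = 1.0   # all spins along +x
E_FM = total_energy(S)
S[1] *= -1                                        # flip the second layer
E_AFM = total_energy(S)
assert E_AFM < E_FM
```

Flipping an entire layer leaves the intralayer and anisotropy terms unchanged, so the energy difference isolates the interlayer term, mirroring the choice of a positive interlayer exchange to stabilize the antiferromagnetic stacking.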
The intralayer contribution is given by

H_intra = Σ_{⟨i,j⟩₁} J₁ S_i · S_j + Σ_{⟨i,j⟩₂} J₂ S_i · S_j + Σ_{⟨i,j⟩₃} J₃ S_i · S_j − Σ_i [A_x (S_i^x)² + A_z (S_i^z)²],

where the S_i are dimensionless, unit-length vectors representing the magnetic moment at site i and, therefore, the parameters are expressed in units of energy (meV). The sums corresponding to the anisotropic intralayer symmetric exchange parameters J₁, J₂, and J₃ (for which J < 0 represents ferromagnetic exchange) are taken over all nearest-, second nearest-, and third nearest-neighbors within the same layer. The parameters A_x and A_z represent single-ion magnetic anisotropy such that A > 0 promotes an easy axis. The interlayer contribution to the Hamiltonian is given by

H_inter = Σ_{⟨i,j⟩⊥,1} J_⊥,1 S_i · S_j + Σ_{⟨i,j⟩⊥,2} J_⊥,2 S_i · S_j,

where J_⊥,1 and J_⊥,2 are taken to be the isotropic interlayer exchange parameters between nearest- and second-nearest neighbors between layers. The values of our magnetic parameters are taken primarily from ref. 4, and we choose the interlayer exchange parameters J_⊥,1 = J_⊥,2 = 0.012 meV such that the low-temperature ground state exhibits antiferromagnetic alignment between neighboring layers. The easy axis of magnetization is found to be the x-axis, while the z-axis is the out-of-plane axis.

Many of the salient features of the magnetic texture observed with the b-axis probe are also present in the data collected with a probe polarized along the c-axis (Figs. 2C, D). These include: a uniform MFM contrast in the PM phase, the onset of a diffuse dark contrast (iFM) phase at T < Tc, the presence of abrupt boundaries between dark (iFM) and bright (AFM) contrast phases at T ≈ TN, and a bright contrast (AFM) phase at T < TN with high-contrast step edges. Despite these many similarities, there are several notable differences. For instance, the c-axis MFM map collected at T ≈ TN records several discrete levels of contrast among both odd and even layers, whereas the corresponding b-axis data is much more binary, with the iFM phase being restricted to odd-layer regions.
This is especially obvious in the 2L and 4L regions for the T ≈ TN maps, where the MFM contrast is far less uniform across the c-axis image compared to the b-axis image. Nevertheless, iFM and AFM regions can still be clearly distinguished in the c-axis T ≈ TN MFM map, with the iFM phase being primarily represented in odd-layer terraces. In the c-axis image at T < TN, the presence of high-contrast step edges is consistent with the interpretation that stray fields associated with the edge of an AFM layer possess a nontrivial out-of-plane component. These latter findings reinforce our interpretation that analogous features observed in b-axis images associate with fringing fields generated at the edge of single added monolayers (Fig. S5). Unlike the b-axis data, the overall bulk MFM contrast observed with the c-axis probe in the AFM phase appears to steadily increase with layer number, despite expectations that CrSBr generates no net out-of-plane fields for any layer thickness in the interlayer-AFM ground state. Moreover, an out-of-plane component to the zero-field magnetization is not expected in CrSBr at any temperature. Therefore, the rich temperature-dependent magnetic texture observed with the c-axis polarized probe prompts a similar conclusion to the b-axis data: zero-field magnetism alone fails to account for the magnetic response captured in our MFM study.

Figs. 4A and 4B (for the b- and c-axis data, respectively) present a consolidated view of the layer- and temperature-dependent MFM maps shown in Fig. 2. These figures plot the average contrast for each layer versus temperature, relative to the average contrast observed on the 2L terrace. Here the 2L response supplies a reference for all of our data that controls against transient effects within and between different MFM maps (including thermal drift in the cantilever resonant frequency; see Methods).
When presented in this manner, the MFM contrast evidently increases with layer number for T < Tc for both b- and c-axis polarized probes. We next examine MFM approach curves and the position-resolved collocated MFM contrast presented in Figs. 3B & G. The first set of approach curves was obtained at T < Tc (Figs. 3A-E). A high-resolution MFM map was first collected for reference (Fig. 3B), encompassing 2L, 3L, 4L and 5L regions. As observed in the large-scale image at the same temperature (Fig. 2D), a diffuse continuum of MFM contrast is observed between the iFM and PM phases. Isolating individual approach curves collected over the iFM and PM phases reveals clear differences in the associated characteristic length scale (Fig. 3A). The green curve is associated with the PM phase and the blue curve is associated with the iFM phase. The latter iFM curve has a much more rapid increase in MFM contrast with decreasing tip-sample distance than the prior PM phase, suggesting it possesses a higher value of C2D. Applying the pseudo-pole model fit to both approach curves provides quantitative verification of this expectation, with the iFM phase showing C2D = 57 nm and the PM phase showing C2D = 17 nm. Repeating this procedure on a 2D grid corresponding to the location in Fig. 3B yields a spatial map of C2D (Fig. 3C). When comparing Figs. 3B and C, the MFM map shows essentially a one-to-one correlation with the map of C2D. We also map the total integrated change in the MFM contrast within a given approach curve (Fig. 3D), which also acts as a qualitative proxy for C2D and correlates with the MFM map in Fig. 3B. Finally, we use the 2D grid of approach curves to extract a series of coarse-grained maps of the MFM signal at well-defined tip-sample separations (Fig. 3E). The MFM contrast is effectively uniform at the largest separations (h = 200 nm).
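The per-pixel extraction of C2D from an approach curve can be sketched as follows. For this sketch we assume a saturating reflectance r_M(q) = Cq/(1 + Cq) interpolating between the thin-film and opaque limits, so that C controls the shape of the curve, together with a pseudo-pole transfer function cut off at an assumed tip radius R = 20 nm; a coarse separable parameter scan stands in for the Levenberg-Marquardt routine of the actual analysis, and all numerical values are illustrative:

```python
import numpy as np

def model_curve(h, C, R=20e-9, npts=200):
    """Shape of the model approach curve for trial sheet susceptibility C
    (overall amplitude left free), using an assumed saturating reflectance
    r_M(q) = C q / (1 + C q) and a pseudo-pole probe with cutoff R."""
    x, w = np.polynomial.legendre.leggauss(npts)
    out = np.empty_like(h)
    for i, hi in enumerate(h):
        qmax = 30.0 / (hi + R)          # integrand decays as e^{-2q(h+R)}
        q = 0.5 * qmax * (x + 1.0)
        wq = 0.5 * qmax * w
        rM = C * q / (1.0 + C * q)
        out[i] = -np.sum(wq * rM * np.exp(-2.0 * q * (hi + R)))
    return out

# Synthetic "approach curve" with a known C (57 nm, the iFM value quoted
# above), then a separable least-squares scan over trial C values: the
# amplitude pre-factor is solved analytically at each step.
h = np.linspace(20e-9, 200e-9, 40)
data = 3.0 * model_curve(h, 57e-9)

def residual(C):
    m = model_curve(h, C)
    a = np.dot(m, data) / np.dot(m, m)   # best-fit amplitude for this C
    return np.sum((data - a * m) ** 2)

Cs = np.linspace(10e-9, 100e-9, 91)      # 1 nm trial grid
C_fit = Cs[np.argmin([residual(C) for C in Cs])]
assert abs(C_fit - 57e-9) < 0.5e-9
```

Because the amplitude enters linearly while C controls only the curve shape, solving the amplitude analytically at each trial C reduces the fit to a one-dimensional search, echoing the observation that the pre-factor and the C-dependence are linearly independent effects.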
As the tip height is reduced to h = 50 nm, regions with a larger value of C2D begin to develop MFM contrast much more rapidly than those determined to have a smaller value of C2D. Hence, the MFM approach curves conducted at T ≈ Tc provide a multifaceted view of the tip-sample interaction. Our data support the conclusion that the PM and iFM phases can be distinguished in MFM by virtue of their varying susceptibility, with the MFM probe acting as a nanoscale magnetometer. In order to explore the origin of MFM contrast between iFM and AFM magnetic phases, we repeated our approach curve analysis at T ≈ TN (Figs. 3F-J). A high-magnification MFM image of a region possessing 2L, 3L, 4L and 5L terraces is used as a reference for approach curves (Fig. 3G) (note that there is substantial overlap between this image and the region probed in Figs. 3A-E). Three representative approach curves are shown in Fig. 3F; one is collected with the tip held over the bright contrast AFM phase (red curve) and the other two are collected on the dark contrast iFM phase (purple and blue curves). The blue and red curves in Fig. 3F show that the iFM phase (blue) possesses a shorter characteristic length scale than the AFM phase (red). We continue our analysis of the dependence of the MFM contrast on the underlying sheet susceptibility by extracting C2D across the regions shown in Fig. 3G (using the "post-jump" value of C2D for regions that switch with height) and plot it in Fig. 3H. Once again, the map of C2D in Fig. 3H largely correlates with the MFM map in Fig. 3G. One exception is in the bottom right of the images, where the MFM contrast is high but the sheet susceptibility is low. This discrepancy is due to the fact that this region has "switched" from the AFM to iFM phase in Fig. 3G, but the "post-switch" region of the associated approach curve was too small to extract a reliable value of C2D.
This forces us to fit to the "pre-switch" region, which possesses a value of C2D more akin to the bright contrast AFM phase. The integrated MFM contrast (Fig. 3I) once again behaves qualitatively similarly to C2D and the MFM map. Hence, differences in sheet susceptibility permit us to resolve coexisting iFM and AFM phases at T ≈ TN, further validating the use of MFM as a nano-magnetometer. In order to further solidify the link between the MFM signal and the magnetic susceptibility, we use a Heisenberg spin lattice model in which calculations of the magnetic susceptibility may be performed directly. In our model, monolayers of CrSBr are characterized by rectangular lattices of N_x × N_y spins and N_z monolayers, featuring intralayer couplings in agreement with those reported in ref. 27, which are stacked and allowed to interact via antiferromagnetic interlayer couplings (see Methods and section 4 of the Supplementary discussion for more details). Here, we compute the sheet susceptibility along the principal x- (χ_x) and z-axes (χ_z) to provide a direct comparison to MFM data collected with b- and c-axis polarized tips, respectively. For example, χ_x is proportional to the fluctuation ⟨M_x²⟩ − ⟨M_x⟩² of the total magnetization along the x-axis; the theoretical χ_x is plotted versus temperature in Fig. 4C. This plot shares many of the salient features of the temperature-dependent b-axis polarized MFM contrast shown in Fig. 4A. For all layer numbers, both plots in Figs. 4A and C gradually increase as T is lowered below Tc and show a peak at TN, after which a comparatively sharp decrease is observed. In addition, χ_x exhibits an overall increase with layer number for most temperatures that is generally in agreement with the behavior observed for the b-axis polarized MFM signal. Theoretically, we expect χ_x → 0 as T → 0; however, any off-axis contribution may generate an expected finite signal as T → 0 if the experimental curve in Fig. 4A is extrapolated to T = 0 (see Fig. S8 for mixed x- and z-axis susceptibility).
Such a situation may arise if the b-axis polarized tip possessed small nontrivial components along one of the hard magnetic axes, and it may also be partially responsible for the observed "pairing" of MFM contrasts in the low-temperature AFM phase. We also observe behavior at the onset of the AFM phase that is not reproduced theoretically. Such behavior might arise if a small component of the tip magnetic moment is inadvertently aligned to the in-plane easy axis (Fig. S8). Our Monte Carlo simulations of the temperature- and layer-dependence of χ_x and χ_z in CrSBr correspond well with the experimental measures of b- and c-axis polarized MFM contrast.

Microscopy: MFM experiments were carried out on a home-built UHV cryo-AFM using commercial PPP-MFMR probes fabricated by Nanosensors™. Prior to each experiment, neodymium magnets were used to magnetize MFM probes ex situ in either the out-of-plane direction (c-axis polarized) or the in-plane fast-scanning axis (b-axis polarized). Exfoliated CrSBr flakes were prepared for the UHV experiment such that the b-axis of each flake was aligned to the fast-scanning axis of the MFM. In order to account for small shifts in the cantilever resonant frequency (f0) with changes in temperature, the drive frequency was tuned to f0 prior to each measurement. A DC bias was applied to the MFM tip to minimize the frequency shift Δf at h = 50 nm above the CrSBr surface to negate contributions of long-range electrostatic forces to the MFM measurement. For MFM measurements, a dual-pass approach was employed in which the first pass measures the AFM topography and the second pass measures Δf at h = 50 nm above the CrSBr surface (herein, Δf in the second pass is referred to as the MFM contrast). A median subtraction is applied to the raw MFM data to remove errant contributions to Δf arising from slow drift in f0 that occurs over the long frame time of a given MFM map (~2 hours).
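A minimal, assumed implementation of this line-wise median subtraction (the choice of per-scan-line medians is our illustration of the procedure, not the authors' exact code):

```python
import numpy as np

def median_subtract(mfm_map):
    """Remove slow drift in f0 from a raw MFM map by subtracting the
    median of each scan line (an assumed realization of the 'median
    subtraction' described above)."""
    m = np.asarray(mfm_map, dtype=float)
    return m - np.median(m, axis=1, keepdims=True)

# Synthetic map: per-line drift offsets on top of a small dark feature
rng = np.random.default_rng(0)
drift = rng.normal(size=(64, 1))            # slow line-to-line drift
img = drift + np.zeros((64, 128))
img[20:30, 40:60] -= 0.5                    # dark-contrast magnetic feature
flat = median_subtract(img)
assert np.allclose(np.median(flat, axis=1), 0.0)
assert flat[25, 50] < flat[25, 100] - 0.4   # feature contrast preserved
```

Because the feature occupies only a minority of each scan line, the line median tracks the drift while leaving the magnetic contrast intact.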
The zero-point of the MFM contrast for data presented in

Monte Carlo simulations: We constructed a Heisenberg spin lattice model of CrSBr consisting of single-layer N_x × N_y rectangular spin lattices stacked with N_z monolayers and allowed to interact via AFM interlayer couplings. For our simulations, N_x = N_y = 80 and N_z = 2-6 depending on the structure of interest. The magnetic susceptibilities were extracted from computation of the average total magnetization along a given direction via a single-spin perturbative Metropolis Monte Carlo (MC) algorithm. The susceptibility values are averaged over an ensemble of 20-50 MC simulations per data point. A detailed description of our model can be found in section 4 of the supporting information.

Author Contributions: D.J.R. and A.S.M. performed all MFM experiments and analyzed the data. A.S.M. developed the strategy for extracting the sheet susceptibility from MFM approach curves. C.C. performed all Monte Carlo simulations of temperature- and layer-dependent magnetism in CrSBr. A.H.D. fabricated bulk CrSBr crystals. R.A.W. and E.J.D. prepared exfoliated CrSBr crystals for MFM experiments and performed initial characterization. Y.D. maintained the cryo-AFM and assisted in troubleshooting MFM experiments. A.N.P. and C.R.D. assisted in experimental interpretation. X.R. and C.N. oversaw and advised with CrSBr growth. D.X. oversaw and assisted Monte Carlo simulations. D.N.B. advised MFM experiments and assisted with interpretation. All authors participated in scientific discussion.

Figure 1. MFM characterization of magnetic phases in CrSBr. (A) Side- and top-down view of the crystal structure of the layered magnetic semiconductor CrSBr. Magnetic moments localized on Cr sites align ferromagnetically along the in-plane b-axis, and antiferromagnetically between layers along the out-of-plane c-axis. (B) AFM topographic image of a few-layer sample of CrSBr with 2-6-layer terraces indicated.
The temperature-dependence of the MFM response of few-layer CrSBr is probed with both b-axis and c-axis polarized MFM probes.

Figure 2. Temperature-dependent evolution of magnetic phases in few-layer CrSBr observed with polarized MFM tips. (A) MFM maps of few-layer (2L through 5L) CrSBr taken at four characteristic temperatures using a b-axis polarized tip. Uniform MFM contrast is observed for temperatures well above Tc (the paramagnetic (PM) phase), which gradually gives way to a diffuse, dark contrast phase at temperatures below Tc (the iFM phase). As the sample temperature is brought close to TN, a layer-dependent, bright contrast phase emerges, characteristic of the antiferromagnetic (AFM) phase, that forms sharp boundaries with the preexisting iFM phase. The latter is primarily represented in 3- and 6-layer regions. For all temperatures T < TN, only the AFM phase remains. (B) Histograms of the MFM contrast for the four panels shown in (A). From these, it is clear that the transition from the PM to the iFM phase coincides with a continuous, broad range of MFM contrasts, while the transition from iFM to AFM creates an abrupt, discontinuous change in the MFM contrast. (C) The same as (A) but for a c-axis polarized tip. (D) The same as (B) but for a c-axis polarized tip. For T > TN, the overall evolution of the MFM contrast for the c-axis tip is similar to that of the b-axis tip. However, for T = TN, the c-axis tip additionally observes regions of intermediate contrast in the 2L and 4L regions. For the AFM phase, the dependence of the c-axis MFM contrast on layer number is also different than that of the b-axis tip.

Figure 3. Analysis of c-axis MFM approach curves and height-dependent MFM contrast. (A) MFM approach curves collected over the PM (green) and iFM (blue) regions of the CrSBr near Tc (T = 137.5 K), as indicated by the crosshairs in (B). The PM phase has a much more rapid onset in MFM contrast with tip height compared to the iFM phase.
The black dashed and dotted curves correspond to the best-fit model pseudo-pole approach curves for the PM and iFM phases, respectively, with the best-fit value of C2D indicated. (B) MFM image of few-layer CrSBr showing diffuse boundaries between dark and bright contrast regions (iFM and PM, respectively). (C) Model sheet susceptibility extracted from the approach-curve map collected in the same region as (B). Regions that possess dark contrast in (B) have a larger sheet susceptibility compared to bright contrast regions. (D) Map of the integrated MFM contrast along each approach curve in (C). (E) Reconstructed height-dependent MFM maps extracted from approach-curve maps, showing a gradual onset of contrast as the MFM tip-sample distance is reduced. (F) MFM approach curves collected over the iFM (blue), AFM (red), and "switch" regions. (G) MFM image of few-layer CrSBr showing sharp boundaries between dark and bright contrast regions (iFM and AFM, respectively). (H) Model sheet susceptibility extracted from the approach-curve map collected in the same region as (G). Once again, the sheet susceptibility appears to scale with the overall MFM contrast shown in (G). (I) Map of the integrated MFM contrast along each approach curve in (H). (J) Reconstructed height-dependent MFM maps extracted from approach-curve maps, showing a discrete onset, nucleation, and growth of the iFM phase as the MFM tip height is reduced (and the associated tip fields impinging on the surface are increased).

Figure 4. Temperature-dependence of the MFM contrast compared to the theoretical sheet susceptibility. (A) Temperature-dependence of the MFM contrast collected with a b-axis polarized MFM tip. The ground state at high temperatures is paramagnetic and yields only a small MFM contrast.
As in-plane correlations begin to emerge for T < Tc, the MFM contrast rapidly increases and reaches a peak close to T ≈ TN. For T < TN, the MFM contrast of the antiferromagnetic phase converges to values intermediate between the higher-temperature phases and generally increases with layer number. For all temperatures, the MFM contrast generally increases with layer number, with the 3L and 4L regions having similar contrast in the low-temperature AFM phase, as do the 5L and 6L regions. (B) Same as (A) but for an MFM tip polarized to the out-of-plane c-axis. The overall trend in the MFM contrast with temperature is similar to that of the b-axis tip, with the MFM contrast generally increasing with layer number at all temperatures. (C) The theoretical b-axis sheet susceptibility (χ_x) is plotted as a function of temperature. The overall temperature- and layer-dependence of χ_x is similar to the MFM contrast observed in (A). (D) The theoretical c-axis sheet susceptibility (χ_z) is plotted as a function of temperature. The overall temperature- and layer-dependence of χ_z is similar to the MFM contrast observed in (B).

Figure S1. Temperature-dependent MFM maps for b-axis polarized probe. The MFM maps collected with the b-axis polarized probe, labelled with the associated temperatures ranging from T = 200 K to T = 40 K.

Figure S2. Temperature-dependent MFM maps for c-axis polarized probe. The MFM maps collected with the c-axis polarized probe, labelled with the associated temperatures ranging from T = 200 K to T = 40 K.

Figure S3. Dependence of T ≈ TN MFM maps on scanning direction.
(A) MFM map collected with an upward-scanning c-axis polarized probe at T = 127.5 K showing both iFM (dark blue) and AFM (yellow) phases coexisting with sharp boundaries between each phase. (B) The same as (A) but for a downward-scanning probe. While there is some spatial overlap between the iFM and AFM phases seen in (A), there are some regions that appear in the iFM phase in (A) that appear to be in the AFM phase in (B) (and vice versa). This demonstrates a nontrivial influence of the magnetic probe on the stability of the underlying magnetic phase.

Figure S4. Spatial-dependent phase transitions near Tc and TN. (A) MFM map collected at T = 140 K showing inhomogeneous spatial dependence of the yellow (PM) and dark blue (iFM) phases that cannot be understood in terms of the underlying layer number alone. Those regions that are still in the PM phase are interpreted as having a relatively low value of Tc and are highlighted red, while those which are in the iFM phase have a relatively high value of Tc and are highlighted blue. (B) MFM map collected at T = 127.5 K showing inhomogeneous spatial dependence of the yellow (AFM) and dark blue (iFM) phases. Those regions that are still in the iFM phase are interpreted as having a lower value of TN, while those that are already in the AFM phase have a higher value of TN. The red and blue highlighted regions in (A) are reproduced in (B), showing a spatial correlation between those regions for which the PM phase persists at T = 140 K and those regions where the iFM phase persists at T = 127.5 K.

Figure S5. Observation of stray fields on step edges in AFM ground state. (A) MFM map collected with a b-axis polarized probe of the AFM phase at T = 33 K. The associated layer numbers are labelled. Significant bright (repulsive magnetic force) and dark (attractive magnetic force) MFM contrast can be observed at step edges. The colored lines indicate the linecuts sampled for the plots in panels (C)-(E).
(B) Same as (A) but for a c-axis polarized probe of the AFM phase at T = 100 K. As with the b-axis polarized probe, significant contrast is observed at step edges. (C) Top panel: Schematic of fringing fields emerging at the step edges of a 3L through 6L region of CrSBr in the AFM state. Bottom panel:

Figure S7. Layer-dependence of TN and Tc. The experimental values of Tc (orange line) and TN (green line) are plotted as a function of layer number. Here, Tc is defined as the interpolated temperature at which the phase fractions of the PM and iFM phases are equal within a given layer, and TN is defined as the interpolated temperature at which the iFM and AFM phases are equivalent. Tc tends to increase with layer number while TN tends to decrease with layer number. Odd-layer thicknesses experience an additional suppression of TN compared to even layers.

Figure S8. Theoretical off-axis sheet susceptibility. Theoretical temperature- and layer-dependence of the sheet susceptibility for magnetization sampled at m̂ = (1/√2)[x̂ + ẑ] (i.e., 45° between b- and c-axis directions). This theoretical data provides a reference for potential off-axis components to Btip when interpreting the data in Figs. 4A, B.

tan θ. For simplicity, we henceforth consider ρ and z in units of the probe radius R. For a uniformly magnetized coating, M = M ẑ is constant between the surfaces z_±(ρ), and we have:

−∂M_z/∂z' (ρ', z') = M [δ(z' − z_+(ρ')) − δ(z' − z_−(ρ'))].

In the limit that the coating thickness t ≪ R we can approximate the transfer function defined in Eq. (3).

⟨M_z⟩ is the thermodynamic average of the total magnetization along the z-axis, M_z = Σ_i S_i^z. Here, we compute the "sheet susceptibility" along several axes, which is defined, for example, along the z-axis as

χ_z = [⟨M_z²⟩ − ⟨M_z⟩²] / (k_B T).

The thermodynamic quantities ⟨M_z²⟩ and ⟨M_z⟩ are computed via a single-spin perturbative Metropolis Monte Carlo (MC) algorithm.
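A stripped-down sketch of such a single-spin Metropolis estimate of the magnetization fluctuation is given below. For brevity it is reduced to one monolayer with nearest-neighbor exchange only; the lattice size, temperature, and sweep counts are illustrative and much smaller than those used in the actual simulations:

```python
import numpy as np

def metropolis_chi_z(L=6, J=-1.0, T=2.0, sweeps=500, seed=1):
    """chi_z ~ (<Mz^2> - <Mz>^2)/T for one L x L monolayer of classical
    unit spins with nearest-neighbor exchange J (J < 0 ferromagnetic)."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(L, L, 3))
    S /= np.linalg.norm(S, axis=-1, keepdims=True)       # unit-length spins

    def neighbor_sum(i, j):                              # periodic boundaries
        return (S[(i + 1) % L, j] + S[(i - 1) % L, j]
                + S[i, (j + 1) % L] + S[i, (j - 1) % L])

    samples = []
    for sweep in range(sweeps):
        for _ in range(L * L):                           # one MC sweep
            i, j = rng.integers(L, size=2)
            new = rng.normal(size=3)
            new /= np.linalg.norm(new)                   # trial direction
            dE = J * np.dot(new - S[i, j], neighbor_sum(i, j))
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                S[i, j] = new                            # Metropolis accept
        if sweep >= sweeps // 2:                         # discard burn-in
            samples.append(S[..., 2].sum())              # total Mz
    Mz = np.asarray(samples)
    return (np.mean(Mz**2) - np.mean(Mz)**2) / T

chi = metropolis_chi_z()
assert chi > 0.0    # fluctuation-dissipation: susceptibility is positive
```

The full calculation extends the same single-spin update to the stacked N_x × N_y × N_z lattice with the complete set of intralayer, interlayer, and anisotropy terms, and averages the resulting fluctuations over an ensemble of independent runs.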
Lattices of linear dimensions N_x = N_y = 80 are annealed to the target temperature for N × 10^5 MC steps, further equilibration takes place for 2.5 × N × 10^5 MC steps, and observable measurements are recorded over the last 2.5 × N × 10^5 MC steps. The susceptibility values are averaged over an ensemble of 20-50 MC simulations per data point.

The temperature dependence of our MFM results suggests a susceptibility-dominated magnetic response at all temperatures below Tc. A comparatively small (but non-zero) contribution from zero-field magnetism is observed in the low-temperature AFM phase that presents as fringing fields at the edges of atomic terraces, depending on the parity of the layer thickness and step height. Meanwhile, we observed two distinct magnetic transitions associated with the onset of intralayer FM correlations at higher temperatures (T ≈ Tc) and interlayer AFM correlations at low temperatures (T ≈ TN). Both transitions show a clear dependence on layer thickness and the parity of the layer number. We observe additional long-wavelength spatial variations in the magnetic transition temperatures, implying microscopic inhomogeneity in interlayer coupling that we speculate arises from local strain 53 (possibly induced by exfoliation). Finally, we selectively induced facile discrete magnetic switching events in CrSBr at temperatures close to TN using stray fields from the magnetic probe tip. Our experimental findings are closely reproduced by Monte Carlo simulations based on a Heisenberg spin lattice model. Our results lay the groundwork for quantitative magnetic imaging and nanomagnetometry of 2D materials in the few-layer limit.
In further analogy to the capacitor, such uniform magnetization will nevertheless generate detectable bipolar fringing fields near layersheet, yet detectable fringing fields with unipolar c-axis component. For all experiments, a controlled electrical bias was applied to the MFM probe relative to the sample to compensate the contact potential difference arising from a change in work function between the tip and sample40,55 . Using an electrically biased tip negates electrostatic contributions to our nano-resolved force measurements (see Methods for full description of our MFM measurement approach). Using this experimental setup, we performed a series of temperature dependent MFM experiments on the CrSBr flake shown in Fig. 1B, ranging from T = 200 K to 40 K. Based on previous results 34 , this should provide a clear view of the magnetic properties of CrSBr well above the onset of magnetic correlations (i.e., the paramagnetic (PM) phase), through the iFM phase, and eventually the AFM phase at low temperature. Fig. 2A shows maps of MFM contrast collected with the tip polarization along the b-axis at key selected temperatures associated with the PM phase (T > Tc), just below the onset of the iFM phase (T < Tc), the onset of the AFM phase (T ≈ TN), and well into the AFM phase (T < TN). In the colormap shown, "darker" contrasts (for decreasing Δ < 0) signify regions of qualitatively attractive magnetic force, whereas "brighter" regions imply zero or repulsive magnetic force. A complete set of MFM maps collected at all temperatures is found in Figs. S1 and S2. The distributions of the MFM contrast for each map inedges with a non-zero b-axis component. Extending this reasoning to out-of-plane magnetization also predicts undetectable magnetic force ( = 0) in the interior of a uniformly magnetized Author ContributionsSupplementary DiscussionSection 1. 
Magnetic transfer function of a magnetized probe

The magnetic potential associated with an arbitrary bounded distribution of magnetization M(r) is given by:

(1) Φ(r) = (1/4π) ∫_Ω d³r' ρ_M(r') / |r − r'|,

where Ω denotes the volume occupied by finite M, and ρ_M = −∇ · M denotes the effective "magnetic charge" associated with divergence in the magnetization. If the magnetization is axisymmetric about an axis ẑ, then in cylindrical coordinates (ρ, z) the divergence can be rewritten and the "Coulomb kernel" can be expanded in a basis of cylindrical waves: 1/|r − r'| = ∫₀^∞ dk J₀(kρ) J₀(kρ') e^{−k|z−z'|}. When evaluated at positions z ≡ −h < 0 below a magnetized distribution for which only M_z is nonzero for z' > 0, the expression simplifies:

(2) Φ(z) = (1/2) ∫₀^∞ dk k P(k) e^{kz}.

Here we identify P(k) as the "transfer function" of the magnetization relative to z = 0, and the subscript p indicates that forces on the magnetization functionalize it as a magnetic probe. Up to an exponential propagator in the probe-sample separation distance h, P(k) is then given by a Hankel transform over the magnetization, readily identified as:

(3) P(k) = k^{−1} ∫₀^∞ dρ' ρ' J₀(kρ') ∫₀^∞ dz' e^{−kz'} (−∂M_z/∂z')(ρ', z').

This expression is itself a weighted Hankel transform over variations in the magnetization, otherwise regarded as the distribution of "magnetic charge". Eq. (2) implies that for a magnetic transfer function P(k) ∝ k^s, the associated magnetic potential at ρ = 0 is likewise a power law Φ(z) ∝ |z|^{−(s+2)} for s > −2. Noting that a magnetic n-pole (e.g., a dipole corresponds to n = 2) exhibits Φ(z) ∝ z^{−n}, we conclude that an n-pole is described by a transfer function P(k) ∝ k^{n−2}. On the other hand, a distribution M_z(ρ') ∝ ρ' is integrable in Eq. (3) over a finite-sized probe volume Ω, e.g., over an interval 0 ≤ ρ' ≤ R, corresponding to the marginal case of a transfer function with exponent −2, but only in the

Novoselov, K. S., Mishchenko, A., Carvalho, A. & Castro Neto, A. H. 2D Materials and Van der Waals Heterostructures. Science 353, aac9439 (2016).
Novoselov, K. S., Geim, A. K., Morozov, S. V., Jiang, D., Zhang, Y., Dubonos, S. V., Grigorieva, I. V. & Firsov, A. A. Electric Field Effect in Atomically Thin Carbon Films. Science 306, 666-669 (2004).

Gong, C., Li, L., Li, Z., Ji, H., Stern, A., Xia, Y., Cao, T., Bao, W., Wang, C., Wang, Y., Qiu, Z. Q., Cava, R. J., Louie, S. G., Xia, J. & Zhang, X. Discovery of Intrinsic Ferromagnetism in Two-Dimensional Van der Waals Crystals. Nature 546, 265-269 (2017).

Huang, B., Clark, G., Navarro-Moratalla, E., Klein, D. R., Cheng, R., Seyler, K. L., Zhong, D., Schmidgall, E., McGuire, M. A., Cobden, D. H., Yao, W., Xiao, D., Jarillo-Herrero, P. & Xu, X. Layer-Dependent Ferromagnetism in a Van der Waals Crystal Down to the Monolayer Limit. Nature 546, 270-273 (2017).

Burch, K. S., Mandrus, D. & Park, J.-G. Magnetism in Two-Dimensional Van der Waals Materials. Nature 563, 47-52 (2018).

Jiang, S., Li, L., Wang, Z., Shan, J. & Mak, K. F. Spin Tunnel Field-Effect Transistors Based on Two-Dimensional Van der Waals Heterostructures. Nature Electronics 2, 159-163 (2019).
15Hu, G. & Xiang, B. Recent Advances in Two-Dimensional Spintronics. Nanoscale Research Letters 15, 226, (2020). Giant Tunneling Magnetoresistance in Spin-Filter Van Der Waals Heterostructures. T Song, X Cai, W.-Y Tu Matisse, X Zhang, B Huang, P Wilson Nathan, L Seyler Kyle, L Zhu, T Taniguchi, K Watanabe, A Mcguire Michael, H Cobden David, D Xiao, W Yao, X Xu, Science. 360Song, T., Cai, X., Tu Matisse, W.-Y., Zhang, X., Huang, B., Wilson Nathan, P., Seyler Kyle, L., Zhu, L., Taniguchi, T., Watanabe, K., McGuire Michael, A., Cobden David, H., Xiao, D., Yao, W. & Xu, X. Giant Tunneling Magnetoresistance in Spin-Filter Van Der Waals Heterostructures. Science 360, 1214-1218, (2018). Probing Magnetism in 2d Van Der Waals Crystalline Insulators Via Electron Tunneling. D R Klein, D Macneill, J L Lado, D Soriano, E Navarro-Moratalla, K Watanabe, T Taniguchi, S Manni, P Canfield, J Fernández-Rossier, P Jarillo-Herrero, Science. 360Klein, D. R., MacNeill, D., Lado, J. L., Soriano, D., Navarro-Moratalla, E., Watanabe, K., Taniguchi, T., Manni, S., Canfield, P., Fernández-Rossier, J. & Jarillo-Herrero, P. Probing Magnetism in 2d Van Der Waals Crystalline Insulators Via Electron Tunneling. Science 360, 1218-1222, (2018). Electrical Control of 2d Magnetism in Bilayer Cri3. B Huang, G Clark, D R Klein, D Macneill, E Navarro-Moratalla, K L Seyler, N Wilson, M A Mcguire, D H Cobden, D Xiao, W Yao, P Jarillo-Herrero, X Xu, Nature Nanotechnology. 13Huang, B., Clark, G., Klein, D. R., MacNeill, D., Navarro-Moratalla, E., Seyler, K. L., Wilson, N., McGuire, M. A., Cobden, D. H., Xiao, D., Yao, W., Jarillo-Herrero, P. & Xu, X. Electrical Control of 2d Magnetism in Bilayer Cri3. Nature Nanotechnology 13, 544- 548, (2018). Electric-Field Switching of Two-Dimensional Van Der Waals Magnets. S Jiang, J Shan, K F Mak, Nature Materials. 17Jiang, S., Shan, J. & Mak, K. F. Electric-Field Switching of Two-Dimensional Van Der Waals Magnets. Nature Materials 17, 406-410, (2018). 
Atomic-Scale Interface Engineering of Majorana Edge Modes in a 2d Magnet-Superconductor Hybrid System. A Palacio-Morales, E Mascot, S Cocklin, H Kim, S Rachel, K Morr Dirk, R Wiesendanger, Science Advances. 56600Palacio-Morales, A., Mascot, E., Cocklin, S., Kim, H., Rachel, S., Morr Dirk, K. & Wiesendanger, R. Atomic-Scale Interface Engineering of Majorana Edge Modes in a 2d Magnet-Superconductor Hybrid System. Science Advances 5, eaav6600. Majorana Corner States in a Two-Dimensional Magnetic Topological Insulator on a High-Temperature Superconductor. T Liu, J J He, F Nori, Physical Review B. 98245413Liu, T., He, J. J. & Nori, F. Majorana Corner States in a Two-Dimensional Magnetic Topological Insulator on a High-Temperature Superconductor. Physical Review B 98, 245413, (2018). Two-Dimensional Magnetic Crystals and Emergent Heterostructure Devices. C Gong, X Zhang, Science. 3634450Gong, C. & Zhang, X. Two-Dimensional Magnetic Crystals and Emergent Heterostructure Devices. Science 363, eaav4450, (2019). Switching 2d Magnetic States Via Pressure Tuning of Layer Stacking. T Song, Z Fei, M Yankowitz, Z Lin, Q Jiang, K Hwangbo, Q Zhang, B Sun, T Taniguchi, K Watanabe, M A Mcguire, D Graf, T Cao, J.-H Chu, D H Cobden, C R Dean, D Xiao, X Xu, Nature Materials. 18Song, T., Fei, Z., Yankowitz, M., Lin, Z., Jiang, Q., Hwangbo, K., Zhang, Q., Sun, B., Taniguchi, T., Watanabe, K., McGuire, M. A., Graf, D., Cao, T., Chu, J.-H., Cobden, D. H., Dean, C. R., Xiao, D. & Xu, X. Switching 2d Magnetic States Via Pressure Tuning of Layer Stacking. Nature Materials 18, 1298-1302, (2019). Recent Breakthroughs in Two-Dimensional Van Der Waals Magnetic Materials and Emerging Applications. Y Khan, S M Obaidulla, M R Habib, A Gayen, T Liang, X Wang, M Xu, Nano Today. 34100902Khan, Y., Obaidulla, S. M., Habib, M. R., Gayen, A., Liang, T., Wang, X. & Xu, M. Recent Breakthroughs in Two-Dimensional Van Der Waals Magnetic Materials and Emerging Applications. Nano Today 34, 100902, (2020). 
Prospects and Opportunities of 2d Van Der Waals Magnetic Systems. M.-C Wang, C.-C Huang, C.-H Cheung, C.-Y Chen, S G Tan, T.-W Huang, Y Zhao, Y Zhao, G Wu, Y.-P Feng, H.-C Wu, C.-R Chang, Annalen der Physik. 532Wang, M.-C., Huang, C.-C., Cheung, C.-H., Chen, C.-Y., Tan, S. G., Huang, T.-W., Zhao, Y., Zhao, Y., Wu, G., Feng, Y.-P., Wu, H.-C. & Chang, C.-R. Prospects and Opportunities of 2d Van Der Waals Magnetic Systems. Annalen der Physik 532, 1900452, (2020). Absence of Ferromagnetism or Antiferromagnetism in One-or Two-Dimensional Isotropic Heisenberg Models. N D Mermin, H Wagner, Physical Review Letters. 17Mermin, N. D. & Wagner, H. Absence of Ferromagnetism or Antiferromagnetism in One-or Two-Dimensional Isotropic Heisenberg Models. Physical Review Letters 17, 1133-1136, (1966). Zur Theorie Des Ferromagnetismus. W Heisenberg, Zeitschrift für Physik. 49Heisenberg, W. Zur Theorie Des Ferromagnetismus. Zeitschrift für Physik 49, 619-636, (1928). Atomically Thin Crcl3: An in-Plane Layered Antiferromagnetic Insulator. X Cai, T Song, N P Wilson, G Clark, M He, X Zhang, T Taniguchi, K Watanabe, W Yao, D Xiao, M A Mcguire, D H Cobden, X Xu, Nano Letters. 19Cai, X., Song, T., Wilson, N. P., Clark, G., He, M., Zhang, X., Taniguchi, T., Watanabe, K., Yao, W., Xiao, D., McGuire, M. A., Cobden, D. H. & Xu, X. Atomically Thin Crcl3: An in-Plane Layered Antiferromagnetic Insulator. Nano Letters 19, 3993-3998, (2019). Stacking-Dependent Magnetism in Bilayer Cri3. N Sivadas, S Okamoto, X Xu, C J Fennie, D Xiao, Nano Letters. 18Sivadas, N., Okamoto, S., Xu, X., Fennie, C. J. & Xiao, D. Stacking-Dependent Magnetism in Bilayer Cri3. Nano Letters 18, 7658-7664, (2018). Vi3-a New Layered Ferromagnetic Semiconductor. T Kong, K Stolze, E I Timmons, J Tao, D Ni, S Guo, Z Yang, R Prozorov, R J Cava, Advanced Materials. 31Kong, T., Stolze, K., Timmons, E. I., Tao, J., Ni, D., Guo, S., Yang, Z., Prozorov, R. & Cava, R. J. Vi3-a New Layered Ferromagnetic Semiconductor. 
Advanced Materials 31, 1808074, (2019). Magneto-Optical Kerr Switching Properties of (Cri3)2 and (Crbr3/Cri3) Bilayers. K Yang, W Hu, H Wu, M.-H Whangbo, P G Radaelli, A Stroppa, ACS Applied Electronic Materials. 2Yang, K., Hu, W., Wu, H., Whangbo, M.-H., Radaelli, P. G. & Stroppa, A. Magneto- Optical Kerr Switching Properties of (Cri3)2 and (Crbr3/Cri3) Bilayers. ACS Applied Electronic Materials 2, 1373-1380, (2020). Direct Photoluminescence Probing of Ferromagnetism in Monolayer Two-Dimensional Crbr3. Z Zhang, J Shang, C Jiang, A Rasmita, W Gao, T Yu, Nano Letters. 19Zhang, Z., Shang, J., Jiang, C., Rasmita, A., Gao, W. & Yu, T. Direct Photoluminescence Probing of Ferromagnetism in Monolayer Two-Dimensional Crbr3. Nano Letters 19, 3138-3142, (2019). Ising-Type Magnetic Ordering in Atomically Thin Feps3. J.-U Lee, S Lee, J H Ryoo, S Kang, T Y Kim, P Kim, C.-H Park, J.-G Park, H Cheong, Nano Letters. 16Lee, J.-U., Lee, S., Ryoo, J. H., Kang, S., Kim, T. Y., Kim, P., Park, C.-H., Park, J.-G. & Cheong, H. Ising-Type Magnetic Ordering in Atomically Thin Feps3. Nano Letters 16, 7433-7438, (2016). Controlling Magnetic and Optical Properties of the Van Der Waals Crystal Crcl3−Xbrx Via Mixed Halide Chemistry. M Abramchuk, S Jaszewski, K R Metz, G B Osterhoudt, Y Wang, K S Burch, F Tafti, Advanced Materials. 301801325Abramchuk, M., Jaszewski, S., Metz, K. R., Osterhoudt, G. B., Wang, Y., Burch, K. S. & Tafti, F. Controlling Magnetic and Optical Properties of the Van Der Waals Crystal Crcl3−Xbrx Via Mixed Halide Chemistry. Advanced Materials 30, 1801325, (2018). Evolution of Interlayer and Intralayer Magnetism in Three Atomically Thin Chromium Trihalides. H H Kim, B Yang, S Li, S Jiang, C Jin, Z Tao, G Nichols, F Sfigakis, S Zhong, C Li, S Tian, D G Cory, G.-X Miao, J Shan, K F Mak, H Lei, K Sun, L Zhao, A W Tsen, Proceedings of the National Academy of Sciences. 11611131Kim, H. 
H., Yang, B., Li, S., Jiang, S., Jin, C., Tao, Z., Nichols, G., Sfigakis, F., Zhong, S., Li, C., Tian, S., Cory, D. G., Miao, G.-X., Shan, J., Mak, K. F., Lei, H., Sun, K., Zhao, L. & Tsen, A. W. Evolution of Interlayer and Intralayer Magnetism in Three Atomically Thin Chromium Trihalides. Proceedings of the National Academy of Sciences 116, 11131, (2019). A Family of High-Temperature Ferromagnetic Monolayers with Locked Spin-Dichroism-Mobility Anisotropy: Mnnx and Crcx. C Wang, X Zhou, L Zhou, N.-H Tong, Z.-Y Lu, W Ji, Cl, I; C = S Br, Se, ) Te, Science Bulletin. 64Wang, C., Zhou, X., Zhou, L., Tong, N.-H., Lu, Z.-Y. & Ji, W. A Family of High- Temperature Ferromagnetic Monolayers with Locked Spin-Dichroism-Mobility Anisotropy: Mnnx and Crcx (X = Cl, Br, I; C = S, Se, Te). Science Bulletin 64, 293-300, (2019). Electrically Tunable High Curie Temperature Two-Dimensional Ferromagnetism in Van Der Waals Layered Crystals. H Wang, J Qi, X Qian, Applied Physics Letters. 11783102Wang, H., Qi, J. & Qian, X. Electrically Tunable High Curie Temperature Two- Dimensional Ferromagnetism in Van Der Waals Layered Crystals. Applied Physics Letters 117, 083102, (2020). Chromium Sulfide Halide Monolayers: Intrinsic Ferromagnetic Semiconductors with Large Spin Polarization and High Carrier Mobility. Y Guo, Y Zhang, S Yuan, B Wang, J Wang, Nanoscale. 10Guo, Y., Zhang, Y., Yuan, S., Wang, B. & Wang, J. Chromium Sulfide Halide Monolayers: Intrinsic Ferromagnetic Semiconductors with Large Spin Polarization and High Carrier Mobility. Nanoscale 10, 18036-18042, (2018). Triaxial Magnetic Anisotropy in the Two-Dimensional Ferromagnetic Semiconductor Crsbr. K Yang, G Wang, L Liu, D Lu, H Wu, Physical Review B. 104144416Yang, K., Wang, G., Liu, L., Lu, D. & Wu, H. Triaxial Magnetic Anisotropy in the Two- Dimensional Ferromagnetic Semiconductor Crsbr. Physical Review B 104, 144416, (2021). Hidden Low-Temperature Magnetic Order Revealed through Magnetotransport in Monolayer Crsbr. 
E J Telford, A H Dismukes, R L Dudley, R A Wiscons, K Lee, J Yu, S Shabani, A Scheie, K Watanabe, T Taniguchi, arXiv:2106.08471arXiv preprintTelford, E. J., Dismukes, A. H., Dudley, R. L., Wiscons, R. A., Lee, K., Yu, J., Shabani, S., Scheie, A., Watanabe, K. & Taniguchi, T. Hidden Low-Temperature Magnetic Order Revealed through Magnetotransport in Monolayer Crsbr. arXiv preprint arXiv:2106.08471, (2021). Layered Antiferromagnetism Induces Large Negative Magnetoresistance in the Van Der Waals Semiconductor Crsbr. E J Telford, A H Dismukes, K Lee, M Cheng, A Wieteska, A K Bartholomew, Y.-S Chen, X Xu, A N Pasupathy, X Zhu, C R Dean, X Roy, Advanced Materials. 32Telford, E. J., Dismukes, A. H., Lee, K., Cheng, M., Wieteska, A., Bartholomew, A. K., Chen, Y.-S., Xu, X., Pasupathy, A. N., Zhu, X., Dean, C. R. & Roy, X. Layered Antiferromagnetism Induces Large Negative Magnetoresistance in the Van Der Waals Semiconductor Crsbr. Advanced Materials 32, 2003240, (2020). Magnetic Order and Symmetry in the 2d Semiconductor Crsbr. K Lee, A H Dismukes, E J Telford, R A Wiscons, J Wang, X Xu, C Nuckolls, C R Dean, X Roy, X Zhu, Nano Letters. 21Lee, K., Dismukes, A. H., Telford, E. J., Wiscons, R. A., Wang, J., Xu, X., Nuckolls, C., Dean, C. R., Roy, X. & Zhu, X. Magnetic Order and Symmetry in the 2d Semiconductor Crsbr. Nano Letters 21, 3511-3517, (2021). N P Wilson, K Lee, J Cenker, K Xie, A H Dismukes, E J Telford, J Fonseca, S Sivakumar, C Dean, T Cao, arXiv:2103.13280Interlayer Electronic Coupling on Demand in a 2d Magnetic Semiconductor. arXiv preprintWilson, N. P., Lee, K., Cenker, J., Xie, K., Dismukes, A. H., Telford, E. J., Fonseca, J., Sivakumar, S., Dean, C. & Cao, T. Interlayer Electronic Coupling on Demand in a 2d Magnetic Semiconductor. arXiv preprint arXiv:2103.13280, (2021). Electrical and Thermal Generation of Spin Currents by Magnetic Bilayer Graphene. T S Ghiasi, A A Kaverzin, A H Dismukes, D K De Wal, X Roy, B J Van Wees, Nature Nanotechnology. 16Ghiasi, T. 
S., Kaverzin, A. A., Dismukes, A. H., de Wal, D. K., Roy, X. & van Wees, B. J. Electrical and Thermal Generation of Spin Currents by Magnetic Bilayer Graphene. Nature Nanotechnology 16, 788-794, (2021). Direct Observation of Van Der Waals Stacking-Dependent Interlayer Magnetism. W Chen, Z Sun, Z Wang, L Gu, X Xu, S Wu, C Gao, Science. 366Chen, W., Sun, Z., Wang, Z., Gu, L., Xu, X., Wu, S. & Gao, C. Direct Observation of Van Der Waals Stacking-Dependent Interlayer Magnetism. Science 366, 983-987, (2019). Probing Magnetism in 2d Materials at the Nanoscale with Single-Spin Microscopy. L Thiel, Z Wang, M A Tschudin, D Rohner, I Gutiérrez-Lezama, N Ubrig, M Gibertini, E Giannini, A F Morpurgo, P Maletinsky, Science. 364Thiel, L., Wang, Z., Tschudin, M. A., Rohner, D., Gutiérrez-Lezama, I., Ubrig, N., Gibertini, M., Giannini, E., Morpurgo, A. F. & Maletinsky, P. Probing Magnetism in 2d Materials at the Nanoscale with Single-Spin Microscopy. Science 364, 973-976, (2019). J Yi, H Zhuang, Q Zou, Z Wu, G Cao, S Tang, S Calder, P Kent, D Mandrus, Z Gai, Competing Antiferromagnetism in a Quasi-2d Itinerant Ferromagnet: Fe3gete2. 2D Materials 4, 011005. Yi, J., Zhuang, H., Zou, Q., Wu, Z., Cao, G., Tang, S., Calder, S., Kent, P., Mandrus, D. & Gai, Z. Competing Antiferromagnetism in a Quasi-2d Itinerant Ferromagnet: Fe3gete2. 2D Materials 4, 011005, (2016). Coexistence of Magnetic Orders in Two-Dimensional Magnet Cri3. B Niu, T Su, B A Francisco, S Ghosh, F Kargar, X Huang, M Lohmann, J Li, Y Xu, T Taniguchi, K Watanabe, D Wu, A Balandin, J Shi, Y.-T Cui, Nano Letters. 20Niu, B., Su, T., Francisco, B. A., Ghosh, S., Kargar, F., Huang, X., Lohmann, M., Li, J., Xu, Y., Taniguchi, T., Watanabe, K., Wu, D., Balandin, A., Shi, J. & Cui, Y.-T. Coexistence of Magnetic Orders in Two-Dimensional Magnet Cri3. Nano Letters 20, 553-558, (2020). Visualizing Ferromagnetic Domains in Magnetic Topological Insulators. W Wang, F Yang, C Gao, J Jia, G D Gu, W Wu, APL Materials. 
383301Wang, W., Yang, F., Gao, C., Jia, J., Gu, G. D. & Wu, W. Visualizing Ferromagnetic Domains in Magnetic Topological Insulators. APL Materials 3, 083301, (2015). Chiral-Bubble-Induced Topological Hall Effect in Ferromagnetic Topological Insulator Heterostructures. W Wang, Y.-F Zhao, F Wang, M W Daniels, C.-Z Chang, J Zang, D Xiao, W Wu, Nano Letters. 21Wang, W., Zhao, Y.-F., Wang, F., Daniels, M. W., Chang, C.-Z., Zang, J., Xiao, D. & Wu, W. Chiral-Bubble-Induced Topological Hall Effect in Ferromagnetic Topological Insulator Heterostructures. Nano Letters 21, 1108-1114, (2021). Multi-Messenger Nanoprobes of Hidden Magnetism in a Strained Manganite. A S Mcleod, J Zhang, M Q Gu, F Jin, G Zhang, K W Post, X G Zhao, A J Millis, W B Wu, J M Rondinelli, R D Averitt, D N Basov, Nature Materials. 19McLeod, A. S., Zhang, J., Gu, M. Q., Jin, F., Zhang, G., Post, K. W., Zhao, X. G., Millis, A. J., Wu, W. B., Rondinelli, J. M., Averitt, R. D. & Basov, D. N. Multi-Messenger Nanoprobes of Hidden Magnetism in a Strained Manganite. Nature Materials 19, 397- 404, (2020). Artificial Spin Ice: Paths Forward. P Schiffer, C Nisoli, Applied Physics Letters. 118Schiffer, P. & Nisoli, C. Artificial Spin Ice: Paths Forward. Applied Physics Letters 118, 110501, (2021). Towards Quantitative Magnetic Force Microscopy: Theory and Experiment. T Häberle, F Haering, H Pfeifer, L Han, B Kuerbanjiang, U Wiedwald, U Herr, B Koslowski, New Journal of Physics. 14Häberle, T., Haering, F., Pfeifer, H., Han, L., Kuerbanjiang, B., Wiedwald, U., Herr, U. & Koslowski, B. Towards Quantitative Magnetic Force Microscopy: Theory and Experiment. New Journal of Physics 14, 043044, (2012). Combined Kerr-/Magnetic Force Microscopy on Ndfeb Crystals of Different Crystallographic Orientation. E Zueco, W Rave, R Schäfer, A Hubert, L Schultz, Journal of Magnetism and Magnetic Materials. 190Zueco, E., Rave, W., Schäfer, R., Hubert, A. & Schultz, L. 
Combined Kerr-/Magnetic Force Microscopy on Ndfeb Crystals of Different Crystallographic Orientation. Journal of Magnetism and Magnetic Materials 190, 42-47, (1998). Determining the Permeability of Magnetic Thin Film Material by Magnetic Force Microscopy: Relation with Superconducting Thin Films. A De La Cruz De Oña, Physica B: Condensed Matter. 348de la Cruz de Oña, A. Determining the Permeability of Magnetic Thin Film Material by Magnetic Force Microscopy: Relation with Superconducting Thin Films. Physica B: Condensed Matter 348, 177-182, (2004). Determining the Permeability of a Sphere of Linear Magnetic Material by Magnetic Force Microscopy. M W Coffey, Physical Review B. 60Coffey, M. W. Determining the Permeability of a Sphere of Linear Magnetic Material by Magnetic Force Microscopy. Physical Review B 60, 3346-3354, (1999). Seeing Is Believing: Visualization of Antiferromagnetic Domains. S.-W Cheong, M Fiebig, W Wu, L Chapon, V Kiryukhin, npj Quantum Materials. 5Cheong, S.-W., Fiebig, M., Wu, W., Chapon, L. & Kiryukhin, V. Seeing Is Believing: Visualization of Antiferromagnetic Domains. npj Quantum Materials 5, 3, (2020). Magnetic Imaging of Domain Walls in the Antiferromagnetic Topological Insulator Mnbi2te4. P M Sass, W Ge, J Yan, D Obeysekera, J J Yang, W Wu, Nano Letters. 20Sass, P. M., Ge, W., Yan, J., Obeysekera, D., Yang, J. J. & Wu, W. Magnetic Imaging of Domain Walls in the Antiferromagnetic Topological Insulator Mnbi2te4. Nano Letters 20, 2609-2614, (2020). Robust $a$-Type Order and Spin-Flop Transition on the Surface of the Antiferromagnetic Topological Insulator ${\Mathrm{Mnbi}}_{2}{\Mathrm{Te}}_{4}$. P M Sass, J Kim, D Vanderbilt, J Yan, W Wu, Physical Review Letters. 12537201Sass, P. M., Kim, J., Vanderbilt, D., Yan, J. & Wu, W. Robust $a$-Type Order and Spin- Flop Transition on the Surface of the Antiferromagnetic Topological Insulator ${\Mathrm{Mnbi}}_{2}{\Mathrm{Te}}_{4}$. Physical Review Letters 125, 037201, (2020). 
Native Defects in Antiferromagnetic Topological Insulator ${\Mathrm{Mnbi}}_{2}{\Mathrm{Te}}_{4}$. Z Huang, M.-H Du, J Yan, W Wu, Physical Review Materials. 4121202Huang, Z., Du, M.-H., Yan, J. & Wu, W. Native Defects in Antiferromagnetic Topological Insulator ${\Mathrm{Mnbi}}_{2}{\Mathrm{Te}}_{4}$. Physical Review Materials 4, 121202, (2020). Reversible Strain-Induced Magnetic Phase Transition in a Van Der Waals Magnet. J Cenker, S Sivakumar, A Miller, P Thijssen, Z Liu, A H Dismukes, J Fonseca, E Anderson, K Xie, X Zhu, X Roy, D Xiao, J.-H Chu, T Cao, X Xu, Nature Nanotechnology Accepted. Cenker, J., Sivakumar, S., Miller, A., Thijssen, P., Liu, Z., Dismukes, A. H., Fonseca, J., Anderson, E., Xie, K., Zhu, X., Roy, X., Xiao, D., Chu, J.-H., Cao, T. & Xu, X. Reversible Strain-Induced Magnetic Phase Transition in a Van Der Waals Magnet. Nature Nanotechnology Accepted, (2021). Über Chalkogenidhalogenide Des Chroms Synthese, Kristallstruktur Und Magnetismus Von Chromsulfidbromid, Crsbr. Zeitschrift für anorganische und allgemeine Chemie. J Beck, 585Beck, J. Über Chalkogenidhalogenide Des Chroms Synthese, Kristallstruktur Und Magnetismus Von Chromsulfidbromid, Crsbr. Zeitschrift für anorganische und allgemeine Chemie 585, 157-167, (1990). Distinguishing Magnetic and Electrostatic Interactions by a Kelvin Probe Force Microscopy-Magnetic Force Microscopy Combination. M Jaafar, Ó Iglesias-Freire, L Serrano-Ramón, M Ibarra, J De Teresa, A Asenjo, Beilstein journal of nanotechnology. 2Jaafar, M., Iglesias-Freire, Ó., Serrano-Ramón, L., Ibarra, M., De Teresa, J. & Asenjo, A. Distinguishing Magnetic and Electrostatic Interactions by a Kelvin Probe Force Microscopy-Magnetic Force Microscopy Combination. Beilstein journal of nanotechnology 2, 552-560, (2011). Towards Quantitative Magnetic Force Microscopy: Theory and Experiment. T Häberle, F Haering, H Pfeifer, L Han, B Kuerbanjiang, U Wiedwald, U Herr, B Koslowski, New Journal of Physics. 
14Häberle, T., Haering, F., Pfeifer, H., Han, L., Kuerbanjiang, B., Wiedwald, U., Herr, U. & Koslowski, B. Towards Quantitative Magnetic Force Microscopy: Theory and Experiment. New Journal of Physics 14, 043044, (2012). . A P Prudnikov, I U A Brychkov, Marichev, O. I. Integrals and Series: Special Functions. 2CRC pressPrudnikov, A. P., Brychkov, I. U. A. & Marichev, O. I. Integrals and Series: Special Functions. Vol. 2 (CRC press, 1986). Frontiers of Magnetic Force Microscopy. O Kazakova, R Puttock, C Barton, H Corte-León, M Jaafar, V Neu, A Asenjo, Journal of Applied Physics. 12560901Kazakova, O., Puttock, R., Barton, C., Corte-León, H., Jaafar, M., Neu, V. & Asenjo, A. Frontiers of Magnetic Force Microscopy. Journal of Applied Physics 125, 060901, (2019). Electrically Tunable High Curie Temperature Two-Dimensional Ferromagnetism in Van Der Waals Layered Crystals. H Wang, J Qi, X Qian, Applied Physics Letters. 11783102Wang, H., Qi, J. & Qian, X. Electrically Tunable High Curie Temperature Two- Dimensional Ferromagnetism in Van Der Waals Layered Crystals. Applied Physics Letters 117, 083102, (2020).
Observation of a linked loop quantum state

Ilya Belopolski, Guoqing Chang, Tyler A. Cochran, Zi-Jia Cheng, Xian P. Yang, Cole Hugelmeyer, Kaustuv Manna, Jia-Xin Yin, Guangming Cheng, Daniel Multer, Maksim Litskevich, Nana Shumiya, Songtian S. Zhang, Chandra Shekhar, Niels B. M. Schröter, Alla Chikina, Craig Polley, Balasubramanian Thiagarajan, Mats Leandersson, Johan Adell, Shin-Ming Huang, Nan Yao, Vladimir N. Strocov, Claudia Felser, M. Zahid Hasan

Affiliations: Laboratory for Topological Quantum Matter and Spectroscopy (B7), Department of Physics, Princeton University, Princeton, New Jersey 08544, USA; RIKEN Center for Emergent Matter Science (CEMS), Wako, Saitama 351-0198, Japan; Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, 637371, Singapore; Department of Mathematics, Princeton University, Princeton, New Jersey 08544, USA; Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Straße 40, 01187 Dresden, Germany; Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India; Princeton Institute for Science and Technology of Materials, Princeton University, Princeton, New Jersey 08544, USA; Swiss Light Source, Paul Scherrer Institut, CH-5232 Villigen, Switzerland; MAX IV Laboratory, Lund University, P.O. Box 118, 221 00 Lund, Sweden; Department of Physics, National Sun Yat-Sen University, Kaohsiung 804, Taiwan; Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Quantum phases can be classified by topological invariants, which take on discrete values capturing global information about the quantum state[1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Over the past decades, these invariants have come to play a central role in describing matter, providing the foundation for understanding superfluids[6,7], magnets[8,9], the quantum Hall effect[4,10], topological insulators[11][12][13][14][15], Weyl semimetals[16][17][18][19] and other phenomena. Here we report a remarkable linking number (knot theory) invariant associated with loops of electronic band crossings in a mirror-symmetric ferromagnet[20][21][22][23][24][25][26]. Using state-of-the-art spectroscopic methods, we directly observe three intertwined degeneracy loops in the material's bulk Brillouin zone three-torus, T³. We find that each loop links each other loop twice. Through systematic spectroscopic investigation of this linked loop quantum state, we explicitly draw its link diagram and conclude, in analogy with knot theory, that it exhibits linking number (2, 2, 2), providing a direct determination of the invariant structure from the experimental data. On the surface of our samples, we further predict and observe Seifert boundary states protected by the bulk linked loops, suggestive of a remarkable Seifert bulk-boundary correspondence. Our observation of a quantum loop link motivates the application of knot theory to the exploration of exotic properties of quantum matter.

Quantum topology is powerful in understanding condensed matter systems that exhibit a winding[1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Often, this winding occurs in real space. For example, in a magnetic material, the local magnetization may exhibit a rotating pattern centered around a point in real space, forming a magnetic vortex encoding an integer winding number[3,8]. Alternatively, the winding may occur in momentum space.
For example, in a one-dimensional topological insulator, the quantum-mechanical wavefunctions wind as the momentum scans through the Brillouin zone[4, 5, 10-14]. These two broad paradigms (order parameters, such as magnetization, which wind in real space, and quantum wavefunctions, which wind in momentum space) capture a vast landscape of topological phases of matter, spanning decades of research by myriad communities of physicists. Real-space order parameter winding further encompasses disclinations in liquid crystals; vortices in superconductors and superfluid ⁴He; and magnetic skyrmions, whose invariants are proposed as the basis for next-generation computing memory and logic [2, 3, 6-9]. On the other hand, momentum-space wavefunction winding is associated with emergent Dirac fermions in two- and three-dimensional topological insulators; Weyl fermions in topological semimetals; and the quantum Hall effect, which sets the prevailing von Klitzing standard of electrical resistance[10][11][12][13][14][15][16][17][18][19]. Despite their importance in modern physics, there is no indication that these two paradigms are exhaustive. Novel paradigms for topology promise to deepen our fundamental understanding of nature, as well as enable new quantum technologies.

Recently, there has been considerable interest in node loops, an electronic structure where two bands cross along a closed curve in momentum space[19][20][21][27][28][29][30][31]. Away from the crossing curve, the two bands disperse linearly, so that the node loop consists of a cone dispersion persisting along a loop. Within the paradigm of momentum-space wavefunction winding, node loops are topological, with a quantized Berry phase invariant [11-14, 19, 30, 31]. However, in contrast to other electronic structures studied to date[10][11][12][13][14][15][16][17][18][19], node loops can link each other, encoding a linking number invariant (Fig. 1a, Extended Data Fig. 1)[22][23][24][25][26].
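The momentum-space wavefunction winding invoked above for a one-dimensional topological insulator can be made concrete with a small numerical sketch. The model, parameters, and function names below are illustrative choices (a standard SSH-type two-band Bloch vector), not taken from this paper: the winding number counts how many times h(k) = (v + w·cos k, w·sin k) encircles the origin as k sweeps the Brillouin zone.

```python
import numpy as np

def winding_number(v, w, n=2001):
    """Count how many times the Bloch vector h(k) = (v + w*cos k, w*sin k)
    of an SSH-type two-band chain encircles the origin over the Brillouin zone."""
    k = np.linspace(-np.pi, np.pi, n)
    # unwrap the polar angle of h(k) so the total accumulated angle is continuous
    theta = np.unwrap(np.arctan2(w * np.sin(k), v + w * np.cos(k)))
    return round((theta[-1] - theta[0]) / (2 * np.pi))

print(winding_number(0.5, 1.0))  # weaker intracell hopping: winding 1 (topological)
print(winding_number(1.5, 1.0))  # stronger intracell hopping: winding 0 (trivial)
```

The invariant is quantized: it jumps from 1 to 0 only when the circle traced by h(k) passes through the origin (v = w), where the bulk gap closes.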
Unlike the traditional paradigms of winding, this linking number is associated with the composite loop structure of quantum-mechanical band crossings of the Hamiltonian. Such linked node loops offer the possibility of a new bridge between physics and knot theory. It has further been proposed that these links are governed by emergent non-Abelian node loop charges[22] and that the linking number determines the θ angle of the axion Lagrangian in certain node loop phases[25, 32, 33].
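The linking number of two explicit closed curves can be evaluated numerically via the Gauss linking integral, Lk = (1/4π) ∮∮ (r₁ − r₂) · (dr₁ × dr₂) / |r₁ − r₂|³. The sketch below is illustrative only: it uses a Hopf link of two circles rather than the node loops of the material, and all names are hypothetical.

```python
import numpy as np

def gauss_linking_number(curve_a, curve_b, n=400):
    """Numerically evaluate the Gauss linking integral
    Lk = (1/(4*pi)) * sum over pairs of (r1 - r2) . (dr1 x dr2) / |r1 - r2|**3
    for two closed curves sampled at n parameter values in [0, 2*pi)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r1, r2 = curve_a(t), curve_b(t)                      # (n, 3) sample points
    # central differences of the periodic samples give dr = r'(t) * dt
    dr1 = (np.roll(r1, -1, axis=0) - np.roll(r1, 1, axis=0)) / 2.0
    dr2 = (np.roll(r2, -1, axis=0) - np.roll(r2, 1, axis=0)) / 2.0
    sep = r1[:, None, :] - r2[None, :, :]                # all pairwise separations
    integrand = np.einsum('ijk,ijk->ij', sep,
                          np.cross(dr1[:, None, :], dr2[None, :, :]))
    return np.sum(integrand / np.linalg.norm(sep, axis=-1) ** 3) / (4.0 * np.pi)

# Hopf link: two unit circles in orthogonal planes, each threading the other once
loop_a = lambda t: np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=-1)
loop_b = lambda t: np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=-1)

print(round(abs(gauss_linking_number(loop_a, loop_b))))  # prints 1
```

Because the integrand is smooth and periodic for non-intersecting loops, this discretization converges rapidly; the linking number (2, 2, 2) reported in the abstract would correspond to |Lk| = 2 for each pair of loops under the same integral.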
DOI: 10.1038/s41586-022-04512-8
arXiv: 2112.14722
Observation of a linked loop quantum state

Ilya Belopolski (1,2), Guoqing Chang (3), Tyler A. Cochran (1), Zi-Jia Cheng (1), Xian P. Yang (1), Cole Hugelmeyer (4), Kaustuv Manna (5,6), Jia-Xin Yin (1), Guangming Cheng (7), Daniel Multer (1), Maksim Litskevich (1), Nana Shumiya (1), Songtian S. Zhang (1), Chandra Shekhar (5), Niels B. M. Schröter (8), Alla Chikina (8), Craig Polley (9), Balasubramanian Thiagarajan (9), Mats Leandersson (9), Johan Adell (9), Shin-Ming Huang (10), Nan Yao (7), Vladimir N. Strocov (8), Claudia Felser (5), M. Zahid Hasan (1,7,11)

(1) Laboratory for Topological Quantum Matter and Spectroscopy (B7), Department of Physics, Princeton University, Princeton, New Jersey 08544, USA
(2) RIKEN Center for Emergent Matter Science (CEMS), Wako, Saitama 351-0198, Japan
(3) Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, 637371, Singapore
(4) Department of Mathematics, Princeton University, Princeton, New Jersey 08544, USA
(5) Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Straße 40, 01187 Dresden, Germany
(6) Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India
(7) Princeton Institute for Science and Technology of Materials, Princeton University, Princeton, New Jersey 08544, USA
(8) Swiss Light Source, Paul Scherrer Institut, CH-5232 Villigen, Switzerland
(9) MAX IV Laboratory, Lund University, P.O. Box 118, 221 00 Lund, Sweden
(10) Department of Physics, National Sun Yat-Sen University, Kaohsiung 804, Taiwan
(11) Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA

(Dated: May 25, 2022)
* These authors contributed equally to this work.

Quantum phases can be classified by topological invariants, which take on discrete values capturing global information about the quantum state [1-19].
Over the past decades, these invariants have come to play a central role in describing matter, providing the foundation for understanding superfluids [6, 7], magnets [8, 9], the quantum Hall effect [4, 10], topological insulators [11-15], Weyl semimetals [16-19] and other phenomena. Here we report a remarkable linking number (knot theory) invariant associated with loops of electronic band crossings in a mirror-symmetric ferromagnet [20-26]. Using state-of-the-art spectroscopic methods, we directly observe three intertwined degeneracy loops in the material's bulk Brillouin zone three-torus, T 3 . We find that each loop links each other loop twice. Through systematic spectroscopic investigation of this linked loop quantum state, we explicitly draw its link diagram and conclude, in analogy with knot theory, that it exhibits linking number (2, 2, 2), providing a direct determination of the invariant structure from the experimental data. On the surface of our samples, we further predict and observe Seifert boundary states protected by the bulk linked loops, suggestive of a remarkable Seifert bulk-boundary correspondence. Our observation of a quantum loop link motivates the application of knot theory to the exploration of exotic properties of quantum matter.

Quantum topology is powerful in understanding condensed matter systems that exhibit a winding [1-19]. Often, this winding occurs in real space. For example, in a magnetic material, the local magnetization may exhibit a rotating pattern centered around a point in real space, forming a magnetic vortex encoding an integer winding number [3, 8]. Alternatively, the winding may occur in momentum space. For example, in a one-dimensional topological insulator, the quantum-mechanical wavefunctions wind as the momentum scans through the Brillouin zone [4, 5, 10-14].
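The real-space winding described above can be made concrete with a small numerical sketch (illustrative only, not part of the original analysis): for a planar vortex texture m(x, y) ∝ (−y, x), the angle of m accumulated along a closed path around the core, divided by 2π, gives an integer winding number.

```python
import numpy as np

def winding_number(field, n=1000, radius=1.0):
    """Angle of a planar vector field accumulated along a circle around the
    origin, divided by 2*pi; an integer for a topological defect at the core."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = radius * np.cos(t), radius * np.sin(t)
    mx, my = field(x, y)
    theta = np.unwrap(np.arctan2(my, mx))
    # Close the loop: add one more step of the (uniform) angular increment.
    total = theta[-1] - theta[0] + (theta[1] - theta[0])
    return int(round(total / (2.0 * np.pi)))

# A magnetic vortex: the local moments rotate once around the core.
vortex = lambda x, y: (-y, x)
print(winding_number(vortex))  # -> 1
```

The same routine returns −1 for the antivortex texture (y, x) and 0 for a uniform field, matching the usual classification of planar defects.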
These two broad paradigms, order parameters such as magnetization which wind in real space and quantum wavefunctions which wind in momentum space, capture a vast landscape of topological phases of matter, spanning decades of research by myriad communities of physicists. Real-space order parameter winding further encompasses disclinations in liquid crystals; vortices in superconductors and superfluid 4 He; and magnetic skyrmions, whose invariants are proposed as the basis for next-generation computing memory and logic [2, 3, 6-9]. On the other hand, momentum-space wavefunction winding is associated with emergent Dirac fermions in two- and three-dimensional topological insulators; Weyl fermions in topological semimetals; and the quantum Hall effect, which sets the prevailing von Klitzing standard of electrical resistance [10-19]. Despite their importance in modern physics, there is no indication that these two paradigms are exhaustive. Novel paradigms for topology promise to deepen our fundamental understanding of nature, as well as enable new quantum technologies.

Recently, there has been considerable interest in node loops, an electronic structure where two bands cross along a closed curve in momentum space [19-21, 27-31]. Away from the crossing curve, the two bands disperse linearly, so that the node loop consists of a cone dispersion persisting along a loop. Within the paradigm of momentum-space wavefunction winding, node loops are topological, with a quantized Berry phase invariant [11-14, 19, 30, 31]. However, in contrast to other electronic structures studied to date [10-19], node loops can link each other, encoding a linking number invariant (Fig. 1a, Extended Data Fig. 1) [22-26].
Unlike the traditional paradigms of winding, this linking number is associated with the composite loop structure of quantum-mechanical band crossings of the Hamiltonian. Such linked node loops offer the possibility of a new bridge between physics and knot theory. It has further been proposed that these links are governed by emergent non-Abelian node loop charges [22] and that the linking number determines the θ angle of the axion Lagrangian in certain node loop phases [25, 32, 33]. Since the three-dimensional condensed matter Brillouin zone is a three-torus T 3 , linked node loops also offer the rare possibility of observing links in a space other than ordinary infinite space R 3 . Moreover, the Seifert surface of the bulk link is associated with topological boundary states, opening the possibility of a unique Seifert bulk-boundary correspondence in quantum matter [34-38].

Ferromagnets with crystalline mirror symmetry naturally give rise to node loops. In this scenario, the ferromagnetic exchange interaction produces spin-split electronic bands which are generically singly-degenerate throughout momentum space, while mirror symmetry protects two-fold band degeneracies along closed curves confined to the momentum-space mirror planes [28]. Such node loops are called Weyl loops, by analogy with the two-fold degeneracy of a Weyl point [16-18, 29-31]. Weyl loops are extremely effective at concentrating Berry curvature, giving rise to giant anomalous Hall and Nernst effects, up to room temperature and promising for technological applications [20, 39-43]. In crystallographic space groups with multiple perpendicular mirror planes, different Weyl loops living in different mirror planes can naturally link each other [21, 26].
The ferromagnet Co 2 MnGa exhibits a crystal structure with multiple perpendicular mirror planes and was recently observed to host electronic Weyl loops [20, 21], bringing together the key ingredients for node loop links. Co 2 MnGa crystallizes in the full Heusler structure, with face-centered cubic Bravais lattice; space group Fm-3m (No. 225); octahedral point group O h ; and conventional unit cell lattice constant c = 5.77 Å (Fig. 1b, Extended Data Fig. 2). We observe that our Co 2 MnGa samples are ferromagnetic, with Curie temperature T C = 690 K, consistent with earlier reports [44, 45]. The point group includes mirror planes normal to x̂, ŷ and ẑ (defined by the conventional unit cell lattice vectors, representative mirror plane shown in orange in Extended Data Fig. 2a), as well as three C 4 rotation symmetries relating any one of these mirror planes to the other two. We first perform a characterization by atomic-level energy-dispersive X-ray spectroscopy (EDS), providing direct structural evidence that our Co 2 MnGa samples are crystallographically well-ordered, show the expected lattice constant and exhibit these mirror and rotation symmetries (Fig. 1b). The real-space mirror planes give rise to momentum-space mirror planes, labelled M 1 (normal to ẑ); M 2 (ŷ); and M 3 (x̂, Fig. 1c). These mirror planes contain the time-reversal invariant momenta X 1 , X 2 and X 3 , sitting at the centers of the square faces of the bulk Brillouin zone.

Motivated by the observation of mirror-symmetry-protected magnetic Weyl loops in Co 2 MnGa [20, 21], we explore the electronic structure of our samples on M 1 , M 2 and M 3 . We perform ab initio calculations of Co 2 MnGa in the ferromagnetic state, focusing on these three mutually-perpendicular mirror planes. We find that each mirror plane hosts a Weyl loop, and that the three Weyl loops link one another (Fig. 1c, Extended Data Fig. 9).
To experimentally investigate this ab initio prediction, we carry out angle-resolved photoemission spectroscopy (ARPES) using soft X-ray photons, optimized for exploring bulk electronic states [46, 47]. Without loss of generality, we consider the crystal cleaving plane in our photoemission experiments to be parallel to M 1 . We first acquire a Fermi surface at a fixed incident photon energy hν = 544 eV, chosen to fix the k z momentum to this 'in-plane' mirror plane (M 1 , Fig. 1d). We observe a diamond-shaped contour centered on X 1 , which traces out a momentum-space trajectory encircling the square top face of the bulk Brillouin zone. We also observe a small circular feature at the corners of the Fermi surface, which arises from an unrelated band at Γ, irrelevant for what follows. We next perform a photon-energy dependence on the same sample, measuring from hν = 500 to 800 eV, which allows us to access the electronic structure on the 'out-of-plane' mirror M 2 (Fig. 1e). We again observe a diamond-shaped loop contour, now centered on X 2 and encircling the square side face of the bulk Brillouin zone. We then rotate the sample by 90° and repeat the same photon-energy dependence to capture the electronic structure on the other 'out-of-plane' mirror M 3 . We observe again a similar diamond-shaped loop contour centered on X 3 (Fig. 1f). These systematic observations suggest a family of symmetry-related diamond-shaped loop contours, consistent with the octahedral point group O h . Each of the three loops is related to the others by rotation symmetry and each lives in one of the three mirror planes.

To further understand the loop electronic structures, we examine energy-momentum photoemission spectra slicing through the M 1 loop (Fig. 2a). We observe two bands which disperse toward each other and meet near the Fermi level, E F (E B = 0), suggesting a cone dispersion.
The presence of a cone dispersion in both slices further shows that the cone persists as we move in momentum space, following the M 1 loop Fermi surface. Since Co 2 MnGa is ferromagnetic, we expect generically singly-degenerate bands throughout the Brillouin zone [44, 45]. This suggests a cone dispersion consisting of singly-degenerate branches which exhibit a double degeneracy at the touching points, indicating a Weyl loop electronic structure. To better understand the cone dispersions, we perform ab initio calculations of the electronic structure on energy-momentum slices corresponding to these ARPES spectra. The calculation shows a Weyl cone with characteristic two-fold degenerate crossing and linear dispersion away from the crossing (Fig. 2b). Assembled together, this series of Weyl cones forms a Weyl loop. The ARPES and ab initio calculations are in good agreement, further suggesting that we have observed a magnetic Weyl loop on M 1 .

To characterize this Weyl loop using ARPES, we systematically track all cone crossings observed along the full M 1 loop trajectory (Fig. 2c). We then fit the crossings to a two-band effective k · p Hamiltonian for a Weyl loop,

H = Σ_k Σ_{a,b ∈ {±}} c†_{ka} h_{ab}(k) c_{kb},    h(k) = f(k) σ_z + v_F q_z σ_x + g(k) 1    (1)

Here the c†_{ka} are fermionic creation operators, k is the crystal momentum, σ_z and σ_x are Pauli matrices, 1 is the 2 × 2 identity and q_z ≡ k_z − 2π/c is the ẑ component of the momentum measured relative to M 1 , where c is the conventional unit cell lattice constant. This Hamiltonian exhibits a Weyl loop on q_z = 0 with trajectory given by f(k) = 0, formed from two bands with opposite mirror eigenvalues. From our ARPES spectra, we experimentally extract the full Weyl loop trajectory by fitting to a low-order expansion around X 1 , consistent with the symmetries of the system,

f(k) = γ [1 + α (k_x^2 + k_y^2) + β k_x^2 k_y^2]    (2)

Here α and β fix the Weyl loop trajectory and the scaling factor γ sets an energy scale.
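As an illustrative numerical sketch (not the authors' analysis code: γ and v_F are set to arbitrary illustrative values here, while α and β are the fitted values quoted in the text), the two-band k · p model of Eqs. 1 and 2 can be diagonalized directly; the Weyl loop appears as the zero set of f(k) at q_z = 0:

```python
import numpy as np

# Fitted Weyl loop trajectory parameters (Eq. 2), from the ARPES analysis:
alpha, beta = -1.23, -31.5   # units of A^2 and A^4 (momenta in A^-1)
gamma = 1.0                  # illustrative energy scale; only f = 0 matters for the loop
v_F = 1.0                    # illustrative Fermi velocity

def f(kx, ky):
    """Trajectory function of Eq. 2; f = 0 defines the Weyl loop (k relative to X1)."""
    return gamma * (1.0 + alpha * (kx**2 + ky**2) + beta * kx**2 * ky**2)

def bands(kx, ky, qz, g=0.0):
    """Eigenvalues of h(k) = f sigma_z + v_F qz sigma_x + g * identity (Eq. 1)."""
    gap = np.hypot(f(kx, ky), v_F * qz)
    return g - gap, g + gap

# Along k_x (k_y = 0), the loop radius solves 1 + alpha k^2 = 0:
k_loop = np.sqrt(-1.0 / alpha)       # ~0.90 A^-1
lo, hi = bands(k_loop, 0.0, 0.0)
print(k_loop, hi - lo)               # the two branches touch (zero gap) on the loop
```

With these parameters the loop radius along k_x is roughly 0.90 Å⁻¹; away from the loop, or at finite q_z, the two branches split linearly, reproducing the cone dispersions seen in the spectra.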
The train of crossing points observed in ARPES is well-captured by α = −1.23 ± 0.03 Å^2 and β = −31.5 ± 4.1 Å^4 (Fig. 2c). We also find that our ARPES-extracted Weyl loop trajectory agrees well with the trajectory observed in ab initio calculations (Extended Data Fig. 3). The energy dispersion of the Weyl loop is set by g(k), well-described by g(k) = δ + η cos(4θ), where δ = −75 ± 17 meV, η = 46 ± 17 meV, and θ is the ordinary polar angle of k, tan θ ≡ k_y/k_x.

We now examine the measured M 1 , M 2 and M 3 Weyl loop Fermi surfaces. We first consider the M 1 and M 2 Weyl loops in an extended zone scheme. We study two adjacent Brillouin zones (Fig. 3a, inset) and zoom in on the momentum-space region around X 1 . By plotting the M 1 and M 2 Weyl loop Fermi surfaces simultaneously in this region of three-dimensional momentum space, we observe that these two loops appear to link each other twice (Fig. 3a). We next consider the M 2 and M 3 Weyl loops and we shift our momentum-space field of view to a region around X 2 . We plot the M 2 and M 3 Weyl loop Fermi surfaces and we again directly observe from our photoemission spectra that the two loops link each other twice (Fig. 3b). Repeating the analogous procedure for X 3 , we observe that the M 3 and M 1 Weyl loops also link twice (Fig. 3c). To estimate the stability of these links, we can measure how far apart one would need to slide two Weyl loops in order to unlink them (Extended Data Figs. 4, 5). From the loop Fermi surfaces, we find that the typical 'depth' of the link in momentum-space is d_avg = 0.58 ± 0.08 Å^-1, of the same order of magnitude as the radius |k|_avg of the Weyl loop. This large depth suggests that the system lives well within a linked electronic phase. Our three-dimensional momentum-space analysis of the photoemission spectra suggests that each of the M 1 , M 2 and M 3 Weyl loops links each other Weyl loop twice, forming a robust linked structure.

To further explore this link, we examine all three Weyl loops simultaneously using the experimentally-extracted loop trajectory, Eq. 2. In an extended zone scheme, we plot the M 1 , M 2 and M 3 Weyl loops around six nearby X points, so that two redundant copies of each Weyl loop are included (Fig. 4b). We find that the M 1 Weyl loop links both the M 2 and M 3 Weyl loops; the M 2 Weyl loop links both the M 3 and M 1 loops; and similarly for the M 3 loop. This suggests that the three Weyl loops together form a single composite linked structure. By plotting additional redundant copies of the loops in higher Brillouin zones, we can form a Weyl loop network extending outward to infinity. To more deeply explore this linked structure, we examine energy-momentum photoemission slices tangential to all three loops near their extrema (Fig. 4a). All slices exhibit a cone dispersion, consistent with the Weyl loop electronic structure. Moreover, we find quantitative agreement between the Weyl loop extrema expected from Eq. 2 and the locations of the photoemission cone dispersions, for all three loops. This agreement again suggests the observation of a composite structure of three interwoven Weyl loops.

To better visualize the complete link structure, we construct a link diagram for our Weyl loops. In such a link diagram, we flatten the link from three to two dimensions while preserving the link information using an over/under notation (illustrated for the example of a Hopf link, Fig. 4c). Because the Weyl loop link lives in the periodic momentum space of the crystal, we flatten the link into the surface Brillouin zone. Moreover, because our analysis shows that all three Weyl loops are symmetry-related, we choose the (111) surface Brillouin zone (Extended Data Fig. 2), which treats X 1 , X 2 and X 3 equivalently (Fig. 4d). The resulting link diagram shows three loops straddling the edges of the surface Brillouin zone (Fig. 4e). We observe that the link wraps around T 3 in all three momentum-space directions. This behavior suggests that the link is geometrically essential, in the sense that it cannot be smoothly perturbed to live entirely within a local region of the Brillouin zone. The link diagram further shows that each loop is linked with each other loop exactly twice. This gives 2 for the geometric linking number, defined as the minimal number of crossing changes between link components needed to separate the components. The geometric linking number of the composite Weyl loop structure can then be written as (2, 2, 2), where the first entry in the list corresponds to the linking number between M 1 and M 2 , the second entry between M 2 and M 3 , and the third entry between M 3 and M 1 . By analogy with topological insulators and Weyl point semimetals, this Weyl loop link is expected to be stable under arbitrary, small, symmetry-preserving perturbations of the electronic structure.

COMPETING INTERESTS
The authors declare no competing interests.

METHODS
Single crystal growth: Co 2 MnGa single crystals were grown using the Bridgman-Stockbarger method. A polycrystalline ingot was first prepared using an induction melt technique, with a stoichiometric mixture of Co, Mn and Ga metal pieces of 99.99% purity. Then the powdered material was poured into an alumina crucible and sealed in a tantalum tube. Growth temperatures were controlled using a thermocouple attached to the bottom of the crucible. During the heating cycle, the material was melted at temperatures above 1200 °C and then slowly cooled below 900 °C.

Angle-resolved photoemission spectroscopy: Soft X-ray ARPES measurements were carried out at the ADRESS beamline of the Swiss Light Source in Villigen, Switzerland under vacuum better than 5 × 10^-11 Torr and a temperature of 16 K [46,47,49]. Rod-shaped single crystals of Co 2 MnGa oriented along the conventional unit cell ẑ direction were cleaved in situ at base temperature. The constant-energy cuts were symmetrized about M x and M xy (Fig. 1d), M x and M xz (Fig. 1e) and M y and M yz (Fig. 1f). The high-symmetry energy-momentum cuts were similarly symmetrized about M x , M y or M z , as appropriate and consistent with the nominal symmetries of the crystal (Fig. 4a).

Scanning transmission electron microscopy: Thin lamellae for microstructure characterization were prepared from bulk single crystals by focused ion beam cutting using a ThermoFisher Helios NanoLab G3 UC DualBeam system (FIB/SEM). Atomic resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) imaging and atomic-level energy-dispersive X-ray spectroscopy (EDS) mapping were performed on a double Cs-corrected ThermoFisher Titan Cubed Themis 300 scanning/transmission electron microscope (S/TEM) equipped with an X-FEG source operated at 300 kV with a Super-X EDS system.

Extended Data Fig. 8: Unsymmetrized energy-momentum cuts. Photoemission spectra displayed in Fig. 4a, without symmetrization. We observe that the Weyl loop Fermi surface pockets form a linked structure.

Having systematically characterized the link structure in the bulk of Co 2 MnGa, we next consider its topological surface states. Unlinked loop nodes host conventional drumhead surface states, which fill a simply connected region of momentum space in the surface Brillouin zone. By contrast, linked loops exhibit an alternating pattern of topologically distinct regions where surface states are either present or suppressed, and which are pinned together at generic points in momentum space. This topological structure is captured by the Seifert surface, defined as a three-dimensional surface which has the link as its boundary [34]. For a condensed matter system, we consider a Seifert surface defined in (k x , k y , k z ) and bounded by the linked loop nodes, with energy axis collapsed. For the minimal case of a Hopf link, the Seifert surface exhibits a branched structure which 'wraps' around the link (Fig. 5a, left). A two-dimensional projection of the Seifert surface then produces alternating filled and empty regions, which meet at characteristic touching points (Fig. 5a, right). In the case of the Weyl loop link which we observe in Co 2 MnGa, the Seifert projection on the (111) hexagonal surface Brillouin zone then predicts several alternating regions with and without topological boundary states (blue and white regions, Fig. 5c), exhibiting touching points along Γ̄-K̄. Since the Seifert surface encodes the linking number [34], the topological boundary states are associated with a Seifert bulk-boundary correspondence. In this correspondence, the linking number of the Weyl loops in the bulk is encoded in a Seifert surface, whose projection gives the topological boundary states. A measurement of the bulk link determines the Seifert boundary states, while a measurement of the Seifert boundary states allows a reconstruction of the bulk linking number. To explore these possible surface states, we examine the (111) surface of our Co 2 MnGa samples in ab initio calculation and surface-sensitive vacuum ultraviolet (VUV) ARPES. On an energy-momentum cut through the touching point we observe in calculation a pair of surface modes pinned together at the Weyl loops (Fig. 5d).
Moreover, our photoemission spectra are consistent with our ab initio prediction, suggesting the observation of Seifert boundary states approaching the Weyl loop linking point (Fig. 5e). On iso-energy contours of the electronic structure, we expect to observe arc-like slices of the Seifert states, stretching across the filled regions and connecting the linked Weyl loops. Examining the Fermi surface obtained in calculation, we observe a sharp arc of surface states connecting the linked Weyl loops, consistent with the Seifert projection (Fig. 5f, left). At the same time, the suppressed region exhibits no topological surface states in calculation. Our Fermi surface obtained by VUV-ARPES matches the ab initio prediction well (Fig. 5f, right). We observe distinct arcs of states connecting the linked Weyl loops across the topological region, corresponding to the topological surface states observed on the energy-momentum cuts (Figs. 5d, e) and suggestive of Seifert states at the Fermi level in Co 2 MnGa. Our ab initio calculations and photoemission spectra suggest the observation of Seifert boundary states.

Our photoemission spectra, ab initio calculations and theoretical analysis suggest the observation of a loop node link in a quantum magnet. On the sample surface, we further observe Seifert boundary states protected by the bulk link, indicating a Seifert bulk-boundary correspondence. These results establish a new bridge between physics and knot theory, motivating further exploration of links and knots in electronic structures. Moreover, the linked loop state in Co 2 MnGa, as well as in other materials, may give rise to exotic response quantized to the linking number, such as a link-quantized topological magneto-electric effect [25, 32, 33, 48].
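The linking numbers invoked throughout are the standard knot-theoretic ones. As a self-contained sketch (the circle parametrizations below are illustrative toy curves, not the measured Weyl loops), a linking number can be evaluated numerically from the Gauss linking integral, Lk = (1/4π) ∮∮ (r1 − r2) · (dr1 × dr2) / |r1 − r2|^3:

```python
import numpy as np

def gauss_linking_number(curve1, curve2, n=400):
    """Discretized Gauss linking integral for two closed curves, each given as a
    function of a parameter t in [0, 2*pi) returning points of shape (n, 3)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r1, r2 = curve1(t), curve2(t)
    dr1 = np.roll(r1, -1, axis=0) - r1            # chord (tangent) vectors
    dr2 = np.roll(r2, -1, axis=0) - r2
    d = r1[:, None, :] - r2[None, :, :]           # pairwise separations
    cross = np.cross(dr1[:, None, :], dr2[None, :, :])
    integrand = np.einsum('ijk,ijk->ij', d, cross) / np.linalg.norm(d, axis=2)**3
    return integrand.sum() / (4.0 * np.pi)

# Two circles forming a Hopf link: each passes through the other's center.
c1 = lambda t: np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
c2 = lambda t: np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print(gauss_linking_number(c1, c2))  # magnitude ~1 for a Hopf link
```

For unlinked curves the integral vanishes; for the doubly-linked loop pairs discussed above, an analogous evaluation over suitable parametrizations would return magnitude 2, consistent with the reported (2, 2, 2) structure.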
Since high-symmetry magnetic and correlated materials are abundant in nature, these ideas open the way to understanding the exotic behavior of a wide class of quantum magnets and superconductors, as well as their photonic analogs.

† Electronic address: [email protected]
‡ Electronic address: [email protected]

We thank Jessica McChesney and Fanny Rodolakis for experimental support during preliminary ARPES measurements carried out at BL29 of the Advanced Photon Source (APS) in Illinois, USA. I.B. acknowledges discussions with Boris Belopolski on Savitzky-Golay analysis. G.C. acknowledges the support of the National Research Foundation, Singapore under its NRF Fellowship Award (NRF-NRFF13-2021-0010) and the Nanyang Assistant Professorship grant from Nanyang Technological University. T.A.C. acknowledges support by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1656466. A.C. acknowledges funding from the Swiss National Science Foundation under Grant No. 200021-165529. The authors acknowledge synchrotron radiation beamtime at the ADRESS beamline of the Swiss Light Source of the Paul Scherrer Institut in Villigen, Switzerland under Proposals 20170898, 20190740 and 20191674. The authors further acknowledge use of Princeton's Imaging and Analysis Center (IAC), which is partially supported by the Princeton Center for Complex Materials (PCCM), a National Science Foundation (NSF) Materials Research Science and Engineering Center (MRSEC; DMR-2011750). This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. The authors acknowledge beamtime at BL25SU of SPring-8 under Proposal 2017A1669 and at BL29 of the APS under Proposals 54992 and 60811. K.M. and C.F. acknowledge financial support from the European Research Council (ERC) Advanced Grant No. 742068 "TOP-MAT". C.F.
acknowledges the DFG through SFB 1143 (project ID. 247310070) and the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC2147, project ID. 39085490). M.Z.H. acknowledges visiting scientist support at Berkeley Lab (LBNL) during the early phases of this work. Work at Princeton University was supported by the Gordon and Betty Moore Foundation (Grants No. GBMF4547 and No. GBMF9461; M. Z. H.). The ARPES measurements were supported in part by MAX IV Laboratory, a Swedish national user facility, which is supported by the Swedish Research Council under contract 2018.

I.B., G. Chang, T.A.C. and M.Z.H. initiated the project. I.B., T.A.C., Z.-J. C. and M.Z.H. acquired and analyzed ARPES spectra with help from X.P.Y., D.M., J.-X.Y., M.L., N.S. and S.S.Z. ARPES measurements were supported by N.B.M.S., A.C., C.P., B.T., M.L., J.A. and V.N.S. G. Chang performed the first-principles calculations. I.B. performed the k·p model analysis with help from G. Chang and S.-M.H. I.B. performed the linking number analysis with help from C.H. G. Cheng and N.Y. performed the scanning transmission electron microscopy measurements. K.M., C.S. and C.F. synthesized and characterized the single crystals. I.B. wrote the manuscript with contributions from all authors.

A background was removed from the photoemission spectra by a fixed intensity cutoff (raw, unsymmetrized data in Extended Data Figs. 6, 7, 8). For the Fermi surfaces acquired at hν = 544 eV, the nominal energy resolution was δE = 75 meV; for the photon-energy dependences, the nominal energy resolution varied from δE = 75 meV at hν = 500 eV to δE = 125 meV at hν = 800 eV. The angular resolution was better than 0.2° in all cases. The Fermi surfaces were binned in an energy window of ±38 meV (Fig. 1d) and ±25 meV (Fig. 1e, f) around E F .
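The symmetrization and fixed-cutoff background removal described in the methods can be sketched in a few lines (a toy illustration on a synthetic intensity map; the array names and the Gaussian test peak are hypothetical, not the actual beamline processing):

```python
import numpy as np

def symmetrize(spectrum):
    """Average a 2D intensity map I(k, E) with its mirror image about k -> -k.
    Axis 0 is assumed to be a momentum grid centered on the mirror plane."""
    return 0.5 * (spectrum + spectrum[::-1, :])

def remove_background(spectrum, cutoff):
    """Subtract a fixed intensity cutoff, clipping the result at zero."""
    return np.where(spectrum > cutoff, spectrum - cutoff, 0.0)

# Toy map: a single peak at k = +0.4 becomes a symmetric pair after symmetrization.
k = np.linspace(-1.0, 1.0, 201)               # symmetric momentum grid
E = np.linspace(-0.5, 0.0, 51)                # binding energy grid
I = (np.exp(-((k[:, None] - 0.4) ** 2) / 0.01)
     * np.exp(-((E[None, :] + 0.2) ** 2) / 0.002))
I_sym = symmetrize(I)
I_clean = remove_background(I_sym, 0.05)
```

Because the grid is symmetric about k = 0, reversing axis 0 implements k → −k exactly, so `I_sym` is mirror-symmetric by construction and the cutoff step only suppresses the low-intensity background.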
Vacuum ultraviolet ARPES measurements were carried out at Beamline 5-2 of the Stanford Synchrotron Radiation Lightsource in Menlo Park, CA, USA at δE = 15 meV and temperature 20 K.

Ab initio calculations: The electronic structure of Co 2 MnGa in the ferromagnetic phase was calculated within the density functional theory (DFT) framework using the projector augmented wave method as implemented in the VASP package [50, 51]. The generalized gradient approximation (GGA) [52] and a Γ-centered k-point 12 × 12 × 12 mesh were used. Ga s, p orbitals and Mn, Co d orbitals were used to generate a real space tight-binding model, from which Wannier functions were determined. The Fermi level in DFT was shifted to match the ARPES.

FIG. 1: Signatures of linked node loops in Co 2 MnGa. a, Weyl loops in the electronic structure of Co 2 MnGa, predicted by density functional theory (DFT). Three distinct Weyl loops are confined to the three mirror planes M 1 , M 2 and M 3 , in such a way that the loops link one another (additional copies of the loops in higher Brillouin zones not shown). b, Element-resolved crystal structure of Co 2 MnGa along the [001] direction, acquired by atomic-level energy-dispersive X-ray spectroscopy (EDS). Atomic columns consist either entirely of cobalt (green) or alternating manganese (red) and gallium (blue). c, Bulk Brillouin zone (black truncated octahedron) of Co 2 MnGa with three mirror planes indicated, M 1 (magenta, constant k z ), M 2 (red, constant k y ) and M 3 (gold, constant k x ). Each mirror plane contains square faces of the Brillouin zone. The high-symmetry momentum-space points at the center of each square are marked X 1 , X 2 , X 3 . d, Fermi surface acquired by angle-resolved photoemission spectroscopy (ARPES) at incident photon energy 544 eV, corresponding to M 1 . e, Out-of-plane Fermi surface acquired on the same Co 2 MnGa sample by an ARPES photon energy dependence from 500 eV to 800 eV in steps of 2 eV, corresponding to M 2 .
f, Analogous out-of-plane Fermi surface corresponding to M3, again on the same sample.

FIG. 2: Weyl loop trajectory in Co2MnGa. a, Energy-momentum photoemission slices through the loop Fermi surface (slice locations marked by the dotted lines in (d) and Fig. 1d). b, Energy-momentum slices through the Weyl loop from DFT, showing a Weyl loop cone (slice locations marked in Extended Data Fig. 3c). c, Cone locations (magenta squares) systematically extracted from cone dispersions observed in photoemission spectra on M1. Experimental loop trajectory extracted by fitting to the cone locations (cyan, see main text). The binding energy axis is collapsed. d, Constant-energy photoemission slice with analytical model of the Weyl loop (black lines). This slice intersects the Weyl loop at a discrete set of points (cyan dots). e, Dispersion of an effective k·p Hamiltonian for the Weyl loop, capturing the experimental loop trajectory.

FIG. 3: Linked Weyl loops in Co2MnGa. a, M1 and M2 loop Fermi surfaces from adjacent bulk Brillouin zones, plotted in an extended zone scheme, exhibiting a link structure. Inset: M1 and M2 plotted across multiple Brillouin zones. b, M2 and M3 loop Fermi surfaces from adjacent bulk Brillouin zones. c, M3 and M1 loop Fermi surfaces from adjacent bulk Brillouin zones.

FIG. 4: Linking number (2, 2, 2) in topological quantum matter. a, Energy-momentum photoemission slices tangential to the M1, M2 and M3 Weyl loops at their extrema. b, Weyl loops from adjacent bulk Brillouin zones, based on the analytical model extracted in Eq. 2 and Fig. 2, exhibiting links. Weyl cone positions (blue dots) extracted from the slices in Fig. 4a (dotted blue lines), consistent with the analytical model (short blue line segments indicate the error). c, Link diagrams help visualize a three-dimensional link structure by flattening it to two dimensions while retaining the link information, illustrated for the example of a Hopf link.
d, In a crystal, it is natural to draw link diagrams in the surface Brillouin zone, such as the (001) surface Brillouin zone (hexagon, Extended Data Fig. 2). e, Link diagram for the Co2MnGa Weyl loop link. There are three distinct Weyl loops and each Weyl loop links each other Weyl loop exactly twice, giving linking number (2, 2, 2). The arrows indicate out-of-plane wrapping: as one follows the loop in the direction of the arrow, the loop wraps out of the page, exiting the Brillouin zone from the front and re-entering from the back, at the same time reconnecting at the opposite edge of the hexagon.

FIG. 5: Seifert bulk-boundary correspondence. a, A Seifert surface is defined as a three-dimensional surface bounded by a link, shown for the example of a Hopf link. Its two-dimensional projection produces alternating filled and empty regions pinned together at characteristic touching points. b, In a condensed matter system, the Seifert surface is taken as a surface bounded by the linked loop nodes in three-dimensional momentum space (k_x, k_y, k_z), shown for the case of the link observed in Co2MnGa. c, The projection of the Seifert surface into the surface Brillouin zone is associated with topological boundary modes (blue regions) which touch at points in momentum space. Energy axis collapsed for clarity. d, Ab initio calculation of the surface states through the touching point, exhibiting pairs of boundary modes pinned together at the Weyl loops. e, Surface-sensitive vacuum ultraviolet (VUV) ARPES energy-momentum cut through the touching point, exhibiting signatures of the pinned Seifert boundary modes, consistent with ab initio calculations. Photon energy hν = 63 eV. f, Fermi surface in ab initio calculation (left) and VUV-ARPES (right) exhibiting Seifert boundary modes that stretch across the topological regions, connecting different Weyl loops, consistent with the Seifert projection.

Extended Data Fig. 1: Topological invariants in physics.
a, An example of an order parameter winding in real space: a magnetic vortex. In this case, the order parameter is the local magnetization m(x), confined to a magnetic easy plane in real space (x, y). It may happen that m(x) winds around a point in real space, forming a magnetic vortex characterized by a winding number topological invariant, in this example given by w = 1. b, An example of a quantum wavefunction winding in momentum space: the one-dimensional topological insulator (Su-Schrieffer-Heeger model). This phase is described by Bloch Hamiltonian h(k) = d(k) · σ, where k is the one-dimensional crystal momentum, σ refers to the Pauli matrices and d(k) is a two-component object confined to the (d_x, d_y) plane. The normalized quantity d̂(k) ≡ d(k)/|d(k)| (orange arrow) moves around the unit circle (dotted blue) as k varies. The topological invariant is related to how many times d̂(k) winds around the origin as k scans through the one-dimensional Brillouin zone. c, Node loops linking in momentum space: a three-dimensional electronic structure may exhibit multiple node loops (cyan and purple), characterized by k_n(θ), where n indexes the loops and θ parametrizes the loop trajectory in momentum space. The loops may link one another, encoding a linking number topological invariant. This example shows a Hopf link.

Extended Data Fig. 2: Crystal structure and Brillouin zone of Co2MnGa. a, Conventional unit cell with representative crystallographic mirror plane M (orange). b, The primitive unit cell (grey) includes one formula unit. c, Brillouin zone, with reciprocal lattice basis vectors (grey). In the reciprocal lattice basis, the M1 plane corresponds to (001), M2 corresponds to (010) and M3 corresponds to (100). d, Slice through Γ in an extended zone scheme.

Extended Data Fig. 3: Energy dispersion of the Weyl loop.
a, Crossing point energies E_B and b, crossing point momenta (k_x, k_y) systematically extracted from cone dispersions observed in the ARPES spectra (magenta squares), same dataset as Fig. 2c (hν = 544 eV), with fit of the Weyl loop momentum trajectory and energy dispersion (cyan, see main text). The crossing point energies are parametrized by a polar angle θ defined by tan θ ≡ k_y/k_x. c, Weyl loop trajectory from DFT, with dotted lines indicating the DFT energy-momentum slices shown in Fig. 2b. The binding energy axes in (b) and (c) are collapsed.

Extended Data Fig. 6: Unsymmetrized Fermi surfaces. a-c, Left: photoemission spectra displayed in Fig. 1d-f, without symmetrization. Right: the same spectra, with the experimentally-determined Weyl loop trajectory overlaid across multiple Brillouin zones. The irrelevant Γ pocket is consistently observed in all unsymmetrized spectra. Signatures of Weyl loops are observed around all X points.

Extended Data Fig. 7: Energy-momentum cuts through the Weyl loop. Photoemission spectra used to extract Fig. 2c.

Extended Data Fig. 9: Linked Weyl loop Fermi surface. Constant-energy slice of the pockets (navy) making up two linked Weyl loops obtained by ab initio calculation, at binding energy E_B = −10 meV, below the experimental Fermi level. The Fermi surface pockets touch at a set of discrete points, where the Weyl loop disperses through this particular E_B. For reference, the full Weyl loop trajectories are indicated, collapsed in energy (orange around X3, magenta around X1).

Extended Data Fig. 10: Measured Fermi surfaces in an extended zone scheme. The Brillouin zone corresponds to Γ (066) in the primitive reciprocal basis.

REFERENCES
[1] M. Buchanan, The unifying role of topology. Nat. Phys. 16, 818 (2020).
[2] M. Nakahara, Geometry, Topology and Physics (Institute of Physics, 2003).
[3] P. M. Chaikin, T. C. Lubensky, Topological defects, Chap. 9, Principles of condensed matter physics (Cambridge University Press, 1995).
[4] F. D. M. Haldane, Nobel lecture: Topological quantum matter. Rev. Mod. Phys. 89, 040502 (2017).
[5] X.-G. Wen, Colloquium: Zoo of quantum-topological phases of matter. Rev. Mod. Phys. 89, 041004 (2017).
[6] G. E. Volovik, The Universe in a Helium Droplet (Oxford University Press, 2003).
[7] M. M. Salomaa, G. E. Volovik, Quantized vortices in superfluid 3He. Rev. Mod. Phys. 59, 533 (1987).
[8] J. Zang, V. Cros, A. Hoffmann, Topology in Magnetism (Springer, 2018).
[9] N. Nagaosa, Y. Tokura, Topological properties and dynamics of magnetic skyrmions. Nat. Nano. 8, 899 (2013).
[10] J. E. Avron, D. Osadchy, R. Seiler, A topological look at the quantum Hall effect. Phys. Today 56, 38 (2003).
[11] M. Z. Hasan, C. L. Kane, Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045 (2010).
[12] A. Bernevig, Topological Insulators and Topological Superconductors (Princeton University Press, 2013).
[13] C. L. Kane, Topological band theory and the Z2 invariant, Chap. 1, Topological Insulators, Contemporary Concepts of Condensed Matter Science (Elsevier, 2013).
[14] A. Bansil, H. Lin, T. Das, Colloquium: Topological band theory. Rev. Mod. Phys. 88, 021004 (2016).
[15] Y. Tokura, K. Yasuda, A. Tsukazaki, Magnetic topological insulators. Nat. Rev. Phys. 1, 126 (2019).
[16] M. Z. Hasan, S.-Y. Xu, I. Belopolski, S.-M. Huang, Discovery of Weyl fermion semimetals and topological Fermi arc states. Ann. Rev. Cond. Matt. Phys. 8, 289 (2017).
[17] M. Z. Hasan, S.-Y. Xu, G. Bian, Topological insulators, topological superconductors and Weyl fermion semimetals: discoveries, perspectives and outlooks. Phys. Scr. T164 (2015).
[18] N. P. Armitage, E. J. Mele, A. Vishwanath, Weyl and Dirac semimetals in three-dimensional solids. Rev. Mod. Phys. 90, 015001 (2018).
[19] C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, S. Ryu, Classification of topological quantum matter with symmetries. Rev. Mod. Phys. 88, 035005 (2016).
[20] I. Belopolski et al., Discovery of topological Weyl fermion lines and drumhead surface states in a room temperature magnet. Science 365, 1278 (2019).
[21] G. Chang et al., Topological Hopf and chain link semimetal states and their application to Co2MnGa. Phys. Rev. Lett. 119, 156401 (2017).
[22] Q. Wu, A. A. Soluyanov, T. Bzdušek, Non-Abelian band topology in noninteracting metals. Science 365, 1273 (2019).
[23] Z. Yan et al., Nodal-link semimetals. Phys. Rev. B 96, 041103(R) (2017).
[24] M. Ezawa, Topological semimetals carrying arbitrary Hopf numbers: Fermi surface topologies of a Hopf link, Solomon's knot, trefoil knot, and other linked nodal varieties. Phys. Rev. B 96, 041202 (2017).
[25] P.-Y. Chang, C.-H. Yee, Weyl-link semimetals. Phys. Rev. B 96, 081114 (2017).
[26] C. Zhong et al., Three-dimensional Pentagon Carbon with a genesis of emergent fermions. Nat. Commun. 8, 15641 (2017).
[27] G. Bian et al., Topological nodal-line fermions in spin-orbit metal PbTaSe2. Nat. Commun. 7, 10556 (2016).
[28] H. Watanabe, H. C. Po, A. Vishwanath, Structure and topology of band structures in the 1651 magnetic space groups. Sci. Adv. 4, eaat8685 (2018).
[29] Y. Wang, R. Nandkishore, Topological surface superconductivity in doped Weyl loop materials. Phys. Rev. B 95, 060506 (2017).
[30] O. Stenull, C. L. Kane, T. C. Lubensky, Topological phonons and Weyl lines in three dimensions. Phys. Rev. Lett. 117, 068001 (2016).
[31] R. Nandkishore, Weyl and Dirac loop superconductors. Phys. Rev. B 93, 020506(R) (2016).
[32] X.-Q. Sun, B. Lian, S.-C. Zhang, Double Helix Nodal Line Superconductor. Phys. Rev. Lett. 119, 147001 (2017).
[33] B. Lian, C. Vafa, F. Vafa, S.-C. Zhang, Chern-Simons theory and Wilson loops in the Brillouin zone. Phys. Rev. B 95, 094512 (2017).
[34] H. Seifert, Über das Geschlecht von Knoten. Mathematische Annalen 110, 571 (1935).
[35] E. J. Bergholtz, J. C. Budich, F. K. Kunst, Exceptional topology of non-Hermitian systems. Rev. Mod. Phys. 93, 015005 (2021).
[36] L. Li, C. H. Lee, J. Gong, Emergence and full 3D-imaging of nodal boundary Seifert surfaces in 4D topological matter. Commun. Phys. 2, 135 (2019).
[37] J. Carlström, M. Stålhammar, J. C. Budich, E. J. Bergholtz, Knotted non-Hermitian metals. Phys. Rev. B 99, 161115 (2019).
[38] X. Zhang et al., Tidal surface states as fingerprints of non-Hermitian nodal knot metals. Commun. Phys. 4, 47 (2021).
[39] A. Sakai et al., Iron-based binary ferromagnets for transverse thermoelectric conversion. Nature 581, 53 (2020).
[40] A. Sakai et al., Giant anomalous Nernst effect and quantum-critical scaling in a ferromagnetic semimetal. Nat. Phys. 14, 1119 (2018).
[41] S. N. Guin et al., Anomalous Nernst effect beyond the magnetization scaling relation in the ferromagnetic Heusler compound Co2MnGa. NPG Asia Mat. 11, 16 (2019).
[42] G.-H. Park et al., Thickness dependence of the anomalous Nernst effect and the Mott relation of Weyl semimetal Co2MnGa thin films. Phys. Rev. B 101, 060406 (2020).
[43] A. Markou et al., Hard magnet topological semimetals in XPt3 compounds with the harmony of Berry curvature. Commun. Phys. 4, 104 (2021).
[44] P. J. Webster, Magnetic and chemical order in Heusler alloys containing cobalt and manganese. J. Phys. Chem. Solids 32, 1221 (1971).
[45] H. Ido, S. Yasuda, Magnetic properties of Co-Heusler and related mixed alloys. J. de Physique 49, C8 (1988).
[46] V. N. Strocov et al., High-resolution soft X-ray beamline ADRESS at the Swiss Light Source for resonant inelastic X-ray scattering and angle-resolved photoelectron spectroscopies. J. Synch. Rad. 17, 631 (2010).
[47] V. N. Strocov et al., Soft-X-ray ARPES facility at the ADRESS beamline of the SLS: concepts, technical realisation and scientific applications. J. Synch. Rad. 21, 32 (2014).
[48] L. Wu et al., Quantized Faraday and Kerr rotation and axion electrodynamics of a 3D topological insulator. Science 354, 1124 (2016).
[49] V. N. Strocov et al., Three-Dimensional Electron Realm in VSe2 by Soft-X-Ray Photoelectron Spectroscopy: Origin of Charge-Density Waves. Phys. Rev. Lett. 109, 086401 (2012).
[50] G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169 (1996).
[51] G. Kresse, D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758 (1999).
[52] J. P. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865 (1996).

ACKNOWLEDGMENTS
I.B. thanks Nikita Lvov and Zoltán Szabó for discussions on linking numbers. The authors thank D. Lu and M. Hashimoto at Beamline 5-2 of the Stanford Synchrotron Radiation Lightsource (SSRL) at the SLAC National Accelerator Laboratory, CA, USA for support. I.B. and D.M. thank Takayuki Muro for experimental support during preliminary ARPES measurements carried out at BL25SU of SPring-8 in Hyogo, Japan. I.B. thanks Biao Lian for discussions on the topological magneto-electric effect. I.B., T.A.C., X.P.Y. and D.M.

Extended Data Fig. 5: Supplementary measurement of the link depth.
a, M1, M2 and M3 Weyl loops, with trajectories obtained from the analytical model (see main text), showing that M1 links M2 twice and M3 twice. Energy-momentum photoemission slices along the high-symmetry paths b, X1 − X2 and c, X3 − X1, obtained at photon energy hν = 642 eV. We observe d12 = 0.56 ± 0.1 Å⁻¹ and d31 = 0.61 ± 0.1 Å⁻¹, consistent with Extended Data Fig. 4. d, Fermi surface acquired at hν = 642 eV, exhibiting an in-plane Weyl loop contour, M1. We further observe spectral weight emanating along k_x and k_y from the center of M1, corresponding to the linearly dispersive branches in (b, c), again suggesting that M1 is linked by M2 and M3.
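The linking numbers discussed above can be checked numerically for idealized loops. The following is an illustrative sketch (ours, not the authors' analysis code; `gauss_linking_number` and the sample circles are hypothetical) that evaluates the discrete Gauss linking integral for two circles forming a Hopf link, which should give a linking number of magnitude 1:

```python
import math

def gauss_linking_number(loop1, loop2):
    # Discrete Gauss linking integral:
    #   Lk = (1 / 4π) ∮∮ (r1 − r2) · (dr1 × dr2) / |r1 − r2|^3
    # Each loop is a list of 3D points; we use segment midpoints and
    # difference vectors as a midpoint-rule discretization.
    def segments(loop):
        n = len(loop)
        mids, tangents = [], []
        for i in range(n):
            a, b = loop[i], loop[(i + 1) % n]
            mids.append(tuple((ai + bi) / 2 for ai, bi in zip(a, b)))
            tangents.append(tuple(bi - ai for ai, bi in zip(a, b)))
        return mids, tangents

    m1, t1 = segments(loop1)
    m2, t2 = segments(loop2)
    total = 0.0
    for p, dp in zip(m1, t1):
        for q, dq in zip(m2, t2):
            r = (p[0] - q[0], p[1] - q[1], p[2] - q[2])
            cross = (dp[1] * dq[2] - dp[2] * dq[1],
                     dp[2] * dq[0] - dp[0] * dq[2],
                     dp[0] * dq[1] - dp[1] * dq[0])
            dist = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
            total += (r[0] * cross[0] + r[1] * cross[1] + r[2] * cross[2]) / dist ** 3
    return total / (4 * math.pi)

# A Hopf link: unit circle in the xy-plane, and a unit circle in the
# xz-plane centered at (1, 0, 0), which passes through the first disk once.
N = 200
hopf_a = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N), 0.0)
          for i in range(N)]
hopf_b = [(1.0 + math.cos(2 * math.pi * i / N), 0.0, math.sin(2 * math.pi * i / N))
          for i in range(N)]
```

For these two circles the sum converges to a linking number of magnitude 1 (the sign depends on the chosen orientations), while a translated, unlinked copy of the second circle gives approximately 0.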
Visual Detection of Structural Changes in Time-Varying Graphs Using Persistent Homology

Mustafa Hajij, Bei Wang, Carlos Scheidegger, Paul Rosen
University of South Florida; University of Utah; University of Arizona; University of South Florida

DOI: 10.1109/pacificvis.2018.00024
arXiv: 1707.06683
PDF: https://arxiv.org/pdf/1707.06683v2.pdf
Keywords: topological data analysis, time-varying graph, persistent homology, graph visualization

Figure 1: Timeline showing the first Monday of the High School Communication Network dataset. The timeline is generated by comparing the commute-time 0-dimensional homological features of the time-varying network using the bottleneck distance. Here, the 0-dimensional homological features capture cluster-like behaviors in the data at multiple scales. The timeline differentiates periods of highly connected behaviors, such as instances C, D, E, and F, from periods of low or no activity, such as A, B, or G.

ABSTRACT
Topological data analysis is an emerging area in exploratory data analysis and data mining. Its main tool, persistent homology, has become a popular technique to study the structure of complex, high-dimensional data. In this paper, we propose a novel method using persistent homology to quantify structural changes in time-varying graphs.
Specifically, we transform each instance of the time-varying graph into a metric space, extract topological features using persistent homology, and compare those features over time. We provide a visualization that assists in time-varying graph exploration and helps to identify patterns of behavior within the data. To validate our approach, we conduct several case studies on real-world data sets and show how our method can find cyclic patterns, deviations from those patterns, and one-time events in time-varying graphs. We also examine whether a persistence-based similarity measure as a graph metric satisfies a set of well-established, desirable properties for graph metrics.

INTRODUCTION
Time-varying graphs are ubiquitous across many disciplines, yet difficult to analyze, making them a natural target for visualization: a good visual representation of a time-varying graph will present its structure and structural changes quickly and clearly, to enable further analysis and exploration. A major development in graph drawing has been the observation that using derived information can retain structure in static graph visualizations. For example, the dot layout uses node ranks to perform hierarchical drawings [31]; the neato algorithm employs graph distances within statistical multidimensional scaling [30]; Noack's energy model utilizes approximated clustering [46].

In this paper, we take the first steps towards using topological features, captured by persistent homology, with the design goal of detecting potentially important structural changes in time-varying graph data. By topological features, we do not mean the configuration of nodes and edges alone, but instead the 0- and 1-dimensional homology groups of a metric space that describe its connected components and tunnels, respectively. This definition allows us to quantify structural elements within time-varying graphs to identify behavior patterns in the data.
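For a graph, these two homology groups can be made concrete with elementary bookkeeping. A minimal sketch (ours, not the paper's implementation; `betti_numbers` is an illustrative name): the rank β0 of the 0-dimensional homology group is the number of connected components, computable with union-find, and for a graph regarded as a 1-dimensional complex the rank β1 of the 1-dimensional group (the number of independent cycles, or "tunnels") satisfies β1 = |E| − |V| + β0.

```python
def betti_numbers(num_vertices, edges):
    """Return (b0, b1) for an undirected graph on vertices 0..num_vertices-1."""
    # Union-find: each union of two distinct components reduces b0 by one.
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    b0 = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            b0 -= 1

    # For a 1-dimensional complex, the Euler characteristic gives
    # b1 = |E| - |V| + b0: the number of independent cycles.
    b1 = len(edges) - num_vertices + b0
    return b0, b1
```

For example, a triangle on vertices {0, 1, 2} plus a disjoint edge {3, 4} has two components and one independent cycle, so `betti_numbers(5, [(0, 1), (1, 2), (2, 0), (3, 4)])` returns `(2, 1)`.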
Persistent homology quantifies individual topological features (events) in the graph according to their significance (or persistence). The set of all features, encoded by the persistence diagram, can be seen as a fingerprint for the graph. Using this fingerprint, the most topologically important structures of two graphs can be compared in a manner that is robust to small perturbations in the data. Well-understood techniques in topological data analysis typically focus on the qualitative study of point cloud data under the metric space setting. In order to study graph data, our approach is to embed the graph in a metric space, where topological techniques can be applied. In other words, the notion of metric space acts as an organizational principle [9] in interpreting the graph data.

Our approach, as seen in Figure 2, can be summarized as follows. The input of our pipeline is a time-varying graph, which is an ordered sequence of graph instances. First, each instance is embedded into a metric space. Second, topological features of each instance are extracted using persistent homology and encoded within persistence diagrams. Third, instances are compared by calculating the distance between persistence diagrams and projecting them using classical multidimensional scaling (MDS) [6]. The data is then visualized using an interactive timeline and node-link diagrams, as shown in Figure 1. In this figure, the horizontal axis is used to represent time, while the vertical location is the first component of MDS; in other words, it captures the dissimilarities among instances. Graph instances from selected timeframes are drawn using a force-directed layout to demonstrate how the approach highlights different structure in the graph.

The contributions of our paper are:
• A novel pipeline for detecting structural changes in time-varying graphs that uses persistent homology to summarize important structures, as opposed to directly comparing nodes and edges.
• An interface that uses conventional visualization approaches adapted to the design goal of highlighting structural changes.
• Two case studies of time-varying graphs showing how our approach can find cyclic patterns, deviations from those patterns, and unique one-time events in the graphs.
• A study of the suitability of using a persistence-based similarity measure for detecting structural changes in time-varying graphs.

RELATED WORK
Static Graph Analysis and Visualization. We provide a brief overview here. See von Landesberger et al.'s survey [53] for a full treatment. The first automated technique for node-link diagrams is Tutte's barycentric coordinate embedding [49], followed by linear programming techniques [31], force-directed/mass-spring embeddings [28,36], embeddings of the graph metric [30], and linear-algebraic properties of the connectivity structures (especially the graph Laplacian and associated eigenspaces) [39,40]. Most graph visualization systems, including Gephi [3], NodeXL [34], and Graphviz [25], use variations on node-link visualizations to display graphs. For dense graphs, edge bundling can reduce visual clutter by routing graph edges to the same portion of the screen [35]. In terms of quality, divided edge bundling [48] produces high-quality results, while hierarchical edge bundling [29] scales to millions of edges with slightly lower quality. Because these quality and runtime trade-offs are so characteristic of node-link diagram visualizations, whether or not this class of diagrams can effectively unlock the insights hidden inside the structure of large networks remains an open research question. Other visual metaphors have been proposed to reduce clutter, ranging from relatively conservative proposals [20,21] to variants of matrix diagrams [18] and abstract displays of graph statistics [38].

Time-Varying Graph Analysis.
The problem we address is closely related to the problem of measuring similarity or dissimilarity between graphs without knowing node correspondences. Comparing graphs up to isomorphism is hard [1]. For this reason, many notions of graph similarity have been proposed [4,47]. These methods rely on mapping the graphs into a feature space and then defining distances on that space. Other approaches use kernel functions to build similarity measures on graphs [43,51]. While large portions of the literature on graph similarity focus on graph comparison with known node correspondences, there are attempts to tackle the problem where node correspondence is unknown [51,52]. Distance functions on the space of graphs have also been studied [12]. Time-Varying Graph Visualization. Beck et al. [5] provide a detailed survey of dynamic graph visualization. They divide the techniques into two major categories, animation and timelines. Our approach falls into the latter category. Animation approaches, such as the work of Misue et al. [44], vary the graph representation over time, while making the graph as legible as possible at any given instance. Timeline approaches, such as the work of Greilich et al. [33], use a non-animated, often spatially-oriented, visual channel to show the changes in the graph over time. Timeline approaches seem to provide a better overview of the data as they try to capture the entire graph sequence in a single image. These approaches include multiple techniques such as node-link-based methods [37], matrix-based approaches [8] and feature vector-based methods [50]. For more references and background, see also von Landesberger et al.'s survey [53]. Topological Data Analysis of Networks. Persistent homology is an emerging tool for studying complex networks [19,22], including collaboration [2,10] and brain networks [11,15].
To the best of our knowledge, our approach is the first to connect topological techniques with the visualization design of (time-varying) graphs. APPROACH Our approach uses persistent homology to identify and compare features in a time-varying graph. Our visual design goal is to identify high-level structural changes in the graph. To do this, consider a time-varying graph G = {G 0 , ..., G n }, which contains an ordered sequence of static graph instances G i = (V i , E i ). We are interested in quantifying and visualizing structural changes of G . Our analysis pipeline (see Figure 2) is described below, and we provide a detailed description of each step in the subsequent sections. 1. Associate each instance G i with a metric space representation. This yields a symmetric distance matrix d i , where d i (x, y) measures the (shortest-path or commute-time) distance between vertices x and y in G i (Section 3.1). 2. Extract topological features of G i by constructing a filtration F i from its distance matrix d i and computing its corresponding p-dimensional persistence diagrams PD p (F i ) for p ∈ {0, 1} (Section 3.2). 3. Capture the structural differences between G i and G j by computing the bottleneck or Wasserstein distance between their corresponding persistence diagrams PD p (F i ) and PD p (F j ) (Section 3.3). 4. Visualize the structural differences among the instances of G (Section 3.4). Graphs and Metric Space Representations Suppose an instance G i is represented as a weighted, undirected graph with a vertex set V and an edge set E equipped with a positive edge weight w. We associate each graph instance G i with a metric space representation, which yields a symmetric distance matrix d i . Considering the positive edge weight as the length of an edge, a natural metric d sp is obtained on G i , where for every pair of vertices x and y in G i , the distance d sp (x, y) is the length of the shortest path between them.
This is the classic shortest-path distance, which is typically computed with Dijkstra's algorithm [17] and its variations. Alternatively, other distance metrics based on the graph Laplacian [14], such as commute-time distance, discrete biharmonic distance and diffusion distance, can be considered. For instance, the commute-time distance is defined as [27],

$d_{ct}^2(x, y) = \sum_{i=1}^{|V|-1} \frac{1}{\lambda_i} \big( \phi_i(x) - \phi_i(y) \big)^2. \quad (1)$

Here $\{\lambda_i\}_{i=0}^{|V|-1}$ and $\{\phi_i\}_{i=0}^{|V|-1}$ are the generalized eigenvalues and eigenvectors of the graph Laplacian of G i , respectively [13]. In practice, we approximate the summation of Equation (1) by considering only the first few eigenvectors, since the higher eigenvectors do not contribute significantly. These distance metrics are illustrated in Figure 3, which shows the distance from a point source to all other locations on a surface. From this we see that the commute-time distance produces a smoother gradient than the shortest-path distance. Figure 2: The pipeline of our approach. An ordered sequence of graphs representing a time-varying graph is given as an input. Each graph instance is individually embedded into a metric space (Section 3.1). The topological features of each (metric-space-embedded) graph instance are extracted by computing the persistent homology of its corresponding Rips filtration; the topological features are encoded by persistence diagrams and visualized as barcodes (Section 3.2). Finally, persistence diagrams are compared and the structural changes among the graph instances are visualized (Sections 3.3 and 3.4). Extracting Topological Features To extract topological features from each graph instance, we apply persistent homology to its metric space representation. To describe our process, we first briefly review persistent homology. We then describe persistence diagrams, which encode topological features of a given graph instance.
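To make the two metrics of Section 3.1 concrete, they can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function name and the use of numpy/scipy are our own. The commute-time part follows Equation (1) with the generalized eigenpairs of the graph Laplacian; for a small graph we simply keep all eigenvectors instead of only the first few.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import shortest_path

def distance_matrices(W):
    """Shortest-path and commute-time distance matrices for a weighted,
    undirected graph given by a symmetric weight matrix W, where
    W[x, y] > 0 is the length of edge (x, y) and 0 means no edge.
    Assumes the graph is connected with no isolated vertices."""
    # Shortest-path metric d_sp: Dijkstra over the positive edge
    # lengths (zeros in a dense matrix are treated as missing edges).
    d_sp = shortest_path(W, method="D", directed=False)

    # Commute-time metric (Equation 1): generalized eigenpairs
    # L phi = lambda D phi of the graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    lam, phi = eigh(D - W, D)        # ascending; lam[0] ~ 0 is skipped
    d2 = np.zeros_like(d_sp)
    for i in range(1, len(lam)):
        diff = phi[:, i][:, None] - phi[:, i][None, :]
        d2 += diff ** 2 / lam[i]     # (phi_i(x) - phi_i(y))^2 / lambda_i
    return d_sp, np.sqrt(d2)
```

On the path graph 0-1-2 with unit edge lengths, this gives d sp (0, 2) = 2 while d ct (0, 2) = √2, a small instance of how the two metrics disagree.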
For more details and background on persistent homology, see [23] and the references therein. Topological features. Homology deals with the topological features of a space. Given a topological space X, the 0-, 1- and 2-dimensional homology groups, denoted as H 0 (X), H 1 (X) and H 2 (X) respectively, correspond to (connected) components, tunnels and voids of X. In our context, we care about the 0- and 1-dimensional topological features of a graph instance G i , which correspond to H 0 and H 1 of its metric space representation. These 0- and 1-dimensional topological features, roughly speaking, capture the connected components and tunnels formed by vertices in the instances. Persistent homology. In practice, there might not exist a unique scale that captures the topological structures of the data. Instead, we adopt a multi-scale notion of homology, called persistent homology, a main tool in topological data analysis, to describe the topological features of a space at different spatial resolutions. Persistent homology typically starts with a finite set of points in a metric space. In our setting, each graph instance G i is associated with a metric space, where the vertices of G i form a finite set of points S, and d i encodes the pairwise distances among points in S. We then apply a geometric construction, such as a Rips complex, to the point set S, which describes the combinatorial structure among the points. For a real number r > 0, a Rips complex, denoted as R(r), is formed by considering a set of balls of radius r/2 centered at the points in S. Given a finite point set S from G i , continuously increasing the diameter forms a 1-parameter family of nested unions of balls; correspondingly, we obtain a 1-parameter family of nested Rips complexes, referred to as a Rips filtration. Let 0 = r 0 ≤ r 1 ≤ r 2 ≤ · · · ≤ r m denote a finite sequence of increasing diameters. (Figure 5 shows a Rips filtration defined on an example graph equipped with a shortest-path metric.) The Rips
filtration F i (of G i ) is a sequence of Rips complexes connected by inclusions, R(r 0 ) → R(r 1 ) → R(r 2 ) → · · · → R(r m ). Applying homology to a Rips filtration, the homology groups are connected from left to right by homomorphisms induced by inclusions, H(R(r 0 )) → H(R(r 1 )) → H(R(r 2 )) → · · · → H(R(r m )). Topological features appear and disappear as the diameter increases: when a topological feature appears, that is, a cluster or a tunnel forms, this is called a birth event; when a topological feature disappears, that is, two clusters merge into one or a tunnel is filled, it is called a death event. The persistence of a topological feature is the time difference between its death and birth events. A persistence diagram encodes the birth and death events as a multi-set of points in the plane (see [24]). Each topological feature is represented as a point (u, v), where u is the birth time and v is the death time of the feature. Certain features may "live" forever; in that case, they are assigned a death time of ∞. Therefore, a persistence diagram contains a multi-set of points in the extended plane (i.e., $(\mathbb{R} \cup \{\pm\infty\})^2$). For technical reasons, we add the points on the diagonal to the diagram, each with infinite multiplicity. The persistence of the pair (u, v) is simply |v − u|. Features with higher persistence carry more significant topological information; features with low persistence are typically considered to be noise. A persistence diagram can be visualized as persistence barcodes [32] (see Figures 2 and 5), where each bar starts at time u and ends at time v. We are interested in 0- and 1-dimensional topological features, so we consider the 0- and 1-dimensional persistence diagrams, denoted as PD 0 (F i ) and PD 1 (F i ), respectively. Comparing Sets of Topological Features A persistence diagram can be thought of as a summary of the topological features of a graph instance G i .
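For 0-dimensional features, the diagram PD 0 of a Rips filtration can be computed directly with a union-find sweep over edges sorted by length. The sketch below is hypothetical (the paper's pipeline computes persistence with Dionysus): every point is born at r = 0, and a component dies at the length of the edge that merges it into another component.

```python
def pd0_rips(d):
    """0-dimensional persistence diagram of the Rips filtration of a
    finite metric space, given its distance matrix d.  Every point is
    born at 0; a component dies when it merges into another component,
    at the length of the merging edge."""
    n = len(d)
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Kruskal-style sweep over edges in order of increasing length.
    edges = sorted((d[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    diagram = []
    for r, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, r))      # one component dies at radius r
    diagram.append((0.0, float("inf")))   # the last component never dies
    return diagram
```

For three collinear points at positions 0, 1 and 3, for instance, this yields the diagram {(0, 1), (0, 2), (0, ∞)}.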
To quantify the structural difference between two instances G i and G j , we compute the bottleneck and Wasserstein distances between their persistence diagrams. Given two persistence diagrams X and Y , let η be a bijection between points in the diagrams. The bottleneck distance [24] is defined as,

$W_\infty(X, Y) = \inf_{\eta: X \to Y} \sup_{x \in X} \lVert x - \eta(x) \rVert_\infty. \quad (2)$

The Wasserstein distance is,

$W_q(X, Y) = \inf_{\eta: X \to Y} \Big[ \sum_{x \in X} \lVert x - \eta(x) \rVert_\infty^q \Big]^{1/q}, \quad (3)$

for any positive real number q; in our setting, q = 2. The set of points in a persistence diagram can be considered as a feature vector, where the feature space consists of all persistence diagrams for the time-varying graph G. Given all pairwise distances between persistence diagrams, classical multidimensional scaling (MDS) is then used to reduce the dimensionality of the feature vectors for visualization, and to identify the instances where topologically interesting events occur. Visualization The design goal of our interactive visualization tool is to provide insights about variation in the structural properties of time-varying graphs. In this way, we hope to identify time periods of uniform behavior (low variation) and outlier behavior (instances of high variation). Our visualization tool provides a number of capabilities to support this form of investigation. Timeline. The timeline view uses the horizontal axis to represent time and the vertical axis to represent the first dimension returned by applying classical MDS to the space of persistence diagrams. This, in essence, highlights the dissimilarity between graph instances. Each point on the timeline represents a single instance of the time-varying graph. The points are colored using cyclic colormaps, such as the time-of-day colormap of Figure 1 or the day-of-the-week colormap of Figure 11. Cyclic Patterns. Two techniques are available for showing repetitive patterns in the data, both being variations of the timeline.
The first technique simply splits the data based upon a user-specified period length. Each period is colored uniquely. Figure 7 shows an example of this. For the second technique, the time periods are clustered based upon their ℓ2-norm using k-means clustering with a user-specified k. Figure 12 shows an example of this where the points are colored by day-of-the-week. Graph Visualization. For investigating the behavior of specific graph instances, the instances are displayed by two visualization mechanisms. The first is a node-link diagram created using a force-directed layout. If categorical information is available (such as in Figure 1), the nodes are colored by those categories. For 1-dimensional topological features, nodes can be parameterized around the tunnel using a 1-dimensional cyclic parameterization [16,54], and colored accordingly. An example of this is seen in Figure 9. In other cases, nodes receive a fixed color. The second mechanism visualizes the persistence diagram for a given graph instance using its barcodes (see the fourth row of Figure 6). The barcode is a variation on a bar chart that represents the birth and death of all topological features in the graph. Example We provide an illustrative example of our pipeline in Figure 6. In step 1 (1st row), a time-varying graph G is given as a sequence of graph instances, where each instance is a connected, weighted graph. In step 2 (2nd row), each graph instance is embedded in a metric space by calculating a distance matrix using the shortest-path metric. In step 3 (3rd row), the distance matrix is used to compute a series of filtrations. In reality, additional filtrations are created, but we only show those that produce topological events (in this example, only 0-dimensional features). In step 4 (4th row), the 0-dimensional persistence diagrams of the filtrations are extracted and shown as barcodes.
The final step (5th row) consists of computing the distances between these diagrams using bottleneck and Wasserstein distances. The bottleneck or Wasserstein distance, as a persistence-based similarity measure, helps to quantify the topological similarity between a pair of instances. For example, under both distances, G 0 and G 1 are much closer to one another than either G 0 and G 2 or G 1 and G 2 . CASE STUDIES To validate our approach, we look at case studies of two publicly available datasets. Both are communication networks: one involves interpersonal communication of high school students, and the other contains e-mail communications between researchers. These case studies help demonstrate how our approach can identify cyclic patterns in data, deviations from patterns, and one-time events in time-varying graphs. Our pipeline requires a number of tools for processing. Graph processing and metric space embedding are coded using Python. Persistent homology calculations and the bottleneck and Wasserstein distances are computed using Dionysus 1 . Finally, visualizations are implemented using Processing 2 . High School Communications The High School Communications dataset [26] is a time-varying graph that tracks the contact between high school students. The data was collected for 180 students in 5 classes over 7 school days in November 2012 in Marseilles, France. The graph tracks Monday through Friday of the first week and Monday and Tuesday of the following week. We compute both shortest-path and commute-time distances and both 0- and 1-dimensional persistence diagrams. Then, both the bottleneck and Wasserstein distances are used to compare persistence diagrams. We present a small set of configurations and draw a few conclusions from them. Many similar conclusions have been identified in other configurations that are not shown. 1 http://www.mrzv.org/software/dionysus/ 2 https://processing.org/ Figure 6: From left to right, 1st row: three weighted graph instances G 0 , G 1 and G 2 representing a time-varying graph. 2nd row: each graph instance is embedded into a metric space, represented by a shortest-path distance matrix. 3rd row shows the filtrations in which topologically significant events occur, resulting in the persistence barcodes in the 4th row. 5th row: the persistence diagrams are compared pairwise using bottleneck and Wasserstein distance. An Average Day First, to examine an average day of communication, we look at the 0-dimensional features of the first Monday of the dataset in Figure 1. In this figure, commute-time is used to generate persistence diagrams and bottleneck distance is used to compare diagrams. A number of phases can be seen. In the early and late hours, no interactions occur (e.g., time A). As the school day begins at time B, light, loosely-connected communications begin. By mid-morning (time C), classes MP*1, PC, PC*, and PSI* are all interacting heavily within and between groups. Midday (times D & E) shows classes heavily interacting once again. Early afternoon (time F) shows mostly within-group communications for classes PC, PC*, and PSI* and within- and between-group communications for MPI*1 and MPI*2. Finally, the end of the day, time G, shows much sparser group communications. Comparison with Other Days While observing patterns within a single day is interesting, comparing Monday with other days can help to better identify regular and irregular daily behavior. Figure 7 shows just such a comparison; it uses commute-time to generate 0-dimensional persistence diagrams, and Wasserstein distance to compare diagrams. The top chart of Figure 7 compares the first Monday and the first Tuesday. Ignoring outlier graph instances, two main differences can be observed. First, the early morning of Tuesday shows different levels of activity than Monday. This can be confirmed by looking at examples from those days.
Figure 7 (top left) shows example graphs from Monday and Tuesday morning. Second, at both the beginning and end of midday, Tuesday shows higher activity than Monday. The middle chart of Figure 7 compares Wednesday, Thursday, and Friday. In this chart, Wednesday and Friday show more early morning activity than Monday, but Thursday shows activity levels similar to Tuesday. Individual graph instances of the time-varying graph from this timeframe can be seen in Figure 7 (middle left). Late morning shows that Wednesday is extremely active, while Thursday and Friday are mostly inactive. Midday across all three days remains similar. Finally, the afternoons of all three days are similarly inactive. Sample graphs for this timeframe can be seen in Figure 7 (middle right). The bottom of Figure 7 shows the second Monday and Tuesday. These days show almost no morning activity (also see Figure 7 (bottom left)) and normal midday activity. Early afternoon shows midrange and high activity for Monday and Tuesday, respectively. Graphs associated with these activity levels can be seen in Figure 7 (bottom right). As a means to compare results to a more traditional analytic, the bottom of Figure 8 is a timeline that captures the number of interaction events for a given graph instance in the time-varying graph (i.e., the sum of the edge weights). Comparing this chart to the persistence-based timeline, it is clear that our approach captures a different type of behavior than edge counting alone. 1-Dimensional Topological Features The High School Communications dataset ultimately contains very few 1-dimensional topological features, the majority of which have low persistence. The one-time exception, which appears on the first Monday, can be seen in Figure 9. Between 11:48 am and 12:48 pm, a high-persistence 1-dimensional pattern appears in the graph. The nodes of the graph are parameterized using that feature and visualized using a cyclic rainbow colormap.
The graph shows a large tunnel (loop) towards the upper left. EU Research Institution E-Mail The EU Research Institution E-mail [42] 3 dataset is an anonymized time-varying graph tracking e-mails between members of "a large European research institution". We have used the smaller of the available networks, containing 986 nodes and 332,334 temporal edges. The graph tracks the activity for 803 days. A period of about 200 days is missing towards the end of the dataset, so we have analyzed the first 500 days. A single graph instance is created per day and shares a 45% overlap with neighboring days. Once again, edge weight is chosen by counting the number of communications during the graph instance. 3 http://snap.stanford.edu/data/email-Eu-core.html Figure 9: Timeline of the High School Communications dataset for 1-dimensional features. The timeline was generated by comparing the commute-time features using bottleneck distance. The single outlier is a graph with a high-persistence cycle. To highlight that feature, the graph is parameterized and visualized with a cyclic rainbow colormap [54]. Bottleneck vs. Wasserstein Distance The bottleneck and Wasserstein distances both capture important but distinct differences among sets of topological features. Intuitively, the bottleneck distance (q = ∞) captures the most perturbed topological feature (or the extreme behavior), while the Wasserstein distance (q = 2) captures the perturbation across all features (or the average behavior). Figure 10 shows how this impacts the analysis of the EU E-Mail dataset. For 0-dimensional (Figure 10(a)) and 1-dimensional (Figure 10(b)) bottleneck distances, the result is noisy, as the value captured has the most variation. For 0-dimensional (Figure 10(c)) and 1-dimensional (Figure 10(d)) Wasserstein distances, the result is smoother, since it encodes the perturbations across all features. For our analysis of the EU E-mail data, this property is more desirable.
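The contrast between the two distances can be made concrete with a brute-force sketch of Equations (2) and (3). Function names are ours, and the search is exponential in the diagram size, so this is only for illustration; the paper's pipeline computes these distances with Dionysus.

```python
import itertools

def _proj(p):
    """Diagonal projection of a point (b, d) -> ((b+d)/2, (b+d)/2)."""
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def diagram_distances(X, Y, q=2):
    """Exact bottleneck (W_inf) and q-Wasserstein distances between two
    small persistence diagrams, by brute force over all bijections.
    Each diagram is augmented with the diagonal projections of the
    other's points; matching two diagonal points costs nothing
    (Equations 2 and 3, with the l_inf ground metric)."""
    Xa = [(p, False) for p in X] + [(_proj(p), True) for p in Y]
    Ya = [(p, False) for p in Y] + [(_proj(p), True) for p in X]

    def cost(a, b):
        (p, p_diag), (r, r_diag) = a, b
        if p_diag and r_diag:
            return 0.0                       # diagonal-to-diagonal is free
        return max(abs(p[0] - r[0]), abs(p[1] - r[1]))

    bottleneck = wasserstein = float("inf")
    for perm in itertools.permutations(Ya):  # all bijections Xa -> Ya
        c = [cost(a, b) for a, b in zip(Xa, perm)]
        bottleneck = min(bottleneck, max(c))
        wasserstein = min(wasserstein, sum(v ** q for v in c) ** (1.0 / q))
    return bottleneck, wasserstein
```

On the toy diagrams X = {(0, 2)} and Y = {(0, 1)}, both distances equal 1; the two measures diverge once diagrams contain many features of varying persistence, which is exactly the smoothing effect visible in Figure 10.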
Revealing Cyclic Patterns Upon investigating the data, cyclic patterns were immediately apparent with all configurations of the Wasserstein distance (0- & 1-dimensional features and shortest-path & commute-time). Figure 11 A & B show the 1-dimensional shortest-path version, where the cyclic patterns are most prominent (also see the supplemental material for the complete 1-dimensional feature timeline). It is notable that this pattern is related to the natural cycle of the week. To identify the pattern of the "standard" week, we divided the data into 7-day segments and used k-means clustering to group similar weeks. Figure 12 shows the result with 5 clusters. Each of the 5 clusters shows a version of the typical week for this institution. One-time Events When looking at the entire timeline (see supplemental material), a number of one-time events are easily discovered. Figure 11 C & F are two such events. During these time periods, very little activity is present in the graphs. These times happen to be the last week of December and the first few days of January, during the Christmas and New Year's holidays. Figure 11 D is a one-time event that shows an extreme increase in activity for a 1-2 day period. After entering the date, June 13, 2004, into Google, we discovered that this day corresponds to the release day of the results of the EU Parliamentary Election. Finally, Figure 11 E shows a 3-4 week period of significantly decreased activity. Despite our best efforts, we could not identify a major external event that would have caused such a reduction, and since the data is anonymized, we could not identify the institution to investigate a local or internal cause. DISCUSSION In the previous sections, we constructed a similarity measure between two graph instances of a time-varying graph by utilizing the bottleneck or Wasserstein distance between their persistence diagrams, which encode the topological features associated with each instance.
However, one might ask: why persistent homology? We argue that using topological data analysis, and in particular persistent homology, to study graphs has complementary benefits and offers new insights. In this section, we conduct several experiments to justify our approach. In addition, we describe some intuition behind the information encoded by the persistence diagram of a graph, and the distance functions defined on them. Figure 10: Comparing shortest-path bottleneck ((a) and (b)) and Wasserstein ((c) and (d)) distances on 0-dimensional ((a) and (c)) and 1-dimensional ((b) and (d)) features in the EU E-Mail dataset. Since bottleneck distance captures the most perturbed feature, the result may be noisy. Wasserstein distance captures variation across all features in the graph, resulting in a smoother pattern. Persistence Diagram as a Graph Fingerprint Conventional graph-theoretical approaches typically utilize statistical properties of the vertices and edges, for instance degrees, connectivity, and path lengths, to describe the short-range and pairwise interactions in the system. On the other hand, topological summaries, such as persistence diagrams, are compressed feature representations of the underlying data that can capture long-range and higher-order interactions. We test our persistence-based similarity measure against a set of desirable properties for a similarity measure on a graph (the first four conditions are introduced in [41]): 1. Edge importance: An edge whose insertion or deletion changes the number of connected components is more important than an edge that does not. 2. Weight awareness: In weighted graphs, the bigger the weight of the removed edge is, the greater the impact on the similarity measure should be. 3. Edge-submodularity: Changing an edge in a dense graph is less important than changing an edge in an equally-sized, sparse graph. 4.
Focus awareness: Random changes in graphs are less important than targeted changes of the same extent. 5. Node awareness: We add an extra condition in this paper, i.e., deleting a large number of nodes in a graph has a larger impact than deleting a small number of nodes from the same graph. We conduct several experiments on synthetic and real-world datasets to test the above conditions. For node awareness (property 5), we consider the graph BR shown in Figure 13 (c) top left. Each of the graphs n i BR is obtained from the original graph BR by deleting i nodes (in blue). The bottleneck and Wasserstein distance matrices of PD 0 between these graphs are shown in the top of Figure 13 (a)-(b). The PD 1 distance matrices are omitted since their entries are all zeros. From the matrices in Figure 13 (a)-(b) top, we observe that the persistence-based similarity measure is sensitive to node deletion, that is, it satisfies node awareness; in particular, the Wasserstein distance is more node-aware than the bottleneck distance in these examples. Similarly, to test edge importance (property 1) against our similarity measure, we delete a set of edges from a graph LP, shown in Figure 13 (c) top right. The graph e i LP is obtained from LP by deleting i edges (in blue). The bottleneck and Wasserstein distance matrices of PD 0 among these graphs are shown in Figure 13 (a)-(b) bottom. We observe that our persistence-based similarity measure is sensitive to edge deletions that change the connectivity of the graph, that is, it satisfies edge importance. Notice how the Wasserstein distance is more aware of the level of (dis)connectedness between the graphs. To test weight awareness (property 2), we run our test on three randomly generated, weighted graphs A 1 = (V 1 , E 1 , w 1 ), A 2 = (V 2 , E 2 , w 2 ) and A 3 = (V 3 , E 3 , w 3 ), where |V 1 | = 50, |V 2 | = 60, |V 3 | = 70, |E 1 | = 200, |E 2 | = 250 and |E 3 | = 300, respectively.
Each is generated from the G n,m random graph model, where a graph is chosen uniformly at random from the set of all graphs with n nodes and m edges (by setting n = |V i | and m = |E i | for 1 ≤ i ≤ 3). The weights on the edges are drawn uniformly from (0.1, 1). For each graph A i = (V i , E i , w i ), we obtain a set of |E i | modified graphs B e i = (V i , E i , u i ) by modifying only the weight of an edge e (for all edges) in A i such that u i (e) = w i (e) + δ, where δ is drawn uniformly at random from (4, 5); similarly, we obtain a set of modified graphs C e i = (V i , E i , v i ) from A i such that v i (e) = w i (e) + δ, where δ is drawn uniformly at random from (2, 3). Figure 13: Given the synthetic, small exemplar graphs in (c), we study node awareness (property 5; (a)-(b), top) and edge importance (property 1; (a)-(b), bottom) on these graphs by computing the bottleneck (a) and Wasserstein (b) distance matrices between the PD 0 of the corresponding graphs. All edge weights are assumed to be 1. Let graph eA i denote the graph obtained from A i by deleting an edge e. Property 2 holds when W (eA i , B e i ) − W (eA i , C e i ) ≥ 0 for all e in A i . In Figure 14 we represent the difference W (eA i , B e i ) − W (eA i , C e i ) by plotting the points (W (eA i , C e i ), W (eA i , B e i )). Hence, property 2 holds for a point (x, y) on and above the diagonal (i.e., y ≥ x). Note that our similarity measure satisfies weight awareness for dimension 0 but violates the condition for dimension 1. This is due to the fact that W q,1 (eA i , B e i ) − W q,1 (eA i , C e i ) for some e captures the creation or destruction of a cycle. To test edge-submodularity (property 3), we consider a set of four graphs A, B, C and D. These graphs share the same number of nodes. Graph A is denser than graph C, while graphs B and D are obtained from A and C respectively by deleting an edge. We test property 3 against four sets of small synthetic graphs in Figure 13 (c) bottom; the results are shown in Table 1.
We see that both the Wasserstein and bottleneck distances on PD 0 better capture the changes that occur in a sparser graph than they do on an equally-sized denser graph; i.e., they satisfy edge-submodularity in dimension 0. However, these distances behave differently on PD 1 . Table 1 shows some negative entries; this is due to the fact that between C and D, a cycle is either created or destroyed, while no cycle appears/disappears between A and B (that is, W (A, B) = 0). For focus awareness (property 4), we generate three random weighted graphs A 1 , A 2 and A 3 following the same G n,m model as before, with 35, 100, and 120 vertices and 70, 500, and 300 edges, respectively; all edge weights are chosen uniformly at random from (0.1, 1). We generate a collection of so-called corrupted (i.e., modified) graphs from the original graph with two types of corruptions: (1) by deleting 10% to 70% of random edges (with 10% increments) of the original graph; and (2) by deleting the same number of edges from the original graph in a targeted way, specifically among the edges with the largest weights. For each graph A i we plot the difference between the targeted corruption T k (A i ) and the random corruption R k (A i ), for each percentage k, $\Delta(W_{q,j}) := \{ W_{q,j}(A_i, T_k(A_i)) - W_{q,j}(A_i, R_k(A_i)) \}_{k=10}^{70}$, against the percentage k of deleted edges. Figure 14: Testing weight awareness (property 2); panels: W 1,0 , W 2,0 , W 1,1 , W 2,1 . Points (W (eA, C e ), W (eA, B e )) on and above the diagonal correspond to instances where property 2 is satisfied. Three sets of graphs are represented by blue, orange and green points respectively.

Table 1: Testing edge-submodularity (property 3) using the graphs from Figure 13 (c); ∆(W ) = W (A, B) − W (C, D).

A | B | C | D | ∆(W 2,0 ) | ∆(W 2,1 ) | ∆(W ∞,0 ) | ∆(W ∞,1 )
C 5 | e 1 C 5 | K 5 | e 1 K 5 | 0 | 0.25 | 0 | 0.5
P 5 | e 1 P 5 | C 5 | e 1 C 5 | 0.25 | -0.25 | 0.5 | -0.5
C 9 | e 1 C 9 | K 9 | e 1 K 9 | 0 | 1 | 0 | 1
P 9 | e 1 P 9 | C 9 | e 1 C 9 | 0.25 | -1 | 0.5 | -1
Z | e 1 Z | L | e 1 L | 0.25 | 0 | 0.5 | 0
We obtain observations similar to those of the property 3 test, shown in Figure 15. Our persistence-based similarity measure satisfies the focus awareness property in dimension 0 but not in dimension 1. This is due to the fact that the deletion of an edge might create a cycle in the corrupted graph (see the negative values in Figure 15, bottom).

Figure 15: Testing focus awareness (property 4). Each colored curve represents one of three randomly generated graphs. The difference between the targeted corruption and the random corruption is plotted against the percentage of deleted edges.

Stability Under Perturbation

The persistence diagram computation depends on the distance matrix we impose on a graph. A natural question is: what are the advantages of using the persistence diagram of a graph over the distance matrix itself as a topological fingerprint of the graph? We give some experimental evidence in this section to justify our choice of persistence-based similarity measure. To simplify the analysis, we perturb a small percentage of edges of a simple example, the "map of science" graph [7], and we focus only on edge deletion. The experiments shown here use only PD 0 ; PD 1 is omitted because the results are similar. The map of science graph consists of 554 nodes and 2276 edges; we refer to it as the baseline graph, denoted G 0 .

Edge Deletion Model. Our edge deletion is designed as follows. At the i-th perturbation step, i% of the edges are deleted from the baseline G 0 uniformly at random; each such perturbation is repeated 20 times to obtain (almost) unbiased results. We perform a total of 20 perturbation steps, that is, up to 20% of the edges can be deleted from the baseline.

Similarity Measures. We compare several similarity measures. Recall that G 0 is the baseline graph and d 0 is the distance matrix of its metric space representation. Let G i be an instance of a perturbed graph at the i-th perturbation step, and let d i be the distance matrix of its metric space representation. The first set of similarity measures is based on bottleneck and Wasserstein distances: we examine the bottleneck distance W ∞ and the Wasserstein distance W 2 between the 0- and 1-dimensional persistence diagrams associated with G 0 and G i respectively. The second set of similarity measures is based upon matrix norms on the distance matrices. We measure the matrix max norm, ||d i − d 0 ||_max, where ||A||_max := max_{i,j} |a_ij| for a matrix A. We also measure the matrix Frobenius norm, ||d i − d 0 ||_F, where ||A||_F := ( Σ_i Σ_j (a_ij)² )^(1/2).

Experimental Results. Figure 16 shows our experimental results. Figure 16(a) uses the shortest-path distance metric in the computation of the various similarity measures, while Figure 16(b) uses the commute-time distance metric. Each subfigure is a box plot whose y-axis corresponds to a particular similarity measure. Since these similarity measures are not directly comparable, the range of the y-axis of each plot has been normalized to [0, 1] according to the maximum similarity measure across all experimental instances. In Figure 16(a), under the shortest-path distance metric, there appears to be a linear relationship between the perturbation and the bottleneck distance (and the Wasserstein distance). Furthermore, the Wasserstein distance has a smaller variance than the bottleneck distance, making it suitable for studying global perturbations of the data. On the other hand, similarity measures based on matrix norms are relatively unstable. Both the max norm and the Frobenius norm show large fluctuations and variance, making them less suitable for analysis. Moreover, these measures fail completely when the perturbed graph becomes disconnected, which is not an issue for our approach. In Figure 16(b), under the commute-time distance, we observe that the persistence-based measures appear to be less noisy and more stable than under the shortest-path distance metric.
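The two matrix norms used as baselines are straightforward to compute. A small self-contained sketch, with distance matrices as nested lists (the function names are ours):

```python
import math

def max_norm(A, B):
    """||A - B||_max: largest absolute entrywise difference."""
    return max(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def frobenius_norm(A, B):
    """||A - B||_F: square root of the sum of squared entrywise differences."""
    return math.sqrt(sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb)))

d0 = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]  # baseline distance matrix
d1 = [[0, 1, 4], [1, 0, 3], [4, 3, 0]]  # after a deletion, some shortest paths lengthen
print(max_norm(d0, d1))        # 2
print(frobenius_norm(d0, d1))  # sqrt(4 + 4 + 4 + 4) = 4.0
```

Note that both norms are undefined in a meaningful sense once the perturbed graph disconnects and some shortest-path distances become infinite, which is the failure mode discussed in the experimental results.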
CONCLUSION

Time-varying graphs are becoming increasingly important in data analysis and visualization. In this paper, we address the problem of capturing and visualizing structural changes in time-varying graphs using techniques originating from topological data analysis, in particular persistent homology. We provide a simple and intuitive visual interface for investigating structural changes in a graph using persistence-based similarity measures. There are many ongoing and future research avenues based upon our approach. For example, in our work we restrict topological feature extraction to Rips filtrations. Other types of filtrations, such as the clique filtration [55], can be natural for analyzing and understanding time-varying graphs. One interesting question that arises in our approach is how best to convert edge weights into distances. The conventional wisdom is that the stronger the communication between nodes (i.e., the higher the edge weight), the closer together they should be. However, we have some evidence that such a conversion may not always capture the underlying structural changes, and sometimes an inverse weighting scheme may be more effective. It would also be interesting to perform a systematic comparison with a wide range of similarity measures in the study of time-varying graphs [45], in particular to see how these different measures can complement one another in enriching our current visual analytic framework. A final note is that we hope the work described here can inspire more graph visualization research to move beyond graph-theoretical measures and venture into techniques from topological data analysis.

Figure 3: The (a) shortest-path and (b) commute-time distance measured from a source point on a 2-dimensional surface embedded in R³. Blue indicates the regions closest to the source.

A 1-simplex (an edge) is formed between two points in S if and only if their balls intersect (see Figure 4, left).
A 2-simplex (a triangular face) is formed among three points if the balls intersect between every pair of points (see Figure 4, right).

Figure 4: Edges (left) and triangles (right) in a Rips complex.

Persistence Diagrams. Topological features of a graph instance and their persistence are recorded by pairing their birth and death times.

Figure 5: Constructing a Rips filtration from a distance matrix on a graph. The numbers above each Rips complex indicate the diameter at which the complex is computed. The corresponding 0-persistence diagrams are shown in the gray box to the right of each complex.

Figure 7: Timeline comparison for the 7 weekdays of the High School Communications dataset. The timeline was generated by comparing the commute-time 0-dimensional homological features of the time-varying network using the Wasserstein distance. Additionally, two sets of graphs, each from the same time of day on 7 different days, are provided. These graphs validate the different levels of communication visible using our approach.

Figure 8: Top: persistent homology timeline for the first Monday and Tuesday of the High School Communications dataset. Bottom: timeline counting the number of events (sum of all weights) in each graph instance. The timeline shows how different features can be identified with our approach as compared to edge counts alone.

Figure 11: Highlights from the EU E-Mail dataset using the shortest-path Wasserstein distance on 1-dimensional persistence diagrams. A & B show graphs from a timeframe of normal weekly cyclic activity. C & F show timeframes of limited activity from December of 2003 and 2004 during the Christmas and New Year's holidays. D shows an unexpected boost in activity on June 13, 2004 that is correlated with the release of results of the EU Parliamentary Election. E shows a 3-4 week period of low activity in November and December of 2004.
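The 0-dimensional persistence of a Rips filtration built from a distance matrix can be computed with a simple union-find sweep: every point is born at scale 0, and a connected component dies at the diameter at which it merges into another one. A minimal sketch under our own naming, not code from the paper:

```python
def pd0_from_distances(d):
    """0-dimensional persistence pairs (birth = 0, death = merge scale)
    of the Rips filtration of a symmetric distance matrix d."""
    n = len(d)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Sweep the candidate Rips edges in increasing order of length.
    edges = sorted((d[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)  # one component dies at scale w
    # n - 1 finite pairs; one essential class persists forever (omitted here).
    return [(0.0, w) for w in deaths]

d = [[0, 1, 4],
     [1, 0, 4],
     [4, 4, 0]]
print(pd0_from_distances(d))  # [(0.0, 1), (0.0, 4)]
```

In the example, points 0 and 1 merge at diameter 1, and the remaining two components merge at diameter 4, matching the gray-box diagrams described for Figure 5.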
We could not identify any externally correlated event to explain this occurrence.

Figure 12: Clustering of the weekly behavior in the EU E-Mail dataset using the shortest-path Wasserstein distance on 1-dimensional features. The clusters show 4 primary patterns and 1 outlier pattern (bottom). The number of weeks in each cluster is listed in the lower right.

Figure 16: Study of the stability of different similarity measures under small perturbations. The x-axis of each plot shows the percentage of edges deleted from the graph. The y-axis represents the difference between the perturbed graph and the original graph. The y-axes are normalized to [0, 1] based upon the maximum observed values.

REFERENCES

[1] L. Babai. Graph isomorphism in quasipolynomial time. arxiv.org/abs/1512.03547, 2016.
[2] M. Bampasidou and T. Gentimis. Modeling collaborations with persistent homology. CoRR, abs/1403.5346, 2014.
[3] M. Bastian, S. Heymann, and M. Jacomy. Gephi: an open source software for exploring and manipulating networks. In ICWSM, pages 361-362, 2009.
[4] M. Baur and M. Benkert. Network comparison. In Network analysis, pages 318-340. Springer, 2005.
[5] F. Beck, M. Burch, S. Diehl, and D. Weiskopf. The state of the art in visualizing dynamic graphs. EuroVis STAR, 2, 2014.
[6] I. Borg and P. J. Groenen. Modern multidimensional scaling: Theory and applications. Springer Science & Business Media, 2005.
[7] K. Börner, R. Klavans, M. Patek, A. M. Zoss, J. R. Biberstine, R. P. Light, V. Larivière, and K. W. Boyack. Design and update of a classification system: The UCSD map of science. PloS one, 7(7):e39464, 2012.
[8] M. Burch, B. Schmidt, and D. Weiskopf. A matrix-based visualization for exploring dynamic compound digraphs. In Information Visualisation International Conference, pages 66-73. IEEE, 2013.
[9] G. Carlsson. Topological pattern recognition for point cloud data. Acta Numerica, 23:289-368, 2014.
[10] C. J. Carstens and K. J. Horadam. Persistent homology of collaboration networks. Mathematical Problems in Engineering, 2013, 2013.
[11] B. Cassidy, C. Rae, and V. Solo. Brain activity: Conditional dissimilarity and persistent homology. IEEE 12th International Symposium on Biomedical Imaging (ISBI), pages 1356-1359, 2015.
[12] G. Chartrand, F. Saba, and H. B. Zou. Edge rotations and distance between graphs. Časopis pro pěstování matematiky, 110(1):87-91, 1985.
[13] F. R. Chung. Spectral graph theory, volume 92. American Mathematical Soc., 1997.
[14] D. M. Cvetković, M. Doob, and H. Sachs. Spectra of graphs: theory and application, volume 87. Academic Pr, 1980.
[15] Y. Dabaghian, F. Mémoli, L. Frank, and G. Carlsson. A topological paradigm for hippocampal spatial map formation using persistent homology. PLoS Computational Biology, 8(8):e1002581, 2012.
[16] V. de Silva, D. Morozov, and M. Vejdemo-Johansson. Persistent cohomology and circular coordinates. Proceedings 25th Annual Symposium on Computational Geometry, pages 227-236, 2009.
[17] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269-271, 1959.
[18] K. Dinkla, M. A. Westenberg, and J. J. van Wijk. Compressed adjacency matrices: untangling gene regulatory networks. IEEE Transactions on Vis. and CG, 18(12):2457-2466, 2012.
[19] I. Donato, G. Petri, M. Scolamiero, L. Rondoni, and F. Vaccarino. Decimation of fast states and weak nodes. Proceedings of the European Conference on Complex Systems, pages 295-301, 2012.
[20] C. Dunne and B. Shneiderman. Motif simplification. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013.
[21] T. Dwyer, N. H. Riche, K. Marriott, and C. Mears. Edge compression techniques for visualization of dense directed graphs. IEEE Transactions on Visualization and Computer Graphics, 19(12):2596-2605, 2013.
[22] W. E, J. Lu, and Y. Yao. The landscape of complex networks. CoRR, abs/1204.6376, 2012.
[23] H. Edelsbrunner and J. Harer. Persistent homology - a survey. Contemporary Mathematics, 453:257-282, 2008.
[24] H. Edelsbrunner and J. Harer. Computational Topology: An Introduction. American Mathematical Society, Providence, RI, USA, 2010.
[25] J. Ellson, E. Gansner, L. Koutsofios, S. C. North, and G. Woodhull. In Graph Drawing, pages 483-484. Springer, 2002.
[26] J. Fournet and A. Barrat. Contact patterns among high school students. PloS one, 9(9):e107878, 2014.
[27] F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19(3):355-369, 2007.
[28] T. M. Fruchterman and E. M. Reingold. Graph drawing by force-directed placement. Software: Practice and Experience, 21(11):1129-1164, 1991.
[29] E. R. Gansner, Y. Hu, S. North, and C. Scheidegger. Multilevel agglomerative edge bundling for visualizing large graphs. In IEEE Pacific Visualization Symposium, pages 187-194, 2011.
[30] E. R. Gansner, Y. Koren, and S. North. Graph drawing by stress majorization. In Graph Drawing, pages 239-250. Springer, 2005.
[31] E. R. Gansner, E. Koutsofios, S. C. North, and K.-P. Vo. A technique for drawing directed graphs. IEEE Transactions on Software Engineering, 19(3):214-230, 1993.
[32] R. Ghrist. Barcodes: The persistent topology of data. Bulletin of the American Mathematical Society, 45:61-75, 2008.
[33] M. Greilich, M. Burch, and S. Diehl. Visualizing the evolution of compound digraphs with timearctrees. In Computer Graphics Forum, volume 28, pages 975-982. Wiley Online Library, 2009.
[34] D. Hansen, B. Shneiderman, and M. A. Smith. Analyzing social media networks with NodeXL: Insights from a connected world. Morgan Kaufmann, 2010.
[35] D. Holten and J. J. van Wijk. Force-directed edge bundling for graph visualization. In Computer Graphics Forum, volume 28, pages 983-.
[36] Y. Hu. Efficient, high-quality force-directed graph drawing. Mathematica Journal, 10(1):37-71, 2005.
[37] W. Javed and N. Elmqvist. Exploring the design space of composite visualization. In PacificVis 2012, pages 1-8. IEEE, 2012.
[38] S. Kairam, D. MacLean, M. Savva, and J. Heer. Graphprism: Compact visualization of network structure. In Advanced Visual Interfaces, 2012.
[39] M. Khoury, Y. Hu, S. Krishnan, and C. Scheidegger. Drawing large graphs by low-rank stress majorization. In Computer Graphics Forum, volume 31, pages 975-984. Wiley Online Library, 2012.
[40] Y. Koren, L. Carmel, and D. Harel. Ace: A fast multiscale eigenvectors computation for drawing huge graphs. In IEEE Symposium on Information Visualization, pages 137-144, 2002.
[41] D. Koutra, J. T. Vogelstein, and C. Faloutsos. Deltacon: A principled massive-graph similarity function. In Proceedings of the 2013 SIAM International Conference on Data Mining, pages 162-170. SIAM, 2013.
[42] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data (TKDD), 1(1):2, 2007.
[43] G. Li, M. Semerci, B. Yener, and M. J. Zaki. Graph classification via topological and label attributes. In Proceedings of the 9th international workshop on MLG, volume 2, 2011.
[44] K. Misue, P. Eades, W. Lai, and K. Sugiyama. Layout adjustment and the mental map. Journal of Visual Languages & Computing, 6(2):183-210, 1995.
[45] N. D. Monnig and F. G. Meyer. The resistance perturbation distance. arXiv preprint arXiv:1605.01091, 2016.
[46] A. Noack. Energy models for graph clustering. Journal of Graph Algorithms and Applications, 11(2):453-480, 2007.
[47] P. Papadimitriou, A. Dasdan, and H. Garcia-Molina. Web graph similarity for anomaly detection. Journal of Internet Services and Applications, 1(1):19-30, 2010.
[48] D. Selassie, B. Heller, and J. Heer. Divided edge bundling for directional network data. IEEE Transactions on Visualization and Computer Graphics, 17(12):2354-2363, 2011.
[49] W. T. Tutte. How to draw a graph. Proceedings of the London Mathematical Society, s3-13(1):743-767, Jan 1963.
[50] S. van den Elzen, D. Holten, J. Blaas, and J. J. van Wijk. Reducing snapshots to points: A visual analytics approach to dynamic network exploration. IEEE Transactions on Visualization and Computer Graphics, 22(1):1-10, 2016.
[51] S. V. N. Vishwanathan, N. N. Schraudolph, R. Kondor, and K. M. Borgwardt. Graph kernels. Journal of Machine Learning Research, 11(Apr):1201-1242, 2010.
[52] J. T. Vogelstein and C. E. Priebe. Shuffled graph classification: Theory and connectome applications. arXiv preprint arXiv:1112.5506, 2011.
[53] T. Von Landesberger, A. Kuijper, T. Schreck, J. Kohlhammer, J. J. van Wijk, J.-D. Fekete, and D. W. Fellner. Visual analysis of large graphs: State-of-the-art and future research challenges. In Computer Graphics Forum, volume 30, pages 1719-1749. Wiley Online Library, 2011.
[54] B. Wang, B. Summa, V. Pascucci, and M. Vejdemo-Johansson. Branching and circular features in high dimensional data. IEEE Transactions on Visualization and Computer Graphics, 17:1902-1911, 2011.
[55] A. Zomorodian. The tidy set: A minimal simplicial set for computing homology of clique complexes. Proceedings 26th ACM Symposium on Computational Geometry, pages 257-266, 2010.
PERCOLATION ON RANDOM TRIANGULATIONS AND STABLE LOOPTREES

Nicolas Curien (CNRS, LPMA UPMC Paris 6) and Igor Kortchemski (École Normale Supérieure)

Abstract. We study site percolation on Angel & Schramm's Uniform Infinite Planar Triangulation. We compute several critical and near-critical exponents, and describe the scaling limit of the boundary of large percolation clusters in all regimes (subcritical, critical and supercritical). We prove in particular that the scaling limit of the boundary of large critical percolation clusters is the random stable looptree of index 3/2, which was introduced in [13]. We also give a conjecture linking looptrees of any index α ∈ (1, 2) with scaling limits of cluster boundaries in random triangulations decorated with O(N) models.

Figure 1: A site percolated triangulation and the interfaces separating the clusters.

DOI: 10.1007/s00440-014-0593-5. arXiv: 1307.6818.
July 2013

This work is partially supported by the French "Agence Nationale de la Recherche" ANR-08-BLAN-0190. MSC2010 subject classifications: primary 05C80, 60J80; secondary 60K35. Keywords and phrases: random planar triangulations, percolation, Galton-Watson trees.

Introduction

We investigate site percolation on large random triangulations and in particular on the Uniform Infinite Planar Triangulation (in short, UIPT), which was introduced by Angel & Schramm [4]. In particular, we compute the critical and near-critical exponents related to the perimeter of percolation interfaces, and we identify the scaling limit of the boundary of large clusters in all regimes (critical, subcritical and supercritical). In the critical case, this limit is shown to be L 3/2, the stable looptree of parameter 3/2 introduced in [13]. Our method is based on a surgery technique inspired by Borot, Bouttier & Guitter and on a tree decomposition of triangulations with non-simple boundary.
We finally state precise conjectures linking the whole family of looptrees (L α)_{1<α<2} to scaling limits of cluster boundaries of random planar triangulations decorated with O(N) models.

The UIPT. The probabilistic theory of random planar maps and its physics counterpart, the Liouville 2D quantum gravity, is a very active field of research. See in particular the work of Le Gall and Miermont on scaling limits of large random planar maps and the Brownian map [31, 35]. The goal is to understand universal large-scale properties of random planar graphs or maps. One possible way to get information about the geometry of these random lattices is to understand the behavior of (critical) statistical mechanics models on them. In this paper, we focus on one of the simplest of such models: site percolation on random triangulations. Recall that a triangulation is a proper embedding of a finite connected graph in the two-dimensional sphere, considered up to orientation-preserving homeomorphisms, and such that all the faces have degree 3. We only consider rooted triangulations, meaning that an oriented edge is distinguished and called the root edge. Note that we allow loops and multiple edges. We write 𝒯 n for the set of all rooted triangulations with n vertices, and let T n be a random triangulation chosen uniformly at random in 𝒯 n. Angel & Schramm [4] have introduced an infinite random planar triangulation T ∞, called the Uniform Infinite Planar Triangulation (UIPT), which is obtained as the local limit of T n as n → ∞. More precisely, T ∞ is characterized by the fact that for every r ≥ 0 we have the convergence in distribution

B_r(T n) --(d)--> B_r(T ∞) as n → ∞,   (1)

where B_r(m) is the map formed by the edges and vertices of m that are at graph distance smaller than or equal to r from the origin of the root edge.
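On the underlying graph of a map, the ball B_r can be sketched with a breadth-first search. The following is a plain-graph illustration (it ignores the map structure; the representation and names are ours):

```python
from collections import deque

def ball(adj, origin, r):
    """Vertices at graph distance <= r from origin, by BFS."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # do not explore past radius r
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
print(sorted(ball(adj, 0, 1)))  # [0, 1, 2]
print(sorted(ball(adj, 0, 2)))  # [0, 1, 2, 3]
```

Local convergence as in (1) asks exactly that the law of such balls of any fixed radius stabilizes as the triangulation grows.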
This infinite random triangulation and its quadrangulation analog (the UIPQ, see [11, 28]) have attracted a lot of attention; see [5, 19] and the references therein.

Percolation. Given a parameter a ∈ (0, 1), color each vertex of the UIPT white with probability a and black with probability 1 − a, independently. The white cluster of the origin is by definition the set of all the white vertices, and edges between them, that can be reached from the origin of the root edge using white vertices only. We denote by H • a its hull, which is obtained by filling in the holes of the white cluster except the one containing the target of the root edge, called the exterior component (see Fig. 2 and Section 2.2 below for a precise definition). Finally, we denote by ∂H • a the boundary of the hull, which is the graph formed by the edges and vertices of H • a adjacent to the exterior (see Fig. 2), and we let #∂H • a be its perimeter, or length, that is, the number of half-edges of ∂H • a belonging to the exterior (it follows from [2] that #∂H • a is always finite). Note that ∂H • a is formed of discrete cycles attached by some pinch-points. One of our contributions (Theorem 1.1) is to find the precise asymptotic behavior of the probability of having a large perimeter in the critical case a = 1/2:

P(#∂H • 1/2 = n) ~ (√3 / |Γ(−2/3)|³) · n^(−4/3) as n → ∞,

where Γ is Euler's Gamma function. It is interesting to mention that the exponent 4/3 for the perimeter of the boundary of critical clusters also appears when dealing with the half-plane model of the UIPT: using the peeling process, it is shown in [3] that P(#∂H • a_c > n) ≍ n^(−1/3), where a_n ≍ b_n means that the sequence a_n / b_n is bounded from below and above by certain constants. The main idea used to establish Theorem 1.1 is a tree representation of the 2-connected components of ∂H • a, which we prove to be closely related to the law of a certain two-type Galton-Watson tree.
We reduce the study of this two-type random tree to the study of a standard one-type Galton-Watson tree by using a recent bijection due to Janson & Stefánsson [23], which enables us to use the vast literature on random trees and branching processes to make exact computations. This method also allows us to fully understand the probabilistic structure of the hull of the white cluster and to identify the scaling limits (for the Gromov-Hausdorff topology) in any regime (subcritical, critical and supercritical) of ∂H • a, seen as a compact metric space, when its perimeter tends to infinity. In particular, we establish that the scaling limit of ∂H • a_c conditioned to be large, appropriately rescaled, is the stable looptree of parameter 3/2 introduced in [13], whose definition we now recall.

Stable looptrees. Random stable looptrees are random compact metric spaces and can, in a certain sense, be seen as the dual of the stable trees introduced and studied in [33, 16]. They are constructed in [13] using stable processes with no negative jumps, but can also be defined as scaling limits of discrete objects: with every rooted oriented tree (or plane tree) τ, we associate a graph, called the discrete looptree of τ and denoted by Loop(τ), which is the graph on the set of vertices of τ such that two vertices u and v are joined by an edge if and only if one of the following three conditions is satisfied in τ: u and v are consecutive siblings of a same parent, or u is the first sibling (in the lexicographical order) of v, or u is the last sibling of v, see Fig. 3. Note that in [13], Loop(τ) is defined as a different graph, and that here Loop(τ) is the graph which is denoted by Loop′(τ) in [13]. We view Loop(τ) as a compact metric space by endowing its vertices with the graph distance (every edge has unit length). Fix α ∈ (1, 2).
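The discrete looptree just described is easy to build from a plane tree given as ordered children lists: each vertex v with children c1, ..., ck contributes the cycle v - c1 - c2 - ... - ck - v. A minimal sketch under our own representation (not code from the paper; multiple edges, which arise when a vertex has a single child, are collapsed into simple edges here):

```python
def looptree_edges(children):
    """Edge set of Loop(τ) for a plane tree {vertex: ordered list of children}."""
    edges = set()
    for v, kids in children.items():
        if not kids:
            continue
        edges.add(frozenset((v, kids[0])))    # parent to first child
        for a, b in zip(kids, kids[1:]):      # consecutive siblings
            edges.add(frozenset((a, b)))
        edges.add(frozenset((kids[-1], v)))   # last child back to parent
    return edges

# A root with three children turns into the 4-cycle 0-1-2-3-0.
tree = {0: [1, 2, 3], 1: [], 2: [], 3: []}
e = looptree_edges(tree)
print(len(e))                  # 4
print(frozenset((1, 3)) in e)  # False: 1 and 3 are not consecutive siblings
```

Each internal vertex with k children thus becomes a cycle of length k + 1, which is the picture behind Fig. 3.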
Now let τ n be a Galton-Watson tree conditioned on having n vertices, whose offspring distribution µ is critical and satisfies µ_k ~ c · k^(−1−α) as k → ∞ for a certain c > 0. In [13, Section 4.2], it is shown that there exists a random compact metric space L α, called the stable looptree of index α, such that

n^(−1/α) · Loop(τ n) --(d)--> (c |Γ(−α)|)^(−1/α) · L α as n → ∞,   (2)

where the convergence holds in distribution for the Gromov-Hausdorff topology and where c · M stands for the metric space obtained from M by multiplying all distances by c > 0. Recall that the Gromov-Hausdorff topology gives a sense to the convergence of (isometry classes of) compact metric spaces; see Section 4.3.1 below for the definition. It has been proved in [13] that the Hausdorff dimension of L α is almost surely equal to α. Furthermore, the stable looptrees can be seen as random metric spaces interpolating between the unit-length circle C 1 := (1/2π) · S¹ and Aldous' Brownian CRT [1] (which we view here as the tree T_e coded by a normalized Brownian excursion e, see [32]). We are now in position to describe the possible scaling limits of the boundary of percolation clusters in the UIPT. For fixed a ∈ (0, 1), let ∂H • a (n) be the boundary of the white hull of the origin conditioned on the event that H • a is finite and that the perimeter of ∂H • a is n. We view ∂H • a (n) as a compact metric space by endowing its vertices with the graph distance (every edge has unit length).

Figure 4: An α = 3/2 stable tree, and its associated looptree L 3/2, embedded non-isometrically and non-properly in the plane.

Theorem 1.2 (Scaling limits for ∂H • a).
For every a ∈ (0, 1), there exists a positive constant C_a such that the following convergences hold in distribution for the Gromov-Hausdorff topology:

(i) when 1/2 < a < 1,  n^{-1} · ∂H∘_a(n) → C_a · C_1  as n → ∞,

(ii) when a = a_c = 1/2,  n^{-2/3} · ∂H∘_a(n) → 3^{1/3} · L_{3/2}  as n → ∞,

(iii) when 0 < a < 1/2,  n^{-1/2} · ∂H∘_a(n) → C_a · T_e  as n → ∞.

See Theorem 1.3 below for more details about the constants C_a. Although Theorem 1.2 does not imply that 1/2 is the critical threshold for percolation on the UIPT (as shown in [2]), it is compelling evidence for it. Let us give a heuristic justification for the three limiting compact metric spaces appearing in the statement of this theorem. Imagine that we condition the cluster of the origin to be finite and have a very large, but finite, boundary. In the supercritical regime (i), as soon as the cluster grows arms it is likely to become infinite, hence the easiest way to stay finite is to look like a loop. On the contrary, in the subcritical regime (iii), having a large boundary costs a lot, so the cluster adopts the shape which maximizes its boundary length for fixed size: the tree. In the critical case (ii), these effects are balanced and a fractal object emerges: not quite a loop, nor a tree, but a looptree! The proof of Theorem 1.2 gives the expression of C_a in terms of certain quantities involving Galton-Watson trees. This allows us to obtain the following near-critical scaling behavior.

Theorem 1.3 (Near-critical scaling constants). The constants C_a satisfy the following near-critical asymptotic behavior:

C_a ∼ (2/√3) · (a − 1/2)  as a ↓ 1/2,   and   C_a ∼ (3^{3/4}/8) · (1/2 − a)^{-1/2}  as a ↑ 1/2.

See (21) below for the exact expression of C_a. Let us mention that the exponents appearing in the previous theorems are expected to be universal (see Section 5.2 for analogous results for type II triangulations).
Finally, we believe that our techniques may be extended to prove that the stable looptrees (L α : α ∈ (1, 2)) give the scaling limits of the outer boundary of clusters of suitable statistical mechanics models on random planar triangulations, see Section 5.3. Strategy and organization of the paper. In Section 2, we explain how to decompose a percolated triangulation into a white hull, a black hull and a necklace by means of a surgery along a percolation interface. In Section 3, we study the law of the white hull by using the so-called Boltzmann measure with exposure and its relation with a certain Galton-Watson tree. In Section 4, we prove our main results by carrying out explicit calculations. Finally, Section 5 is devoted to comments, extensions and conjectures. Acknowledgments: The first author is indebted to Olivier Bernardi and Grégory Miermont for many useful discussions concerning percolation on random maps. Decomposition of percolated triangulations In this section, we explain how to decompose certain percolated triangulations into a pair of two hulls glued together by a so-called necklace of triangles. This sort of decomposition was first considered by Borot, Bouttier & Guitter [9,8]. We then show how to associate a natural tree structure with a triangulation with boundary. The crucial feature is that when considering percolation on the UIPT, the random tree coding the hull turns out to be related to a Galton-Watson tree with an explicit offspring distribution in the domain of attraction of a stable law of index 3/2. Percolated triangulations A planar map is a proper embedding of a finite connected graph in the two-dimensional sphere, considered up to orientation-preserving homeomorphisms. The faces of the map are the connected components of the complement of the edges, and the degree of a face is the number of edges that are incident to it, with the convention that if both sides of an edge are incident to the same face, this edge is counted twice. 
As usual in combinatorics, we will only consider rooted maps that are maps with a distinguished oriented root edge. If m is a planar map, we will denote by V(m), E(m) and F(m) respectively the sets of vertices, edges and faces of m. A triangulation is a (rooted) planar map whose faces are all triangles, i.e. have degree three. Loops and multiples edges are allowed. A triangulation with boundary T is a planar map whose faces are triangles except the face adjacent on the right of the root edge called the external face, which can be of arbitrary degree. The size |T | of T is its total number of vertices. The boundary ∂T is the graph made of the vertices and edges adjacent to the external face of T and its perimeter #∂T is the degree of the external face. The boundary is simple if the number of vertices of ∂T is equal to its perimeter, or equivalently if ∂T is a discrete cycle. In the following, we denote by T n,p the set of all triangulations with boundary of perimeter p having n vertices in total. For reasons that will appear later, the set T 2,2 is made of the "triangulation" made of a single edge. By identifying the two sides of the external face of a triangulation with perimeter 2, we see that #T n,2 is also the number of rooted triangulations with n vertices in total. When working with the UIPT we will also allow triangulations to be infinite, see [14] for background (in the quadrangular case). A percolated triangulation is by definition a triangulation T with a coloring of its vertices in black or white. We say that the percolation is nice if the root edge joins a white vertex to a black vertex (which we write • → •). Note that this forces a percolation interface to go through the root edge, and that the latter cannot be a loop. The origin of the root edge is called the white origin and its target is called the black origin. In the following, we always assume that the percolation is nice. 
Necklace surgery In this section, we assume in addition that the percolation interface going through the root edge is finite, see Fig. 5. The white and black hulls. Let T be a nicely percolated triangulation (possibly infinite). The white cluster is by definition the submap consisting of all the edges (together with their extremities) whose endpoints are in the same white connected component as the white origin. The complement of this cluster consists of connected components. The white hull H • is the union of the white cluster and of all the latter connected components, except the one containing the black origin (see Fig. 5). Hence H • is a triangulation with a (non necessarily simple) boundary, and is by convention rooted at the edge whose origin is the white origin (with the external face lying on its right). Note that by definition all the vertices on the boundary of H • are white. We similarly define the black cluster as the submap consisting of all the edges (together with their extremities) whose endpoints are in the same connected component as the black origin. The black hull H • is similarly obtained by filling-in the holes of the black cluster except the one containing the white origin. The map H • is thus a triangulation with a black boundary which is rooted at the edge whose origin is the black origin. Recall that the perimeters of H • and H • are respectively denoted by #∂H • and #∂H • . These quantities are finite by the assumption made in the beginning of this section. However, |H • | and |H • | may be infinite. Surgery. Imagine that using a pair of scissors, we separate the two hulls H • and H • by cutting along their boundaries. During this operation, we duplicate the vertices which are pinch-points in the boundary of H • or H • , see Fig. 5. After doing so, we are left with the two hulls H • and H • which are now separated, together with the map that was stuck in-between H • and H • , which is called the necklace [9,8]. 
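In code, the white cluster of the origin is just a breadth-first search that refuses to cross black vertices; the sketch below is a generic illustration on adjacency lists (our own encoding, not the paper's map representation):

```python
from collections import deque

def white_cluster(adj, is_white, origin):
    """Vertices lying in the same white connected component as `origin`.

    `adj[v]` lists the neighbours of v; `is_white[v]` is the percolation
    coloring. Returns the empty set if the origin itself is black.
    """
    if not is_white[origin]:
        return set()
    cluster = {origin}
    queue = deque([origin])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if is_white[w] and w not in cluster:
                cluster.add(w)
                queue.append(w)
    return cluster

# A 6-cycle where vertices 0, 1, 2, 4 are white: vertex 4 is cut off
# from the origin 0 by the black vertices 3 and 5.
adj = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
print(white_cluster(adj, [True, True, True, False, True, False], 0))
```

Filling in the finite holes of this cluster (except the one containing the black origin) then yields the hull, as described above.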
If n, m 0 are integers such that mn = 0, a (n, m)-necklace is by definition a triangulation with two simple boundaries, a "white" one of perimeter n and a "black" one of perimeter m, such that every vertex belongs to one of these two boundaries and such that every triangle has at least one vertex on each boundary, rooted along an edge joining a white vertex to a black one. The root of the necklace obtained from a nicely percolated triangulation T is the root of T . It is an easy exercise to show that for n, m 0, #{(n, m)-necklaces} = n + m n . It is plain that the last decomposition is invertible, in other words the following result holds: In the next subsection, we further decompose a hull according to the tree structure provided by its 2-connected components. Tree representation of triangulation with boundary We denote by T the set of all plane (rooted and oriented) trees, see [32,37] for the formalism. In the following, tree will always mean plane tree. We will view each vertex of a tree τ as an individual of a population whose τ is the genealogical tree. The vertex ∅ is the ancestor of this population and is called the root. The degree of a vertex u ∈ τ is denoted by deg(u) and its number of children is denoted by k u . The size of τ is by definition the total number of vertices and will be denoted by |τ| and H(τ) is the height of the tree, that is, its maximal generation. We denote by T B the set of all triangulations with boundary and by T S the set of all triangulations with simple boundary (also called simple triangulations in the sequel). Let T be a triangulation with boundary. We recall that the perimeter #∂T of T is the number of half-edges on its boundary. We define the set E(T ) of all exterior vertices of T as the set of all the vertices on the boundary of T , and I(T ) as the complement of E(T ) in V(T ), which are the so-called inner vertices of T . Note that #E(T ) = #∂T when T has a simple boundary, and that #E(T ) < #∂T otherwise. 
If T is not a simple triangulation, then T can be decomposed into #∂T − #E(T ) + 1 different simple triangulations, attached by some pinch-points, see Fig. 7. Imagine that we scoop out the interior of all these simple triangulation components and duplicate each edge whose sides both belong to the external face into two "parallel" edges (see Fig. 7). We thus obtain a collection of cycles glued together, which we call the scooped-out triangulation and denote it by Scoop(T ). Note that Scoop(T ) may differ from the boundary ∂T of T only because some edges may have been duplicated, see Fig. 7. Note however that the underlying metric spaces are identical. The scooped-out triangulation Scoop(T ) can naturally be represented as a tree. More precisely, with Scoop(T ) we associate a tree with two types of vertices, white and black, as follows. Inside each cycle, add a new black vertex which is connected to all the white vertices belonging to this cycle. The resulting tree is denoted by Tree(T ) and is rooted at the corner adjacent to the target of the root edge of T (see Fig. 7). By construction, Tree(T ) is a plane tree such that all the vertices at even (resp. odd) height are white (resp. black). If t is a plane tree, let •(t) (resp. •(t)) be the set of all vertices at odd (resp. even) height. If t = Tree(T ), then the vertices belonging to •(t) correspond to the exposed vertices of T and the following relations are easy to check: # • (t) = #E(T ) (3) |t| = # • (t) + # • (t) = #∂T + 1 (4) u∈•(t) deg(u) = #∂T (5) u∈•(t) (1 + k u ) = |t|.(6) This scooping-out procedure is a bijection between the set of all triangulations with boundary and the set of all plane trees having at least two vertices together with a finite sequence (T u , u ∈ •(t)) of triangulations with simple boundary such that #∂T u = deg(u) (which correspond to the triangulations inside each cycle). 
Recall that T_{2,2}, the set of all triangulations with simple boundary of perimeter 2 and no internal vertices, is by convention composed of a degenerate triangulation made of a single edge: we use this triangulation to close a double edge into a single one, see Fig. 7.

Enumerative results

In the previous section, we have explained how to decompose a triangulation with boundary into a tree of triangulations with simple boundary. We now present some useful enumerative results on triangulations with simple boundary. We denote by W the generating function of triangulations with simple boundary having weight x per inner vertex and y per edge on the boundary, that is

W(x, y) := Σ_{T ∈ T_S} x^{#I(T)} y^{#∂T} = yx + y^2 + · · · .

Note that the contribution of the "edge triangulation" is y^2 in the previous sum. We also let w_{n,p} = [x^n][y^p] W be the number of triangulations with simple boundary of perimeter p ≥ 1 and n internal vertices. Following Tutte [40], Krikun [29] calculated the generating function W(x, y). In particular, the radius of convergence of W as a function of x is r_c := 1/√432 = 1/(12√3), and an explicit formula for w_{n,p} can be found in [29]. We will not need its exact expression, but we will heavily rely on the following asymptotic estimates:

w_{n,p} ∼ C_p · n^{-5/2} · r_c^{-n}  as n → ∞,  where  C_p = 3^{p-2} · p · (2p)! / (4√(2π) · (p!)^2) ∼ (1/(36π√2)) · √p · 12^p  as p → ∞.   (7)

In particular, note that the number of triangulations with n vertices is

#T_{n,2} = w_{n-2,2} ∼ (1/(144√(2π))) · n^{-5/2} · r_c^{-n}  as n → ∞.   (8)

We will also use the explicit expression of W(r_c, y):

W(r_c, y) = y/2 + ((1 − 12y)^{3/2} − 1) / (24√3).   (9)

This expression can be obtained from [29, (4)] after a change of variables. For every integer k ≥ 1, we also introduce

q_k := 12^{-k} · [y^k] W(r_c, y) = 12^{-k} · Σ_{n≥0} w_{n,k} r_c^n.   (10)

Standard singularity analysis shows that

q_k ∼ (1/(32√(3π))) · k^{-5/2}  as k → ∞.   (11)

In particular, note that the series Σ_{k≥1} q_k is convergent.
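Formulas (9)-(11) can be cross-checked numerically. The sketch below (our own verification script, not from the paper) extracts the coefficients q_k from the closed form (9) via the generalized binomial expansion of (1 − 12y)^{3/2} and compares the tail with (11):

```python
from math import sqrt, pi

def q_coeffs(kmax):
    """q_k = 12^{-k} [y^k] W(r_c, y), from (9):
    W(r_c, y) = y/2 + ((1 - 12y)^{3/2} - 1) / (24*sqrt(3)),
    hence q_k = (-1)^k * binom(3/2, k) / (24*sqrt(3)) for k >= 2."""
    qs = [0.0] * (kmax + 1)
    qs[1] = (0.5 - sqrt(3) / 4) / 12      # the extra y/2 term only affects k = 1
    b = 1.0                               # b_k = (-1)^k * binom(3/2, k)
    for k in range(1, kmax + 1):
        b *= (k - 2.5) / k
        if k >= 2:
            qs[k] = b / (24 * sqrt(3))
    return qs

qs = q_coeffs(20000)
print(qs[2] * 64 * sqrt(3))                        # ≈ 1 (q_2 = 1/(64*sqrt(3)))
print(qs[20000] * 32 * sqrt(3 * pi) * 20000**2.5)  # ≈ 1, by (11)
```

The truncated sum of the q_k also matches the value (√3 − 1)/(24√3) obtained by evaluating the series at y = 1/12, which is the normalizing constant Z appearing in the next section.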
Boltzmann triangulations with exposure and GW trees

This section is devoted to the study of the tree structure of a random triangulation with boundary distributed according to the Boltzmann measure with exposure defined below. This measure will naturally arise in Proposition 4.2 when considering the hulls in a Bernoulli site percolation of the UIPT.

Definition 3.1 (Boltzmann measure with exposure on triangulations). For every a ∈ (0, 1), we introduce a measure Q_a on the set of all triangulations with (general) boundary, called the critical Boltzmann measure with exposure a, defined by

Q_a(T) = r_c^{#V(T)} · 12^{−#∂T} · a^{#E(T)},   for all T ∈ T_B.   (12)

Note that in this definition, r_c is raised to the power #V(T) and not to the power #I(T) as in W. Our goal is now to describe the "law" of the tree of components of a triangulation under the measure Q_a.

A two-type Galton-Watson tree

Given two probability measures µ∘ and µ• on {0, 1, 2, 3, . . .}, we consider a two-type Galton-Watson tree where every vertex at even (resp. odd) height has a number of children distributed according to µ∘ (resp. µ•), all independently of each other. Specifically, using the notation k_u for the number of children of a vertex u in a plane tree, its law, denoted by GW_{µ∘,µ•}, is characterized by the following formula:

GW_{µ∘,µ•}(t) = Π_{u∈∘(t)} µ∘(k_u) · Π_{u∈•(t)} µ•(k_u),   for all t ∈ T.

Recall that a ∈ (0, 1). To simplify notation, set γ = √3 − 1, ξ = γ/(γ + 2a) and define two probability distributions µ• and µ∘_a on {0, 1, 2, 3, . . .} by

µ•(j) = q_{j+1}/Z•,   µ∘_a(j) = (1 − ξ)ξ^j   (j ≥ 0),

where Z• is a normalizing constant. Using (9), simple computations show that Z• = γ r_c/2. The following proposition is the key of this work:

Proposition 3.2. For every a ∈ (0, 1) and for every plane tree t such that |t| ≥ 1, we have

Q_a({T ∈ T_B ; Tree(T) = t}) = (r_c (2a + γ)/2)^{|t|} · GW_{µ∘_a,µ•}(t).

Proof. Fix t ∈ T with |t| ≥ 1.
Using (12), the scoop decomposition of Section 2.3 (and its consequence (3)), (5), the definition of the q k in (10), then (4) and finally (6), we get that Q a ({T : Tree(T ) = t}) = 1 12 #∂T u∈•(t) ar c n 1 ,...,n #•(t) 0 u∈•(t) r n i c w n i ,deg(u) = (5),(10) u∈•(t) ar c u∈•(t) q k u +1 = (4) Z #•(t)+#•(t) • u∈•(t) ar c Z • u∈•(t) q k u +1 Z • = (6) Z • ξ #•(t)+#•(t) u∈•(t) ar c Z • ξ k u +1 u∈•(t) q k u +1 Z • . Since ξar c /Z • = 1 − ξ, this completes the proof. Remark 3.3. A simple computation shows that the mean of µ • is equal to m • = 1/γ and that the mean of µ • a is m • a = γ/(2a). In particular m • m • a = 1/(2a) , so that the two-type Galton-Watson tree is critical if and only if a = 1/2. The following proposition will be useful when we will deal with the UIPT. Recall that T n,p is the set of all triangulations with general boundary of perimeter p having n vertices in total. Proposition 3.4. For every fixed p 1, we have Q a (T n,p ) ∼ K a (p) · n −5/2 as n → ∞, with K a (p) = r c (γ + 2a) 2 p+1 GW µ • a ,µ •   u∈•(τ) φ(k u )1 |τ|=p+1   , where φ(k) = C k+1 12 k+1 q k+1 ∼ k→∞ 4 9 6 π · k 3 . Proof. By (4), if T ∈ T n,p and t = Tree(T ), then 1 # • (t) p. Using the scoop-out decomposition, we can thus write Q a (T n,p ) = 1 12 p · t∈T |t|=p+1 •(t) ar c n 1 +...+n #•(t) =n−#•(t) u∈•(t) r n i c w n i ,deg(u) .(13) As n → ∞, a standard phenomenon occurs: in the second sum appearing in (13), the only terms n 1 , . . . , n #•(t) that have a contribution in the limit are those where one term is of order n whereas all the others remain small. More precisely, we use the following lemma whose proof is similar to that of [4,Lemma 2.5] and is left to the reader: n i = k i=1 C i j =i ∞ n=0 a (j) n . 
Multiplying both sides of (13) by n 5/2 , by (7) we are in position to apply Lemma 3.5 together with the definition of q k (10) to get: lim n→∞ n 5/2 Q a (T n,p ) = 1 12 p t∈T |t|=p+1 •(t) ar c u∈•(t) C deg(u) v∈•(t) v =u 12 deg(v) q deg(v) = (5) t∈T |t|=p+1 •(t) ar c u∈•(t) C deg(u) 12 deg(u) v∈•(t) v =u q deg(v) = t∈T |t|=p+1 •(t) ar c v∈•(t) q deg(v) u∈•(t) C deg(u) 12 deg(u) q deg(u) . Performing the same manipulations as in the proof of Proposition 3.2, the last display is equal to = r c (γ + 2a) 2 |t| t∈T |t|=p+1 u∈•(t) µ • a (k u ) u∈•(t) µ • (k u ) u∈•(t) C k u +1 12 k u +1 q k u +1 = r c (γ + 2a) 2 p+1 GW µ • a ,µ •   u∈•(τ) φ(k u )1 |τ|=p+1   , where φ(k) = C k+1 /(12 k+1 q k+1 ) is asymptotically equivalent to 4 9 6 π · k 3 as k → ∞ by (11) and (7). We conclude this section by giving the asymptotic behavior of the expectation appearing in the definition of K a (p) as p → ∞, in the critical case a = 1/2: GW µ • 1 ⁄2 ,µ •   u∈•(τ) φ(k u )1 |τ|=n+1   ∼ n→∞ 3 1/6 Γ (−2/3) 2 · √ 2π · n 1/3 .(14) The proof is postponed to the appendix (see Corollary A.7). Reduction to a one-type Galton-Watson tree We have seen in the last section that the "law" of the tree associated with a Boltzmann triangulation with exposure is closely related to a two-type Galton-Watson tree. In order to study this random tree, we will use a bijection due to Janson & Stefánsson [23, Section 3] which will map this two-type Galton-Watson tree to a standard one-type Galton-Watson tree, thus enabling us to use the vast literature on this subject. We start by describing this bijection, denoted by G (the interested reader is referred to [23] for further details). First set G(τ) = {∅} if τ = {∅} is composed of a single vertex. Now fix a tree τ = {∅}. The tree G(τ) has the same vertices as τ, but the edges are different and are defined as follows. 
For every white vertex u, repeat the following operation: denote by u_0 the parent of u (if u ≠ ∅) and list the children of u in lexicographical order u_1, u_2, ..., u_k. If u ≠ ∅, draw the edge between u_0 and u_1, then the edges between u_1 and u_2, ..., u_{k−1} and u_k, and finally between u_k and u; if u is a white leaf, this reduces to drawing the edge between u_0 and u. If u = ∅, draw only the edges between u_1 and u_2, ..., u_{k−1} and u_k, and between u_k and u. One can check that the graph G(τ) defined by this procedure is a tree. In addition, G(τ) is rooted at the corner between the root of τ and its first child (see Fig. 8).

Figure 8: An example of a tree τ, where vertices at even (resp. odd) generation have been colored in white (resp. black), and G(τ).

This mapping thus has the property that every vertex at even generation is mapped to a leaf, and every vertex at odd generation with k ≥ 0 children is mapped to a vertex with k + 1 children. The following result is implicit in [23, Appendix A], but for the sake of completeness we give a proof.

Proposition 3.6 ([23]). Let ρ, µ be two probability measures on {0, 1, 2, . . .}. Assume that ρ is a geometric distribution, i.e. there exists λ ∈ (0, 1) such that ρ(i) = (1 − λ)λ^i for i ≥ 0. Then the image of GW_{ρ,µ} under G is the Galton-Watson measure GW_ν, where ν is defined by:

ν(0) = 1 − λ,   ν(k) = λ · µ(k − 1),   k ≥ 1.

Proof. Fix a tree t. Color in white the vertices at even generation in t and in black the other vertices. Recall that •(t) (resp. ∘(t)) is the set of all black (resp. white) vertices. Then, using the fact that #∘(t) + #•(t) = Σ_{u∈∘(t)} (1 + k_u), write

GW_{ρ,µ}(t) = Π_{u∈•(t)} µ(k_u) · Π_{u∈∘(t)} λ^{k_u+1} (1/λ − 1) = λ^{#∘(t)+#•(t)} · Π_{u∈•(t)} µ(k_u) · Π_{u∈∘(t)} (1/λ − 1) = Π_{u∈∘(t)} (1 − λ) · Π_{u∈•(t)} λ µ(k_u).

Since G maps white vertices to leaves and black vertices with k children to vertices with k + 1 children, the last expression implies that for a tree τ:

GW_{ρ,µ}(G^{-1}(τ)) = Π_{u∈τ: k_u=0} (1 − λ) · Π_{u∈τ: k_u>0} λ · µ(k_u − 1).

The conclusion follows. Application.
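The mapping G can also be checked by direct implementation. The sketch below (our own encoding: a plane tree as children lists in lexicographical order, root 0) verifies on an example that every even-generation vertex becomes a leaf of G(τ) and that every odd-generation vertex with k children becomes a vertex with k + 1 children:

```python
def js_bijection(children):
    """Sketch of the Janson-Stefansson mapping G on a plane tree.

    `children[v]` lists the children of v in lexicographical order.
    Returns (gchildren, height): the children lists of G(tau), rooted at
    the first child of the root of tau, and the heights in tau. The
    planar (left-to-right) order of G(tau) is not tracked here.
    """
    n = len(children)
    if n == 1:
        return [[]], {0: 0}
    parent, height, order = {0: None}, {0: 0}, [0]
    for v in order:                       # breadth-first sweep of tau
        for c in children[v]:
            parent[c], height[c] = v, height[v] + 1
            order.append(c)
    edges = []
    for u in order:
        if height[u] % 2 == 1:
            continue                      # the rewiring is driven by white vertices
        ch = children[u]
        if u != 0:
            if not ch:                    # white leaf: single edge to the grandparent
                edges.append((parent[u], u))
                continue
            edges.append((parent[u], ch[0]))
        edges.extend(zip(ch, ch[1:]))     # chain the children of u
        if ch:
            edges.append((ch[-1], u))     # last child linked back to u
    adj = {v: [] for v in range(n)}
    for x, y in edges:
        adj[x].append(y)
        adj[y].append(x)
    root = children[0][0]                 # G(tau) is rooted at the first child of 0
    gchildren = [[] for _ in range(n)]
    seen, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                gchildren[v].append(w)
                stack.append(w)
    return gchildren, height

# tau: 0 -> {1, 2}, 1 -> {3}, 3 -> {4, 5}; even generations 0, 3 are white.
tau = [[1, 2], [3], [], [4, 5], [], []]
gch, h = js_bijection(tau)
assert all(gch[v] == [] for v in range(6) if h[v] % 2 == 0)
assert all(len(gch[v]) == len(tau[v]) + 1 for v in range(6) if h[v] % 2 == 1)
```

The two assertions check exactly the degree property quoted above on this example.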
In virtue of Proposition 3.6 (applied with µ = µ•, the black offspring law, and ρ = µ∘_a, the white geometric one), the image of GW_{µ∘_a,µ•} by G is a standard Galton-Watson measure with offspring distribution ν_a on {0, 1, . . .} defined by

ν_a(0) = 2a/(γ + 2a),   ν_a(k) = 2 q_k / (r_c (γ + 2a))   (k ≥ 1).

Using (9), it is a simple matter to check that the generating function of ν_a is given by:

F_a(z) = Σ_{i≥0} ν_a(i) z^i = (2a − 1 + √3 · z + (1 − z)^{3/2}) / (2a − 1 + √3).   (15)

In particular F′_a(1) = (1 + 2(a − 1/2)/√3)^{-1}, so that ν_a is critical if and only if a = 1/2. When a = 1/2, to simplify notation we write ν = ν_{1/2}. Then note that:

ν(0) = √3/3,   ν(i) = 24 · q_i (i ≥ 1),   Σ_{i≥0} ν(i) z^i = z + (1 − z)^{3/2}/√3,   ν(k) ∼ (√3/(4√π)) · k^{-5/2}  as k → ∞.   (16)

In particular, this enables us to find the asymptotic behavior of the probability that our two-type Galton-Watson tree has total size n (see Corollary A.7 for this well-known fact):

GW_{µ∘_{1/2},µ•}(|τ| = n) = GW_ν(|τ| = n) ∼ (3^{1/3}/|Γ(−2/3)|) · n^{-5/3}  as n → ∞.   (17)

Remark 3.7. The exponent 5/3 in (17) also appears in [6] when analyzing the Boltzmann distribution with exposure using generating functions and methods from the theory of analytic combinatorics.

Study of the percolation hull

With the tools developed in the previous sections, we can now proceed to the proofs of our main results. We start by identifying the law of the hulls of a nicely percolated UIPT (Proposition 4.2), and then connect it with the Boltzmann measure with exposure introduced in Section 3.

Identification of the law of the hull of the origin

Fix a ∈ (0, 1). Recall from (1) the construction of the Uniform Infinite Planar Triangulation (UIPT) as the distributional local limit of uniform triangulations of size tending to infinity. Given T∞, we define a site percolation (percolation in short) as a random bi-coloring of the vertices of T∞, obtained by painting independently each vertex white with probability a and black with probability 1 − a, see Fig. 2.
Recall that Angel [2] has proved that the critical threshold parameter for percolation is almost surely a c := 1/2 and that, furthermore, at a c there is no percolation. Angel also proved that on the event that the percolation is nice, the percolation interface going through the root edge is finite in all regimes (subcritical, critical and supercritical) allowing us to perform the necklace surgery. Since almost surely the UIPT has one end, only one of the two hulls H • and H • is infinite. In the sequel, we will implicitly use the above remarks without further notice. To stress the dependence in a ∈ (0, 1), conditionally on the percolation on the UIPT being nice, we denote by H • a and H • a respectively the white and black hulls H • and H • . We start with a useful remark based on symmetry. Proposition 4.1. We have the following equality in distribution H • a , H • a (d) = H • 1−a , H • 1−a . Proof. This is a consequence of the fact that flipping all the colors into their opposite reverses the roles of H • and H • , and exchanges a with 1 − a. Proposition 4.2. Let h ∈ T B be a finite triangulation with boundary. Set n = #∂h . For every m 1, we have P (H • a = h, #∂H • a = m) = 144 √ 2π · 12 n Q a (h) · n + m n · 12 m K 1−a (m). Proof. For every N 1, let T N be a uniform triangulation with N vertices. Conditionally on T N , sample a site percolation on T N with parameter a ∈ (0, 1). On the event on which the percolation is nice, denote respectively by H • a,N and H • a,N the white and black hulls. Recall the notation T n,p for the set of all triangulations with boundary of perimeter p and n vertices in total. By the necklace decomposition of Section 2. Since T N converges locally in distribution towards the UIPT (see (1)), using Proposition 3.4 and (8) we can take the limit as N → ∞ and get the statement of the proposition. The critical exponent for the perimeter We are now ready to prove Theorem 1.1. Proof of Theorem 1.1. 
We keep the notation of Section 2.3, in particular recall that Tree(T ) denotes the tree of components of a triangulation T ∈ T B and that |Tree(T )| = #∂T + 1. Note that when a = 1/2, we have r c (2a + γ)/2 = 1/24. To simplify notation, set K n := 24 n · K1 ⁄2 (n) = 1 24 · GW µ • 1 ⁄2 ,µ •   u∈•(τ) φ(k u )1 |τ|=n+1   , Q n := 24 n · Q1 ⁄2 ({T : #∂T = n}) = 1 24 · GW µ • 1 ⁄2 ,µ • (|τ| = n + 1) . This implies that K n ∼ n→∞ 1 8 · 3 5/6 · Γ (−2/3) 2 · √ 2π · n 1/3 andQ n ∼ n→∞ 1 8 · 3 2/3 · |Γ (−2/3)| · n −5/3 . (18) Indeed, the first statement follows from (14), while the second one is a consequence of Proposition 3.2 combined with (17). Next, using Proposition 4.1, write for n 1 P(#∂H • 1 ⁄2 = n) = P(#∂H • 1 ⁄2 = n, |H • 1 ⁄2 | = ∞) + P(#∂H • 1 ⁄2 = n, |H • 1 ⁄2 | < ∞) = Prop.4.1 P(#∂H • 1 ⁄2 = n, |H • 1 ⁄2 | = ∞) + P(|H • 1 ⁄2 | < ∞, #∂H • 1 ⁄2 = n) = ∞ m=1 h∈T B ,|h|<∞ #∂h=n P(H • 1 ⁄2 = h, #∂H • 1 ⁄2 = m) + ∞ m=0 h∈T B ,|h|<∞ #∂h=m P(H • 1 ⁄2 = h, #∂H • 1 ⁄2 = n) = Prop. 4.2 144 √ 2π n + m m 12 n+m · ∞ m=1 Q1 ⁄2 ({T ∈ T B ; #∂T = n})K1 ⁄2 (m) + ∞ m=0 Q1 ⁄2 ({T ∈ T B ; #∂T = m})K1 ⁄2 (n) = 144 √ 2π ∞ m=0 n + m n 2 −n−m Q mKn +Q nKm , with the convention thatK 0 = 0. Now, for every m 0 and u ∈ R, set F n (u) = 2n + u √ n n √ n 2 2n+ u √ n . It is a simple matter to check that for fixed u ∈ R, F n (u) → e −u 2 /4 / √ π as n → ∞ and that there exists a constant C > 0 such that F n (u) C2 −|u| for every n 2 and u ∈ R. Combined with (18), the dominated convergence theorem implies that ∞ m=0 n + m n 1 2 m+nK mQn +K nQm 4K nQn = ∞ − √ n du F n (u)K n+ u √ n Q n +K nQ n+ u √ n 4K nQn −→ n→∞ 1 2 · ∞ −∞ 1 √ π e −u 2 /4 du = 1. Hence P(#∂H • 1 ⁄2 = n) ∼ n→∞ 576 √ 2π ·K nQn . Thus, by (18), we get P(#∂H • 1 ⁄2 = n) ∼ 64 · 3 2 · √ 2π · 1 8 · 3 5/6 · Γ (−2/3) 2 · √ 2π · n 1/3 · 1 8 · 3 2/3 · |Γ (−2/3)| · n −5/3 . = √ 3 |Γ (−2/3)| 3 · n −4/3 . This completes the proof. 
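The final constant of Theorem 1.1 can be double-checked by redoing the last arithmetic step numerically; the short script below is our own verification (Python's math.gamma accepts negative non-integer arguments):

```python
from math import gamma, sqrt, pi

# Leading constants of (18): K~_n ≈ K * n^(1/3) and Q~_n ≈ Q * n^(-5/3).
K = 1 / (8 * 3**(5/6) * gamma(-2/3)**2 * sqrt(2*pi))
Q = 1 / (8 * 3**(2/3) * abs(gamma(-2/3)))

# P(#bd H = n) ~ 576*sqrt(2*pi) * K~_n * Q~_n, which should equal
# sqrt(3) / |Gamma(-2/3)|^3 * n^(-4/3).
C = 576 * sqrt(2*pi) * K * Q
print(C)  # ≈ 0.0267
```

The agreement is exact up to floating-point rounding, since 576/64 = 9 and 3^{5/6} · 3^{2/3} = 3^{3/2}.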
Scaling limits We are now ready to prove Theorem 1.2, which describes the scaling limits of the boundary of large percolation clusters in the UIPT. We start by recalling the definition of the Gromov-Hausdorff topology (see [10] for additional details). The Gromov-Hausdorff Topology If (E, d) and (E , d ) are two compact metric spaces, the Gromov-Hausdorff distance between E and E is defined by d GH (E, E ) = inf d F H (φ(E), φ (E )) , where the infimum is taken over all choices of metric space (F, δ) and isometric embeddings φ : E → F and φ : E → F of E and E into F, and where d F H is the Hausdorff distance between compacts sets in F. The Gromov-Hausdorff distance is indeed a metric on the space of all isometry classes of compact metric spaces, which makes it separable and complete. An alternative practical definition of d GH uses correspondences. A correspondence between two metric spaces (E, d) and (E , d ) is by definition a subset R ⊂ E×E such that, for every x 1 ∈ E, there exists at least one point x 2 ∈ E such that (x 1 , x 2 ) ∈ R and conversely, for every y 2 ∈ E , there exists at least one point y 1 ∈ E such that (y 1 , y 2 ) ∈ R. The distortion of the correspondence R is defined by dis(R) = sup |d(x 1 , y 1 ) − d (x 2 , y 2 )| : (x 1 , x 2 ), (y 1 , y 2 ) ∈ R . The Gromov-Hausdorff distance can then be expressed in terms of correspondences by the formula d GH (E, E ) = 1 2 inf R⊂E×E dis(R) ,(19) where the infimum is over all correspondences R between (E, d) and (E , d ). Discrete looptrees and scooped-out triangulations The main ingredient for proving Theorems 1.2 and 1.3 is a relation between the boundary of a triangulation and the discrete looptree associated with its tree of components, which we now describe. To this end, we need to introduce a slightly modified discrete looptree. Let τ be a plane tree and recall from the Introduction the construction of Loop(τ). 
We define \overline{Loop}(τ) as the graph obtained from Loop(τ) by contracting each edge linking two vertices u and v such that v is the last child of u in lexicographical order in τ (meaning that we identify such vertices). Recall from Section 2.3 the definition of the scooped-out triangulation Scoop(T) and of the tree Tree(T) for a triangulation with boundary T, and from Section 3.2 the bijection G.

Figure 9: The first figure represents a triangulation with boundary T, the second one represents its associated trees Tree(T) (with dashed edges) and G(Tree(T)) (in bold red), the third one represents Loop(G(Tree(T))) (in light blue). Finally, the last figure represents \overline{Loop}(G(Tree(T))) (which is exactly Scoop(T)).

Figure 10: The same elements as in Fig. 9, but locally around a pinch-point of T.

Proof of Theorems 1.2 and 1.3

Fix a ∈ (0, 1) and an integer n ≥ 1. Let H∘_a(n) denote the random variable H∘_a conditioned on the event {#∂H∘_a = n, |H∘_a| < ∞}. From Proposition 4.2, it follows that the distribution of H∘_a(n) is the probability measure Q_a( · | #∂T = n). Hence, by Proposition 3.2 the tree of components Tree(H∘_a(n)) is distributed as a GW_{µ∘_a,µ•} tree conditioned on having n + 1 vertices. Set τ^n_a = G(Tree(H∘_a(n))). By Proposition 3.6, τ^n_a is distributed as a GW_{ν_a} tree conditioned on having n + 1 vertices.

Proof of Theorem 1.2. Case 1/2 < a < 1. Denote by ∆_n the maximal degree of τ^n_a and let u_n ∈ τ^n_a be a vertex with degree ∆_n (this vertex is asymptotically unique by [24, Theorem 5.5]). By [24, Theorem 5.5],

∆_n / n → 1 − Σ_{i≥0} i ν_a(i) = (2a − 1)/(√3 − 1 + 2a)  in probability as n → ∞.

Since µ∘_a has an exponential tail, a simple argument shows that u_n is a black vertex with probability tending to one as n → ∞. Informally, this means that a loop of length roughly ∆_n appears in ∂H∘_a(n). By [26, Corollary 2], the maximal size of the connected components of τ^n_a \ {u_n}, divided by n, converges in probability towards 0 as n → ∞.
By properties of the bijection G, this means that the maximal size of a cluster branching on the macroscopic loop corresponding to the black vertex u_n, divided by n, converges in probability towards 0 as n → ∞, and immediately implies that

n^{-1} · ∂H∘_a(n) → ((2a − 1)/(√3 − 1 + 2a)) · C_1  in distribution as n → ∞.

Case a = 1/2. First of all, recall that ∂H∘_a(n), viewed as a metric space, is the same as Scoop(H∘_a(n)), viewed as a metric space. We shall thus work with the latter. Recall from (16) that in the case a = 1/2, ν = ν_{1/2} is critical and ν(k) ∼ (√3/(4√π)) · k^{-5/2} as k → ∞. Since τ^n_{1/2}/n^{1/3} converges towards the stable tree of index 3/2 (see [15] or [27]), we get that the quantity H(τ^n_{1/2})/n^{2/3} converges in probability towards 0 as n → ∞. These observations combined with (2) show that

n^{-2/3} · Loop(τ^n_{1/2}) → 3^{1/3} · L_{3/2}  as n → ∞,

where the convergence holds in distribution for the Gromov-Hausdorff topology. Finally, we use Lemma 4.3 to replace Loop(τ^n_{1/2}) by Scoop(H∘_{1/2}(n)) in the last display and get the desired result.

Case a < 1/2. In this case, the tree τ^n_a is a supercritical Galton-Watson tree conditioned on having n + 1 vertices. We perform a standard exponential tilting of the offspring distribution in order to reduce to the critical case, as follows. Recall that F_a denotes the generating function of the offspring distribution ν_a. For every λ ∈ (0, 1), it is easy to see that τ^n_a has the same law as a Galton-Watson tree whose offspring distribution has generating function z ↦ F_a(λz)/F_a(λ), conditioned on having n + 1 vertices, see e.g. [25]. We now choose λ_a ∈ (0, 1) to be the unique positive real number such that λ_a · F′_a(λ_a) = F_a(λ_a). In other words, λ_a is chosen so that the offspring distribution ν̃_a whose generating function is z ↦ F_a(λ_a z)/F_a(λ_a) is critical (see e.g. [22, Section 4] for a proof that λ_a exists and is unique). We write τ̃^n_a for a ν̃_a-Galton-Watson tree conditioned on having n + 1 vertices.
Since ν̃_a has small exponential moments, we can apply [12, Theorem 14] and get

(1/√n) · Loop(τ^n_a) =_(d) (1/√n) · Loop(τ̃^n_a) →_(d) (2/σ̃_a) · (σ̃²_a/4 + ν̃_a(2ℤ₊)) · T_e as n → ∞,

where σ̃²_a is the variance of ν̃_a and ν̃_a(2ℤ₊) = ν̃_a(0) + ν̃_a(2) + ν̃_a(4) + ⋯. Applying Lemma 4.3, we have established Theorem 1.2 in the case a < 1/2 with C_a = (2/σ̃_a) · (σ̃²_a/4 + ν̃_a(2ℤ₊)).

Proof of Theorem 1.3. It follows from the proof of Theorem 1.2 that

C_a = (2a − 1)/(√3 − 1 + 2a) for a ∈ (1/2, 1),   C_a = (2/σ̃_a) · (σ̃²_a/4 + ν̃_a(2ℤ₊)) for a ∈ (0, 1/2).   (21)

The asymptotic estimate as a ↓ 1/2 immediately follows. As a ↑ 1/2, we have σ̃_a → ∞, so that C_a ∼ σ̃_a/2. Next, using the exact expression of F_a given in (15), simple calculations show that λ_a = c_a^{1/3} + c_a^{−1/3} − 1 with c_a = 8(1 − a)a − 4(1 − 2a)√(a² − a − 1), and in particular we have

λ_a = 1 − (16/9) · (1/2 − a)² + o((1/2 − a)²),   F_a(λ_a) → 1 as a ↑ 1/2,   F″_a(λ_a) ∼ (3√3/16) · 1/(1/2 − a) as a ↑ 1/2.

For the last asymptotic estimate, we use the fact that

F″_a(z) = 3/(4(2a − 1 + √3)) · 1/√(1 − z).

Since σ̃²_a = λ²_a · F″_a(λ_a)/F_a(λ_a), the conclusion follows.

Comments

Critical Boltzmann triangulations

Instead of working with the infinite model of the UIPT, we can also consider another natural model of random triangulations: the critical Boltzmann measure is the probability measure which assigns the probability P_b(T) = 192√3 · r_c^{#V(T)} to every finite triangulation T, where r_c = 1/√432. As for the UIPT, this implies that the random variable H•_a, under P_b, conditioned on the event {#∂H•_a = n}, is distributed according to Q_a( · | #∂T = n). Hence Theorems 1.2 and 1.3 remain true without changes. To get an analog of Theorem 1.1, we similarly compute

P_b(#∂H•_{a_c} = n) = 192√3 · Q̃_n · Σ_{m≥0} C(n + m, n) (1/2)^{m+n} Q̃_m ∼ 384√3 · Q̃²_n = (2 · 3^{1/6}/Γ(−2/3)²) · n^{−10/3} as n → ∞.

Remark 5.2. It is useful to note that the exponent 10/3 appearing in the last formula is obtained as

2(1 + 1/α)   (22)
Note also that 1 + 1/α is exactly the exponent appearing in the probability that a Galton-Watson tree with an offspring distribution in the domain of attraction of an α-stable law has a large progeny (see Proposition A.3 (i) for a precise statement). Type II triangulations In this work, we focused on general triangulations, but similar results can be derived in the context of type II triangulations, that are triangulations where no loops are allowed. The approach is exactly the same, the necklace surgery and the tree representation of clusters work alike and so we only give the main intermediate enumerative results. In particular, if W denotes the generating function of type II triangulations with simple boundary with weight x per inner vertex and y then W(x, y) is also well-known (see [18]). In particular, the radius of convergence of W as a function of x is r c = 2/27, and we have W(r c , y) = y 2 + (1 − 9y) 3/2 − 1 27 . As previously, for every integer k 1, set q k = [y k ]W(r c , y)/9 k . Note that q 1 = 0. In this case, the tree of components of the white hull is described similarly as in Proposition 3.2 by the two offspring distributions µ • a and µ • defined by µ • (j) = q j+1 Z • , µ • a (j) = (1 − ξ)ξ j (j 0), where Z • = 1/54 and ξ = 1/(1 + 4a). Note that µ • (0) = 0, which is consistent with the fact that we are working with type II triangulations, since black vertices with no children of the tree of components are in bijection with loops. In addition, if ν a is the image of GW µ • a ,µ • by G, then i 0 ν a (i)z i = 4a − 2 + 3z + 2(1 − z) 3/2 4a + 1 . In particular the mean of ν a is 3/(1+4a), so that ν a is critical if and only if a = 1/2. When a = 1/2, to simplify notation we write ν = ν1 ⁄2 . Note that then: i 0 ν(i)z i = z + 2 3 (1 − z) 3/2 , ν(k) ∼ k→∞ 1 2 √ π · k −5/2 . It easily follows that Theorem 1.2 holds in this case with the constants C1 ⁄2 = (3/2) 2/3 , C a = 4(a − 1/2)/(4a + 1) for 1/2 < a < 1. 
In addition, C_a ∼ (4/3) · (a − 1/2) as a ↓ 1/2, and C_a ∼ (√3/(4√2)) · (1/2 − a)^{−1/2} as a ↑ 1/2. Theorem 1.1 also holds in this case with the same exponent (but with a different constant).

Conjectures about O(N) models on random triangulations

In this work, we established that L_{3/2} is the scaling limit of the boundary of the cluster of the origin for critical site percolation on random triangulations (such as the UIPT or random Boltzmann triangulations). We conjecture that the whole family of looptrees L_α for α ∈ (1, 2) arises as scaling limits of cluster boundaries for certain statistical mechanics models on random planar maps. More precisely, we focus on the so-called O(N) model on random planar triangulations. We follow closely the presentation of [34, Section 8] and [36, Section 3.4]; see also [9, 8]. A loop configuration ℓ on a triangulation T is a collection of loops drawn on the dual of T such that two different loops visit different faces. Equivalently, a loop configuration is a consistent gluing of two types of triangles (an empty triangle, and a triangle with a dual path inside joining two different edges) such that the result is a topological sphere; see Fig. 11. The total perimeter |ℓ| of the collection of loops is the number of faces visited by the union of the loops, and the number of loops of ℓ is denoted by #ℓ. Provided that a loop traverses the root edge (which we assume from now on), we can define the hull of the origin and its boundary as depicted in Fig. 11. We can also define the gasket of (T, ℓ) as the map obtained by removing the interior of the loops (the exterior of a loop contains the target of the root edge) as well as the faces traversed by the loops; see [9, Fig. 4]. The Boltzmann annealed O(N) measure P_{g,h,N} on decorated triangulations is a probability measure whose weight on (T, ℓ) depends on the decoration through N^{#ℓ} and h^{|ℓ|}, and which we assume to be well defined from now on; see [9, 8] for similar models. We are thus left with one parameter h > 0.
In this case, Le Gall and Miermont [34] provided the conjectural scaling limits of the gasket of decorated triangulations. It is conjectured that there exists h_c(N) such that if h < h_c(N), then the scaling limits of random planar (decorated) triangulations, as well as of their gaskets, converge towards the Brownian map. If h > h_c(N), it is conjectured that the gasket of a large random decorated triangulation under P_{g,h,N} converges (after suitable scaling) towards the stable map of parameter a = 3/2 + π^{−1} arcsin(N/2). In this regime, stable maps have large macroscopic faces that touch themselves and each other. In the discrete underlying model, this means that the boundaries of large clusters should possess pinch-points at large scale. We believe that in this case, the scaling limit of these cluster boundaries (or, equivalently, of the inner geometry of a face in a stable map of parameter a ∈ (3/2, 2)) is the stable looptree L_α of index α, where α satisfies the relation

1 + 1/α = a ∈ (3/2, 2).   (23)

Let us give a heuristic argument supporting this prediction. In [9, Eq. 3.18] the authors showed that (in certain closely related models) the origin hull H of a random decorated map under P_{g,h,N} (with the parameters chosen as above) satisfies

P(#∂H = k) ∼ C · k^{−2a} as k → ∞.

On the other hand, if the tree structure associated to the origin hull is a critical Galton-Watson tree with an offspring distribution in the domain of attraction of an α-stable law, from Remark 5.2 we should have

P(#∂H = k) ∼ C · k^{−2(1+α^{−1})} as k → ∞.

Identifying the exponents in the last two displays gives our conjecture (23).

A Appendix: proof of the technical lemmas

We conclude this work by establishing several technical results, some of which involve stable densities. We will only use the case α = 3/2, but we prove the general case in view of future applications.
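As a small sanity check of the exponent bookkeeping (a script written for this note, not part of the paper), the relation (23) together with a = 3/2 + π^{−1} arcsin(N/2) can be evaluated directly: for N = 1 it gives a = 5/3 and α = 3/2, the same looptree index obtained above for percolation, while N → 2 pushes α down to 1.

```python
import math

def looptree_index(N):
    """Conjectured looptree index alpha for the O(N) model, h > h_c regime.

    a = 3/2 + arcsin(N/2) / pi is the stable-map parameter and relation
    (23) reads 1 + 1/alpha = a, i.e. alpha = 1 / (a - 1).
    """
    a = 1.5 + math.asin(N / 2.0) / math.pi
    return 1.0 / (a - 1.0)

print(round(looptree_index(1.0), 6))  # 1.5 (a = 5/3)
print(round(looptree_index(2.0), 6))  # 1.0 (a = 2, boundary of the regime)
```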
By α-stable Lévy process we will always mean a stable spectrally positive Lévy process (X_t)_{t≥0} of index α, normalized so that for every λ > 0, E[exp(−λX_t)] = exp(tλ^α). The process X takes values in the Skorokhod space D(ℝ₊, ℝ) of right-continuous functions with left limits (càdlàg), endowed with the Skorokhod topology (see [7, Chap. 3]). The dependence of X on α will be implicit in this section, and we denote by p_1 the density of X_1.

A.1 Technical lemmas on stable densities

The following result, which is a consequence of [17, Lemma XVII.6.1], will be useful.

Lemma A.1. We have p_1(0) = 1/|Γ(−1/α)|.

Lemma A.2. For every β > 0 and α ∈ (1, 2),

∫_0^∞ x^β · p_1(−x) dx = Γ(β)/Γ(β/α).

Proof. By [17, Lemma XVII.6.1], we have the following series representation, valid for x > 0:

p_1(−x) = −(1/(πx)) Σ_{k≥1} (Γ(1 + k/α)/k!) (−x)^k sin(kπ/α).

To simplify notation, set F(A) = ∫_0^A x^β · p_1(−x) dx for A > 0, so that

F(A) = −(1/π) Σ_{k≥1} (Γ(1 + k/α)/k!) · (1/(k + β)) · A^{k+β} (−1)^k sin(kπ/α) = −(A^β/π) · Im Σ_{k≥1} (Γ(k/α + 1)Γ(k + β)/Γ(k + β + 1)) · (−A e^{iπ/α})^k/k!.

By [38, (2.3.10)], we have

Im Σ_{k≥1} (Γ(k/α + 1)Γ(k + β)/Γ(k + β + 1)) · (−A e^{iπ/α})^k/k! ∼ −Γ(β)Γ(1 − β/α) sin(πβ/α) · A^{−β} as A → ∞.

Hence, using the reflection formula for the Γ function, we get

lim_{A→∞} F(A) = Γ(β)Γ(1 − β/α) sin(πβ/α)/π = Γ(β)/Γ(β/α).

In particular, note that for α = 3/2 and β = 1/2 we have

p_1(0) = 2/(3Γ(1/3))  and  ∫_0^∞ √x · p_1(−x) dx = √π/Γ(1/3).   (24)

A.2 A technical estimate for Galton-Watson trees

We next establish the following asymptotic estimates.

Proposition A.3. Fix α ∈ (1, 2) and β > α. Let ρ be a critical offspring distribution such that ρ_k ∼ C · k^{−1−α} as k → ∞ for a certain C > 0. Let φ : ℤ₊ → ℝ₊ be a function such that φ(x) ∼ κ · x^β as x → ∞ for a certain κ > 0.

(i) We have GW_ρ(|τ| = n) ∼ 1/(|Γ(−1/α)| · (Γ(−α)C)^{1/α}) · 1/n^{1+1/α} as n → ∞.

(ii) We have GW_ρ[Σ_{u∈τ} φ(k_u) | |τ| = n] ∼ κ · C^{(β−1)/α} · Γ(−α)^{(β−α−1)/α} · Γ(β − α − 1)/Γ((β − α − 1)/α) · n^{β/α} as n → ∞.
(iii) We have GW_ρ[Σ_{u∈τ} φ(k_u) 1_{|τ|=n}] ∼ κ · C^{(β−2)/α} · Γ(−α)^{(β−α−2)/α} · Γ(β − α − 1)/(|Γ(−1/α)| Γ((β − α − 1)/α)) · n^{(β−α−1)/α} as n → ∞.

Before proving Proposition A.3, we state two useful results.

Theorem A.4 (Local limit theorem). Let (W_n)_{n≥0} be a random walk on ℤ started from 0. Assume that P[W_1 < 0] = 0 and that there exist α ∈ (1, 2) and C > 0 such that P[W_1 = k] ∼ C · k^{−1−α} as k → ∞. Set a_n = (Γ(−α)C)^{1/α} n^{1/α}. Then:

lim_{n→∞} sup_{k∈ℤ} | a_n P[W_n = k] − p_1((k − nE[W_1])/a_n) | = 0.   (25)

See e.g. [20, Theorem 4.2.1] for a proof of the local limit theorem.

Lemma A.5. Let (S_n)_{n≥0} be a random walk on ℤ whose jump distribution is ρ. Then:

(i) for n ≥ 1, GW_ρ(|τ| = n) = (1/n) P(S_n = n − 1);

(ii) for every function F : ℤ → ℝ₊,

GW_ρ[Σ_{u∈τ} F(k_u) | |τ| = n] = n · E[F(S_1) | S_n = n − 1].

Proof. For (i), see e.g. [39, Section 5.2]. Assertion (ii) easily follows from the fact that Σ_{u∈τ} F(k_u) is invariant under cyclic shifts.

We are now ready to prove Proposition A.3.

Proof of Proposition A.3. Let (S_n)_{n≥0} be a random walk on ℤ whose jump distribution is ρ. Since ρ is critical and ρ_i ∼ C i^{−1−α} as i → ∞, (25) applies with a_n = (Γ(−α)C)^{1/α} n^{1/α}. Then, by Lemma A.5 (i) and the local limit theorem,

GW_ρ(|τ| = n) ∼ p_1(0)/(Γ(−α)C)^{1/α} · 1/n^{1+1/α} = 1/(|Γ(−1/α)| · (Γ(−α)C)^{1/α}) · 1/n^{1+1/α} as n → ∞.

This proves the first assertion. For (ii), write

(1/n^{β/α}) GW_ρ[Σ_{u∈τ} φ(k_u) | |τ| = n] = n Σ_{k≥0} (φ(k)/n^{β/α}) ρ_k P(S_{n−1} = n − 1 − k)/P(S_n = n − 1)   (by Lemma A.5 (ii))
 = ∫_0^∞ dx (φ(⌊xn^{1/α}⌋)/n^{β/α}) n^{1+1/α} ρ_{⌊xn^{1/α}⌋} P(S_{n−1} = n − 1 − ⌊xn^{1/α}⌋)/P(S_n = n − 1).   (26)

In order to use the dominated convergence theorem, we use the following technical lemma, whose proof is postponed to the end of this section:

Lemma A.6. Let ξ be a critical probability measure on {0, 1, 2, . . .} of span 1 (i.e. the greatest integer dividing all the integers n such that ξ(n) > 0 is 1).
Let F(z) = E[z^ξ] be the probability generating function of ξ, and assume that there exists c ∈ ℝ such that the following Taylor expansion holds around z = 1:

F(z) = 1 + (z − 1) + c(z − 1)^α + o(|z − 1|^α),   |z| ≤ 1.   (27)

Let S_N = Σ_{i=1}^N ξ_i, where the ξ_i are i.i.d. copies of ξ. There exist constants c_1, c_2 > 0 such that for every k, N ≥ 1:

P(S_N = N − k) ≤ (c_1/N^{1/α}) e^{−c_2 k^α/N}.

We return to the proof of Proposition A.3. We may apply Lemma A.6, which combined with the local limit theorem gives that for every n ≥ 1 and x ≥ 0,

P(S_{n−1} = n − 1 − ⌊xn^{1/α}⌋)/P(S_n = n − 1) ≤ c_3 e^{−c_4 x^α}.

In addition, since φ(x) ∼ κ · x^β and ρ_k ∼ C · k^{−1−α}, we have

(φ(⌊xn^{1/α}⌋)/n^{β/α}) · n^{1+1/α} ρ_{⌊xn^{1/α}⌋} ≤ c_5 x^{β−α−1}.

The expression appearing under the integral in (26) is thus bounded by c_6 x^{β−α−1} e^{−c_4 x^α}. By using the dominated convergence theorem as n → ∞ in (26), combined with the local limit theorem, we conclude that

(1/n^{β/α}) GW_ρ[Σ_{u∈τ} φ(k_u) | |τ| = n] → κC ∫_0^∞ dx x^{β−α−1} (Γ(−α)C)^{−1/α} p_1(−x/(Γ(−α)C)^{1/α})
 = κC (Γ(−α)C)^{(β−α−1)/α} ∫_0^∞ dx x^{β−α−1} p_1(−x)
 = κC (Γ(−α)C)^{(β−α−1)/α} · Γ(β − α − 1)/Γ((β − α − 1)/α)
 = κ · C^{(β−1)/α} · Γ(−α)^{(β−α−1)/α} · Γ(β − α − 1)/Γ((β − α − 1)/α).

This completes the proof of (ii). The last assertion should now present no difficulty.

Recall the two offspring distributions µ• and µ◦_a introduced in Section 3.1 and the bijection G introduced in Section 3.2. Here we take a = 1/2, and to simplify notation we set µ◦ = µ◦_{1/2}. Recall finally that ν is the image of GW_{µ◦,µ•} by G. In the previous sections, we have used the following corollary of Proposition A.3:

Corollary A.7. Let φ be the same function as in Proposition 3.4. We have the following two asymptotic behaviors:

GW_{µ◦,µ•}(|τ| = n) = GW_ν(|τ| = n) ∼ (3^{1/3}/|Γ(−2/3)|) · n^{−5/3} as n → ∞,

GW_{µ◦,µ•}[Σ_{u∈•(τ)} φ(k_u) 1_{|τ|=n}] ∼ (3^{1/6}/Γ(−2/3)²) · √(2π) · n^{1/3} as n → ∞.

Proof.
By Proposition 3.6, using the fact that G maps black vertices of degree k to vertices of degree k + 1, it is sufficient to prove that

GW_ν[Σ_{u∈τ} φ(k_u − 1) 1_{|τ|=n}] ∼ (3^{1/6}/Γ(−2/3)²) · √(2π) · n^{1/3} as n → ∞,

where by convention we set φ(−1) = 0. Corollary A.7 hence immediately follows from Proposition A.3 (iii), applied with C = √3/(4√π), κ = 4/9 · 6/π, α = 3/2 and β = 3.

Proof of Lemma A.6. We adapt the proof of Lemma 2.1 of [21]. By the local limit theorem, we may assume that N^{1/α} ≤ k ≤ N. To simplify notation, set G(z) = F(z)/z. By (27), we have the following Taylor expansion around z = 1: G(z) = 1 + c(z − 1)^α + o(|z − 1|^α) for |z| ≤ 1. In particular,

ln G(e^w) = cw^α + o(|w|^α),   Re w ≤ 0.   (28)

By integration around the circle of radius exp(−δk^{α−1}/N) (for some small δ > 0 to be chosen later):

P(S_N = N − k) = (1/(2iπ)) ∮ z^{k−N} F(z)^N dz/z = (1/(2π)) ∫_{−π}^{π} exp(−δk^α/N + ikt) G(e^{−δk^{α−1}/N + it})^N dt.   (29)

It is easy to check that, for η, t ≥ 0 small enough, Re((−η + it)^α) ≤ η^α − |cos(απ/2)| t^α and |−η + it|^α ≤ 2(η^α + t^α). Thus, by (28),

ln |G(e^{−δk^{α−1}/N + it})| = Re ln G(e^{−δk^{α−1}/N + it}) = Re(c(−δk^{α−1}/N + it)^α) + o(|−δk^{α−1}/N + it|^α) ≤ cδ^α k^{α(α−1)}/N^α − c|cos(απ/2)| t^α + o(k^{α(α−1)}/N^α + t^α).

Since k^{α(α−1)}/N^α ≤ N^{α(α−1)}/N^α = N^{−α(2−α)} → 0 as N → ∞, there exist δ_0, t_0 > 0 sufficiently small such that for 0 < δ ≤ δ_0 and |t| ≤ t_0:

ln |G(e^{−δk^{α−1}/N + it})| ≤ 2cδ^α k^{α(α−1)}/N^α − c|cos(απ/2)| t^α/2.   (30)

Now, since the span of ξ is 1, we have |F(z)| < 1 for |z| ≤ 1 and z ≠ 1, so by continuity and compactness there exists ε ∈ (0, 1) such that |F(re^{it})| ≤ 1 − ε ≤ e^{−ε} when e^{−δ_0} ≤ r ≤ 1 and t_0 ≤ |t| ≤ π. Thus, using the fact that k^{α−1} ≤ N, for t_0 ≤ |t| ≤ π and 0 ≤ δ ≤ δ_1 := min(δ_0, ε/2) we get:

|G(e^{−δk^{α−1}/N + it})| = e^{δk^{α−1}/N} |F(e^{−δk^{α−1}/N + it})| ≤ e^δ e^{−ε} ≤ e^{−ε/2}.   (31)

Combining (30) and (31), we get for δ ≤ δ_1 and |t| ≤ π:

|G(e^{−δk^{α−1}/N + it})| ≤ e^{2cδ^α k^{α(α−1)}/N^α − c_1 t^α}, with c_1 := min(c|cos(απ/2)|/2, ε/(2π^α)).

Plugging this into (29), we get:

P(S_N = N − k) ≤ e^{2cδ^α k^{α(α−1)}/N^{α−1} − δk^α/N} ∫_{−∞}^{∞} e^{−c_1 N t^α} dt.

The result follows by choosing δ small enough that 4cδ^{α−1} ≤ 1: indeed, since k ≥ N^{1/α}, we have (k^α/N)^{2−α} ≥ 1 ≥ 4cδ^{α−1}, so that 2cδ^α k^{α(α−1)}/N^{α−1} ≤ (δ/2) k^α/N. This completes the proof.
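The cycle-lemma identity of Lemma A.5 (i) can be verified on a concrete example. The script below is an illustration written for this note (not code from the paper): it takes the critical geometric offspring law ρ(k) = 2^{−(k+1)}, computes GW_ρ(|τ| = n) by iterating the fixed point P(z) = z · Σ_k ρ(k) P(z)^k of the tree-size generating function, and compares with (1/n) P(S_n = n − 1), where S_n is a negative-binomial sum.

```python
from math import comb

def gw_size_probs(rho, nmax):
    """p[n] = GW_rho(|tau| = n) for n <= nmax.

    Iterates the fixed point P(z) = z * sum_k rho(k) * P(z)^k on truncated
    coefficient arrays; after t iterations p[n] is exact for all trees of
    height < t, so nmax + 2 iterations suffice for sizes up to nmax.
    """
    p = [0.0] * (nmax + 1)
    for _ in range(nmax + 2):
        powers = [[1.0] + [0.0] * nmax]  # coefficients of P(z)^0
        for _k in range(nmax):
            prev, nxt = powers[-1], [0.0] * (nmax + 1)
            for i, a in enumerate(prev):
                if a:
                    for j in range(nmax + 1 - i):
                        nxt[i + j] += a * p[j]
            powers.append(nxt)
        p = [0.0] + [
            sum(rho(k) * powers[k][n - 1] for k in range(nmax))
            for n in range(1, nmax + 1)
        ]
    return p

rho = lambda k: 0.5 ** (k + 1)  # critical geometric offspring law
nmax = 8
p = gw_size_probs(rho, nmax)

def cycle_lemma(n):
    # (1/n) * P(S_n = n - 1), S_n a sum of n iid geometric(1/2) variables:
    # P(S_n = m) = C(n + m - 1, m) * (1/2)**(n + m)   (negative binomial)
    m = n - 1
    return comb(n + m - 1, m) * 0.5 ** (n + m) / n

print([round(p[n], 8) for n in range(1, 6)])
print([round(cycle_lemma(n), 8) for n in range(1, 6)])  # identical lists
```

Both computations give 1/2, 1/8, 1/16, 5/128, … , i.e. the Catalan numbers weighted by 2^{−(2n−1)}, as the identity predicts.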
Figure 2: A part of a site-percolated triangulation with the interface going through the root edge and the hull of the cluster of the origin. The boundary of the hull is in bold black line segments and has perimeter 16.

Theorem 1.1 (Critical exponent for the perimeter). For a = a_c = 1/2 we have

Figure 3: A plane tree τ and its looptree Loop(τ).

Proposition 2.1 (Necklace surgery). Every nicely percolated triangulation, such that the interface going through the root edge is finite, can be unambiguously decomposed into a pair of two triangulations with boundary (H◦, H•) forming the white and black hulls, glued together along a (#∂H◦, #∂H•)-necklace.

Figure 5: A nicely percolated triangulation, the two hulls H◦ and H•, as well as the creation of the necklace (in light blue) after the surgical operation.

Figure 6: Examples of 0-5, 1-5, 2-12, 8-4 and 3-1 necklaces.

Figure 7: A triangulation with boundary, its scooped-out triangulation and the tree structure underneath.

Lemma 3.5. Fix an integer k ≥ 1 and β > 1. For every i ∈ {1, 2, . . . , k}, let (a_n^{(i)}; n ≥ 0) be a sequence of positive numbers such that a_n^{(i)} ∼ C_i · n^{−β} as n → ∞.

On the event {#∂H•_{a,N} = m, H•_{a,N} = h}, the triangulation T_N is a gluing, along an (n, m)-necklace, of the hull h with another triangulation with boundary of perimeter m totalizing N − #V(h) vertices, and such that all the vertices of the boundary of h are white and those on the boundary of the second triangulation are black. Hence,

P(#∂H•_{a,N} = m, H•_{a,N} = h) = a^{#E(h)}/12^{#∂h} · C(n + m, n) · 12^m · Σ_{t∈T_{N−#V(h),m}} r_c^{#V(t)} (1 − a)^{#E(t)}/12^{#∂t} = 12^n r_c^N/#T_{N,2} · Q_a(h) · C(n + m, n) · 12^m · Q_{1−a}(T_{N−#V(h),m}).

Lemma 4.3. Let T ∈ T_B be a finite triangulation with boundary. Then the graphs Loop′(G(Tree(T))) and Scoop(T) are equal.

Proof. A formal proof of this would not be enlightening, and this property should be clear from the drawings in Fig. 9 and Fig. 10 below.
Figure 9: The first figure represents a scooped-out triangulation Scoop(T).

as k → ∞. Next, since the longest path in τ^n_{1/2} containing only vertices that are identified in the definition of Loop(

The critical Boltzmann measure assigns the probability P_b(T) = 192√3 · r_c^{#V(T)} to every finite triangulation T, where r_c = 1/√432 and 𝒯 is the set of all finite triangulations. Note that Σ_{t∈𝒯} r_c^{#V(t)} = (192√3)^{−1} by (9). With this model, the underlying triangulation T is always finite, so that both the white and black hulls H◦_a and H•_a are finite. It is straightforward to adapt Proposition 4.2 in order to get the following:

Proposition 5.1. Let h be a finite triangulation with boundary of perimeter n. We have

P_b(H•_a = h) = 192√3 · 12^n Q_a(h) Σ_{m≥0} 12^m C(n + m, n) Q_{1−a}({T : #∂T = m}).

Figure 11: A triangulation with a loop decoration and the cluster of the origin.

DECOMPOSITION OF PERCOLATED TRIANGULATIONS

D. Aldous, The continuum random tree. I, Ann. Probab., 19 (1991), pp. 1-28.
O. Angel, Growth and percolation on the uniform infinite planar triangulation, Geom. Funct. Anal., 13 (2003), pp. 935-974.
O. Angel and N. Curien, Percolations on random maps I: half-plane models, arXiv:1301.5311 (submitted).
O. Angel and O. Schramm, Uniform infinite planar triangulations, Comm. Math. Phys., 241 (2003), pp. 191-213.
I. Benjamini and N. Curien, Simple random walk on the uniform infinite planar quadrangulation: subdiffusivity via pioneer points, Geom. Funct. Anal., 23(2) (2013), pp. 501-531.
O. Bernardi, N. Curien, and G. Miermont, A nested loop approach to percolation on random triangulations, in preparation.
P. Billingsley, Convergence of probability measures, Wiley Series in Probability and Statistics, John Wiley & Sons Inc., New York, second ed., 1999.
G. Borot, J. Bouttier, and E. Guitter, More on the O(n) model on random maps via nested loops: loops with bending energy, J. Phys. A, 45 (2012), 275206.
G. Borot, J. Bouttier, and E. Guitter, A recursive approach to the O(n) model on random maps via nested loops, J. Phys. A, 45 (2012), 045002.
D. Burago, Y. Burago, and S. Ivanov, A course in metric geometry, vol. 33 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2001.
P. Chassaing and B. Durhuus, Local limit of labeled trees and expected volume growth in a random quadrangulation, Ann. Probab., 34 (2006), pp. 879-917.
N. Curien, B. Haas, and I. Kortchemski, The CRT is the scaling limit of random dissections, arXiv:1305.3534 (submitted).
N. Curien and I. Kortchemski, Random stable looptrees, arXiv:1304.1044 (submitted).
N. Curien, L. Ménard, and G. Miermont, A view from infinity of the uniform infinite planar quadrangulation, to appear in ALEA.
T. Duquesne, A limit theorem for the contour process of conditioned Galton-Watson trees, Ann. Probab., 31 (2003), pp. 996-1027.
T. Duquesne and J.-F. Le Gall, Random trees, Lévy processes and spatial branching processes, Astérisque (2002), pp. vi+147.
W. Feller, An introduction to probability theory and its applications, Vol. II, second edition, John Wiley & Sons Inc., New York, 1971.
I. P. Goulden and D. M. Jackson, Combinatorial enumeration, Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons Inc., New York, 1983 (with a foreword by Gian-Carlo Rota).
O. Gurel-Gurevich and A. Nachmias, Recurrence of planar graph limits, Ann. Maths (to appear), 2012.
I. A. Ibragimov and Y. V. Linnik, Independent and stationary sequences of random variables, Wolters-Noordhoff Publishing, Groningen, 1971 (with a supplementary chapter by I. A. Ibragimov and V. V. Petrov; translation from the Russian edited by J. F. C. Kingman).
S. Janson, Random cutting and records in deterministic and random trees, Random Structures Algorithms, 29 (2006), pp. 139-179.
S. Janson, Simply generated trees, conditioned Galton-Watson trees, random allocations and condensation, Probab. Surv., 9 (2012), pp. 103-252.
S. Janson and S. O. Stefánsson, Scaling limits of random planar maps with a unique large face, arXiv:1212.5072.
T. Jonsson and S. O. Stefánsson, Condensation in nongeneric trees, J. Stat. Phys., 142 (2011), pp. 277-313.
D. P. Kennedy, The Galton-Watson process conditioned on the total progeny, J. Appl. Probability, 12 (1975), pp. 800-806.
I. Kortchemski, Limit theorems for conditioned non-generic Galton-Watson trees, arXiv:1205.3145 (submitted).
I. Kortchemski, A simple proof of Duquesne's theorem on contour processes of conditioned Galton-Watson trees, to appear in Séminaire de Probabilités.
M. Krikun, Local structure of random quadrangulations, arXiv:0512304.
M. Krikun, Explicit enumeration of triangulations with multiple boundaries, Electron. J. Combin., 14 (2007), Research Paper 61, 14 pp. (electronic).
L. Ménard and P. Nolin, Percolation on uniform infinite planar maps, available on arXiv, 2013.
J.-F. Le Gall, Uniqueness and universality of the Brownian map, Ann. Probab., 41 (2013), pp. 2880-2960.
J.-F. Le Gall, Random trees and applications, Probability Surveys, 2005.
J.-F. Le Gall and Y. Le Jan, Branching processes in Lévy processes: the exploration process, Ann. Probab., 26 (1998), pp. 213-252.
J.-F. Le Gall and G. Miermont, Scaling limits of random planar maps with large faces, Ann. Probab., 39 (2011), pp. 1-69.
G. Miermont, The Brownian map is the scaling limit of uniform random plane quadrangulations, Acta Math. (to appear).
G. Miermont, Random maps and continuum random 2-dimensional geometries, Proceedings of the 6th ECM, to appear.
J. Neveu, Arbres et processus de Galton-Watson, Ann. Inst. H. Poincaré Probab. Statist., 22 (1986), pp. 199-207.
R. B. Paris and D. Kaminski, Asymptotics and Mellin-Barnes integrals, vol. 85 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 2001.
J. Pitman, Combinatorial stochastic processes, vol. 1875 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2006 (lectures from the 32nd Summer School on Probability Theory held in Saint-Flour, July 7-24, 2002; with a foreword by Jean Picard).
W. T. Tutte, A census of planar triangulations, Canad. J. Math., 14 (1962), pp. 21-38.
[]
[ "Diversity Induced Environment Design via Self-Play", "Diversity Induced Environment Design via Self-Play" ]
[ "Dexun Li ", "Wenjun Li ", "Pradeep Varakantham " ]
[]
[]
Recent work on designing an appropriate distribution of environments has shown promise for training effective generally capable agents. Its success is partly because of a form of adaptive curriculum learning that generates environment instances (or levels) at the frontier of the agent's capabilities. However, such an environment design framework often struggles to find effective levels in challenging design spaces and requires costly interactions with the environment. In this paper, we aim to introduce diversity in the Unsupervised Environment Design (UED) framework. Specifically, we propose a task-agnostic method to identify observed/hidden states that are representative of a given level. The outcome of this method is then utilized to characterize the diversity between two levels, which as we show can be crucial to effective performance. In addition, to improve sampling efficiency, we incorporate the self-play technique that allows the environment generator to automatically generate environments that are of great benefit to the training agent. Quantitatively, our approach, Diversity-induced Environment Design via Self-Play (DivSP ), shows compelling performance over existing methods.
10.48550/arxiv.2302.02119
[ "https://export.arxiv.org/pdf/2302.02119v2.pdf" ]
256,615,176
2302.02119
e87f8e752708a9fa9929185de922db23850256bd
Diversity Induced Environment Design via Self-Play

Dexun Li, Wenjun Li, Pradeep Varakantham

Recent work on designing an appropriate distribution of environments has shown promise for training effective generally capable agents. Its success is partly because of a form of adaptive curriculum learning that generates environment instances (or levels) at the frontier of the agent's capabilities. However, such an environment design framework often struggles to find effective levels in challenging design spaces and requires costly interactions with the environment. In this paper, we aim to introduce diversity in the Unsupervised Environment Design (UED) framework. Specifically, we propose a task-agnostic method to identify observed/hidden states that are representative of a given level. The outcome of this method is then utilized to characterize the diversity between two levels, which, as we show, can be crucial to effective performance. In addition, to improve sampling efficiency, we incorporate the self-play technique that allows the environment generator to automatically generate environments that are of great benefit to the training agent. Quantitatively, our approach, Diversity-induced Environment Design via Self-Play (DivSP), shows compelling performance over existing methods.

Introduction

The advances in Reinforcement Learning (RL) have recorded great success in a variety of applications, such as game playing (Mnih et al., 2015; Silver et al., 2016) and robot control (Levine et al., 2016; Akkaya et al., 2019). However, using RL to train agents with general capabilities remains a major challenge. This is because RL utilizes stochastic exploration, which requires collecting millions of samples, and interacting with the environment is very expensive and time-consuming. Therefore, it is essential to improve sample efficiency and build effective exploration.
Training an agent by specifying a distribution of tasks or environments is one promising approach to improving the overall capabilities of RL agents (Dennis et al., 2020; Jiang et al., 2021b; Parker-Holder et al., 2022; Li et al., 2023). This type of approach automatically generates challenging environments, giving rise to a curriculum that can constantly push the frontier of the agent's capabilities. For example, we can control a set of parameters that may correspond to the position of an obstacle in a maze environment or the coefficient of friction in a robot simulator. Each set of parameters then produces an instance of the environment, which we call a level. By adapting the training distribution over the parameters of environments, adaptive curricula have been shown to produce more robust agents in fewer training steps (Portelas et al., 2020a; Jiang et al., 2021b). Typically, Unsupervised Environment Design (UED, (Dennis et al., 2020)) introduces a self-supervised RL paradigm that formalizes the problem of generating a distribution over environments: an environment policy (teacher) curates the environment distribution to best support the continued training of a particular agent policy (student). Here, the teacher co-evolves with the student and is trained to produce difficult but solvable environments that maximize a regret objective. Dennis et al. (2020) showed that this leads to a form of adaptive curriculum learning and that the student's policy is a minimax-regret strategy when the multi-agent system reaches a Nash equilibrium. However, training such a teacher-student framework remains challenging because the teacher's policy update relies on the regret value, which is approximated by the difference between the expected payoffs of the protagonist and the antagonist, resulting in costly interactions between both student agents and the environment.
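Concretely, the regret signal that the teacher maximizes can be estimated from sampled episode returns; one estimator used in the PAIRED setting is the maximum return achieved by the antagonist minus the protagonist's mean return. The helper below is a schematic sketch written for this note (names and structure are ours, not from any released codebase):

```python
def paired_regret(protagonist_returns, antagonist_returns):
    """Flexible regret estimate for one proposed level.

    regret ~= (best antagonist episode return) - (mean protagonist return):
    positive when the antagonist can already solve a level that the
    protagonist still fails on, which is what the teacher is rewarded for.
    """
    assert protagonist_returns and antagonist_returns
    mean_prot = sum(protagonist_returns) / len(protagonist_returns)
    return max(antagonist_returns) - mean_prot

print(round(paired_regret([0.1, 0.3], [0.5, 0.9]), 6))  # 0.7
```

Estimating this quantity requires rollouts of both student agents on every candidate level, which is the source of the interaction cost mentioned above.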
In contrast, another line of work, Prioritized Level Replay (PLR, (Jiang et al., 2021b)), embodies an alternative form of dynamic curriculum learning that does not assume control of level generation; instead, PLR selectively revisits/replays existing randomly generated levels with probability proportional to their corresponding regret values. This type of approach suffers from costly random level generation: it cannot exploit any previously discovered level structure, and its performance is highly influenced by the magnitude of the possible environment parameters. Notably, exploration is a key problem in reinforcement learning, since agents can only learn from data they acquire in the environment. In this paper, we attempt to build an effective exploration and regret-based curriculum in a sample-efficient manner. We harness the advantages of both PAIRED and PLR and introduce a novel diversity-driven approach, named Diversity-induced Environment Design via Self-Play (DivSP). DivSP maintains a set of diverse and useful environments that allow data to be collected in an informative manner. Specifically, to encourage diversity, we propose a task-agnostic observed/hidden state representative method to address the problem of measuring the diversity score between two environments. More importantly, DivSP is able to automatically generate environments at the frontier of the student agent's capabilities while avoiding costly interaction with the environment by incorporating the technique of self-play. By combining diversity and learning potential, our algorithm enables us to validate the effectiveness of the current training level, achieving a dynamic balance between diversity and marginal benefit and effectively catalyzing the general ability of the agent.
Overall, our contributions are three-fold:

• We propose a diversity-driven approach that maintains a diverse level buffer with informative and useful environments to enhance effective exploration.
• Taking advantage of the self-play technique, we avoid costly training of both protagonist and antagonist, and our algorithm learns to automatically generate environments in a sample-efficient way.
• We empirically demonstrate that DivSP outperforms existing methods while achieving a speedup compared to other methods for automatic environment generation.

Preliminaries

Underspecified Partially Observable Markov Decision Process

A Markov Decision Process (MDP) is defined as a tuple <S, A, P, R, γ, T>. Here S and A stand for the sets of states and actions respectively, P(s_{t+1} | s_t, a_t) is the probability that the state transitions from s_t ∈ S to s_{t+1} ∈ S given action a_t, and r_t = R(s_t, a_t, s_{t+1}) is the reward obtained by the agent transitioning from s_t to s_{t+1} when taking action a_t. γ is the discount factor, and T is the horizon. Given an MDP, the goal of an RL agent is to learn a policy π such that the cumulative discounted reward, i.e., E[Σ_{t=0}^T γ^t r_t], is maximized. However, the MDP framework cannot describe most real-world applications, as the agent is unable to directly observe the whole underlying state. Furthermore, the UED problem requires using an underspecified environment to produce a distribution over fully specified environments, resulting in the state space and transition probability changing according to the specified environment. To model the fully specified environments with a Partially Observable Markov Decision Process (POMDP) in the UED framework, Dennis et al. (2020) first introduce the Underspecified POMDP (UPOMDP) as a tuple M = <A, O, Θ, S_M, P_M, I_M, R_M, γ, T>, where O is a set of observations and I_M : S → O is the observation function. Compared to a POMDP, there is an addition of Θ in the UPOMDP, which represents the free parameters of the environment. Those parameters are incorporated into the transition function and reward function. Following (Jiang et al., 2021a), we define a level M_θ as an environment resulting from a fixed θ ∈ Θ.
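As a concrete reading of the return objective above, the cumulative discounted reward of a single episode can be computed directly from its reward sequence. This is only an illustrative sketch; the function name is ours, not from the paper:

```python
def discounted_return(rewards, gamma=0.99):
    """Cumulative discounted reward sum_{t=0}^{T} gamma^t * r_t of one episode."""
    g = 0.0
    for t, r in enumerate(rewards):
        g += (gamma ** t) * r
    return g
```

In practice an agent maximizes the expectation of this quantity over trajectories induced by its policy in M_θ.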
The objective of the RL agent policy π is to maximize the value V_θ(π) in M_θ, where V_θ(π) = E[Σ_{t=0}^T γ^t r_t] and r_t are the rewards collected by π in M_θ.

Existing Methods

Dennis et al. (2020) first formalize UED and introduce the Protagonist Antagonist Induced Regret Environment Design (PAIRED) algorithm, which is a three-agent game: the protagonist π^P, the antagonist π^A, and the environment generator G. The environment generator G learns to control the distribution of environment parameters by maximizing regret, which is approximated by the difference between the cumulative rewards obtained by the antagonist and the protagonist under a fixed environment θ:

REGRET_θ(π^P, π^A) = V_θ(π^A) − V_θ(π^P)    (1)

The protagonist and antagonist are both trained to maximize their own cumulative reward in the current environments. Note that the environment generator (teacher) is discouraged from generating levels that cannot be solved, because those have a maximum regret of 0. This teacher-student framework co-evolves teacher and student policies, creating a form of adaptive curriculum learning in which the teacher constantly creates an emergent class of levels that get progressively more difficult along the borderline of the protagonist's ability, allowing agents to learn a good policy that enables zero-shot transfer. More specifically, if Π is the strategy set of the protagonist and antagonist, and Θ is the strategy set of the teacher, then if the learning process reaches a Nash equilibrium, the resulting student policy π provably converges to a minimax regret policy, defined as

π = arg min_{π^P ∈ Π} max_{θ ∈ Θ, π^A ∈ Π} REGRET_θ(π^P, π^A)    (2)

However, this teacher-student framework struggles with building an efficient generator (teacher), as it requires expensive interactions with the environment to collect millions of samples to train the protagonist and antagonist agents separately.
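Once rollouts have been collected, the regret objective in Equation 1 reduces to a difference of mean episodic returns. The sketch below is our own illustration, not the paper's implementation; the function names and the averaging over a handful of episodes are assumptions:

```python
import numpy as np

def estimated_regret(antagonist_returns, protagonist_returns):
    """Approximate REGRET_theta (Eq. 1) as the gap between the
    antagonist's and the protagonist's mean episodic returns."""
    return float(np.mean(antagonist_returns) - np.mean(protagonist_returns))

def pick_level(candidate_returns):
    """A teacher comparing candidate levels would favor the one with the
    largest estimated regret. candidate_returns maps a level id to a pair
    (antagonist_returns, protagonist_returns)."""
    return max(candidate_returns,
               key=lambda k: estimated_regret(*candidate_returns[k]))
```

An unsolvable level gives both agents the same (poor) return, so its estimated regret stays near zero and the teacher gains nothing from proposing it.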
As an alternative regret-based UED approach, Jiang et al. (2021b) propose Prioritized Level Replay (PLR), where a student policy is challenged by two co-evolving teachers, Generator and Curator. In PLR, the Generator randomly creates new environments and the Curator prioritizes the replay probability for each environment based on its estimated learning potential. By adapting the sampling of previously encountered levels for training, PLR is an active learning strategy that improves sample efficiency and generalization. In addition, PLR uses the positive value loss to approximate regret, in contrast to the generative but slowly adaptive PAIRED. Specifically, regret is approximated by:

F_gae(θ) = (1/T) Σ_{t=0}^T max{ Σ_{k=t}^T (γλ)^{k−t} δ_k, 0 }    (3)

where λ and γ are the Generalized Advantage Estimation (GAE, (Schulman et al., 2015)) parameter and the MDP discount factor, respectively, and δ_k is the TD-error at timestep k. The agent trained by PLR shows good generalization ability in empirical results. However, PLR is still limited, as it is unable to exploit any previously discovered level structure and can only curate randomly sampled levels; moreover, the random search is heavily affected by a high-dimensional design space, making it highly unlikely to sample levels at the frontier of the agent's current capabilities.

Algorithm

In this section, we propose a novel, generic diversity-driven UED approach that can improve the sample efficiency and generality of RL agents. The overall algorithm is summarized in Algorithm 1.

Regret-based Generator with Self-Play

We first introduce how to encourage the environment generator (teacher) to automatically generate levels that are at the frontier of an agent's capabilities by designing a marginal-improvement-based regret through self-play. We consider the teacher-student framework with a single student agent, but we allow it to have two separate "minds": Alice (π_A) and Bob (π_B).
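Equation 3 can be evaluated with a single backward pass over the TD-errors of an episode. The following sketch is our own illustration (function name and default hyperparameters are assumptions, not from the paper), indexing the episode's T timesteps from 0:

```python
import numpy as np

def positive_value_loss(td_errors, gamma=0.99, lam=0.95):
    """Regret proxy of Eq. 3: average of the GAE terms clipped at zero."""
    T = len(td_errors)
    gae = np.zeros(T)
    running = 0.0
    # Backward recursion computes sum_{k=t}^{T-1} (gamma*lam)^{k-t} * delta_k.
    for t in reversed(range(T)):
        running = td_errors[t] + gamma * lam * running
        gae[t] = running
    return float(np.mean(np.maximum(gae, 0.0)))
```

Clipping at zero means only transitions where the value function underestimated the achievable return contribute, which is what makes the score a proxy for remaining learning potential.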
They have the same objective, and Alice shares its old policy with Bob. More specifically, during the self-play phase, both Alice and Bob collect several trajectories within the current level θ (Lines 6 and 15 in Algorithm 1), and we then compute the approximate regret value from the difference between the rewards they received (Line 7 in Algorithm 1). Formally, regret is approximated as:

REGRET_θ(π_A, π_B) ≈ V_θ(π_A) − V_θ(π_B) = E_{τ_A}[V_θ(π_A)] − E_{τ_B}[V_θ(π_B)]    (4)

Algorithm 1 DivSP
1: Input: level buffer Λ of size K, replay probability p. Initialize Alice's policy π_A, Bob's policy π_B, and level generator G
2: while not converged do
3:    Sample a replay decision ε ∼ U[0, 1]
4:    if ε ≤ p then
5:       Use G to generate new environment parameters θ, and create POMDP M_θ
6:       Collect Alice's and Bob's trajectories τ_A and τ_B in M_θ, and compute V_θ(π_A) = Σ_{t=0}^T γ^t r_t and V_θ(π_B) = Σ_{t=0}^T γ^t r_t, respectively
7:       Compute the REGRET using Equation 4
8:       Update Bob's policy π_B by letting π_B = π_A
9:       Train Alice's policy π_A to maximize V_θ(π_A)
10:      Train G with an RL update and reward R = REGRET
11:      Get the observed state representative from trajectory τ_A according to Equation 8
12:      If its state-aware diversity score is lower than the highest one in the buffer, replace that level by adding the new level θ to Λ and update the replay probability P_replay according to Equation 12
13:   else
14:      Sample level θ from Λ according to P_replay, and create POMDP M_θ
15:      Collect Alice's and Bob's trajectories τ_A and τ_B in M_θ, and compute V_θ(π_A) and V_θ(π_B), respectively
16:      Update Bob's policy π_B by letting π_B = π_A
17:      Train Alice's policy π_A to maximize V_θ(π_A)
18:      Update the observed state representative for level θ according to Equation 8
19:      Update the replay probability P_replay according to Equation 12
20:   end if
21: end while

Note that there are alternative methods to compute regret, e.g., using a scoring function based on the average magnitude of the Generalized Advantage Estimate (GAE; (Schulman et al., 2015)) over each of the T time steps. In this work, we use the difference between Bob's and Alice's expected cumulative returns to approximate regret because it is straightforward and shows encouraging results for building difficult but solvable environments. When generating a new environment, the environment generator (teacher) is trained to maximize the approximated regret (Line 10). The key idea is that Bob's play with Alice should help the environment generator understand how the previously discovered structure works and enable it to automatically generate a more challenging level, encouraging further robustness and generalization.
Furthermore, Bob directly copies Alice's current policy (Lines 8 and 16), avoiding the drawback of PAIRED, which requires expensive interactions with the environment to train an optimal antagonist policy on the current level. Instead, Alice is trained to maximize the corresponding cumulative reward (Lines 9 and 17). Overall, this self-regulating feedback between Alice and Bob allows the environment generator to establish an adaptive curriculum: new levels that could lead to a dramatic difference between Alice's and Bob's behavior are constantly produced, leading to a more robust agent policy.

State-Aware Diversity

The objective of this section is to apply diversity to the UED framework to build effective exploration and improve sample efficiency. Existing algorithms (Jiang et al., 2021a; Parker-Holder et al., 2022) consider a setup whereby a level buffer Λ is introduced to store the top K visited levels with the highest learning potential, which is estimated by the GAE value of the learning agent over the last episode. However, determining the level buffer by learning potential alone may result in preserving similar or repeated levels; the buffer would then suffer from low-quality exploration, and the agent will not learn much from training on these repeated trajectories. As a result, we present a diversity-driven strategy to maintain a set of diverse environments so that data can be collected in an informative manner. To do so, we first need to specify a diversity measure that can distinguish two levels. We derive a state-aware diversity representative method as follows.

From the teacher's perspective: Two levels θ_1 and θ_2 can be said to be different if the distance between their environment parameters is large. However, this method is not interpretable because the mapping from parameters to levels is not linear.
For example, in the continuous-control BipedalWalker environment, the environment design space is an 8-dimensional indirect encoding representing the intensity of four kinds of obstacles for the student agent. We are unable to capture the stochasticity in the mapping from parameters to the environment. Additionally, the normalization of different parameters is domain-specific, which prevents us from directly using environment parameters to measure the diversity of different levels.

From the student's perspective: Intuitively, to encourage the specialty of individual levels in the replay buffer, student agents need to obtain different local observations that highlight a level among the others. To achieve this goal, we can sample a set of states that best represent the structure of the environment, and then use these sets of representative states to measure the diversity in the level buffer. Measuring diversity among levels requires a function that scores the diversity of the whole population, e.g., as the volume of the inter-state matrix. Inspired by (Parker-Holder et al., 2020), we consider a smooth kernel k such that k(o_1, o_2) ≤ 1 for observed states o_1, o_2 ∈ O. A popular choice of kernel is cosine similarity, defined as:

k(o_1, o_2) = ⟨o_1, o_2⟩ / (‖o_1‖ ‖o_2‖)    (5)

and we have:

k(o_1, o_2) = 1 ⟺ o_1 = o_2    (6)

Note that if we use a Recurrent Neural Network (RNN) to parameterize the student agent, as in the partially observable navigation task domain, we can also replace the observed state with the output of the hidden layer of the RNN agent. With a flexible method of measuring inter-state diversity at hand, we can give a formal definition of our observed state representative method.

Definition 3.1 (Observed State Representative). For each environment θ, considering several trajectories induced by the current student policy, we denote O = {o_1, . . . , o_m} as the set of observed states of size m collected from the current level, and we derive a set of observed states S_env of size n, where n ≪ m, that best represents the environment θ. Formally, the representative score is defined as:

F_rep(S_env) = Σ_{o_i ∈ O} max_{o_j ∈ S_env} {k(o_i, o_j)}    (7)

and we take S_env = arg max_{S_env ⊂ O, |S_env| ≤ n} F_rep(S_env). Intuitively, we use the cosine similarity kernel k to measure how well the selected observed states in S_env represent the whole set of observed states O collected from the current level. A high representative score F_rep(S_env) indicates that each observed state collected from the current level can find a sufficiently similar state in S_env, so that S_env is a good representative of the current level. However, computing the exact solution S_env from O is NP-hard and costly, since the set O is very large. Motivated by Li et al. (2021), we use a heuristic:

1. We first randomly sample a set O' ⊂ O of size m', where n < m' ≪ m;
2. Then we use a greedy algorithm to pick the top n observed states from the set O'.
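The heuristic above amounts to a standard greedy submodular-maximization routine over the kernel of Equation 5. The sketch below is our own illustration on raw observation vectors; the function names and the brute-force marginal-gain evaluation are assumptions, not the paper's implementation:

```python
import numpy as np

def cosine_kernel(a, b):
    """Cosine-similarity kernel k(o1, o2) (Eq. 5)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def representative_score(S_env, O):
    """F_rep (Eq. 7): how well the candidate set S_env covers the observations O."""
    return sum(max(cosine_kernel(o_i, o_j) for o_j in S_env) for o_i in O)

def greedy_representative(O, n):
    """Greedily add the observed state with the largest marginal gain;
    for a submodular F_rep this gives a 1 - 1/e approximation."""
    S_env, remaining = [], list(range(len(O)))
    for _ in range(n):
        best = max(remaining,
                   key=lambda c: representative_score(S_env + [O[c]], O))
        S_env.append(O[best])
        remaining.remove(best)
    return S_env
```

Each greedy step re-scores every remaining candidate against the full observation set, which is why the method first subsamples O down to a smaller pool before selection.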
We start by taking S_env as an empty set; at each step, we add the observed state o that maximizes the marginal gain F_rep(o | S_env), defined as the gain from adding o to S_env:

F_rep(o | S_env) = F_rep({o} ∪ S_env) − F_rep(S_env)    (8)

Because F_rep(S_env) as defined in Equation 7 is a submodular function, the greedy algorithm provides a solution with an approximation factor of 1 − 1/e (Nemhauser et al., 1978).

We now consider the diversity of two levels. The intuition is that a diverse level buffer is more informative and thus contributes to effective exploration and sample efficiency. The formal definition of state-aware diversity among levels is as follows.

Definition 3.2 (State-Aware Diversity). Consider the level buffer Λ = {θ_1, . . . , θ_K} and the newly generated level θ_new, each of which has its corresponding observed state representative. For the newly generated level or any level in the buffer, θ_i ∈ {θ_new} ∪ Λ, we compute its state-aware diversity score among the other levels as:

F_div(θ_i, {θ_new} ∪ Λ \ {θ_i}) = Σ_{o_i ∈ S_env} max_{o_j ∈ S'_env} {k(o_i, o_j)}    (9)

where S_env is the observed state representative for level θ_i, and S'_env is the set of observed state representatives for the levels in {θ_new} ∪ Λ \ {θ_i}. Similarly, the state-aware diversity of the level buffer is:

F_div(Λ) = Σ_{θ_i ∈ Λ} F_div(θ_i, Λ \ {θ_i})    (10)

Finally, at the beginning of each iteration, DivSP either generates new levels (with probability p, Lines 4 and 5) or samples a mini-batch of levels from the level buffer to train the agent (Lines 13 and 14):

• (Generating a new level): When there is a newly generated level, we measure the diversity score of the new level and the levels in the level buffer according to Equation 9.
To achieve a diverse level buffer, if the diversity score of the new level is lower than that of some level in the buffer, we add the new level θ_new to the buffer Λ, replacing the level with the highest diversity score (Lines 11 and 12). Note that, unlike the state representative, where we want the representative score to be high because we hope every observation collected from the level can find a similar observed state in the set S_env, here we want a diverse level buffer, which implies minimal similarity. Therefore, the lower a level's diversity score, the less likely it is to find a similar level in the buffer, and the less likely it is to be replaced.

• (Sampling a level from the buffer): To decide on which level to train, we assign each level θ_i a probability based on the combination of its diversity score and learning potential. Following Jiang et al. (2021b), we employ the GAE function shown in Equation 3 as the proxy for learning potential. Given the learning potential F_gae(θ_i), we first rank the levels accordingly and then use a prioritization function h to define how differences in learning potential translate into differences in prioritization. As a result, we obtain a learning-potential prioritized distribution P_gae(Λ) over the level buffer, where the probability for θ_i is

P_gae(θ_i | Λ, F_gae) = h(rank(F_gae(θ_i)))^{1/β} / Σ_j h(rank(F_gae(θ_j)))^{1/β}    (11)

where β is a temperature parameter that tunes how much h(rank(F_gae(θ_i))) determines the related probability, and rank(F_gae(θ_i)) is the rank of level θ_i when sorted in descending order over the level buffer. As in Jiang et al. (2021b), we use h(rank(F_gae(θ_i))) = 1 / rank(F_gae(θ_i)). Similarly, we can compute the diversity score through Equation 9 and its corresponding diversity-score prioritized distribution P_div(Λ) over the level buffer².
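The rank-based prioritization of Equation 11, with h(rank) = 1/rank, can be sketched as follows (our own illustration; the function name and the use of a highest-score-first ranking are assumptions):

```python
import numpy as np

def rank_prioritized(scores, beta=1.0):
    """Eq. 11: probability proportional to h(rank)^{1/beta} with
    h(rank) = 1/rank, where rank 1 is assigned to the highest score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    weights = (1.0 / ranks) ** (1.0 / beta)
    return weights / weights.sum()
```

Ranking instead of using raw scores makes the distribution invariant to the scale of the underlying score, so the same routine can serve both the learning-potential and the diversity prioritization.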
Combining P_gae(Λ), based on the learning potential, with P_div(Λ), based on the diversity score, we update the overall replay distribution P_replay(Λ) over Λ as (Line 19):

P_replay(Λ) = (1 − ρ) · P_gae(Λ) + ρ · P_div(Λ)    (12)

Therefore, when a level has a higher learning potential or is significantly different from the other levels in the buffer, it has a higher chance of being sampled. We present the overall framework of DivSP in Figure 1.

² We rank F_div(·, ·) in ascending order.

Experiment

In this section, we present experimental results in the BipedalWalker, Minigrid, and CarRacing domains to demonstrate the outperformance of our approach when a trained agent is transferred to new environments. We compare our approach against existing UED methods: Domain Randomization (DR, (Tobin et al., 2017)), Minimax (Wang et al., 2019), PAIRED (Dennis et al., 2020), and PLR (Jiang et al., 2021a). We report the average and variance of the performance of our method and the baselines over five random seeds.

Performance on BipedalWalker

We first evaluate our approach in the BipedalWalker (Wang et al., 2019) environment, which entails continuous control with dense rewards. Similar to Wang et al. (2019), we use a modified version of BipedalWalker-Hardcore from OpenAI Gym (Brockman et al., 2016). In BipedalWalker, 8 parameters indirectly represent the intensity of four kinds of terrain-based obstacles for a two-legged robot: the minimum/maximum roughness of the ground, the minimum/maximum height of stump obstacles, the minimum/maximum width of pit-gap obstacles, and the minimum/maximum size of ascending and descending flights of stairs. We provide an illustration of these four kinds of obstacles in Figure 5. The agent receives a 24-dimensional proprioceptive state with respect to its lidar sensors, angles, and contacts. The action space consists of four continuous values that control the torques of its four motors.
In this environment, the teacher learns to control 8 parameters corresponding to the ranges of the 4 kinds of obstacles and then combines them with a random seed to generate a specific level. All agents are trained with Proximal Policy Optimization (PPO, (Schulman et al., 2017)). For a fair comparison, during training we evaluate our algorithm and the baselines on the vanilla BipedalWalker, the challenging BipedalWalker-Hardcore environment, and four specific levels posing the isolated challenges of {Roughness, Stump height, Pit gap, Stair step}. Figure 2 shows the transfer performance throughout training. As shown in the figure, DivSP outperforms all baselines in most test environments, achieving faster convergence. These results support the key drivers of DivSP's motivation: producing incrementally challenging environments at the frontier of the agent's capabilities and maintaining diverse environments to improve sample efficiency and build effective exploration.

Performance on Minigrid

Here we investigate the maze navigation environment introduced by Dennis et al. (2020), which is based on Minigrid (Chevalier-Boisvert et al., 2018). We train the environment generator to build maze environments by choosing the locations of the obstacles, the goal, and the starting location of the agent. Specifically, at the beginning of each iteration, the generator places the student agent and the goal, and at every subsequent time step it outputs a location where the next obstacle will be placed. Up to 25 blocks can be placed. We give several examples of mazes generated during training in Figure 6. The maze environment is partially observable; the student agent's view is shown as a blue-shaded area in Figure 6. The student agent (blue triangle) must explore the maze to find the goal (green square). To deal with the partially observable setting, our agents use PPO with a Recurrent Neural Network structure.
We compare the transfer ability of agents trained by the different approaches on human-designed levels. The test environments and the performance are reported in Figure 3. While DR acts as a strong baseline in this domain, DivSP attains a comparable highest mean return.

Performance on CarRacing

Finally, we investigate the learning dynamics of DivSP and the baselines on CarRacing (Brockman et al., 2016), a popular continuous-control environment with dense rewards. Similar to the partially observable navigation task, the student agent in CarRacing receives a partial pixel observation and has a 3-dimensional action space. The goal of the agent is to drive a full lap around a generated track. To generate a feasible level (a closed-loop track), following (Jiang et al., 2021a), the generator learns to choose a sequence of up to 12 control points, which uniquely generates a Bézier curve (Mortenson, 1999) within predefined curvature constraints. In Figure 7, we show some examples of CarRacing tracks produced by the different algorithms. We present per-track zero-shot transfer returns of policies trained by each method on some of the human-designed Formula One (F1) tracks throughout training in Figure 4. Note that these tracks are significantly out-of-distribution (OOD), as they cannot be generated within 12 control points. Remarkably, DivSP either mitigates the degeneracy of PAIRED or significantly outperforms the other baselines in mean performance, providing further evidence of the benefits of the induced curriculum and the diverse level buffer.

Related Work

This work aims to train agents that are capable of generalizing across a wide range of environments (Whiteson, 2009). Several methods for enhancing generalization in RL use techniques from supervised learning, such as data augmentation (Raileanu et al., 2020; Kostrikov et al., 2020; Wang et al., 2020) and feature distillation (Igl et al., 2020).
In contrast to supervised learning, there is a trend of introducing curricula in different learning settings (Fang et al., 2019; Weinshall & Amir, 2020; Wu et al., 2020). In RL, curricula improve the learning performance of the agent by adapting the training environment to the agent's current capabilities. One prior approach is domain randomization (Jakobi, 1997; Tobin et al., 2017), which trains the agent on a wide range of randomly generated environments. Building on this, Akkaya et al. (2019) propose automatic domain randomization, using a curriculum that gradually increases the difficulty for agent training. In the multi-task domain (Sukhbaatar et al., 2017; Zhang et al., 2020; Du et al., 2022; Klink et al., 2022), an automatic curriculum is set up over the goals that the agent needs to solve; curricula are often generated so that the proposed goals sit right at the frontier of the agent's learning process. In particular, we focus on a growing body of work in unsupervised environment design (Dennis et al., 2020), which is inherently related to Automatic Curriculum Learning (Florensa et al., 2017; Portelas et al., 2020b) and seeks to learn a curriculum that adaptively generates challenging environments to train robust agents. Dennis et al. (2020) proposed the PAIRED algorithm, which introduces an environmental adversary that learns a curriculum controlling environment parameters to maximize approximate regret. POET (Wang et al., 2019) co-evolves the generation of environmental challenges and the optimization of agents to solve them. Jiang et al. (2021b;a) further introduce PLR, a general framework in which agents can revisit previously generated environments with high learning potential for training. We take inspiration from automatically generated environments with a curriculum design and maintain a level buffer with high learning potential and high diversity.
For brevity, we summarize the works most related to our approach in Table 1.

Conclusion

In this paper, we introduce DivSP, a novel method for unsupervised environment design that evolves a curriculum to automatically create a distribution of training environments through self-play. Furthermore, to improve sample efficiency, DivSP selectively revisits previously generated environments by prioritizing those with higher estimated learning potential and diversity. Finally, we performed experiments in various benchmark environments and demonstrated that our algorithm leads to superior zero-shot transfer performance in most of the settings.

Figure 1. The overall framework of DivSP for Unsupervised Environment Design.
Figure 2. Performance on test environments during training (mean and standard error).
Figure 3. Zero-shot transfer performances in challenging environments after 100 million training steps. We show the median and interquartile range of solved rates over 5 runs.
Figure 4. Zero-shot transfer performance on the OOD F1 tracks: Vanilla, Italy and Germany.
We provide mean and standard deviation over 5 runs.
Figure 5. An illustration of levels generated with four kinds of obstacles: (a) roughness of range (2, 8); (b) stump height of range (1, 3); (c) pit gap of range (1, 3); (d) stair steps of range (2, 6); (e) vanilla BipedalWalker; (f) hard BipedalWalker with a mix of (a) to (d) parameters.
Figure 6. Example levels generated by DivSP during training. The generator can place up to 25 blocks.
Figure 7. Randomly selected examples of CarRacing tracks produced by different algorithms: (a) DR; (b) Minimax; (c) PAIRED; (d) PLR; (e) DivSP; (f) two examples in the CarRacing F1 benchmark used for evaluating zero-shot generalization.
Table 1. The components of baselines. Like PAIRED, our algorithm DivSP uses regret to train the generator, but it also replays levels according to their regret value and diversity score. Here MCC is an abbreviation for Minimal Criteria Coevolution (Wang et al., 2019).

Algorithm | Generation Strategy | Replay Strategy | Buffer Objective | Setting
DR (Tobin et al., 2017) | Random | None | None | Single Agent
Minimax (Wang et al., 2019) | Evolution | MCC | Minimax | Population-Based
PAIRED (Dennis et al., 2020) | RL | None | None | Single Agent
PLR (Jiang et al., 2021b;a) | Random | Minimax Regret | Learning potential | Single Agent
DivSP | RL | Max Regret and Diversity | Diversity | Single Agent

¹ We use level and environment interchangeably in the paper.

arXiv:2302.02119v2 [cs.AI] 21 Mar 2023

References

Akkaya, I., Andrychowicz, M., Chociej, M., Litwin, M., McGrew, B., Petron, A., Paino, A., Plappert, M., Powell, G., Ribas, R., et al. Solving Rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Chevalier-Boisvert, M., Willems, L., and Pal, S. Minimalistic gridworld environment for OpenAI Gym, 2018.
Dennis, M., Jaques, N., Vinitsky, E., Bayen, A., Russell, S., Critch, A., and Levine, S. Emergent complexity and zero-shot transfer via unsupervised environment design. Advances in Neural Information Processing Systems, 33:13049-13061, 2020.

Du, Y., Abbeel, P., and Grover, A. It takes four to tango: Multiagent selfplay for automatic curriculum generation. arXiv preprint arXiv:2202.10608, 2022.

Fang, M., Zhou, T., Du, Y., Han, L., and Zhang, Z. Curriculum-guided hindsight experience replay. Advances in Neural Information Processing Systems, 32, 2019.

Florensa, C., Held, D., Wulfmeier, M., Zhang, M., and Abbeel, P. Reverse curriculum generation for reinforcement learning. In Conference on Robot Learning, pp. 482-495. PMLR, 2017.

Igl, M., Farquhar, G., Luketina, J., Boehmer, W., and Whiteson, S. The impact of non-stationarity on generalisation in deep reinforcement learning. arXiv preprint arXiv:2006.05826, 2020.

Jakobi, N. Evolutionary robotics and the radical envelope-of-noise hypothesis. Adaptive Behavior, 6(2):325-368, 1997.
34Jiang, M., Dennis, M., Parker-Holder, J., Foerster, J., Grefen- stette, E., and Rocktäschel, T. Replay-guided adversarial environment design. Advances in Neural Information Processing Systems, 34:1884-1897, 2021a. Prioritized level replay. M Jiang, E Grefenstette, T Rocktäschel, International Conference on Machine Learning. PMLRJiang, M., Grefenstette, E., and Rocktäschel, T. Prioritized level replay. In International Conference on Machine Learning, pp. 4940-4950. PMLR, 2021b. Curriculum reinforcement learning via constrained optimal transport. P Klink, H Yang, C D&apos;eramo, J Peters, J Pajarinen, International Conference on Machine Learning. PMLRKlink, P., Yang, H., D'Eramo, C., Peters, J., and Pajarinen, J. Curriculum reinforcement learning via constrained op- timal transport. In International Conference on Machine Learning, pp. 11341-11358. PMLR, 2022. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. I Kostrikov, D Yarats, Fergus , R , arXiv:2004.13649arXiv preprintKostrikov, I., Yarats, D., and Fergus, R. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. arXiv preprint arXiv:2004.13649, 2020. End-toend training of deep visuomotor policies. S Levine, C Finn, T Darrell, Abbeel , P , The Journal of Machine Learning Research. 171Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to- end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016. Claim: Curriculum learning policy for influence maximization in unknown social networks. D Li, M Lowalekar, P Varakantham, Uncertainty in Artificial Intelligence. PMLRLi, D., Lowalekar, M., and Varakantham, P. Claim: Cur- riculum learning policy for influence maximization in unknown social networks. In Uncertainty in Artificial Intelligence, pp. 1455-1465. PMLR, 2021. Effective diversity in unsupervised environment design. 
W Li, P Varakantham, Li , D , arXiv:2301.08025arXiv preprintLi, W., Varakantham, P., and Li, D. Effective diversity in unsupervised environment design. arXiv preprint arXiv:2301.08025, 2023. Human-level control through deep reinforcement learning. V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A K Fidjeland, G Ostrovski, nature. 5187540Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidje- land, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. nature, 518(7540): 529-533, 2015. Mathematics for computer graphics applications. M E Mortenson, Industrial Press IncMortenson, M. E. Mathematics for computer graphics ap- plications. Industrial Press Inc., 1999. An analysis of approximations for maximizing submodular set functions-i. Mathematical programming. G L Nemhauser, L A Wolsey, M L Fisher, 14Nemhauser, G. L., Wolsey, L. A., and Fisher, M. L. An analysis of approximations for maximizing submodular set functions-i. Mathematical programming, 14(1):265- 294, 1978. Effective diversity in population based reinforcement learning. J Parker-Holder, A Pacchiano, K M Choromanski, S J Roberts, Advances in Neural Information Processing Systems. 33Parker-Holder, J., Pacchiano, A., Choromanski, K. M., and Roberts, S. J. Effective diversity in population based reinforcement learning. Advances in Neural Information Processing Systems, 33:18050-18062, 2020. Evolving curricula with regret-based environment design. J Parker-Holder, M Jiang, M Dennis, M Samvelyan, J Foerster, E Grefenstette, T Rocktäschel, arXiv:2203.01302arXiv preprintParker-Holder, J., Jiang, M., Dennis, M., Samvelyan, M., Foerster, J., Grefenstette, E., and Rocktäschel, T. Evolv- ing curricula with regret-based environment design. arXiv preprint arXiv:2203.01302, 2022. Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments. 
R Portelas, C Colas, K Hofmann, P.-Y Oudeyer, Conference on Robot Learning. PMLRPortelas, R., Colas, C., Hofmann, K., and Oudeyer, P.-Y. Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments. In Conference on Robot Learning, pp. 835-853. PMLR, 2020a. Automatic curriculum learning for deep rl: A short survey. R Portelas, C Colas, L Weng, K Hofmann, P.-Y Oudeyer, arXiv:2003.04664arXiv preprintPortelas, R., Colas, C., Weng, L., Hofmann, K., and Oudeyer, P.-Y. Automatic curriculum learning for deep rl: A short survey. arXiv preprint arXiv:2003.04664, 2020b. Automatic data augmentation for generalization in deep reinforcement learning. R Raileanu, M Goldstein, D Yarats, I Kostrikov, Fergus , R , arXiv:2006.12862arXiv preprintRaileanu, R., Goldstein, M., Yarats, D., Kostrikov, I., and Fergus, R. Automatic data augmentation for general- ization in deep reinforcement learning. arXiv preprint arXiv:2006.12862, 2020. High-dimensional continuous control using generalized advantage estimation. J Schulman, P Moritz, S Levine, M Jordan, Abbeel , P , arXiv:1506.02438arXiv preprintSchulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015. J Schulman, F Wolski, P Dhariwal, A Radford, O Klimov, arXiv:1707.06347Proximal policy optimization algorithms. arXiv preprintSchulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Mastering the game of go with deep neural networks and tree search. D Silver, A Huang, C J Maddison, A Guez, L Sifre, G Van Den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, nature. 5297587Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. 
Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484-489, 2016. Intrinsic motivation and automatic curricula via asymmetric self-play. S Sukhbaatar, Z Lin, I Kostrikov, G Synnaeve, A Szlam, Fergus , R , arXiv:1703.05407arXiv preprintSukhbaatar, S., Lin, Z., Kostrikov, I., Synnaeve, G., Szlam, A., and Fergus, R. Intrinsic motivation and auto- matic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407, 2017. Domain randomization for transferring deep neural networks from simulation to the real world. J Tobin, R Fong, A Ray, J Schneider, W Zaremba, Abbeel , P , 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEETobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 23-30. IEEE, 2017. Improving generalization in reinforcement learning with mixture regularization. K Wang, B Kang, J Shao, J Feng, Advances in Neural Information Processing Systems. 33Wang, K., Kang, B., Shao, J., and Feng, J. Improving gen- eralization in reinforcement learning with mixture regu- larization. Advances in Neural Information Processing Systems, 33:7968-7978, 2020. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions. R Wang, J Lehman, J Clune, K O Stanley, arXiv:1901.01753arXiv preprintWang, R., Lehman, J., Clune, J., and Stanley, K. O. Paired open-ended trailblazer (poet): Endlessly generating in- creasingly complex and diverse learning environments and their solutions. arXiv preprint arXiv:1901.01753, 2019. Theory of curriculum learning, with convex loss functions. D Weinshall, D Amir, Journal of Machine Learning Research. 21222Weinshall, D. and Amir, D. Theory of curriculum learning, with convex loss functions. 
Journal of Machine Learning Research, 21(222):1-19, 2020. Generalized domains for empirical evaluations in reinforcement learning. S Whiteson, Whiteson, S. Generalized domains for empirical evaluations in reinforcement learning. 2009. When do curricula work?. X Wu, E Dyer, B Neyshabur, arXiv:2012.03107arXiv preprintWu, X., Dyer, E., and Neyshabur, B. When do curricula work? arXiv preprint arXiv:2012.03107, 2020. Automatic curriculum learning through value disagreement. Y Zhang, P Abbeel, L Pinto, Advances in Neural Information Processing Systems. 33Zhang, Y., Abbeel, P., and Pinto, L. Automatic curriculum learning through value disagreement. Advances in Neural Information Processing Systems, 33:7648-7659, 2020.
[]
[ "SANSCrypt: A Sporadic-Authentication-Based Sequential Logic Encryption Scheme", "SANSCrypt: A Sporadic-Authentication-Based Sequential Logic Encryption Scheme" ]
[ "Yinghua Hu [email protected] \nDepartment of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA\n", "Kaixin Yang [email protected] \nDepartment of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA\n", "Shahin Nazarian [email protected] \nDepartment of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA\n", "Pierluigi Nuzzo [email protected] \nDepartment of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA\n" ]
[ "Department of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA", "Department of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA", "Department of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA", "Department of Electrical and Computer Engineering\nUniversity of Southern California\nLos AngelesCAUSA" ]
[]
We propose SANSCrypt, a novel sequential logic encryption scheme to protect integrated circuits against reverse engineering. Previous sequential encryption methods focus on modifying the circuit state machine such that the correct functionality can be accessed by applying the correct key sequence only once. Considering the risk associated with one-time authentication, SANSCrypt adopts a new temporal dimension to logic encryption, by requiring the user to sporadically perform multiple authentications according to a protocol based on pseudorandom number generation. Analysis and validation results on a set of benchmark circuits show that SANSCrypt offers a substantial output corruptibility if the key sequences are applied incorrectly. Moreover, it exhibits an exponential resilience to existing attacks, including SAT-based attacks, while maintaining a reasonably low overhead.
10.1109/vlsi-soc46417.2020.9344079
[ "https://arxiv.org/pdf/2010.05168v1.pdf" ]
222,291,023
2010.05168
971c0a7f122884a44e6462cc304cc6abc06f1623
SANSCrypt: A Sporadic-Authentication-Based Sequential Logic Encryption Scheme

11 Oct 2020

Yinghua Hu ([email protected]), Kaixin Yang ([email protected]), Shahin Nazarian ([email protected]), Pierluigi Nuzzo ([email protected])
Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA

We propose SANSCrypt, a novel sequential logic encryption scheme to protect integrated circuits against reverse engineering. Previous sequential encryption methods focus on modifying the circuit state machine such that the correct functionality can be accessed by applying the correct key sequence only once. Considering the risk associated with one-time authentication, SANSCrypt adopts a new temporal dimension to logic encryption, by requiring the user to sporadically perform multiple authentications according to a protocol based on pseudorandom number generation. Analysis and validation results on a set of benchmark circuits show that SANSCrypt offers a substantial output corruptibility if the key sequences are applied incorrectly. Moreover, it exhibits an exponential resilience to existing attacks, including SAT-based attacks, while maintaining a reasonably low overhead.

I. INTRODUCTION

The design process of modern VLSI systems often relies on a supply chain where several services, such as verification, fabrication, and testing, are outsourced to third-party companies. If these companies gain access to a sufficient amount of critical design information, they can potentially reverse engineer the design.
One possible consequence of reverse engineering is Hardware Trojan (HT) insertion, which can be destructive for many applications. HTs can either disrupt the normal circuit operation [1] or provide the attacker with access to critical data or software running on the chip [2]. Countermeasures such as logic encryption [3]-[6], integrated circuit (IC) camouflaging [7], watermarking [8], and split manufacturing [9] have been developed over the past decades to prevent IC reverse engineering. Among these, logic encryption has received significant attention as a promising, low-overhead countermeasure.

Logic encryption modifies the circuit in such a way that a user can only access the correct circuit functionality after providing a correct key sequence; otherwise, the circuit function remains hidden, and the output differs from the correct one. Various logic encryption techniques [3]-[6] and potential attacks [10]-[12] have appeared in the literature, as well as methods to systematically evaluate them [13], [14]. A category of techniques [3]-[5] is designed to modify and protect the combinational logic portions of the chip and can be extended to sequential circuits by assuming that the scan chains are not accessible by the attacker, e.g., due to scan chain encryption and obfuscation [15]-[17]. Another category of techniques, namely, sequential logic encryption [6], [18], [19], targets, instead, the state transitions of the original finite state machine (FSM). Sequential logic encryption introduces additional states and transitions in the original FSM, essentially partitioning the state space into two sets. After being powered on or reset, the FSM enters the encrypted mode, exhibiting an incorrect output behavior. The FSM transitions, instead, to the functional mode, providing the correct functionality, upon receiving a sequence of key patterns.
A set of attacks has been reported against sequential encryption schemes, aiming to retrieve the correct key sequence or circuit function. Shamsi et al. [20] adapted the Boolean satisfiability (SAT)-based attack [10], traditionally targeted to combinational logic encryption, by leveraging methods from bounded model checking to unroll the sequential circuit. Recently, an attack based on automatic test pattern generation (ATPG) [21] uses concepts from excitation and propagation of stuck-at faults to search for the key sequence among the input vectors generated by ATPG. When the attackers have some knowledge of the topology of the encrypted FSM, they can extract and analyze the state transition graph and bypass the encrypted mode [22]. Overall, the continuous advances in FSM extraction and analysis tools tend to challenge any of the existing sequential encryption schemes and call for approaches that can significantly increase their robustness.

This paper proposes SANSCrypt, a Sporadic-Authentication-Based Sequential Logic Encryption scheme, which raises the attack difficulty via a multiple-authentication protocol, whose decryption relies on retrieving a set of key sequences as well as the times at which the sequences should be applied. Our contributions can be summarized as follows:

• A robust, multi-authentication-based sequential logic encryption method that, for the first time, to the best of our knowledge, systematically incorporates the robustness of multi-factor authentication (MFA) [23] in the context of hardware obfuscation.

• An architecture for sporadic re-authentication where key sequences must be applied at multiple random times, determined by a random number generator, to access the correct circuit functionality.

• Security analysis and empirical validation of SANSCrypt on a set of ISCAS'89 benchmark circuits [24], showing exponential resilience against existing attacks, including sequential SAT-based attacks, and reasonably low overhead.
Analysis and validation results show that SANSCrypt can significantly enhance the resilience of sequential logic encryption under different attack assumptions.

978-1-7281-5409-1/20/$31.00 ©2020 IEEE.

II. BACKGROUND AND RELATED WORK

Among the existing sequential logic encryption techniques, HARPOON [6] defines two modes of operation. When powered on, the circuit is in the encrypted mode and exhibits an incorrect functionality. The user must apply a sequence of input patterns during the first few clock cycles to enter the functional mode, in which the correct functionality is recovered. However, the encrypted mode and functional mode FSMs are connected by only one transition (edge), which can be exploited by an attacker to perform FSM extraction and analysis, and bypass the encrypted mode [22]. Interlocking [18] sequential encryption modifies the circuit FSM such that multiple paths are available between the states of the encrypted FSM and the ones of the functional FSM, making it harder for the attacker to detect the only correct transition between the two modes. However, in both HARPOON and Interlocking encryption, once the circuit enters the functional mode, it remains there until reset.

Dynamic State-Deflection [25] requires, instead, an additional key input verification step while in the functional mode. If the additional key input is incorrect, the FSM transitions to a black-hole state cluster which can no longer be left. However, because the additional key input is fixed over time, the scheme becomes more vulnerable to sequential SAT-based attacks [20]. Finally, instead of corrupting the circuit function immediately after reset, DESENC [19] counts the occurrences of a specific but rare event in the circuit. Once the counter reaches a threshold, the circuit enters the encryption mode. This scheme is more resilient to sequential SAT-based attacks [26] because it requires unrolling the circuit FSM a large number of times to find the key.
However, the initial transparency window may still expose critical portions of the circuit functionality.

III. SANSCRYPT

We introduce design and implementation details for SANSCrypt, starting with the underlying threat model.

A. Threat Model

SANSCrypt assumes a threat model that is consistent with the previous literature on sequential logic encryption [6], [20], [22]. The goal of the attack is to access the correct circuit functionality, by either reconstructing the deobfuscated circuit or finding the correct key sequence. To achieve this goal, the attacker can leverage one or more of the following resources: (i) the encrypted netlist; (ii) a working circuit providing correct input-output pairs; (iii) knowledge of the encryption technique. In addition, we assume that the attacker has no access to the scan chain and cannot directly observe or change the state of the circuit.

B. Authentication Protocol

As shown in Fig. 1a, existing logic encryption techniques are mostly based on a single-authentication protocol, requiring users to be authenticated only once before using the correct circuit function. After authentication, the circuit remains functional unless it is powered off or reset. To attack the circuit, it is then sufficient to discover the correct key sequence that must be applied in the initial state. We adopt, instead, the authentication protocol in Fig. 1b, where the functional circuit can "jump" back to the encrypted mode from the functional mode. Once the back-jumping occurs, another round of authentication is required to resume the normal operation. The back-jumping can be triggered multiple times and involve a different key sequence for each re-authentication step. The hardness of attacking this protocol stems from both the increased number of key sequences to be produced and the uncertainty on the time at which each sequence should be applied.
A new temporal dimension is thus added to the decryption procedure, which poses a significantly higher hurdle to attackers.

C. Overview of the Encryption Scheme

SANSCrypt is a sequential logic encryption scheme which supports random back-jumping, as represented in Fig. 2. When the circuit is powered on or reset, the circuit falls into the reset state E0 of the encrypted mode. To transition to the initial (or reset) state N0 of the functional mode, the user must apply at startup the correct key sequence to the primary input ports. Once in the functional mode, the circuit can deliberately, but randomly, jump back, as denoted by the blue edges in Fig. 2, to a state s_bj in the encrypted mode, called the back-jumping state, after a designated number of clock cycles t_bj, called the back-jumping period. The user needs to apply another key sequence to resume normal operations, as shown by the red arrows. Both the back-jumping state s_bj and the back-jumping period t_bj are determined by a pseudo-random number generator (PRNG) embedded in the circuit. Therefore, when and where the back-jumping operation happens is unpredictable unless the attacker is able to break the PRNG or find its seed. The schematic of SANSCrypt is shown in Fig. 3 and consists of two additional blocks, a back-jumping module and an encryption finite state machine (ENC-FSM), besides the original circuit. We discuss each of these blocks in the following subsections.

D. Back-Jumping Module

The back-jumping module consists of an n-bit PRNG, an n-bit Counter, and a Back-Jumping Finite State Machine (BJ-FSM) which sends back-jumping commands to the rest of the circuit. As summarized in the flowchart in Fig. 4, when the circuit is in the encrypted mode, BJ-FSM checks whether the authentication has occurred. If this is the case, BJ-FSM stores the current PRNG output as the back-jumping period t_bj and initializes the counter. The counter increments its output at each clock cycle until it reaches t_bj.
This event triggers BJ-FSM to sample again the current PRNG output r, which is generally different from t_bj, and use it to determine the back-jumping state s_bj = f(r). For example, if s_bj is an l-bit binary number, BJ-FSM can arbitrarily select l bits from r and assign the value to s_bj. If the first l bits of r are selected, we have f(r) = r[0 : l-1]. At the same time, BJ-FSM sends a back-jumping request to the other blocks of the circuit and returns to its initial state, where it keeps checking the authentication status of the circuit. On receiving the back-jumping request, the circuit jumps back to state s_bj in the encrypted mode and will stay there unless re-authentication is performed. Any PRNG architecture can be selected in this scheme, based on the design budget and the desired security level. For example, linear PRNGs, such as Linear Feedback Shift Registers (LFSRs), provide higher speed and lower area overhead but tend to be more vulnerable than cipher-algorithm-based PRNGs, such as AES, which are, however, more expensive.

E. Encryption Finite State Machine (ENC-FSM)

The Encryption Finite State Machine (ENC-FSM) determines whether the user's key sequence is correct and, if it is not correct, takes actions to hide the functionality of the original circuit. The input of the ENC-FSM can be provided via the primary input ports, without the need to create extra input ports for authentication. The output enc_out of ENC-FSM, which is n bits long, together with a set of nodes in the original circuit netlist, can be provided as an input to a set of XOR gates, to corrupt the circuit function as in combinational logic encryption [3]. For example, in Fig. 5, a 3-bit array enc_out is connected to six nodes in the original circuit via XOR gates. In this paper, XOR gates are inserted at randomly selected nodes. However, any other combinational logic encryption technique is also applicable.
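To make the back-jumping behavior concrete, the following is a minimal behavioral sketch in Python, not RTL and not the paper's implementation. It assumes a Fibonacci LFSR as the PRNG, as in the paper's experiments, and realizes f(r) as the low-order l bits of r, one arbitrary choice among the selections the text allows; the class and function names are illustrative.

```python
# Behavioral sketch of the back-jumping module (illustrative, not RTL).

class LFSR:
    """n-bit Fibonacci LFSR with a caller-supplied tap mask."""
    def __init__(self, n, taps, seed=1):
        assert 0 < seed < (1 << n), "seed must be a nonzero n-bit value"
        self.n, self.taps, self.state = n, taps, seed

    def step(self):
        # XOR the tapped state bits to form the feedback bit, then shift left.
        fb = bin(self.state & self.taps).count("1") & 1
        self.state = ((self.state << 1) | fb) & ((1 << self.n) - 1)
        return self.state

def next_back_jump(prng, l):
    """One back-jumping decision, mirroring the BJ-FSM flow:
    latch the back-jumping period t_bj after authentication (a hardware
    counter then counts t_bj cycles), resample the PRNG, and derive the
    back-jumping state s_bj = f(r)."""
    t_bj = prng.step()           # back-jumping period, in clock cycles
    r = prng.step()              # fresh PRNG output; generally r != t_bj
    s_bj = r & ((1 << l) - 1)    # f(r): select l bits of r (low-order here)
    return t_bj, s_bj
```

With the tap mask 0b10100 (taps at bits 4 and 2, i.e., x^5 + x^3 + 1), the 5-bit LFSR cycles through all 31 nonzero states, so t_bj is uniformly spread over 1..31 across the period.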
As a design parameter, we denote by node coverage the ratio between the number of inserted XOR gates and the total number of combinational logic gates in the circuit. Only one state of ENC-FSM, termed auth, is used in the functional mode. In state auth, all bits in enc_out are set to zero and the original circuit functionality is activated. In the other states, the value of enc_out changes based on the state, but at least one of its bits is set to one to guarantee that the final output is incorrect. A sample truth table for a 3-bit enc_out array is shown in Table I. Even if the circuit is in the encrypted mode, enc_out changes its value based on the state of the encryption FSM. Such an approach makes it difficult for signal analysis attacks, aiming to locate signals with low switching activity in the encrypted mode, to find enc_out and bypass ENC-FSM. After a valid authentication, the circuit resumes its normal operation. Additional registers are, therefore, required in the ENC-FSM to store the circuit state before back-jumping so that it can be resumed after authentication.

IV. PERFORMANCE ANALYSIS

We analyze SANSCrypt's resilience against existing attacks and estimate its overhead.

A. Brute-Force Attack

Let us suppose that the number of primary input bits used as key inputs is i and each re-authentication procedure requires c clock cycles to apply the key sequence. If the attacker has no preference in selecting the key sequence, then she would have, on average, (2^{i·c} + 1)/2 ≈ 2^{i·c-1} attempts for each re-authentication procedure, which amounts to the same brute-force attack complexity as HARPOON. However, because the correct key sequence of each re-authentication procedure depends on the PRNG output, the number N_prng of possible values of the PRNG output will also contribute to the attack effort. If each PRNG output corresponds to a unique key sequence which is independent from the other key sequences, the average attack effort will be N_prng · 2^{i·c-1}.
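As a quick numerical check of the effort formula above, the following short Python sketch (the function name is ours, not the paper's) evaluates N_prng · 2^{i·c-1} for the parameter values used in the analysis:

```python
# Average brute-force effort: N_prng possible PRNG outputs, times
# ~2^(i*c - 1) average guesses per key sequence (i key-input bits
# applied over c clock cycles).
def brute_force_effort(i, c, prng_bits):
    n_prng = 2 ** prng_bits      # possible PRNG outputs
    per_auth = 2 ** (i * c - 1)  # average guesses for one key sequence
    return n_prng * per_auth

effort = brute_force_effort(i=32, c=8, prng_bits=10)
print(f"{effort:.2e}")  # on the order of 5.9e+79
```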
For a 10-bit PRNG, i = 32, and c = 8, the average attack effort will reach 5.93 × 10^79.

B. Sequential SAT-Based Attack

A SAT-based attack can be carried out on sequential encryption by unrolling the sequential portions of the circuit [20]. This attack can be remarkably successful especially when the correct key is the same at each time (clock cycle) and the key input ports are different from the primary input ports. Similarly to HARPOON, SANSCrypt is resilient to this SAT-based attack variant, since the correct keys are generally not the same at different clock cycles. We therefore analyze the resilience of SANSCrypt via a modified version of the sequential SAT-based attack [22] that is appropriate for schemes such as HARPOON and SANSCrypt, as shown in Fig. 6. Let us first assume that the encryption scheme requires n clock cycles after reset to enter the functional mode. Then, the attacker can start the attack by unrolling the circuit (n+1) times. The first n copies of the circuit receive the keys at their primary input ports (K_a and K_b), while the primary input and output ports of the (n+1)-th circuit replica can be used to read the circuit input and output signals after n cycles. If the SAT-based attack fails to find the correct key with (n+1) circuit replicas, as in Fig. 6, the circuit will be unrolled one more time (see, e.g., [20]).

The attack above would still be ineffective on SANSCrypt, since it can retrieve the first key sequence but would fail to discover when the next back-jumping occurs and what would be the next key sequence. Even if the attacker knows when the next back-jumping occurs, the above SAT-based attack will fail due to the large number of circuit replicas needed to find all the key sequences, as empirically observed in Section V.

C. FSM Extraction and Structural Analysis

As discussed in Section II, a possible shortcoming of certain sequential encryption schemes is the clear boundary between the encrypted mode and the functional mode FSMs. As shown in Fig. 3, SANSCrypt addresses this issue by designing more than one transition between the two FSMs. An attacker may also try to locate and isolate the output of ENC-FSM by looking for low signal switching activities when the circuit is in the encrypted mode. SANSCrypt addresses this risk by expanding the output of ENC-FSM from one bit to an array. The value of each bit changes frequently based on the state of the encrypted mode FSM, which makes it difficult for attackers to find the output of ENC-FSM based on signal switching activities.

D. Cycle Delay Analysis

Due to multiple back-jumping and authentication operations in SANSCrypt, additional clock cycles will be required in which no other operation can be executed. Suppose that authentication requires t_a clock cycles and the circuit stays in the functional mode for t_b clock cycles before the next back-jumping occurs, as shown in Fig. 7. The cycle delay overhead can be computed as the ratio O_cd = t_a / t_b. Specifically, for an n-bit PRNG, the average t_b is equal to the average output value, i.e., 2^{n-1}. To illustrate how the cycle delay overhead is influenced by this encryption, Fig. 8 shows the relation between average cycle delay overhead and PRNG bit length. The clock cycles (t_a) required for (re-)authentication are set to 8, 16, 64, and 128. When the PRNG bit length is small, the average cycle delay increases significantly with the increase of t_a. However, the cycle delay can be reduced by increasing the PRNG bit length. For example, the average cycle delay overhead becomes negligible for all four cases when the PRNG bit length is 11 or larger.
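The trade-off between authentication latency and PRNG width can be tabulated directly from O_cd = t_a / t_b with the average t_b equal to 2^{n-1}; the sketch below (ours, not the paper's code) reproduces the trend of Fig. 8:

```python
# Average cycle-delay overhead O_cd = t_a / E[t_b], where E[t_b] = 2**(n-1)
# is the mean output of an n-bit PRNG (treating its output as roughly uniform).
def avg_cycle_delay_overhead(t_a, n):
    return t_a / 2 ** (n - 1)

# Overhead for each authentication length t_a shrinks exponentially
# as the PRNG bit length n grows.
for n in (5, 8, 11, 15):
    row = {t_a: round(avg_cycle_delay_overhead(t_a, n), 5)
           for t_a in (8, 16, 64, 128)}
    print(n, row)
```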
A key manager, available to the trusted user, will be in charge of automatically applying the key sequences from a tamper-proof memory at the right time, as computed from a hard-coded replica of the PRNG.

V. SIMULATION RESULTS

We first evaluate the effectiveness of SANSCrypt on seven ISCAS'89 sequential benchmark circuits with different sizes, as summarized in Table III. All the experiments are executed on a Linux server with 48 2.1-GHz processor cores and 500-GB memory. We implement our technique on the selected circuits with different configurations and use a 45-nm Nangate Open Cell Library [27] to synthesize the encrypted netlists for area optimization under a critical-path delay constraint that targets the same performance as in the non-encrypted versions. For the purpose of illustration, we realize the PRNG using Linear Feedback Shift Registers (LFSRs) with different sizes, ranging from 5 to 15 bits. An LFSR provides an area-efficient implementation and has often been used in other logic encryption schemes in the literature [9], [28]. We choose a random 8-cycle-long key sequence as the correct key, and select 5%, 10%, 15%, and 20% as node coverage levels. Finally, we use the Hamming distance (HD) between the correct and the corrupted output values as a metric for the output corruptibility. If the HD is 0.5, the effort spent to identify the incorrect bits is maximum.

We run functional simulations on all the encrypted circuits with the correct key sequences (case 1) and without the correct sequences (case 2), by applying 1000 random input vectors. We then compare the circuit output with the golden output from the original netlist and calculate the HD between the two. Moreover, we demonstrate the additional robustness of SANSCrypt by simulating a scenario (case 3) in which the attacker assumes that the encryption is based on a single-authentication protocol and provides only the first correct key sequence upon reset. Fig. 9a-d show the average HD in these three cases.
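The corruption metric used here is easy to state precisely; below is a small Python sketch of the normalized HD computation (names ours), where 0.5 means that, on average, half of the output bits disagree with the golden output:

```python
# Average normalized Hamming distance between observed and golden outputs.
def avg_hamming_distance(pairs, width):
    """pairs: list of (observed, golden) output words, each `width` bits wide."""
    total = sum(bin(obs ^ gold).count("1") for obs, gold in pairs)
    return total / (len(pairs) * width)

# Toy check with 4-bit outputs: identical words give 0.0,
# complemented words give 1.0, and one flipped bit gives 0.25.
print(avg_hamming_distance([(0b1010, 0b1010)], width=4))  # 0.0
print(avg_hamming_distance([(0b1010, 0b0101)], width=4))  # 1.0
print(avg_hamming_distance([(0b1010, 0b1000)], width=4))  # 0.25
```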
For all the circuits, the average HD is zero only in case 1, when all the correct key sequences are applied at the right clock cycles. Otherwise, in case 2 (orange) and case 3 (green), we observe a significant increase in the average HD. The average HD in case 3 is always smaller than that of case 2 because, in case 3, the correct functionality is recovered for a short period of time, after which the circuit jumps back to the encrypted mode. The longer the overall runtime, the smaller the impact of the transparency window in which the circuit exhibits the correct functionality.

We then apply the sequential SAT-based attack of Section IV to circuit s1238 with a 5-bit LFSR and 20% node coverage, under a stronger attack model, in which the attacker knows when to apply the correct key sequences. Table IV shows the runtime to find the first set of 7 key sequences. The runtime remains exponential in the number of key sequences, which makes sequential SAT-based attacks impractical for large designs.

Finally, Table II reports the synthesized area, power, and delay overhead due to the implementation of our technique. In more than 70% of the circuits the delay overhead is less than 1%, and it exceeds the required clock cycle by at most 5.8%. Except for s27 and s298, characterized by a small gate count, all the other circuits show average area and power overheads of 141.1% and 160.8%, respectively, which is expected due to the additional number of registers required in ENC-FSM to guarantee that the correct state is entered upon re-authentication. However, because critical modules in large SoCs may only account for a small portion of the area, this overhead becomes affordable under partial obfuscation. For example, we encrypted a portion of the state registers in s38584, the largest ISCAS'89 benchmark, using SANSCrypt. We then randomly inserted additional XOR gates to achieve the same HD as in the case of full encryption.
Table V reports the overhead results after synthesis, when the ratio between the encrypted state registers and the total number of state registers decreases from 100% to 1%. Encrypting 10% of the registers will only cost 33.4% of the area while incurring negative power overhead and 4.2% delay overhead.

VI. CONCLUSION

We proposed SANSCrypt, a robust sequential logic encryption technique relying on a sporadic authentication protocol, in which re-authentications are carried out at pseudo-randomly selected time slots to significantly increase the attack effort. Future work includes optimizing the implementation to further reduce the overhead and hide any structural traces that may expose the correct key sequence. Further, we plan to investigate key manager architectures to guarantee reliable timing and operation in real-time applications.

Figure captions:
Fig. 2. State transition diagram of SANSCrypt.
Fig. 3. Schematic view of SANSCrypt.
Fig. 4. Flowchart of BJ-FSM.
Fig. 5. enc out controls the original circuit via XOR gates.
Fig. 6. An unrolled encrypted circuit which requires n clock cycles to find the key sequence.
Fig. 7. Circuit mode switching for an authenticated user.
Fig. 8. Average cycle delay as a function of PRNG bit length when the key sequence cycle length ta is 8, 16, 64, and 128.
Fig. 9. The average HD for different node coverage: (a) 5%, (b) 10%, (c) 15%, and (d) 20%.
TABLE I
TRUTH TABLE FOR A 3-BIT enc out ARRAY

State       E0  E1  E2  E3  E4  Auth
enc out[0]   0   1   1   1   1   0
enc out[1]   1   0   1   1   0   0
enc out[2]   1   1   1   0   0   0

TABLE II
SYNTHESIS RESULT OF AREA, POWER, DELAY

Circuit          s27                          s298                     s1238                    s9234
Node Coverage    5%     10%    15%    20%     5%    10%   15%   20%    5%    10%   15%   20%    5%    10%   15%   20%
Area [%]      1418.5 1418.5 1403.2 1403.2  413.0 427.3 425.2 453.8  144.8 165.7 176.0 189.2  114.6 131.7 144.5 160.1
Power [%]     1627.7 1627.7 1627.5 1627.5  385.7 390.6 389.9 402.8  217.8 232.1 235.0 249.8  179.8 197.5 188.0 190.6
Delay [%]        0.0    0.0    1.4    1.4    0.0   0.0   0.0   0.5    0.0   0.0   0.0   5.8    0.0   0.0   0.9   3.6

Circuit          s15850                   s35932                   s38584                   Average (s27 and s298 excluded)
Node Coverage    5%    10%   15%   20%    5%    10%   15%   20%    5%    10%   15%   20%    5%    10%   15%   20%
Area [%]        92.9 112.1 120.1 133.9  116.3 129.5 139.4 151.6  133.5 140.9 158.7 165.6  120.4 136.0 147.8 160.1
Power [%]      127.4 142.3 153.2 163.0   98.4 101.9 101.2 103.0  123.9 128.8 142.0 140.3  149.5 160.5 163.9 169.4
Delay [%]       -0.3   0.0   0.1   0.6   -0.4   0.0   4.3   5.3    0.6   2.0   0.4   4.9    0.0   0.4   1.1   4.0

TABLE III
OVERVIEW OF THE SELECTED BENCHMARK CIRCUITS

Circuit  s27  s298  s1238  s9234  s15850  s35932  s38584
Input      4     3     14     36      77      35      38
Output     1     6     14     39     150     320     304
DFF        3    14     18    211     534    1728    1426
Gate      10   119    508   5597    9772   16065   19253

TABLE IV
SAT-BASED ATTACK RUNTIME FOR FINDING THE FIRST 7 KEY SEQUENCES

Key Seq. Index  1 (HARPOON)    2    3     4     5     6      7
Runtime [s]               4  123  229  1941  1301  2202  25571

TABLE V
ADP OVERHEAD RESULTS FOR PARTIAL ENCRYPTION

Encrypted registers/Total registers  100%   50%   25%   10%    5%   2.5%    1%
Area [%]                            133.5  71.6  49.1  33.4  27.8   23.5  22.4
Power [%]                           123.9  40.2   9.6 -12.8 -20.5  -22.1 -25.0
Delay [%]                             0.6   1.8   2.1   4.2   5.4    3.9   4.6

ACKNOWLEDGMENT

This work was partially sponsored by the Air Force Research Laboratory (AFRL) and the Defense Advanced Research Projects Agency (DARPA) under agreement number FA8560-18-1-7817.

REFERENCES

[1] R. Karri, J. Rajendran, K. Rosenfeld, and M. Tehranipoor, "Trustworthy hardware: Identifying and classifying hardware trojans," Computer, vol. 43, no. 10, pp. 39-46, 2010.
[2] M. Tehranipoor and F. Koushanfar, "A survey of hardware trojan taxonomy and detection," IEEE Design & Test of Computers, vol. 27, no. 1, pp. 10-25, 2010.
[3] J. Rajendran, H. Zhang, C. Zhang, G. S. Rose, Y. Pino, O. Sinanoglu, and R. Karri, "Fault analysis-based logic encryption," IEEE Trans. Computers, vol. 64, no. 2, pp. 410-424, 2013.
[4] M. Yasin, A. Sengupta, M. T. Nabeel, M. Ashraf, J. J. Rajendran, and O. Sinanoglu, "Provably-secure logic locking: From theory to practice," in Proc. SIGSAC Conf. Computer and Communications Security, pp. 1601-1618, 2017.
[5] M. Yasin, B. Mazumdar, J. J. Rajendran, and O. Sinanoglu, "SARLock: SAT attack resistant logic locking," in IEEE Int. Symp. Hardware Oriented Security and Trust (HOST), pp. 236-241, 2016.
[6] R. S. Chakraborty and S. Bhunia, "HARPOON: an obfuscation-based SoC design methodology for hardware protection," IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493-1502, 2009.
[7] M. Yasin, B. Mazumdar, O. Sinanoglu, and J. Rajendran, "CamoPerturb: Secure IC camouflaging for minterm protection," in 2016 IEEE/ACM Int. Conf. Computer-Aided Design (ICCAD), pp. 1-8, 2016.
[8] E. Charbon, "Hierarchical watermarking in IC design," in IEEE Proc. Custom Integrated Circuits Conf., pp. 295-298, 1998.
[9] K. Xiao, D. Forte, and M. M. Tehranipoor, "Efficient and secure split manufacturing via obfuscated built-in self-authentication," in IEEE Int. Symp. Hardware Oriented Security and Trust (HOST), pp. 14-19, 2015.
[10] P. Subramanyan, S. Ray, and S. Malik, "Evaluating the security of logic encryption algorithms," in IEEE Int. Symp. Hardware Oriented Security and Trust (HOST), pp. 137-143, 2015.
[11] P. Chakraborty, J. Cruz, and S. Bhunia, "SURF: Joint structural functional attack on logic locking," in IEEE Int. Symp. Hardware Oriented Security and Trust (HOST), pp. 181-190, 2019.
[12] Y. Shen, Y. Li, S. Kong, A. Rezaei, and H. Zhou, "SigAttack: New high-level SAT-based attack on logic encryptions," in Design, Automation and Test in Europe Conference and Exhibition (DATE), pp. 940-943, 2019.
[13] V. V. Menon, G. Kolhe, A. Schmidt, J. Monson, M. French, Y. Hu, P. A. Beerel, and P. Nuzzo, "System-level framework for logic obfuscation with quantified metrics for evaluation," in Secure Development Conf. (SecDev), pp. 89-100, 2019.
[14] Y. Hu, V. V. Menon, A. Schmidt, J. Monson, M. French, and P. Nuzzo, "Security-driven metrics and models for efficient evaluation of logic encryption schemes," in ACM-IEEE Int. Conf. Formal Methods and Models for System Design (MEMOCODE), pp. 1-5, 2019.
[15] G. Sengar, D. Mukhopadhyay, and D. R. Chowdhury, "Secured flipped scan-chain model for crypto-architecture," IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 26, no. 11, pp. 2080-2084, 2007.
[16] S. Paul, R. S. Chakraborty, and S. Bhunia, "Vim-scan: A low overhead scan design approach for protection of secret key in scan-based secure chips," in IEEE VLSI Test Symp. (VTS), pp. 455-460, 2007.
[17] X. Wang, D. Zhang, M. He, D. Su, and M. Tehranipoor, "Secure scan and test using obfuscation throughout supply chain," IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 9, pp. 1867-1880, 2017.
[18] A. R. Desai, M. S. Hsiao, C. Wang, L. Nazhandali, and S. Hall, "Interlocking obfuscation for anti-tamper hardware," in Proc. Cyber Security and Information Intelligence Research Workshop, pp. 1-4, 2013.
[19] Y. Kasarabada, S. R. T. Raman, and R. Vemuri, "Deep state encryption for sequential logic circuits," in IEEE Computer Society Annual Symp. VLSI (ISVLSI), pp. 338-343, 2019.
[20] K. Shamsi, M. Li, D. Z. Pan, and Y. Jin, "KC2: Key-condition crunching for fast sequential circuit deobfuscation," in Design, Automation and Test in Europe Conference and Exhibition (DATE), pp. 534-539, 2019.
[21] D. Duvalsaint, Z. Liu, A. Ravikumar, and R. Blanton, "Characterization of locked sequential circuits via ATPG," in IEEE Int. Test Conf. in Asia (ITC-Asia), pp. 97-102, 2019.
[22] T. Meade, Z. Zhao, S. Zhang, D. Pan, and Y. Jin, "Revisit sequential logic obfuscation: Attacks and defenses," in IEEE Int. Symp. Circuits and Systems (ISCAS), pp. 1-4, 2017.
[23] A. Bhargav-Spantzel, A. C. Squicciarini, S. Modi, M. Young, E. Bertino, and S. J. Elliott, "Privacy preserving multi-factor authentication with biometrics," Journal of Computer Security, vol. 15, no. 5, pp. 529-560, 2007.
[24] F. Brglez, D. Bryan, and K. Kozminski, "Combinational profiles of sequential benchmark circuits," in IEEE Int. Symp. Circuits and Systems (ISCAS), pp. 1929-1934, 1989.
[25] J. Dofe and Q. Yu, "Novel dynamic state-deflection method for gate-level design obfuscation," IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 37, pp. 273-285, Feb. 2018.
[26] Y. Kasarabada, S. Chen, and R. Vemuri, "On SAT-based attacks on encrypted sequential logic circuits," in Int. Symp. Quality Electronic Design (ISQED), pp. 204-211, 2019.
[27] Silvaco, "45nm open cell library," 2019.
[28] M. S. Rahman, A. Nahiyan, S. Amir, F. Rahman, F. Farahmandi, D. Forte, and M. Tehranipoor, "Dynamically obfuscated scan chain to resist oracle-guided attacks on logic locked design," IACR Cryptol. ePrint Arch., vol. 2019, p. 946, 2019.
[]
[ "Temperature check: theory and practice for training models with softmax-cross-entropy losses", "Temperature check: theory and practice for training models with softmax-cross-entropy losses" ]
[ "Atish Agarwala ", "Google Research ", "Jeffrey Pennington ", "Google Research ", "Yann Dauphin ", "Google Research ", "Sam Schoenholz ", "Google Research " ]
[]
[]
The softmax function combined with a cross-entropy loss is a principled approach to modeling probability distributions that has become ubiquitous in deep learning. The softmax function is defined by a lone hyperparameter, the temperature, that is commonly set to one or regarded as a way to tune model confidence after training; however, less is known about how the temperature impacts training dynamics or generalization performance. In this work we develop a theory of early learning for models trained with softmax-cross-entropy loss and show that the learning dynamics depend crucially on the inverse-temperature β as well as the magnitude of the logits at initialization, ||βz|| 2 . We follow up these analytic results with a large-scale empirical study of a variety of model architectures trained on CIFAR10, ImageNet, and IMDB sentiment analysis. We find that generalization performance depends strongly on the temperature, but only weakly on the initial logit magnitude. We provide evidence that the dependence of generalization on β is not due to changes in model confidence, but is a dynamical phenomenon. It follows that the addition of β as a tunable hyperparameter is key to maximizing model performance. Although we find the optimal β to be sensitive to the architecture, our results suggest that tuning β over the range 10 −2 to 10 1 improves performance over all architectures studied. We find that smaller β may lead to better peak performance at the cost of learning stability.Preprint. Under review.
null
[ "https://arxiv.org/pdf/2010.07344v1.pdf" ]
222,380,046
2010.07344
3c45b34a0f90f77593e4ef401a003be1479715e5
Temperature check: theory and practice for training models with softmax-cross-entropy losses

Atish Agarwala, Jeffrey Pennington, Yann Dauphin, Sam Schoenholz (Google Research)

The softmax function combined with a cross-entropy loss is a principled approach to modeling probability distributions that has become ubiquitous in deep learning. The softmax function is defined by a lone hyperparameter, the temperature, that is commonly set to one or regarded as a way to tune model confidence after training; however, less is known about how the temperature impacts training dynamics or generalization performance. In this work we develop a theory of early learning for models trained with softmax-cross-entropy loss and show that the learning dynamics depend crucially on the inverse-temperature β as well as the magnitude of the logits at initialization, ||βz||_2. We follow up these analytic results with a large-scale empirical study of a variety of model architectures trained on CIFAR10, ImageNet, and IMDB sentiment analysis. We find that generalization performance depends strongly on the temperature, but only weakly on the initial logit magnitude. We provide evidence that the dependence of generalization on β is not due to changes in model confidence, but is a dynamical phenomenon. It follows that the addition of β as a tunable hyperparameter is key to maximizing model performance. Although we find the optimal β to be sensitive to the architecture, our results suggest that tuning β over the range 10^-2 to 10^1 improves performance over all architectures studied. We find that smaller β may lead to better peak performance at the cost of learning stability.

Preprint. Under review.

1 Introduction

Deep learning has led to breakthroughs across a slew of classification tasks [1,2,3].
Crucial components of this success have been the use of the softmax function to model predicted class-probabilities combined with the cross-entropy loss function as a measure of distance between the predicted distribution and the label [4,5]. Significant work has gone into improving the generalization performance of softmax-cross-entropy learning. A particularly successful approach has been to reduce overfitting by limiting model confidence; this has been done by regularizing outputs using confidence regularization [6] or by augmenting data using label smoothing [7,8]. Another way to manipulate model confidence is to tune the temperature of the softmax function, which is otherwise commonly set to one. Adjusting the softmax temperature during training has been shown to be important in metric learning [9,10] and when performing distillation [11], as well as for post-training calibration of prediction probabilities [12,13]. The interplay between temperature, learning, and generalization is complex and not well-understood in the general case. Although significant recent theoretical progress has been made in understanding generalization and learning in wide neural networks approximated as linear models, analysis of linearized learning dynamics has largely focused on the case of squared error losses [14,15,16,17,18]. Infinitely-wide networks trained with softmax-cross-entropy loss have been shown to converge to max-margin classifiers in a particular function space norm [19], but the timescales of convergence are not known. Additionally, many well-performing models operate best away from the linearized regime [17,20]. This means that understanding the deviations of models from their linearization around initialization is important for understanding generalization [16,21].
In general this problem is analytically intractable; to make progress we pursue a strategy that combines analytic insights at short times with a comprehensive set of experiments that capture the entirety of training. At short times, models can be understood in terms of a linearization about their initial parameters along with nonlinear corrections. In the linear regime we find that networks trained with different inverse-temperatures, β = 1/T , behave identically provided the learning rate is scaled as η = ηβ 2 . Here, networks begin to learn over a timescale τ z ∼ Z 0 2 /η where Z 0 are the initial logits of the network after being multiplied by β. This implies that we expect learning to begin faster for networks with smaller logits. The learning dynamics begin to become nonlinear over another, independent, timescale τ nl ∼ β/η, suggesting more nonlinear learning for small β. From previous results we expect that neural networks will perform best in this regime where they quickly exit the linear regime [21,22,23]. We combine these analytic results with extensive experiments on competitive neural networks across a range of architectures and domains including: Wide Residual networks [3] on CIFAR10 [24], ResNet-50 [25] on ImageNet [26], and GRUs [27] on the IMDB sentiment analysis task [28]. In the case of residual networks, we consider architectures with and without batch normalization, which can appreciably change the learning dynamics [29]. For all models studied, we find that generalization performance is poor at Z 0 2 1 but otherwise largely independent of Z 0 2 . Moreover, learning becomes slower and less stable at very small β; indeed, the optimal learning rate scales like η * ∼ 1/β and the resulting early learning timescale can be written as τ * z ∼ Z 0 2 /β. For all models studied, we observe strong performance for β ∈ [10 −2 , 10 1 ] although the specific optimal β is architecture dependent. Emphatically, the optimal β is often far from 1. 
For models without batch normalization, smaller β can give stronger results on some training runs, with others failing to train due to instability. Overall, these results suggest that model performance can often be improved by tuning β over the range of [10^-2, 10^1].

2 Theory

We begin with a precise description of the problem setting before discussing a theory of learning at short times.

2.1 Basic model and notation

We consider a classification task with K classes. For an N dimensional input x, let z(x, θ) be the pre-softmax output of a classification model parameterized by θ ∈ R^P, such that the classifier predicts the class i corresponding to the largest output value z_i. We will mainly consider θ trained by SGD on a training set (X, Y) of M input-label pairs. We focus on models trained with cross-entropy loss with a non-trivial inverse temperature β. The softmax-cross-entropy loss can be written as

L(θ, X, Y) = Σ_{i=1}^K Y_i · ln(σ(βz_i(X, θ))) = Σ_{i=1}^K Y_i · ln(σ(Z_i(X, θ)))    (1)

where we define the rescaled logits Z = βz and σ(Z)_i = e^{Z_i} / Σ_j e^{Z_j} is the softmax function. Here Z(X, θ) is the M × K dimensional matrix of rescaled logits on the training set. As we will see later, the statistics of individual σ(Z)_i will have a strong influence on the learning dynamics. While the statistics of σ(Z)_i are intractable for intermediate magnitudes ||Z||_2, they can be understood in the limits of large and small ||Z||_2. For a fixed model z(x, θ), β controls the certainty of the predicted probabilities. Values of β such that β ≪ 1/||z||_2 will give small values ||Z||_2 ≪ 1, and the outputs of the softmax will be close to 1/K independent of i (the maximum-entropy distribution on K classes). Larger values of β such that β ≫ 1/||z||_2 will lead to large values ||Z||_2 ≫ 1; the resulting distribution has probability close to 1 on one label, and (exponentially) close to 0 on the others.
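These two limits are easy to see numerically. The snippet below is a toy illustration (the specific logit values are our own choice, not from the paper): it evaluates σ(βz) for one fixed logit vector at a very small and a very large inverse temperature.

```python
import numpy as np

def softmax(Z):
    # Subtract the max for numerical stability; Z are the rescaled logits βz.
    e = np.exp(Z - np.max(Z))
    return e / e.sum()

z = np.array([1.3, -0.2, 0.4, 0.1])  # fixed model outputs, K = 4 classes

# Small β: ||βz||_2 << 1, so the softmax approaches the maximum-entropy
# distribution with probability 1/K on every class.
p_small = softmax(1e-3 * z)

# Large β: ||βz||_2 >> 1, so essentially all probability mass concentrates
# on argmax_i z_i (here class 0).
p_large = softmax(1e3 * z)
```

With β ≪ 1/||z||_2 every class probability is close to 1/K = 0.25, while β ≫ 1/||z||_2 puts nearly all mass on the largest logit.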
The continuous time learning dynamics (exact in the limit of small learning rate) are given by:

θ̇ = ηβ Σ_{i=1}^K (∂z_i(X, θ(t))/∂θ)^T (Y_i − σ(Z_i(X, θ(t))))    (2)

for learning rate η. We will drop the explicit dependence of Z_i on θ from here onward, and we will denote time dependence as Z_i(X, t) explicitly where needed. In function space, the dynamics of the model outputs on an input x are given by

dz_i(x)/dt = ηβ Σ_{j=1}^K (Θ̂_θ)_ij(x, X)(Y_j − σ(Z_j(X)))    (3)

where we define the M × M × K × K dimensional tensor Θ̂_θ, the empirical NTK, as

(Θ̂_θ)_ij(x, X) ≡ (∂z_i(x)/∂θ)(∂z_j(X)/∂θ)^T    (4)

for class indices i and j, which is block-diagonal in the infinite-width limit. From Equation 3, we see that the early-time dynamics in function space depend on β, the initial softmax input Z(X, 0) on the training set, and the initial Θ̂_θ. Changing these observables across a model family will lead to different learning trajectories early in learning. Since significant work has already studied the effects of the NTK, here we focus on the effects of changing β and ||Z_0||_F ≡ ||Z(X, 0)||_F (the norm of the M × K dimensional matrix of training logits), independent of Θ̂_θ.

2.2 Linearized dynamics

For small changes in θ, the tangent kernel is approximately constant throughout learning [14], and we drop the explicit θ dependence in this subsection. The linearized dynamics of z(x, t) only depend on the initial value of Θ̂ and the β-scaled logit values Z(X, t). This suggests that there is a universal timescale across β and η which can be used to compare linearized trajectories with different parameter values. Indeed, if we define an effective learning rate η̃ ≡ ηβ², we have

dZ_i(x)/dt = η̃ Σ_{j=1}^K (Θ)_ij(x, X)(Y_j − σ(Z_j(X)))    (5)

which removes explicit β dependence of the dynamics. We note that a similar rescaling exists for the continuous time versions of other optimizers like momentum (Appendix B).
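For a linear model z = Wx the kernel is exactly constant, so this rescaling holds exactly even for discrete gradient steps. The toy run below (an illustration with arbitrary sizes and data of our own choosing, not from the paper's experiments) checks that two values of β with the same η̃ = ηβ² and the same initial rescaled logits produce identical trajectories Z(t):

```python
import numpy as np

def train_rescaled_logits(beta, eta_tilde, steps=50):
    """Full-batch gradient steps (the discrete-time version of Eq. (2)) for a
    linear model z = W x, returning the trajectory of the rescaled logits
    Z = beta * z on the training set."""
    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 3))            # M = 8 inputs, N = 3 features
    Y = np.eye(4)[rng.integers(0, 4, 8)]   # one-hot labels, K = 4
    W = rng.normal(size=(4, 3)) / beta     # fixes Z(0) across different beta
    eta = eta_tilde / beta**2              # same effective rate eta_tilde
    traj = []
    for _ in range(steps):
        Z = beta * X @ W.T                 # rescaled logits, shape (M, K)
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)  # row-wise softmax
        W += eta * beta * (Y - P).T @ X    # parameter update of Eq. (2)
        traj.append(beta * X @ W.T)
    return np.array(traj)

Z_a = train_rescaled_logits(beta=0.1, eta_tilde=0.05)
Z_b = train_rescaled_logits(beta=10.0, eta_tilde=0.05)
```

The 1/β factor in the initialization keeps Z(0) fixed across the two runs, isolating the effect of the η̃ rescaling: the two trajectories agree step for step.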
Figure 1: For fixed initial training set logits Z_0, plotting learning curves against η̃t = β²ηt causes the learning curves to collapse to the learning curve of the linearized model at early times (right), in contrast to un-scaled curves (left). Models with large β follow linearized dynamics the longest.

The effective learning rate η̃ is useful for understanding the nonlinear dynamics, as plotting learning curves versus η̃t causes early-time collapse for fixed Z_0 across β and η (Figure 1). We see that there is a strong, monotonic, dependence of the time at which the nonlinear model begins to deviate from its linearization on β. We will return to and explain this phenomenon in Section 2.4. Unless otherwise noted, we will analyze all timescales in units of η̃ instead of η, as it will allow for the appropriate early-time comparisons between models with different β.

2.3 Early learning timescale

We now define and compute the early learning timescale, τ_z, that measures the time it takes for the logits to change significantly from their initial value. Specifically, we define τ_z such that for t ≪ τ_z we expect ||Z(x, t) − Z(x, 0)||_F ≪ ||Z(x, 0)||_F and for t ≳ τ_z, ||Z(x, t) − Z(x, 0)||_F ∼ ||Z(x, 0)||_F (or larger). This is synonymous with the timescale over which the model begins to learn. As we will show below, τ_z ∝ ||Z_0||_F/η̃. Therefore in units of η̃, τ_z only depends on ||Z_0||_F and not β. To see this, note that at very short times it follows from Equation 5 that

Z_i(x, t) − Z_i(x, 0) ≈ η̃ Σ_{j=1}^K (Θ)_ij(x, X)(Y_j − σ(Z_j(X))) t + O(t²)    (6)

It follows that we can define a timescale over which the logits (on the training set) change appreciably from their initial value as

τ_z ≡ (1/η̃) ||Z_0||_F / ||Θ(X, X)(Y − σ(Z_0))||_F    (7)

where the norms are once again taken across all classes as well as training points. This definition has the desired properties for t ≪ τ_z and t ≳ τ_z. In units of η̃, τ_z depends only on ||Z_0||_F, in two ways.
The first is a linear scaling in ||Z_0||_F; the second comes from the contribution from the gradient ||Θ(X, X)(Y − σ(Z(X, 0)))||_F. As previously discussed, since σ(Z_0) saturates at small and large values of ||Z_0||_F, it follows that the gradient term will also saturate for large and small ||Z_0||_F, and the ratio of saturating values is some O(1) constant independent of ||Z_0||_F and β.

Figure 2: τ_z depends linearly on ||Z_0||_F, up to an O(1) coefficient which saturates at large and small ||Z_0||_F (left, main). Accuracy increases more quickly for small initial ||Z_0||_F, though late time dynamics are similar (center). Rescaling time to t/τ_z causes early accuracy curves to collapse (right).

The quantitative and conceptual nature of τ_z can both be confirmed numerically. When plotted over a wide range of ||Z_0||_F and β, the ratio τ_z/||Z_0||_F (in rescaled time units) undergoes a saturating, O(1) variation from small to large ||Z_0||_F (Figure 2, left). The quantitative dependence of the transition on the NTK is confirmed in Appendix C. Additionally, for fixed β and varying ||Z_0||_F, rescaling time by 1/τ_z causes accuracy curves to collapse at early times (Figure 2, middle), even if they are very different at early times without the rescaling (right). We note here that the late time accuracy curves seem similar across ||Z_0||_F without rescaling, a point which we will return to in Section 3.2.

2.4 Nonlinear timescale

While linearized dynamics are useful to understand some features of learning, the best performing networks often reside in the nonlinear regime [17]. Here we define the nonlinear timescale, τ_nl, corresponding to the time over which the network deviates appreciably from the linearized equations. We will show that τ_nl ∝ β/η̃. Therefore, in terms of β and ||Z_0||_F, networks with small β will access the nonlinear regime early in learning, while networks with large β will be effectively linearized throughout training. We note that a similar point was raised in Chizat et al.
[21], primarily in the context of MSE loss. We define τ_nl to be the timescale over which the change in Θ̂_θ (which contributes to the second order term in Equation 6) can no longer be neglected. Examining the second time derivative of Z, we have

d²Z_i/dt² = η̃ Σ_{j=1}^K [ −(Θ̂_θ)_ij(x, X) (d/dt)σ(Z_j(X)) + (d/dt)(Θ̂_θ)_ij(x, X) (Y_j − σ(Z_j(X))) ]    (8)

The first term is the second derivative under a fixed kernel (the linearized dynamics), while the second term is due to the change in the kernel (neglected in the linearized limit). A direct calculation shows that the second term, which we denote Z̈_nl, can be written as

(Z̈_nl)_i = β⁻¹η̃² Σ_{j=1}^K Σ_{k=1}^K (Y_k(X) − σ(Z_k(X)))^T (∂z_k(X)/∂θ) · (∂/∂θ)[Θ̂_θ]_ij (Y_j(X) − σ(Z_j(X)))    (9)

This gives us a nonlinear timescale τ_nl defined, at initialization, by τ_nl ≡ ||Ż(X, 0)||_F / ||Z̈_nl(X, 0)||_F. We can interpret τ_nl as the time it takes for changes in the kernel to contribute to learning. Though computing ||Z̈_nl(X, 0)||_F exactly is analytically intractable, its basic scaling in terms of β and ||Z_0||_F (and therefore, that of τ_nl) is computable. We first note the explicit β⁻¹η̃² dependence. The remaining terms are independent of β and vary by at most O(1) with ||Z_0||_F; indeed as described above, ||Y(X) − σ(Z(X, 0))||_F saturates for large and small ||Z_0||_F. Moreover, the derivative ∂z(X, 0)/∂θ is the square root of the NTK and, at initialization, it is independent of ||Z_0||_F. Together with our analysis of τ_z we have that, up to some O(1) dependence on ||Z_0||_F, τ_nl ∝ β/η̃. Therefore, the degree of nonlinearity early in learning is controlled via β alone.

Figure 3: τ_nl/β, in units of η̃ = β²η, has an O(1) dependence on ||Z_0||_F which is consistent across varying β for fixed ||Z_0||_F.

Once again we can confirm the quantitative and conceptual understanding of τ_nl numerically.
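A cheap toy sanity check of this scaling (our own illustration, separate from the experiments reported below; all sizes and constants are arbitrary) uses the fact that at fixed η̃ the parameter update per step scales as ηβ ∼ η̃/β, so smaller β moves the parameters, and hence the kernel, farther over the same rescaled time. Here the parameter displacement from initialization serves as a crude proxy for the change in the empirical NTK:

```python
import numpy as np

def param_displacement(beta, eta_tilde, steps=30):
    """Train a tiny two-layer net z = V tanh(W x) with softmax-cross-entropy
    at inverse temperature beta and effective rate eta_tilde = eta*beta**2,
    and return how far the parameters move from initialization (a cheap
    proxy for the change in the empirical NTK)."""
    rng = np.random.default_rng(3)
    X = rng.normal(size=(16, 4))                 # M = 16 inputs, N = 4
    Y = np.eye(3)[rng.integers(0, 3, 16)]        # one-hot labels, K = 3
    W = rng.normal(size=(8, 4)) / np.sqrt(4)     # hidden width 8
    V = rng.normal(size=(3, 8)) / np.sqrt(8)
    W0, V0 = W.copy(), V.copy()
    eta = eta_tilde / beta**2
    for _ in range(steps):
        H = np.tanh(X @ W.T)                     # hidden activations
        Z = beta * H @ V.T                       # rescaled logits
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)        # row-wise softmax
        E = Y - P                                # error signal
        dV = beta * E.T @ H                      # backprop through Eq. (1)
        dW = beta * ((E @ V) * (1 - H**2)).T @ X
        V += eta * dV
        W += eta * dW
    return np.sqrt(np.sum((W - W0)**2) + np.sum((V - V0)**2))

d_small = param_displacement(beta=0.3, eta_tilde=0.01)
d_large = param_displacement(beta=3.0, eta_tilde=0.01)
```

At the same η̃t the displacement comes out substantially larger for the smaller β, consistent with the 1/β scaling of the departure from linearized dynamics.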
Qualitatively, we see that for fixed ||Z_0||_F, models with smaller β deviate sooner from the linearized dynamics when learning curves are plotted against η̃t (Figure 1). Quantitatively, we see that τ_nl/β (in units of η̃) has an O(1) dependence on ||Z_0||_F only (Figure 3).

3 Experimental Results

3.1 Optimal Learning Rate

We begin our empirical investigation by training wide resnets [8] without batch normalization on CIFAR10. In order to understand the effects of the different timescales on learning, we control β and ||Z_0||_F independently by using a correlated initialization strategy outlined in Appendix D.1. Before considering model performance, it is first useful to understand the scaling of the optimal learning rate with β. To do this, we initialize networks with different β and conduct learning rate sweeps for each β. The optimal learning rate η* has a clear 1/β dependence (Figure 4). Plugging this optimal learning rate into the two timescales identified above gives τ*_z ∼ ||Z_0||_F/β and τ*_nl ∼ O(1). Note that these timescales are now in units of SGD steps. This suggests that the maximum learning rate is set so that nonlinear effects become important at the fastest possible rate without leading to instability. We notice that τ_z will be large at small β and small at large β. Thus, at small β we expect learning to take place slowly and nonlinear effects to become important by the time the function has changed appreciably. At large β, by contrast, our results suggest that the network will have learned a significant amount before the dynamics become appreciably nonlinear.

3.2 Phase plane

In the preceding discussion two quantities emerged that control the behavior of early-time dynamics: the inverse-temperature, β, and the rescaled logits ||Z_0||_F.
In attempting to understand the behavior of real neural networks trained using softmax-cross-entropy loss, it therefore makes sense to try to reason about this behavior by considering neural networks that span the β − Z 0 F phase plane, the space of allowable pairs (β, Z 0 F ). By construction, the phase plane is characterized by the timescales involved in early learning. To summarize, τ z ∼ Z 0 F /η sets the timescale for early learning, with larger values of Z 0 F leading to longer time before significant accuracy gains are made (Section 2.3). Meanwhile, τ nl ∼ β/η controls the timescale for learning dynamics to leave the linearized regime -with small β leading to immediate departures from linearity, while models with large β may stay linearized throughout their learning trajectories (Section 2.4). Figure 5: Properties of early learning dynamics, which affect generalization, can be determined by location in the β-Z 0 F phase plane (a). At optimal learning rate η * , small β and larger Z 0 F leads to slower early learning (b), and larger β increases time before nonlinear dynamics contributes to learning. Large Z 0 F has poorly conditioned linearized dynamics. Generalization for a wide resnet trained on CIFAR10 is highly sensitive to β, and relatively insensitive to Z 0 F outside poor conditioning regime. Final logit variance is relatively insensitive to parameters (c). In Figure 5 (a), we show a schematic of the phase plane. The colormap shows the test performance of a wide residual network [3], without batch normalization, trained on CIFAR10 in different parts of the phase plane. The value of β makes a large difference in generalization, with optimal performance achieved at β ≈ 10 −2 . In general, larger β performed worse than small β as expected. 
Moreover, we observe similar generalization for all sufficiently large β; this is to be expected since models in this regime are close to their linearization throughout training (see Figure 1) and we expect the linearized models to have β-independent performance. Generalization was largely insensitive to ‖Z_0‖_F so long as the network was sufficiently well-conditioned to be trainable. This suggests that long-term learning is insensitive to τ_z. In Figure 5 (b), we plot the accuracy after 20 steps of optimization (with the optimal learning rate). For fixed ‖Z_0‖_F, the training speed was slow for the smallest β and then became faster with increasing β. For fixed β, the training speed was fastest for small ‖Z_0‖_F and slowed as ‖Z_0‖_F increased. Both these phenomena were predicted by our theory and show that both parameters are important in determining the early-time dynamics. However, we note that the relative accuracy across the phase plane at early times did not correlate with the performance at late times. This highlights that differences in generalization are a dynamical phenomenon. Another indication of this fact is that at the end of training, at time t_f, the final training set logit values ‖Z_f‖_F ≡ ‖Z(X, t_f)‖_F tend towards 1 independent of the initial β and ‖Z_0‖_F (Figure 5 (c)). With the exception of the poorly-performing large-‖Z_0‖_F regime, the different models reach similar levels of certainty by the end of training, despite having different generalization performances. Therefore, generalization is not well correlated with the final model certainty (a typical motivation for tuning β).

Architecture Dependence of the Optimal β

Having demonstrated that β controls the generalization performance of neural networks with softmax-cross-entropy loss, we now discuss the question of choosing the optimal β. Here we investigate this question through the lens of a number of different architectures. We find the optimal choice of β to be strongly architecture dependent.
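Concretely, using β as a tunable hyperparameter only requires scaling the logits inside the loss. A minimal sketch (our own illustration; the function name and shapes are not from the paper):

```python
import numpy as np

def softmax_cross_entropy(logits, labels_onehot, beta=1.0):
    """Softmax-cross-entropy with an explicit inverse temperature beta:
    loss = -mean_i sum_k y_ik log softmax(beta * z_i)_k.
    Note the gradient w.r.t. the logits is beta * (softmax(beta*z) - y), so the
    raw gradient scale grows with beta (hence the learning-rate rescalings)."""
    z = beta * logits
    z = z - z.max(axis=-1, keepdims=True)              # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return float(-(labels_onehot * logp).sum(axis=-1).mean())

y = np.array([[1.0, 0.0, 0.0]])
# at zero logits the loss is log K regardless of beta
loss_uniform = softmax_cross_entropy(np.zeros((1, 3)), y, beta=7.0)
# on confidently-correct logits, larger beta sharpens the softmax and lowers the loss
loss_b1 = softmax_cross_entropy(np.array([[5.0, 0.0, 0.0]]), y, beta=1.0)
loss_b2 = softmax_cross_entropy(np.array([[5.0, 0.0, 0.0]]), y, beta=2.0)
```

Because β changes the gradient scale, any sweep over β should be accompanied by a matching learning-rate adjustment, as discussed in Appendix B.5.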
Whether or not the optimal β can be predicted analytically is an open question that we leave for future work. Nonetheless, we show that all architectures considered display optimal β between approximately 10⁻² and 10¹. We observe that by taking the time to tune β it is often the case that performance can be improved over the naive setting of β = 1.

Figure 6 (caption, continued): Even poorly conditioned networks (achieved by increasing weight scale σ_w) recover performance. (b) For β < 10⁻², learning is less stable, as evidenced by low average performance but high maximum performance (over 10 random seeds). (c) We see similar phenomenology on the IMDB sentiment analysis task trained with GRUs, where average-case best performance is near β = 1 but peak performance is at small β.

Wide Resnet on CIFAR10

In Figure 6 (a) we show the accuracy against β for several wide residual networks whose weights are drawn from normal distributions of different variances, σ²_w, trained without batchnorm, as well as a network with σ²_w = 1 trained with batchnorm (averaged over 10 seeds). The best average performance is attained for β < 1, σ_w = 1 without batchnorm, and in particular networks with large σ_w are dramatically improved with β tuning. The network with batchnorm is better at all β, with optimal β ≈ 10. However, we see that the best-performing seed is often at a lower β (Figure 6 (b)), with larger-σ_w networks competitive with σ_w = 1, and even with batchnorm at fixed β (though batchnorm with β = 10 still performs the best). This suggests that small β can improve best-case performance, at the cost of stability. Our results emphasize the importance of tuning β, especially for models that have not otherwise been optimized.

Table 1: Accuracy on Imagenet dataset for ResNet-50. Tuning β significantly improves accuracy.
Resnet50 on ImageNet

Method                                     Accuracy (%)
ResNet-50 [30]                             76.51 ± 0.07
ResNet-50 + Dropout [30]                   76.80 ± 0.04
ResNet-50 + Label Smoothing [30]           77.17 ± 0.05
ResNet-50 + Temperature check (β = 0.3)    77.37 ± 0.02

Motivated by our results on CIFAR10, we experimentally explored the effects of β as a tunable hyperparameter for ResNet-50 trained on Imagenet. We follow the experimental protocol established by [30]. A key difference between this procedure and standard training is that we train for substantially longer: the number of training epochs is increased from 90 to 270. Ghiasi et al. [30] found that this longer training regimen was beneficial when using additional regularization. Table 1 shows that scaling β improves accuracy for ResNet-50 with batchnorm. However, we did not find that using β < 1 was optimal for ResNet-50 without normalization. This further emphasizes the subtle architecture dependence that warrants further study.

GRUs on IMDB Sentiment Analysis

To further explore the architecture dependence of optimal β, we train GRUs (from Maheswaranathan et al. [31]) whose weights are drawn from two different distributions on an IMDB sentiment analysis task that has been widely studied [28]. We plot the results in Figure 6 (c) and observe that the results look qualitatively similar to the results on CIFAR10 without batch normalization. We observe a peak performance near β ∼ 1 averaged over an ensemble of networks, but we observe that smaller β can give better optimal performance at the expense of stability.

Conclusions

Our empirical results show that tuning β can sometimes yield significant improvements to model performance. Perhaps most surprisingly, we observe gains on ImageNet even with the highly-optimized ResNet50 model. Our results on CIFAR10 suggest that the effect of β may be even stronger in networks which are not yet highly optimized, and results on IMDB show that this effect holds beyond the image classification setting.
It is possible that even more gains can be made by more carefully tuning β jointly with other hyperparameters, in particular the learning rate schedule and batch size. One key lesson of our theoretical work is that properties of learning dynamics must be compared using the right units. For example, τ_nl ∝ 1/(βη), which at first glance suggests that models with smaller β will become nonlinear more slowly than their large-β counterparts. However, analyzing τ_nl with respect to the effective learning rate η̃ = β²η yields τ_nl ∝ β/η̃. Thus we see that, in fact, networks with smaller β tend to leave the linearized regime before much learning has occurred, compared to networks with large β, which can remain in the linearized regime throughout training. Our numerical results confirm this intuition developed using the theoretical analysis. As discussed above, our analysis does not predict the optimal β or η. Extending the theoretical results to make predictions about these quantities is an interesting avenue for future work. Another area that warrants further study is the instability in training at small β.

A Linearized learning dynamics

A.1 Fixed points

For the linearized learning dynamics, the trajectory z(x, t) can be written in terms of the trajectories of the training set as

z(x, t) − z(x, 0) = Θ̃(x, X) Θ̃⁺(X, X) (z(X, t) − z(X, 0))   (10)

where ⁺ denotes the pseudo-inverse. Therefore, if one can solve for z(X, t), then in principle properties of generalization are computable. However, in general Equation 3 does not admit an analytic solution even for fixed Θ̃, in contrast to the case of mean squared loss. It need not even have an equilibrium: if the model can achieve perfect training accuracy, the logits will grow indefinitely. However, there is a guaranteed fixed point if the appropriate L2 regularization is added to the training objective.
Given a regularizer 1 2 λ θ δθ 2 on the change in parameters δθ = θ(t) − θ(0), the dynamics in the linearized regime are given bẏ z(x) = βηΘ(x, X )(Y(X ) − σ(βz(X ))) − λ θ δz(x)(11) where the last term comes from the fact that ∂z ∂θ δθ = δz(x) in the linearized limit. We can write down self-consistent equations for equilibria, which are approximately solvable in certain limits. For an arbitrary input x, the equilibrium solution z * (x) is 0 = βΘ(x, X )(Y(X ) − σ(βz * (X ))) − λ θ δz * (x)(12) This can be rewritten in terms of the training set as δz * (x) =Θ(x, X )Θ + (X , X )z * (X )(13) similar to kernel learning. It remains then to solve for z * (X ). We have: δz * (X ) = β λ θΘ (X , X )[Y(X ) − σ(βz * (X ))](14) We immediately note that the solution depends on the initialization. We assume z(x, 0) = 0, so δz = z in order to simplify the analysis. The easiest case to analyze is when βz * (X ) F 1. Then we have: z * (X ) = β λ θΘ (X , X ) Y(X ) − 1 K (1 + βz * (X ))(15) which gives us z * (X ) = β λ θ 1 + β Kλ θΘ (X , X ) −1Θ (X , X )(Y(X ) − 1/K)(16) Therefore the self-consistency condition for this solution is β λ θΘ F 1, which simplifies the solution to z * (X ) = β λ θΘ (X , X )(Y(X ) − 1/K)(17) This is equivalent to the solution after a single step of (full-batch) SGD with appropriate learning rate. We note that unlike linearized dynamics with L 2 loss and a full-rank kernel, there is no guarantee that the solution converges to 0 training error. The other natural limit is βz * (X ) 2 1. We focus on the 2 class case, in order to take advantage of the conserved quantity of learning with cross-entropy loss. We note that the vector on the right hand side of Equation 3 sums to 1 for every training point. Suppose at initialization,Θ θ has no logit-logit interactions, as is the case for most architectures in the infinite width limit with random initialization. More formally, we can writeΘ θ = Id K×K ⊗Θ x whereΘ x is M × M . 
Then, the sum of the logits for any input x is conserved during linearized training, as shown in Equation 18; multiplying the right-hand side through, we get Equation 19, which vanishes.

A.3 Conditioning of dynamics

Understanding the conditioning of the linearized dynamics requires understanding the spectrum of the Hessian matrix H = [Id_z ⊗ Θ̃(X, X)] σ_z(βz*(X)). In the limit of large model size, the first factor is block-diagonal with training-set-by-training-set blocks (no logit-logit interactions), and the second term is block-diagonal with K × K blocks (no datapoint-datapoint interactions). We will use the following lemma to get bounds on the conditioning:

Lemma: Let M = AB be a matrix that is the product of two matrices. The condition number κ(M) ≡ λ_{M,max}/λ_{M,min} has bound

κ(B)/κ(A) ≤ κ(M) ≤ κ(A)κ(B)   (26)

Proof: Consider the vector v that is the eigenvector of B associated with λ_{B,min}. Note that ‖Av‖/‖v‖ ≤ λ_{A,max}. Analogously, for w, the eigenvector associated with λ_{B,max}, ‖Aw‖/‖w‖ ≥ λ_{A,min}. This gives us the two bounds:

λ_{M,min} ≤ λ_{A,max} λ_{B,min},   λ_{M,max} ≥ λ_{A,min} λ_{B,max}   (27)

This means that the condition number κ(M) ≡ λ_{M,max}/λ_{M,min} is bounded by

κ(M) ≥ (λ_{A,min} λ_{B,max})/(λ_{A,max} λ_{B,min}) = κ(B)/κ(A)   (28)

In total, we have the bound of Equation 26, where the upper bound is trivial to prove. In particular, this means that a poorly conditioned σ_z(βz*(X)) will lead to poor conditioning of the linearized dynamics if the NTK Θ̃(X, X) is (relatively) well conditioned. This bound will be important in establishing the poor conditioning of the linearized dynamics for the large logit regime ‖βz‖ ≫ 1.

A.3.1 Small logit conditioning

For ‖βz*(X)‖_F ≪ 1, the Hessian H is

H = (1/K)[1 − (1/K) 1 1ᵀ] ⊗ Θ̃(X, X)   (29)

Since H is the Kronecker product of two matrices, the condition numbers multiply, and we have

κ(H) = κ(Θ̃)   (30)

which is well-conditioned so long as the NTK is. Regardless, the well-conditioned regularization due to λ_θ dominates the approach to equilibrium.
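The bound in Equation 26 can be sanity-checked numerically. This sketch uses singular-value condition numbers (which reduce to the eigenvalue ratio for symmetric positive-definite matrices, the case the lemma's proof is phrased in) and arbitrary random factors of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def cond(M):
    """Condition number via singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return s.max() / s.min()

# two generic invertible factors (the diagonal shift keeps them nonsingular)
A = rng.normal(size=(6, 6)) + 3.0 * np.eye(6)
B = rng.normal(size=(6, 6)) + 3.0 * np.eye(6)

kA, kB, kAB = cond(A), cond(B), cond(A @ B)
assert kAB <= kA * kB * (1 + 1e-9)              # upper bound of Equation 26
assert kAB >= (kB / kA) * (1 - 1e-9)            # lower bound of Equation 26
```

The lower bound is what matters for the Hessian H: even if Id_z ⊗ Θ̃ is well conditioned, a badly conditioned σ_z factor forces κ(H) to be large.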
A.3.2 Large logit conditioning Now consider βz * (X ) F 1. Here we will show that the linearized dynamics is poorly conditioned, and that κ(H) is exponentially large in β. We first try to understand σ z (βz * (x)) for an individual x ∈ X . To 0th order (in an as-of-yetundefined expansion), σ z is zero -at large temperature the softmax returns either 0 or 1, which by Equation 24 gives 0 in all entries. The size of the corrections end up being exponentially dependent on βz * F ; the entries will have broad, log-normal type distributions with magnitudes which scale as exp(−β|z * 1 |). There will be two scaling regimes one with a small number of labels in the sense √ β ln(K), where the largest logit dominates the statistics, and one where the number of labels is large (and the central limit theorem applies to the partition function). In both cases, however, there is still exponential dependence on β; we will focus on the first which is easier to analyze and more realistic (e.g. for 10 6 labels "large" β is only ∼ 15). Let z 1 be the largest of K logits, z 2 the second largest, and so on. Then using Equation 24 we have: (σ z ) i1 = −e −β(z1−zi)(31) for i = 1, (σ z ) ij = δ ij e −β(z1−zi) − e −β(2z1−zi−zj )(32) for i = j and (σ z ) 11 = e −β(z1−z2) The eigenvectors and eigenvalues can be approximately computed as: (v 1 ) 1 = 1 √ 2 , (v 1 ) 2 = − 1 √ 2 , λ 1 = 2e −β(z1−z2) (34) (v 2 ) 1 = 1 √ 2 , (v 2 ) 2 = 1 √ 2 , λ 2 = − 1 2 e −2β(z1−z2)(35) and for i > 2, (v i ) i = 1, (v i ) 1 = e −β(z2−zi) , λ i = e −β(z1−zi)(36) with all non-explicit eigenvector components 0. This expansion is valid provided that β/K 1 (so that e β(z1−zi) e β(z1−zi+1) ). Therefore the spectrum of any individual block σ z (βz(x)) is exponentially small in β. Using the bound in the Lemma, we have: κ(H) ≥ e O(β|z * 1 |) /κ(Θ(X , X ))(37) This is a very loose bound, as it assumes that the largest eigendirections of σ z are aligned with the smallest eigendirections of Id z ⊗Θ, and vice versa. 
It is possible κ(H) is closer in magnitude to the upper bound e β(z2−z K ) κ(Θ(X , X )). Regardless, κ(H) is exponentially large in β -meaning that the conditioning is exponentially poor for large βz * F . B SGD and momentum rescalings B.1 Discrete equations Consider full-batch SGD training. The update equations for the parameters θ are: θ t+1 = θ t − η∇ θ L(38) We will denote g t ≡ ∇ θ L for ease of notation. Training with momentum, the equations of motion are given by: v t+1 = (1 − γ)v t − g t (39) θ t+1 = θ t + ηv t+1(40) where γ ∈ [0, 1]. One key point to consider later will be the relative magnitude ∆ θ of updates to the parameters. For SGD, the magnitude of updates is η||g||. For momentum with slowly-varying gradients the magnitude is η||g||/γ. B.2 Continuous time equations We can write down the continuous time version of the learning dynamics as follows. For SGD, for small learning rates we have: dθ dt = −ηg(41) For the momentum equations we have dv dt = −γv − g (42) dθ dt = ηv(43) From these equations, we can see that in the continuous time limit, there are coordinate transformations which can be used to cause sets of trajectories with different parameters to collapse to a single trajectory. SGD is the simplest, where rescaling time to τ ≡ ηt causes learning curves to be identical for all learning rates. For momentum, instead of a single universal learning curve, there is a one-parameter family of curves controlled by the ratio T mom ≡ η/γ 2 . Consider rescaling time to τ = at and ν = bv, where a and b will be chosen to put the equations in a canonical form. In our new coordinates, we have dν dτ = −(γ/a)ν − (b/a)g (44) dθ dτ = ην/(ab)(45) The canonical form we choose is dν dτ = −λν − g (46) dθ dτ = ν(47) From which we arrive at a = b = √ η, which gives us λ = γ/ √ η. Note that this is not a unique canonical form; for example, if we fix a coefficient of −1 on ν, we end up with dν dτ = −ν − (η/γ 2 )g (48) dθ dτ = ν(49) with a = γ. 
This is a different time rescaling, but still controlled by T mom . Working in the canonical form of Equations 46 and 47, we can analyze the dynamics. One immediate question is the difference between λ 1 and λ 1. We note that the integral equation ν(τ ) = ν(0) + τ 0 e −λ(τ −τ ) g(τ )dτ(50) solves the differential equation for ν. Therefore, for λ 1, ν(t) only depends on the current value g(t) and we have ν(τ ) ≈ g(τ )/λ. Therefore, we have, approximately: dθ dτ ≈ 1 λ g(51) This means that for large λ all the curves will approximately collapse, with timescale given by √ ηλ −1 = γη (dynamics similar to SGD). For λ 1, the momentum is essentially the integrated gradient across all time. If ν(0) = 0, then we have dθ dτ ≈ τ 0 g(τ )dτ(52) In this limit, θ(τ ) is the double integral of the gradient with respect to time. Given the form of the controlling parameter T mom , we can choose to parameterize γ =γ √ η. Under this parameterization, we have T mom =γ 2 . The dynamical equations then become: dν dτ = −γν − g (53) dθ dτ = ν(54) which automatically removes explicit dependence on η. One particular regime of interest is the early-time dynamics, starting from ν(0) = 0. Integrating directly, we have: θ(τ ) = − 1 2 gτ 2 + 1 6γ gτ 3 + . . .(55) This means that τ alone is the correct timescale for early learning, at least until τγ ∼ 1 -which in the original parameters corresponds to t ∼ 1/γ (the time it takes for the momentum to be first "fully integrated"). B.3 Detailed analysis of momentum timescales One important subtlety is that 1 is not the correct value to compare λ to. The real timescale involved is the one over which g changes significantly. We can approximate this in the following way. Suppose that there is some relative change ∆ θ ||θ|| ∼ c of the parameters that leads to an appreciable relative change in g. Then the timescale over which θ changes by that amount is the one we must compare λ to. We can compute that timescale in the following way. 
We assume g fixed for what follows. Therefore, Equation 51 approximately holds. The timescale τ c of the change is then given by: ∆ θ ||θ|| = 1 λ ||g|| ||θ|| τ c ∼ c(56) which gives τ c ∼ cλ||θ||/||g|| (57) In particular, this means that the approximation is good when λτ c 1, which gives γ 2 /η ||g|| ||θ||the former being a function of the dynamical parameters, the latter being a function of the geometry of L with respect to θ. One consequence of this analysis is that if the ||θ|| remains roughly constant, for fixed η and γ, late in learning when the gradients become small the dynamics shifts into the regime where λ is large, and we effectively have SGD. B.4 Connecting discrete and continuous time One use for the form of the continuous time rescalings is to use them to compare learning curves for the actual discrete optimization that is performed with different learning rates. For small learning rates, the curves are expected to collapse, while for larger learning rates the deviations from the continuous expectation can be informative. With momentum, we only have perfect collapse when γ and η are scaled together. However, one typical use case for momentum is to fix the parameter γ, and sweep through different learning rates. With this setup, if g is changing slowly compared to γ (more precisely, γ 2 /η ||g||/||θ||), as may be the case at later training times, the change in parameters from a single step is ∆ θ ∼ (η/γ))||g|| and the rescaling of taking t to ηt (as for SGD) collapses the dynamics. Therefore given a collection of varying η, but fixed γ curves, it is possible to get intermediate and late time dynamics on the same scale. However, at early times, while the momentum is still getting "up to speed" (i.e. in the first 1/γ steps), the appropriate timescale is η −1/2 . Therefore, in order to get learning curves to collapse across different η at early times, we need to rescale γ with η as implied by Equations 46 and 47. 
Namely, one must fixγ and rescale γ =γ √ η. We note that, since γ < 1, this gives us a restriction η <γ −2 for the maximum learning that can be supported by the rescaled momentum parameter. B.5 Momentum equations with softmax-cross-entropy learning For cross-entropy learning with softmax inputs βz, all the scales acquire dependence on β. If we define z ≡ ∂L ∂βz and g z ≡ ∂z ∂θ F , then we have, approximately, ||g|| ≈ β z g z . Consider the goal of obtaining identical early-time learning curves for different values of β. (The curves are only globally consistent across β in the linearized regime.) In order to get learning curves to collapse, we want dL dτ to be independent of β in the rescaled time units. We note that the change in the loss function ∆ L from a single step of SGD goes as ∆ L ∼ ηβ 2 2 z g 2 z(58) This suggests that one way to collapse learning curves is to plot them against the rescaled learning rateηt, whereη = ηβ 2 . While hyperparameter tuning across β, one could use η =η/β 2 , sweeping overη in order to easily obtain comparable learning curves. However, a better goal for a learning rate rescaling is to try and stay within the continuous time limitthat is, to control the change in parameters ∆ θ for a single step to be small across β. We have ∆ θ ∼ ηβ z g z(59) which suggests that maximum allowable learning rates will scale as 1/β. This suggests setting η =ηβ −1 , and rescaling time asηβ in order to best explore the continuous learning dynamics. We can perform a similar analysis for the momentum optimizers. We begin by analyzing the continuous time equations for the dynamics of the loss. Starting with the rescalings from Equations 53 and 54 we have dν dτ = −γν − βg (60) dθ dτ = ν (61) dL dτ = βg · ν(62) where g = ∂L ∂βz ∂z ∂θ . Rescaling τ by β gets us: dν dβτ = −γ β ν − g (63) dθ dβτ = 1 β ν (64) dL dβτ = g · ν(65) This rescaling causes a collapse of the trajectories of the L at early times ifγ/β is constant for varying β. 
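The underlying momentum collapse (fixing γ̃ and setting γ = γ̃√η with rescaled time τ = √η t, as in Equations 46 and 47) can be checked on a toy quadratic loss, where trajectories at different η should agree when compared at equal τ. The loss, step counts, and tolerance below are our own illustrative choices:

```python
import numpy as np

def momentum_path(eta, lam, steps, theta0=1.0):
    """Heavy-ball updates v_{t+1} = (1 - gamma) v_t - g_t, theta_{t+1} = theta_t + eta v_{t+1}
    on the toy quadratic L = theta^2 / 2 (so g = theta), with gamma = lam * sqrt(eta)."""
    gamma = lam * np.sqrt(eta)
    v, theta = 0.0, theta0
    out = []
    for _ in range(steps):
        v = (1.0 - gamma) * v - theta
        theta = theta + eta * v
        out.append(theta)
    return np.array(out)

lam = 0.5
# equal rescaled time tau = sqrt(eta) * t: 2000 steps at eta = 1e-4, 1000 at eta = 4e-4
a = momentum_path(eta=1e-4, lam=lam, steps=2000)
b = momentum_path(eta=4e-4, lam=lam, steps=1000)
# compare at matching tau (every other step of the finer run)
assert np.max(np.abs(a[1::2] - b)) < 0.05
```

If γ were instead held fixed while sweeping η, the two runs would disagree at early times, which is exactly the regime the rescaling is designed to align.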
One scheme to arrive at the final canonical form, across β, is by the following definitions of η, γ, and τ : • η =ηβ −2 • γ ≡ β √ ηγ = √ηγ • τ ≡ β √ ηt = √η t where curves with fixedγ will collapse. The latter two equations are similar to before, except with η replaced withη. The dynamical equations are then: dν dτ = −γν − g (66) dθ dτ = 1 β ν (67) dL dτ = g · ν(68) The change in parameters from a single step (assuming constant g and saturation) is ∆ θ = ||g|| γβ η(69) If we instead want the change in parameters from a single step to be invariant of β so the continuous time approximation holds, while maintaining collapse of trajectories, we first note that ∆ θ ∼ √ η γ β z g z (70) from a single step of the momentum optimizer. To keep ∆ θ invariant of β, we can set: • η =ηβ −1 • γ ≡ β √ ηγ = √ηγ = β 1/2 √ηγ • τ ≡ β √ ηt = β 1/2 √η t Note that the relationship between γ andγ is the same in both schemes when measured with respect to the raw learning rate η. C Softmax-cross-entropy gradient magnitude C.1 Magnitude of gradients in fully-connected networks The value of τ z has nontrivial (but bounded) dependence on Z 0 F via the Θ (Y − σ(Z 0 (X ))) F term in Equation 7. We can confirm the dependence for highly overparameterized models by using the theoreticalΘ. In particular, for wide neural networks, the tangent kernel is block-diagonal in the logits, and easily computable. The numerically computed τ z / Z 0 F correlates well with Θ (Y − σ(Z 0 (X ))) −1 F for wide (2000 hidden units/layer) fully connected networks (Figure 7). The ratio depends on details like the nonlinearities in the network; for example, Relu units tend to have a larger ratio than Erf units (left and middle). The ratio also depends on the properties of the dataset. For example, the ratio increases on CIFAR10 when the training labels are randomly shuffled (right). 
Figure 7: τ_z/‖Z_0‖_F is highly correlated with ‖Θ̃(Y − σ(Z_0))‖_F⁻¹, with Θ̃ computed in the infinite width limit (in units of effective learning rate η̃ = β²η). Ratio between normalized timescales at large and small ‖Z_0‖_F depends on nonlinearity (left and middle), as well as training set statistics (right, CIFAR10 with shuffled labels).

Therefore in general the ratio of τ_z/‖Z_0‖_F at large and small ‖Z_0‖_F depends subtly on the relationship between the NTK and the properties of the data distribution. A full analysis of this relationship is beyond the scope of this work. The exact form of the transition is likely even more sensitive to these properties and is therefore harder to analyze than the ratio alone.

D Experimental details

D.1 Correlated initialization

In order to avoid confounding effects of changing β and ‖Z_0‖_F with changes to Θ̃, we use a correlated initialization strategy (similar to [21]) which fixes Θ̃ while allowing for independent variation of β and ‖Z_0‖_F. Given a network with final hidden layer h(x, θ) and output weights W_O, we define a combined network z_c(x, θ̃) explicitly as

z_c(x, W_{O,1}, W_{O,2}, θ_1, θ_2) = W_{O,1} h(x, θ_1) + W_{O,2} h(x, θ_2)   (71)

where, at initialization, θ_1 = θ_2, and the elements of W_{O,a} have statistics

E[(W_{O,a})_{ij}(W_{O,b})_{kl}] = δ_{ik}δ_{jl} for a = b,  c · δ_{ik}δ_{jl} for a ≠ b   (72)

for correlation coefficient c ∈ [−1, 1], where δ_{ij} is the Kronecker delta, which is 1 if i = j and 0 otherwise. Under this approach, the initial magnitude of the training set logits is given by ‖Z_0‖_F = β√(1 + c)‖z_0‖_F, where ‖z_0‖_F is the initial magnitude of the logits of the base model. By manipulating β and c, we can independently change β and ‖Z_0‖_F with the caveat that ‖Z_0‖_F ≤ √2 β‖z_0‖_F since c ≤ 1. It follows that the small β, large ‖Z_0‖_F region of the phase plane (upper left in Figure 5) is inaccessible with most well-conditioned models where ‖z_0‖_F ∼ 1 at initialization. If we only train one set of weights, Θ̃ is independent of c.
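A sketch of such a correlated initialization (illustrative only; the helper `correlated_heads` and its exact normalization are our own, not the paper's implementation): drawing two unit-variance readout matrices with cross-correlation c makes the combined logits grow in scale as √(1 + c), which is what lets c control ‖Z_0‖_F independently of β.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_heads(shape, c, rng):
    """Two readout matrices with unit-variance entries and entrywise
    cross-correlation c, built from a shared base sample."""
    base = rng.normal(size=shape)
    indep = rng.normal(size=shape)
    return base, c * base + np.sqrt(1.0 - c ** 2) * indep

h = rng.normal(size=(512,))                    # stand-in for the shared hidden layer
ratios = []
for c in (0.0, 0.5, 1.0):
    W1, W2 = correlated_heads((2000, 512), c, rng)
    z_c = W1 @ h + W2 @ h                      # combined logits with theta_1 = theta_2
    # each logit has variance 2 * (1 + c) * ||h||^2, so ||z_c|| grows as sqrt(1 + c)
    ratios.append(np.mean(z_c ** 2) / (2.0 * (1.0 + c) * np.sum(h ** 2)))
assert all(0.85 < r < 1.15 for r in ratios)
```

Since only one copy of the body parameters is trained, the kernel seen by the optimizer does not depend on c, matching the intent of Equation 72.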
Unless otherwise noted, all empirical studies in Sections 2 and 3.2 involve training a wide resnet on CIFAR10 with SGD, using GPUs, using the above correlated initialization strategy to fix Θ̃. All experiments used JAX [32], and experiments involving linearization or direct computation of the NTK used Neural Tangents [33].

Figure 2: The timescale τ_z depends only on ‖Z_0‖_F in units of η̃ = β²η (left, inset).

Figure 3: The time to deviation from linearized dynamics, τ_nl, has large deviation over β and ‖Z_0‖_F (left), which can be largely explained by linear dependence on β (right), in units of η̃.

Figure 4: Optimal learning rate η* for WRN on CIFAR10 scales as 1/β.

Figure 6: Dependence of test accuracy for various architectures with β tuning. (a) For WRN with batchnorm, trained on CIFAR10, the optimal β ≈ 10. Without batchnorm, the performance of the network can be nearly recovered with β-scaling alone with β ≈ 10⁻².

z_c(x, W_{O,1}, W_{O,2}, θ_1, θ_2) = W_{O,1} h(x, θ_1) + W_{O,2} h(x, θ_2)   (71)

1ᵀż(x) = ηβ 1ᵀ[Id_{K×K} ⊗ Θ̃_x](Y − σ(βz(X)))   (18)

; therefore, though the σ_z term of H is exponentially small, it dominates the linearized dynamics near the fixed point, and the approach to equilibrium is slow. We will analyze the conditioning of the dynamics in the remainder of this section.

1ᵀż(x) = ηβ Θ̃_x 1ᵀ(Y − σ(βz(X))) = 0   (19)

(Note that if Θ̂_θ has explicit dependence on the logits, there is still a conserved quantity, which is more complicated to compute.) Now we can analyze ‖βz*(X)‖_F ≫ 1. With two classes, and z(X) = 0 at initialization, we have z*_1 = −z*_2. Therefore, without loss of generality, we focus on z*_1, the logit of the first class.
In this limit, the leading order correction to the softmax is approximately:The self-consistency equation is then:The vector on the right hand side has entries that are O(e −2β|z * 1 | ) for correct classifications, and O(1) for incorrect ones. If we assume that the training error is 0, then we have:This is still non-trivial to solve, but we see that the self consistency condition is that ln(β||Θ|| F /λ θ ) 1.Here also it may be difficult to train and generalize well. The individual elements of the right-handside vector are broadly distributed due to the exponential -so the outputs of the model are sensitive to/may only depend on a small number of datapoints. Even if the equilibrium solution has no training loss, generalization error may be high for the same reasons.This suggests that even for NTK learning (with L 2 regularization), the scale of ||βz|| plays an important role in both good training accuracy and good generalization. In the NTK regime, there is one unique solution so (in the continuous time limit) the initialization doesn't matter; rather, the ratio of β and λ θ (compared to the appropriate norm ofΘ) needs to be balanced to prevent falling into the small βz regime (where training error might be large) or the large βz regime (where a few datapoints might dominate and reduce generalization).A.2 Dynamics near equilibriumThe dynamics near the equilibrium can be analyzed by expanding around the fixed point equation.We focus on the dynamics on the training set. The dynamics of the differencez(X ) = z(X ) − z * (X ) for small perturbations is given bẏwhere σ z is the derivative of the softmax matrixWe can perform some analysis in the large and small β cases (once again ignoring λ z ). For small βz * (X ) F , we have β λ θΘ F 1 which leads to:This matrix has K − 1 eigenvalues with value 1/K, and one zero eigenvalue (corresponding to the conservation of probability). 
Therefore ‖β²[Id_z ⊗ Θ(X , X )] σ_z(βz*(X ))‖_F ≪ λ_θ, and the well-conditioned regularizer dominates the approach to equilibrium.

In the large β case (ln(β‖Θ‖/λ_θ) ≫ 1), the values of σ(βz(X )) are exponentially close to 0 (K − 1 values) or 1 (the value corresponding to the largest logit). This means that σ_z(βz(X )) has exponentially small values in ‖βz(X )‖_F: if any one of σ(βz_i(X )) and σ(βz_j(X )) is exponentially small, the corresponding element of σ_z(βz(X )) is as well; for the largest logit i the diagonal is σ(βz_i(X ))(1 − σ(βz_i(X ))), which is also exponentially small. From Equation 22, we have λ θ β 2 e 2β|z *
[]
[ "AN APPROXIMATE HERBRAND'S THEOREM AND DEFINABLE FUNCTIONS IN METRIC STRUCTURES", "AN APPROXIMATE HERBRAND'S THEOREM AND DEFINABLE FUNCTIONS IN METRIC STRUCTURES" ]
[ "Isaac Goldbring " ]
[]
[]
We develop a version of Herbrand's theorem for continuous logic and use it to prove that definable functions in infinite-dimensional Hilbert spaces are piecewise approximable by affine functions. We obtain similar results for definable functions in Hilbert spaces expanded by a group of generic unitary operators and Hilbert spaces expanded by a generic subspace. We also show how Herbrand's theorem can be used to characterize definable functions in some absolutely ubiquitous structures from classical logic.
10.1002/malq.201110061
[ "https://export.arxiv.org/pdf/1107.3783v1.pdf" ]
520,721
1107.3783
a2bf9bf35fb331b90786aa35b66dc6477323f48d
AN APPROXIMATE HERBRAND'S THEOREM AND DEFINABLE FUNCTIONS IN METRIC STRUCTURES 19 Jul 2011 Isaac Goldbring

We develop a version of Herbrand's theorem for continuous logic and use it to prove that definable functions in infinite-dimensional Hilbert spaces are piecewise approximable by affine functions. We obtain similar results for definable functions in Hilbert spaces expanded by a group of generic unitary operators and Hilbert spaces expanded by a generic subspace. We also show how Herbrand's theorem can be used to characterize definable functions in some absolutely ubiquitous structures from classical logic.

Introduction

The main motivation for this paper comes from the study of definable functions in metric structures; this study was initiated by the author in [11], where a study of the definable functions in Urysohn's metric space was undertaken, and continued in [10], where the definable linear operators in (infinite-dimensional) Hilbert spaces were characterized. However, lacking any understanding of arbitrary definable functions in Hilbert spaces, we conjectured that they were, in some sense, "piecewise affine" in analogy with the classical case of an infinite vector space over a division ring. In unpublished lecture notes by van den Dries on motivic integration [9], we came upon a proof of the piecewise affineness of definable functions in such vector spaces using the following classical theorem of Herbrand:

Theorem 1.1 (Herbrand [12]). Suppose that L is a first-order signature and T is a universal L-theory with quantifier elimination. Let ϕ( x, y) be a formula, where x = (x 1 , . . . , x m ), y = (y 1 , . . . , y n ), m ≥ 1. Then there are L-terms t 11 ( x), . . . , t 1n ( x), . . . , t k1 ( x), . . . , t kn ( x) (k ∈ N >0 ) such that

T |= ∀ x ∀ y (ϕ( x, y) → ⋁ 1≤i≤k ϕ( x, t i1 ( x), . . . , t in ( x))).
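The shape of the Herbrand disjunction can be seen in a toy classical case: in the language of pure equality with two named constants c1 ≠ c2, the formula ∃y (y ≠ x) yields the disjunction x ≠ c1 ∨ x ≠ c2, so k = 2 terms suffice. A small sketch over a finite sample universe (all concrete names and values here are illustrative, not from the paper):

```python
# phi(x, y): "y != x".  The Herbrand disjunction says:
#   forall x ( (exists y: y != x)  ->  phi(x, c1) or phi(x, c2) ),
# witnessed by the two distinct named constants c1 and c2.
universe = list(range(10))
c1, c2 = 0, 1  # two distinct named elements (illustrative)

def phi(x, y):
    return y != x

for x in universe:
    assert any(phi(x, y) for y in universe)  # exists y: y != x
    assert phi(x, c1) or phi(x, c2)          # Herbrand disjunction, k = 2
```

A single term could not work here: no one constant c satisfies x ≠ c for every x, which is why the theorem produces a finite disjunction rather than a single witnessing term.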
Although this theorem is not immediately applicable to the case of an infinite vector space V over a division ring (for the axioms expressing that V is infinite are existential), Herbrand's theorem does apply to the theory of V with constants added for names of elements of V. Since terms in this extended language name affine functions, we get the aforementioned characterization of definable functions in V. (According to van den Dries, this use of Herbrand's theorem is well-known and often used.) Although Theorem 1.1 has an easy model-theoretic proof using compactness, we should remark that the result was first established using proof-theoretic techniques; see [5] and [6] for more on the history of Herbrand's result.

In this paper, we prove a version of Herbrand's theorem for continuous logic (Theorem 2.7 and Corollary 2.8 below) and use it to characterize definable functions in Hilbert spaces and some of their generic expansions, proving, in the case of pure Hilbert spaces, that definable functions are "piecewise approximable by affine functions." Along the way, we note that this method works whenever T is a ∃∀-axiomatizable theory with quantifier elimination. In particular, we show that one can use Herbrand's theorem to understand definable functions in some absolutely ubiquitous structures from classical logic.

We assume that the reader is familiar with the basic definitions of continuous logic; otherwise, they can consult the survey article [1]. The author would like to thank Vinicius C.L., Aleksander Ivanov, and Dugald Macpherson for helpful discussions concerning this work and Matthias Aschenbrenner for pointing out the paper [14] on absolutely ubiquitous structures. The author's work was partially supported by NSF grant DMS-1007144.

Herbrand's Theorem in Continuous Logic

In this section, we let L denote an arbitrary continuous signature.
We will use the following abuse of notation: whenever ∆ is a set of closed L-conditions and σ is an L-sentence, we write σ ∈ ∆ to indicate that the condition "σ = 0" belongs to ∆. Definition 2.1. Suppose that ∆ is a set of closed L-conditions. (1) We say that ∆ is closed under min if whenever σ 1 , . . . , σ n are sentences with σ i ∈ ∆ for each i, then min 1≤i≤n σ i ∈ ∆. (2) We say that ∆ is closed under weakening if whenever σ ∈ ∆, then σ − . r ∈ ∆ for every r ∈ [0, 1]. The following lemma is in a similar spirit to Lemma 3.4 of [19]; the classical version, whose proof we mimic, can be found in [7]. Lemma 2.2. Suppose that T is a satisfiable L-theory and ∆ is a set of closed L-conditions that is closed under min and weakening. Then the following are equivalent: (1) T is axiomatizable by a collection of conditions Γ ⊆ ∆; (2) For all L-structures M and N satisfying M |= T and σ N = 0 for all σ ∈ ∆ with σ M = 0, we have N |= T . Proof. Clearly (1) ⇒ (2), so we need to prove (2) ⇒ (1). Consider the set Γ = {"σ = 0" : σ ∈ ∆ and T |= σ = 0}. We claim that Γ axiomatizes T . Suppose N |= Γ. Let Σ = {"δ ≥ r 2 " : N |= δ = r, r > 0, δ ∈ ∆}. We claim that T ∪ Σ is consistent. Suppose otherwise. Then there are δ 1 , . . . , δ k , r 1 , . . . , r k such that T |= min 1≤i≤k (δ i − . r i 2 ) = 0. Since ∆ is closed under min and weakening, we have that min 1≤i≤k (δ i − . r i 2 ) ∈ Γ, so N |= min 1≤i≤k (δ i − . r i 2 ) = 0, which is a contradiction to the fact that δ N i = r i for each i. Let M |= T ∪ Σ. Now suppose that σ ∈ ∆ and σ M = 0. Then σ N = 0, else "σ ≥ r 2 " ∈ Σ for some r > 0, contradicting σ M = 0. By (2), we have N |= T . Let us call a sentence σ universal if it is of the form sup x ϕ( x), where ϕ is quantifier-free. Let us call a closed condition "σ = 0" universal if σ is universal. We call a closed condition "σ = 0" almost universal if there is a universal sentence τ such that, in every L-structure M, we have σ M = 0 if and only if τ M = 0. Proof. 
Suppose that σ = 0 and τ = 0 are almost universal conditions. Suppose that σ = 0 is equivalent to sup x σ ′ ( x) = 0 and τ = 0 is equivalent to sup y τ ′ ( y) = 0, with σ ′ , τ ′ quantifier-free and x, y disjoint tuples of distinct variables. Then min(σ, τ ) = 0 is equivalent to sup x sup y (min(σ ′ ( x), τ ′ ( y))) = 0. Similarly, the condition σ − . r = 0 is equivalent to sup x (σ ′ ( x) − . r) = 0.

If Γ is a set of closed L-conditions, we set Γ + := {"σ ≤ 1/n" : σ ∈ Γ, n ≥ 1}. We say that T has a universal axiomatization if T is axiomatizable by a set of universal conditions. Clearly if T is axiomatizable by a set of almost universal conditions, then T has a universal axiomatization. The following are equivalent:

(1) T has a universal axiomatization;
(2) For any M |= T and substructure N of M, we have N |= T .

Proof. Clearly (1) implies (2), so we prove that (2) implies (1). We use the criterion developed in Lemma 2.2 applied to the set of almost universal conditions. Suppose that M |= T and for all almost universal conditions "σ = 0", we have σ M = 0 implies σ N = 0. We want N |= T . Let T ′ = T ∪ D(N ) + . We claim that T ′ is satisfiable. Fix atomic L(N )-sentences σ 1 ( b), . . . , σ n ( b) such that σ N i ( b) = 0. Then N |= inf x max(σ i ( x)) = 0. Suppose, towards a contradiction, that M ⊭ inf x max(σ i ( x)) = 0. Then there is r ∈ (0, 1] such that M |= sup x (r − . max(σ i ( x))) = 0. By assumption, we have N |= sup x (r − . max(σ i ( x))) = 0, which is a contradiction. Consequently, for any k ≥ 1, there is a ∈ M such that M |= max(σ i ( a)) ≤ 1/k. It follows by compactness that T ′ is satisfiable. Let A ′ |= T ′ and let A be the L-reduct of A ′ . Then A |= T and N is (isomorphic to) a substructure of A, whence N |= T .

By Theorem 3.5 of [1], any L-formula ϕ( x) has a modulus of uniform continuity ∆ ϕ : (0, 1] → (0, 1], that is, for any L-structure M, any ǫ > 0, and any tuples a, b from M , if d( a, b) < ∆ ϕ (ǫ), then |ϕ M ( a) − ϕ M ( b)| ≤ ǫ.

Theorem 2.7. Suppose that T is a complete L-theory with quantifier elimination that admits a universal axiomatization. Let ϕ( x, y) be a formula, where x = (x 1 , . . . , x m ) and y = (y 1 , . . . , y n ), and let ǫ > 0. Then there are L-terms t 11 ( x), . . . , t 1n ( x), . . . , t k1 ( x), . . . , t kn ( x) (k ∈ N >0 ) such that, for any M |= T and a ∈ M m , if (inf y ϕ( a, y)) M = 0, then ϕ M ( a, t i1 ( a), . . . , t in ( a)) < 2ǫ for some i ∈ {1, . . . , k}.

Proof. Consider the set of closed L-conditions Γ( x) given by

{inf y ϕ( x, y) = 0} ∪ {ϕ( x, t 1 ( x), . . . , t n ( x)) ≥ 2ǫ : t 1 ( x), . . . , t n ( x) L-terms}.

By compactness, it is enough to prove that Γ is unsatisfiable. Suppose, towards a contradiction, that M |= Γ( a), where a = (a 1 , . . . , a m ) ∈ M m . Fix δ ∈ (0, 1] such that δ < ǫ/3. Let χ( x) be a quantifier-free L-formula such that T |= sup x (| inf y ϕ( x, y) − χ( x)| − . δ) = 0. Then χ M ( a) ≤ δ. Let N be the substructure of M generated by {a 1 , . . . , a m }. Then since χ( x) is quantifier-free, we have χ N ( a) ≤ δ. Since N |= T , we have N |= inf y ϕ( a, y) ≤ 2δ. Thus, there is c ∈ N n such that ϕ N ( a, c) ≤ 3δ. Now let t i ( x) be a term so that d(t i ( a), c i ) < ∆ ϕ (δ), whence ϕ N ( a, t 1 ( a), . . . , t n ( a)) ≤ 4δ. Let θ( x, y) be a quantifier-free L-formula so that T |= sup x, y (|ϕ( x, y) − θ( x, y)| − . δ) = 0. Then θ N ( a, t 1 ( a), . . . , t n ( a)) ≤ 5δ, whence θ M ( a, t 1 ( a), . . . , t n ( a)) ≤ 5δ and hence ϕ M ( a, t 1 ( a), . . . , t n ( a)) ≤ 6δ. Since 6δ < 2ǫ, this is a contradiction to the fact that M |= Γ( a).

The following rephrasing of the previous theorem more closely resembles the usual statement of Herbrand's theorem.

Corollary 2.8. Suppose that T is a complete L-theory with quantifier elimination that admits a universal axiomatization. Let x = (x 1 , . . . , x m ) and y = (y 1 , . . . , y n ). Then for any formula ϕ( x, y) and any ǫ > 0, there are L-terms t 11 ( x), . . . , t 1n ( x), . . . , t k1 ( x), . . . , t kn ( x) (k ∈ N >0 ) and an increasing continuous function α : [0, 1] → [0, 1] satisfying α(0) = 0 such that

T |= sup x ((min 1≤i≤k ϕ( x, t i1 ( x), . . . , t in ( x)) − . ǫ) − . α(inf y ϕ( x, y))) = 0.

Proof. This is immediate from the preceding theorem and Proposition 7.15 of [1].
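To make the shape of the sup-condition in Corollary 2.8 concrete, here is a toy [0, 1]-valued instance, a sketch only: the discrete metric on a finite sample set, ϕ(x, y) = 1 ∸ d(x, y) (so ϕ(x, y) = 0 exactly when y ≠ x), two named constants playing the role of the terms, and α the identity. All concrete values are illustrative, not from the paper:

```python
def dotminus(a, b):          # truncated subtraction, a -. b = max(a - b, 0)
    return max(a - b, 0.0)

def d(x, y):                 # discrete metric
    return 0.0 if x == y else 1.0

def phi(x, y):
    return dotminus(1.0, d(x, y))   # 0 iff y != x

universe = list(range(8))
c1, c2 = 0, 1                # two distinct named elements (illustrative)
eps = 0.25
alpha = lambda t: t          # increasing, continuous, alpha(0) = 0

# Check that sup_x ((min_i phi(x, c_i) -. eps) -. alpha(inf_y phi(x, y))) = 0.
for x in universe:
    inf_y = min(phi(x, y) for y in universe)
    lhs = dotminus(min(phi(x, c1), phi(x, c2)), eps)
    assert dotminus(lhs, alpha(inf_y)) == 0.0
```

Since c1 ≠ c2, at least one of the two "terms" already realizes the infimum at every point, so the truncated-subtraction expression vanishes identically, which is exactly the form of the conclusion of Corollary 2.8.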
Primitive theories with QE

In this short section, L continues to denote an arbitrary (continuous) signature and T denotes an L-theory.

Definition 3.1. Following [14] (in the classical setting), we say that T is primitive if there exist sets of closed L-conditions Γ and ∆, where Γ consists of universal conditions and ∆ consists of existential conditions, such that Γ ∪ ∆ axiomatizes T .

Proposition 3.3. Suppose that T is a primitive, model-complete theory and M |= T . Then T M , the L(M )-theory of the structure (M, a) a∈M , has a universal axiomatization. Moreover, if T admits quantifier elimination, then so does T M .

Proof. Let Γ be a set of universal sentences and ∆ a set of existential sentences such that Γ ∪ ∆ axiomatizes T . In order to prove that T M has a universal axiomatization, it suffices to prove that T M is axiomatized by Γ ∪ D(M). Suppose that N |= Γ ∪ D(M). Then M is a substructure of N . Now any axiom from ∆ is true in N since it is witnessed by things in M. Consequently, N |= T , whence N |= T M by model-completeness of T . The moreover statement is clear.

We will meet some examples of (classical and continuous) primitive theories with quantifier elimination in the next section. The following proposition explains how we use Herbrand's theorem in connection with definable functions.

Proposition 3.4. Suppose that T is a primitive L-theory with quantifier elimination, M |= T , and f : M n → M is a definable function. Then for any ǫ > 0, there are L(M )-terms t 1 ( x), . . . , t k ( x) such that: for all a ∈ M n , there is i ∈ {1, . . . , k} with d(f ( a), t i ( a)) ≤ ǫ.

Proof. Fix ǫ > 0. Let ϕ( x, y) be an L(M )-formula such that |d(f ( a), b) − ϕ M ( a, b)| ≤ ǫ/3 for all a ∈ M n and b ∈ M . By Herbrand's theorem applied to T M (which is applicable by Proposition 3.3), there are L(M )-terms t 1 ( x), . . . , t k ( x) such that, for all a ∈ M n , if M |= inf y (ϕ( a, y) − . ǫ/3) = 0, then M |= (ϕ( a, t i ( a)) − . ǫ/3) ≤ ǫ/3 for some i ∈ {1, . . . , k}. Notice that the antecedent of the preceding conditional statement holds since ϕ M ( a, f ( a)) ≤ ǫ/3. Consequently, for every a ∈ M n , there is i ∈ {1, . . . , k} such that d(f ( a), t i ( a)) ≤ ǫ.
Applications

In this section, we present some (classical and continuous) primitive theories with quantifier elimination and use Proposition 3.4 above to understand the definable functions in models of these theories.

4.1. Infinite-dimensional Hilbert spaces and some of their generic expansions. In this subsection, we suppose that K ∈ {R, C} and we set D := {λ ∈ K : |λ| ≤ 1}. Also, L denotes the (1-sorted) continuous signature for unit balls of K-Hilbert spaces. More specifically, L contains:

• a constant symbol 0;
• a binary function symbol f α,β for every α, β ∈ D with |α| + |β| ≤ 1;
• a binary predicate symbol ⟨·, ·⟩ that takes values in [−1, 1].

If H is a K-Hilbert space, the unit ball of H, B 1 (H), is naturally an L-structure, where 0 is interpreted as the zero vector of H, f α,β is interpreted as the function (x, y) → αx + βy, and ⟨·, ·⟩ is interpreted as the inner product of H. For sake of readability, we often write H instead of B 1 (H) when speaking of this way of treating B 1 (H) as an L-structure. Let T be the L-theory of (the unit ball of) an infinite-dimensional K-Hilbert space. Then T is primitive as the Hilbert space axioms are universal and the axioms for infinite-dimensionality are existential. We must remark that we cannot work in the many-sorted setting for Hilbert spaces (as in [10]) because the axioms for the inclusion mappings are ∀∃; indeed, for n ≤ m, one must declare that the inclusion mapping I n,m : B n (H) → B m (H) is onto the set of elements of B m (H) of norm at most n. In the rest of this subsection, H |= T and H * is an elementary extension of H. In order to make any sense of Proposition 3.4 in this context, we must first understand L(H)-terms.

Lemma 4.1. For every L(H)-term t(x) in a single variable x, there are λ ∈ D and v ∈ B 1 (H) such that t(x) = λx + v.

Proof. One proves this by induction on the complexity of t(x), the base case being immediate. Now suppose that t i (x) = λ i x + v i for i = 1, 2 and α, β are so that |α| + |β| ≤ 1. Then f α,β (t 1 (a), t 2 (a)) = αt 1 (a) + βt 2 (a) = (αλ 1 + βλ 2 )a + (αv 1 + βv 2 ).
It remains to observe that |αλ 1 + βλ 2 | ≤ 1.

Fix a ∈ B 1 (H * ). Then there are sequences (λ n ) from D and (v n ) from B 1 (H) with λ n a + v n → f (a) as n → ∞. By taking subsequences, we may suppose that λ n → λ ∈ D. It then follows that (v n ) is a Cauchy sequence in B 1 (H), whence v n → v ∈ B 1 (H). It follows that f (a) = λa + v. We have just proven the following result:

0 = ⟨f (a), v⟩ = ⟨λa + v, v⟩ = ⟨v, v⟩. Thus, f (a) = λa.

Let (a n ) be an orthonormal basis for H. Then the following set of conditions is unsatisfiable in H * : {⟨x, a n ⟩ = 0 : n < ω} ∪ {d(f (x), λ i x) ≥ ǫ : i = 1, . . . , m}. By saturation, there is n < ω such that, setting K := span(a 1 , . . . , a n ), we have d(f (x), λ i x) < ǫ for some i ∈ {1, . . . , m}, for all x ∈ B 1 (H * ) ∩ K ⊥ .

How does Corollary 4.2 relate to functions definable in the many-sorted language for Hilbert spaces considered in [10]? In order to elucidate this, we first clarify how the syntax of continuous logic works in the case that the predicates take values in intervals other than [0, 1]. (This is omitted in the survey [1] and was communicated to me by Ward Henson.) Let L ′ be a many-sorted (continuous) signature with sort set S. In particular, one associates to each predicate symbol P of L ′ a closed, bounded interval I P in R. Then one also associates to each formula ϕ a closed, bounded interval I ϕ in R as follows:

• Given two terms t 1 ( x) and t 2 ( x) of arity (s 1 , . . . , s n , s n+1 ), the formula ϕ( x) = d(t 1 ( x), t 2 ( x)) is an atomic formula with I ϕ := [0, N ], where N is the bound on the metric of sort s n+1 .
• If P is a predicate symbol of arity (s 1 , . . . , s n ) and t 1 ( x), . . . , t n ( x) are terms such that t i takes values in sort s i , then the formula ϕ( x) = P (t 1 ( x), . . . , t n ( x)) is an atomic formula with I ϕ := I P .
• Suppose that ϕ 1 ( x), . . . , ϕ n ( x) are formulae with associated intervals I ϕ 1 , . . . , I ϕn .
Suppose that u is a continuous function with domain I ϕ 1 × · · · × I ϕn and range I, a closed, bounded interval in R. Then ϕ( x) = u(ϕ 1 ( x), . . . , ϕ n ( x)) is a formula with I ϕ := I.
• If ϕ is a formula with associated interval I ϕ , then ψ = sup x ϕ is a formula with I ψ := I ϕ . Similarly for inf x ϕ.

For an interval I = [a, b] ⊆ R with a < b, define u I : I → [0, 1] by u I (x) := (x − a)/(b − a). Note that u I is a homeomorphism with inverse u I −1 (x) = a + (b − a)x. We let L ms denote the many-sorted language of Hilbert spaces used in [10]. The relevant fact is that, for any L ms -formula ϕ( x), there is an L-formula ψ( x) such that H |= sup x |u Iϕ (ϕ( x)) − ψ( x)| = 0.

Proof. The proof goes by induction on the complexity of ϕ, the main work taking place in the case when ϕ is atomic, which involves a painful case distinction. Let us illustrate the idea by considering terms t i (x, y) = λ i x + µ i y (i = 1, 2), where |λ i |, |µ i | ≤ n. (In the general situation, terms can be much more complicated due to the number of variables and the inclusion maps.) First suppose that ϕ(x, y) = d(t 1 (x, y), t 2 (x, y)). Since each t i takes values in B 2n , we have I ϕ = [0, 4n]. Then u Iϕ (ϕ(x, y)) = (1/4n) d(t 1 (x, y), t 2 (x, y)). Let ψ(x, y) = ‖((λ 1 − λ 2 )/4n)x + ((µ 1 − µ 2 )/4n)y‖. Since |(λ 1 − λ 2 )/4n| + |(µ 1 − µ 2 )/4n| ≤ 1, we have that ψ is an L-formula with I ψ = [0, 1]. Clearly ψ is as desired. Now suppose that ϕ(x, y) = ⟨t 1 (x, y), t 2 (x, y)⟩. Now I ϕ = [−4n 2 , 4n 2 ], so u Iϕ (ϕ(x, y)) = (1/8n 2 )(⟨t 1 (x, y), t 2 (x, y)⟩ + 4n 2 ). This time, let ψ(x, y) = (1/2)⟨(λ 1 /2n)x + (µ 1 /2n)y, (λ 2 /2n)x + (µ 2 /2n)y⟩ + 1/2. It is easily verified that this ψ is as desired. For the induction step, suppose that ϕ = u(ϕ 1 , . . . , ϕ n ), where u : I ϕ 1 × · · · × I ϕn → I ϕ is a surjective continuous function. By the induction hypothesis, there are L-formulae ψ i ( x) (i = 1, . . . , n) with each I ψ i = [0, 1] such that H |= sup x |u Iϕ i (ϕ i ( x)) − ψ i ( x)| = 0. Consider the L-formula ψ( x) = u Iϕ (u(u Iϕ 1 −1 (ψ 1 ( x)), . . . , u Iϕ n −1 (ψ n ( x)))). It is clear that H |= sup x |u Iϕ (ϕ( x)) − ψ( x)| = 0.
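The rescaling step in the inner-product case of this proof reduces to the algebraic identity u_{I_ϕ}(⟨t 1 , t 2 ⟩) = (1/2)⟨t 1 /2n, t 2 /2n⟩ + 1/2, which can be sanity-checked numerically. The sketch below uses real scalars and randomly chosen coefficients; all concrete values are illustrative:

```python
import random

def inner(a, b):
    # real inner product on R^d
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
n = 3
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(4)]
    y = [random.uniform(-1, 1) for _ in range(4)]
    lam1, mu1, lam2, mu2 = (random.uniform(-n, n) for _ in range(4))
    t1 = [lam1 * a + mu1 * b for a, b in zip(x, y)]   # t1 = lam1*x + mu1*y
    t2 = [lam2 * a + mu2 * b for a, b in zip(x, y)]   # t2 = lam2*x + mu2*y
    # left side: rescaling of <t1, t2> from [-4n^2, 4n^2] to [0, 1]
    u_I = (inner(t1, t2) + 4 * n * n) / (8 * n * n)
    # right side: (1/2)<t1/2n, t2/2n> + 1/2
    psi = 0.5 * inner([v / (2 * n) for v in t1],
                      [v / (2 * n) for v in t2]) + 0.5
    assert abs(u_I - psi) < 1e-9
```

Both sides equal ⟨t 1 , t 2 ⟩/8n² + 1/2, since dividing each argument by 2n divides the inner product by 4n², which is why the two expressions agree exactly.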
Corollary 4.6. If P : B 1 (H) n → [0, 1 ] is a uniformly continuous function, then P is an L-definable predicate if and only if P is an L ms -definable predicate Proof. This follows from the preceding corollary and the fact that the L mstheory of H admits quantifier-elimination. The definition of an L ms -definable function is given in [10]. Remark 4.8. It follows from the preceding corollary and Corollary 4.2 that for any L ms -definable function f : H → H, any n ≥ 1, and any ǫ > 0, there are scalars λ 1 , . . . , λ k and vectors v 1 , . . . , v k ∈ B m(n,f ) (H) such that, for all x ∈ B n (H), there is i ∈ {1, . . . , k} with d(f (x), λ i x + v i ) ≤ ǫ. Using the main result of [10], we can give a different proof of this fact in the case that f is linear. Indeed, write f = λI + K, where K is a compact operator. Let {v 1 , . . . , v k } be a finite ǫ-net for K(B n (H)). Then for a ∈ B 1 (H), we have d(K(a), v i ) ≤ ǫ for some i ∈ {1, . . . , k}, whence d(f (a), λa + v i ) ≤ ǫ. (Notice here that λ i = λ for all i.) We now suppose that K = C and set S 1 := {λ ∈ C : |λ| = 1}. We let L U := L ∪ {U, U −1 }, where U and U −1 are both unary function symbols. We let T ∀ U denote the L-theory obtained from T by adding (universal) axioms saying that U is linear, preserves the inner product, and U and U −1 are inverses. (T U axiomatizes the theory of an infinite-dimensional Hilbert space equipped with a unitary operator; one adds a symbol for U −1 so as to avoid the ∀∃ axiom stating that U is onto.) We add to T ∀ U the following axioms: inf x [| x, x − 1| ∔ d(U x, σx)|] = 0, where σ ranges over a countable dense subset of S 1 . (These axioms assert that the spectrum of U is S 1 .) Then T U is complete and admits quantifier elimination (see [2]); T U is the theory of infinite-dimensional Hilbert spaces equipped with a generic automorphism. Since T U is primitive, we can once again apply Proposition 3.4. Proof. 
This is proved by induction on the complexity of t(x) exactly as in Lemma 4.1. Suppose that (H * , U * ) is an elementary extension of (H, U ). d(f (x), v i + m j=l α i j U j (x)) < ǫ. One can generalize this situation as follows: Let G be a countable (discrete group) and let L G be the language for Hilbert spaces as above augmented by unary function symbols τ g for g ∈ G. Let T G be the universal L G -theory of a unitary representation of G on an infinite-dimensional Hilbert space. (As above, the axiom sup x d((τ g (τ g −1 (x)), x) = 0 allows us to assert that τ g is onto without using a ∀∃ axiom.) Let π : G → U (H) be a unitary representation of G on an (infinite-dimensional) Hilbert space H such that (H, π) is an existentially closed model of T G (such an existentially closed model exists because T G is an inductive theory). Let Σ be the set of existential consequences of (H, π). Then it is shown in [3] that T GA := T G ∪ Σ axiomatizes the class of existentially closed models of T G , whence is the model companion of T G . Moreover, since T G has the amalgamation property (see [3]), it follows that T GA admits quantifier elimination. As above, one can show that any L G term t(x) has the form v + n i=1 λ i g i x for some v ∈ B 1 (H), some λ 1 , . . . , λ n ∈ D, and some g 1 , . . . , g n ∈ G. (Here we abuse notation and write gx instead of τ g (x).) Consequently, we have: . . , λ k 1 , . . . , λ k m ∈ D, and group elements g 1 , . . . , g k ∈ G such that, for all a ∈ B 1 (H * ), there is i ∈ {1, . . . , k} such that d(f (a), v i + m j=1 λ i j g j a) < ǫ. There is yet another expansion of Hilbert spaces that fits into this context. Let L P := L ∪ {P }, where P is a new unary predicate symbol. 
We consider the theory T P obtained from the theory of infinite-dimensional Hilbert spaces by adding the following axioms (the latter two are axiom schemes, including one such axiom for every n ≥ 1):

• P is linear;
• sup x d(P 2 (x), P (x)) = 0;
• sup x,y | P (x), y − x, P (y) | = 0;
• inf v 1 · · · inf v n max(max i,j | v i , v j −. δ ij |, max i d(P (v i ), v i )) = 0;
• inf v 1 · · · inf v n max(max i,j | v i , v j −. δ ij |, max i d(P (v i ), 0)) = 0.

The first three axioms say that P is a projection operator on H and the latter two axiom schemes say that P (H) and P (H) ⊥ are infinite-dimensional. Then T P is a complete theory with quantifier elimination ([4]); in fact, it is the theory of beautiful pairs of Hilbert spaces and its unique separable model is the Fraïssé limit of the family of finite-dimensional Hilbert spaces equipped with projection operators. Since T P is a primitive theory with quantifier elimination, we may use Proposition 3.4. Let (H, P ) be a model of T P . Then in (H, P ), all L P -terms t(x) are easily seen to be equivalent to terms of the form αx + βP (x) + v, where α, β ∈ D and v ∈ B 1 (H). Thus:

Proposition 4.12. Let f : B 1 (H) → B 1 (H) be an L P -definable function. Then for any ǫ > 0, there are v 1 , . . . , v k ∈ B 1 (H) and α 1 , . . . , α k , β 1 , . . . , β k ∈ D such that, for all a ∈ B 1 (H), there is i ∈ {1, . . . , k} such that d(f (a), α i a + β i P (a) + v i ) < ǫ. Consequently, for any elementary extension (H * , P * ) of (H, P ) and any a ∈ B 1 (H * ), there are α, β ∈ D and v ∈ B 1 (H) such that f (a) = αa + βP * (a) + v.

Absolutely ubiquitous structures. A source of primitive theories in classical logic comes from the notion of an absolutely ubiquitous structure. Suppose that L is a finite first-order signature and M is a countable L-structure.
Recall that M is said to be locally finite if every finitely generated substructure of M is finite and M is said to be uniformly locally finite if there is a function g : N >0 → N >0 such that, for all A ⊆ M , if |A| ≤ n, then | A | ≤ g(n), where A denotes the substructure of M generated by A. Also recall that the age of M, denoted Age(M), is the set of isomorphism classes of finitely generated substructures of M. Finally, we say that M is absolutely ubiquitous if: (1) M is uniformly locally finite, and (2) whenever N is a countable, locally finite L-structure with Age(M) = Age(N ), then M ∼ = N . It follows immediately from the definition that if M is an absolutely ubiquitous L-structure and T := Th(M), then T is primitive and ℵ 0 -categorical, whence model-complete (see also Lemma 2.1 of [18]). Consequently, if T has quantifier elimination, then T meets the hypothesis of Proposition 3.4. It is interesting to ask when an absolutely ubiquitous structure has quantifier elimination? Note that an absolutely ubiquitous structure admits quantifier elimination if and only if it is ultrahomogeneous. Thus, we can use the classifications of absolutely ubiquitous graphs [17] and ultrahomogeneous countable graphs [16] to see that there are only two situations when a countable ultrahomogeneous graph is absolutely ubiquitous: • a disjoint union of finitely many copies of the complete graph on ℵ 0 many vertices; • a k-partite graph, where each part is of size ℵ 0 . It follows from Proposition 3.4 that if G is such a graph and f : G n → G is a definable function, then there are vertices g 1 , . . . , g k ∈ G so that, for any a ∈ G n , we have f (a) = a i for some i or f (a) = g j for some j. It is interesting to note that in the case of absolutely ubiquitous structures in finite relational signatures, we can always expand the language to ensure that we have quantifier elimination while maintaining absolute ubiquity. 
To see this, suppose that M is an L-structure, where L is a finite relational (classical) signature. We say that M is finitely partitioned if there is a finite partition X 1 , . . . , X n of M such that Sym(X 1 ) × · · · × Sym(X n ) is a subgroup of Aut(M). The main result of [13] states that M is absolutely ubiquitous if and only if M is finitely partitioned. Suppose now that M is absolutely ubiquitous. Let X 1 , . . . , X n be a finite partition of M witnessing that M is finitely partitioned. Consider the signature L ′ := L∪{R 1 , . . . , R n }, where R 1 , . . . , R n are new unary function symbols, and consider the expansion M ′ := (M; X 1 , . . . , X n ) of M to an L ′ -structure. Then X 1 , . . . , X n witness that M ′ is finitely partitioned, whence M ′ is absolutely ubiquitous. However, we now have: What about when the language has function symbols? Here is an example from [20]: Let M := (N n , E 1 , . . . , E n , f ), where E i is the binary relation on N n given by E i ( a, b) ⇔ a i = b i , and f is the n-ary function on N n given by f ( a 1 , . . . , a n ) = (a 11 , . . . , a nn ). It is argued in [20] that M is an absolutely ubiquitous structure with quantifier elimination. It is shown in [18] that if G is an absolutely ubiquitous group (considered as a structure in the pure group language), then G has a characteristic subgroup A of finite index such that A is a finite direct product of elementary abelian groups of infinite rank. Conversely, if G is a countable group with a characteristic subgroup A of index m < ∞ which is a finite direct product of elementary abelian groups of infinite rank such that either G = A × F for some group F of cardinality m or m is relatively prime to the orders of elements of A, then G is absolutely ubiquitous. If the absolutely ubiquitous group G admits quantifier elimination, then given any definable function f : G n → G, there is a tuple b from G and words w 1 ( x, b), . . . 
, w k ( x, b), such that, for all a ∈ G n , there is i ∈ {1, . . . , k} such that f ( a) = w i ( a, b). The question remains: which absolutely ubiquitous groups admit quantifier elimination? It is easy to see that if G itself is a finite direct product of elementary abelian groups of infinite rank, then G is ultrahomogeneous, so admits quantifier elimination. More generally:

Proposition 4.15. If G = A × F , where A is a finite direct product of elementary abelian groups of infinite rank, F is a finite ultrahomogeneous group, and gcd(|a|, |b|) = 1 for all a ∈ A and b ∈ F , then G is ultrahomogeneous.

Proof. Suppose that φ : B → C is an isomorphism, where B and C are finite subgroups of G. Let A 1 , F 1 denote the projections of B onto A and F respectively; note that A 1 and F 1 are finite subgroups of A and F respectively. Next note that, for each a ∈ A 1 , we have that (a, 1) ∈ B. Indeed, if (a, b) ∈ B, then choosing n ∈ N such that |b| divides n and n ≡ 1 mod |a|, we see that (a, 1) = (a, b) n ∈ B. Likewise, for every b ∈ F 1 , we have (1, b) ∈ B. Now observe that, for all (a, 1) ∈ B, there is a ′ ∈ A such that φ(a, 1) = (a ′ , 1). Indeed, writing φ(a, 1) = (a ′ , b), we have (1, 1) = φ(a, 1) |a| = (1, b |a| ), whence b = 1. Similarly, for every b ∈ F 1 , there is b ′ ∈ F such that φ(1, b) = (1, b ′ ). We can thus define φ ′ : A 1 → A by φ ′ (a) = a ′ , where φ(a, 1) = (a ′ , 1); note that φ ′ is an isomorphism between finite subgroups of A, so can be lifted to an automorphism φ̂ ′ : A → A. Likewise, one obtains a partial automorphism φ ′′ : F 1 → F that can be lifted to an automorphism φ̂ ′′ : F → F . Finally, φ̂ : G → G defined by φ̂(a, b) = (φ̂ ′ (a), φ̂ ′′ (b)) is an automorphism of G extending φ.

Remark 4.16. The ultrahomogeneous finite groups are characterized in [8].

Question 4.17.
Given an absolutely ubiquitous group G, is there an extension L ′ of the language of groups by relation symbols and an expansion G of G to an L ′ -structure so that G admits quantifier elimination and is still absolutely ubiquitous (or at least has a primitive theory)? If the answer to this question is positive, then definable functions in absolutely ubiquitous groups are piecewise given by words as mentioned above.

Given an L-structure M, let D(M) be the set of closed L(M)-conditions of the form σ = 0, where σ is an atomic L(M)-sentence and σ M = 0; this is just the atomic diagram of M. The following lemma is proved just as in classical logic.

Lemma 2.3. If N |= D(M), then the L-reduct of N contains a substructure isomorphic to M.

Lemma 2.4. The set of almost universal conditions is closed under min and weakening.

Corollary 2.5. The following are equivalent:

Definition 2.6. Suppose that M is an L-structure and A ⊆ M . Let A 0 be the L-prestructure generated by A. Then the closure of A 0 in M is the completion of A 0 , whence a substructure of M, called the substructure of M generated by A.

Theorem 2.7 (Continuous Herbrand Theorem). Suppose that T is a complete L-theory with quantifier elimination that admits a universal axiomatization. Let x = (x 1 , . . . , x m ) and y = (y 1 , . . . , y n ). Then for any formula ϕ( x, y) and any ǫ > 0, there are L-terms t 11 ( x), . . . , t 1n ( x), . . . , t k1 ( x), . . . , t kn ( x) (k ∈ N >0 ) such that, for any M |= T and any a ∈ M m , if M |= inf y ϕ( a, y) = 0, then M |= min 1≤i≤k ϕ( a, t i1 ( a), . . . , t in ( a)) ≤ ǫ.

Remark 3.2. In classical logic, it is mentioned in [14] that T is primitive if and only if: whenever M 0 , M 1 |= T and M 0 ⊆ N ⊆ M 1 , then N |= T . It is also mentioned in [14] that T is ∃∀-axiomatizable if and only if: whenever M 0 , M 1 |= T , M 0 ⪯ M 1 , and M 0 ⊆ N ⊆ M 1 , then N |= T .
It follows that for model-complete theories T , T is primitive if and only if T is ∃∀-axiomatizable. An interesting example of a model-complete ∃∀-theory is Example 3 of [15].

Proposition 3.3. Suppose that T is a complete, model-complete primitive L-theory. Let M |= T and let T M be the L(M)-theory of M. Then T M is universally axiomatizable. Moreover, T M has quantifier elimination if T does.

Proposition 3.4. Suppose that T is primitive and admits quantifier elimination. Suppose M |= T and f : M n → M is a definable function. Then for any ǫ > 0, there are L(M)-terms t 1 ( x), . . . , t k ( x) such that, for all a ∈ M n , there is i ∈ {1, . . . , k} such that d(f ( a), t i ( a)) ≤ ǫ.

Remark 3.5. Fix a definable function f : M n → M . Fix ǫ > 0 and let the L(M)-terms t 1 ( x), . . . , t k ( x) be as in the conclusion of the previous proposition. Suppose that M ⪯ N and f : N n → N is the natural extension of f to a definable function in N . Then, for every a ∈ N n , there is i ∈ {1, . . . , k} such that d(f ( a), t i ( a)) ≤ ǫ. Indeed, repeat the proof of the preceding proposition, using Corollary 2.8 instead of Theorem 2.7.

Lemma 4.1. If t(x) is an L(H)-term, then there are λ ∈ D and v ∈ B 1 (H) so that t(a) = λa + v for all a ∈ B 1 (H).

Corollary 4.2. Let f : H → H be definable. Then given ǫ > 0, there are λ 1 , . . . , λ k ∈ D and v 1 , . . . , v k ∈ B 1 (H) such that, for all a ∈ B 1 (H * ), there is i ∈ {1, . . . , k} with d(f (a), λ i a + v i ) ≤ ǫ.

Corollary 4.3. For any a ∈ B 1 (H * ), there are λ ∈ D and v ∈ B 1 (H) such that f (a) = λa + v.

Corollary 4.4. Suppose that H * is ω 1 -saturated and f (H ⊥ ) ⊆ H ⊥ . Fix ǫ > 0 and let λ 1 , . . . , λ m be a finite ǫ-net for D. Then there is a finite-dimensional subspace K of H such that, for all a ∈ B 1 (H * ) ∩ K ⊥ , there is i ∈ {1, . . . , m} such that d(f (a), λ i a) < ǫ.

Proof. Let a ∈ B 1 (H * ) ∩ H ⊥ . Take λ ∈ D and v ∈ B 1 (H) such that f (a) = λa + v. Then

Lemma 4.5. For any quantifier-free L ms -formula ϕ( x), where x is a tuple of variables of sort B 1 (H), there is a quantifier-free L-formula ψ( x) with I ψ = [0, 1] such that H |= sup x |u Iϕ (ϕ( x)) − ψ( x)| = 0. In particular, when I ϕ = [0, 1], we have H |= sup x |ϕ( x) − ψ( x)| = 0.

Corollary 4.7. Suppose that f : H → H is an L ms -definable function such that f (B 1 (H)) ⊆ B 1 (H). Then f |B 1 (H) is an L-definable function.

Lemma 4.9. If t(x) is an L U (H)-term, then there are l, m ∈ Z, l ≤ m, α l , . . . , α m ∈ D and a vector v ∈ B 1 (H) such that, for all a ∈ B 1 (H), we have t(a) = v + m j=l α j U j (a).

Corollary 4.10. Suppose that f : H → H is an L U -definable function and ǫ > 0. Then there are l, m ∈ Z, l ≤ m, λ 1 l , . . . , λ 1 m , . . . , λ k l , . . . , λ k m ∈ D, and v 1 , . . . , v k ∈ B 1 (H), such that, for all a ∈ B 1 (H * ), there is i ∈ {1, . . . , k} such that d(f (a), v i + m j=l λ i j U j (a)) < ǫ.

Corollary 4.11. Let (H, π) be any model of T GA and let f : H → H be an L G -definable function. Then, for any ǫ > 0, there are v 1 , . . . , v k ∈ B 1 (H), scalars λ 1 1 , . . . , λ 1 m , . . . , λ k 1 , . . . , λ k m ∈ D, and group elements g 1 , . . . , g k ∈ G such that, for all a ∈ B 1 (H * ), there is i ∈ {1, . . . , k} such that d(f (a), v i + m j=1 λ i j g j a) < ǫ.

Lemma 4.13. M ′ is ultrahomogeneous, whence Th(M ′ ) admits quantifier elimination.

Proof. Suppose that A, B ⊆ M are finite and f : A → B is a partial automorphism of M ′ . Then for any i ∈ {1, . . . , n}, f (A ∩ X i ) ⊆ X i . Extend f to f̂ : M → M so that f̂ |X i ∈ Sym(X i ) for each i ∈ {1, . . . , n}. Then by assumption, f̂ ∈ Aut(M ′ ).

Corollary 4.14. Given any definable (in M ′ ) function f : M n → M , there are elements b 1 , . . . , b m ∈ M so that, for all a ∈ M n , we have either f ( a) = a i for some i ∈ {1, . . . , n} or f ( a) = b j for some j ∈ {1, . . . , m}.

[1] I. Ben Yaacov, A. Berenstein, C. W. Henson, and A. Usvyatsov, "Model theory for metric structures," in Model Theory with Applications to Algebra and Analysis, Vol. 2, pp. 315-427, London Math. Soc. Lecture Note Ser. 350, Cambridge Univ. Press, Cambridge, 2008.
[2] I. Ben Yaacov, A. Usvyatsov, and M. Zadka, "Generic automorphism of a Hilbert space," preprint. Available at http://ptmat.fc.ul.pt/~alexus/papers.html
[3] A. Berenstein, "Hilbert spaces equipped with generic groups of automorphisms," Arch. Math. Logic 46 (2007), no. 3, pp. 289-299.
[4] A. Berenstein and A. Villaveces, "Hilbert spaces with generic predicates," preprint. Available at http://pentagono.uniandes.edu.co/~aberenst/publications.html
[5] S. R. Buss (ed.), Handbook of Proof Theory, Studies in Logic and the Foundations of Mathematics 137, North-Holland Publishing Co., Amsterdam, 1998, x+811 pp.
[6] S. R. Buss, "On Herbrand's Theorem," in Logic and Computational Complexity, Lecture Notes in Computer Science 960, Springer-Verlag, 1995, pp. 195-209.
[7] C. C. Chang and H. J. Keisler, Model Theory, third edition, Studies in Logic and the Foundations of Mathematics 73, North-Holland Publishing Company, Amsterdam, 1990.
[8] G. Cherlin and U. Felgner, "Homogeneous finite groups," J. London Math. Soc. (2) 62 (2000), no. 3, pp. 784-794.
[9] L. van den Dries, Lectures on motivic integration, available at http://www.math.uiuc.edu/~vddries/
[10] I. Goldbring, "Definable operators on Hilbert spaces," submitted. arXiv:1010.2243
[11] I. Goldbring, "Definable functions in Urysohn's metric space," submitted. arXiv:1001.4999
[12] J. Herbrand, Recherches sur la théorie de la démonstration, Ph.D. thesis, University of Paris, 1930.
[13] I. M. Hodkinson and H. D. Macpherson, "Relational structures determined by their finite substructures," Journal of Symbolic Logic 53 (1988), pp. 222-230.
[14] A. Lachlan, "Complete theories with only universal and existential axioms," Journal of Symbolic Logic 52 (1987), pp. 698-711.
[15] A. Lachlan, "Complete coinductive theories I," Trans. Amer. Math. Soc. 319 (1990), no. 1, pp. 209-241.
[16] A. Lachlan and R. Woodrow, "Countable ultrahomogeneous undirected graphs," Trans. Amer. Math. Soc. 262 (1980), pp. 51-94.
[17] H. D. Macpherson, "Graphs determined by their finite induced subgraphs," Journal of Combinatorial Theory, Series B 41 (1986), pp. 230-234.
[18] H. D. Macpherson, "Absolutely ubiquitous structures and ℵ 0 -categorical groups," Quart. J. Math. Oxford Ser. (2) 39 (1988), no. 156, pp. 483-500.
[19] A. Usvyatsov, "Generic separable metric structures," Topology Appl. 155 (2008), no. 14, pp. 1607-1617.
[20] E. Vassiliev, "Countably categorical structures with n-degenerate algebraic closure," Math. Log. Quart. 45 (1999), no. 1, pp. 85-94.

Portola Plaza, Box 951555, Los Angeles, CA 90095-1555, USA. E-mail address: [email protected]. URL: www.math.ucla.edu/~isaac
[]
[]
[ "David A. Deen, Member, IEEE", "Eric J. Olson", "Mona A. Ebrish, Student Member, IEEE", "Steven J. Koester, Senior Member, IEEE" ]
[]
[]
A wireless vapor sensor based upon the quantum capacitance effect in graphene is demonstrated. The sensor consists of a metal-oxide-graphene variable capacitor (varactor) coupled to an inductor, creating a resonant oscillator circuit. The resonant frequency is found to shift in proportion to water vapor concentration for relative humidity (RH) values ranging from 1% to 97%, with a linear frequency shift of 5.7 ± 0.3 kHz/RH%. The capacitance values extracted from the wireless measurements agree with those determined from capacitance-voltage measurements, providing strong evidence that the sensing arises from the variable quantum capacitance in graphene. These results represent a new sensor transduction mechanism and pave the way for graphene quantum capacitance sensors to be studied for a wide range of chemical and biological sensing applications.
10.1109/jsen.2013.2295302
[ "https://export.arxiv.org/pdf/1306.6940v1.pdf" ]
25,372,775
1306.6940
4deb8719e048f478766499617cc232dc01b174c2
David A. Deen, Member, IEEE, Eric J. Olson, Mona A. Ebrish, Student Member, IEEE, and Steven J. Koester, Senior Member, IEEE

Index Terms — graphene, sensor, wireless, quantum capacitance, varactor

I. INTRODUCTION

The quantum capacitance effect is a direct, observable manifestation of the Pauli exclusion principle. While this effect is particularly prominent in the two-dimensional material graphene [1]-[13] due to its low density of states, few if any practical uses for this effect have been demonstrated to date. It has recently been proposed that the quantum capacitance effect could be utilized to realize wireless sensors due to graphene's energy-dependent density of states and excellent surface sensitivity [14]. Such a device could have significant advantages over alternative techniques, such as resistance-based sensing [15]-[21] and wireless sensing based upon microelectromechanical systems [22], [23].
Here we demonstrate graphene-based wireless vapor sensors that utilize the variable capacitance that arises due to the energy-dependent density of states as the sensor transduction mechanism. Graphene variable capacitors (varactors) are coupled to an inductor coil whereby the resonant frequency of the resulting LRC circuit shifts in response to the H 2 O vapor concentration, as determined using a secondary readout inductor [24]. We show strong evidence that the frequency shift arises from changes in the quantum capacitance in graphene, and that the resonant frequency shift shows a monotonic dependence on vapor concentration over a wide relative humidity range of 1% to 97%. Moreover, the response is shown to be reversible and stable upon repeated concentration cycling. The response time of the sensors was characterized and found to be comparable to the temporal resolution of the measurement setup. The advantages of graphene quantum capacitance wireless sensors compared to alternative passive sensing approaches include excellent noise immunity, greatly improved size scalability, fast response, and potential for sensing a wide range of species depending upon the surface functionalization utilized. Our results suggest that graphene quantum capacitance wireless sensors can enable a powerful platform for detection of a wide range of chemical and biological targets [21], [25]-[29]. The basic transduction mechanism for the sensors utilized in this work is shown conceptually in Fig. 1. A change in the concentration, M, of adsorbed molecules on the graphene surface can change the carrier concentration in the graphene, n. Due to the low density of states in graphene, this leads to a measurable shift in the Fermi energy, E F , as well as the quantum capacitance, C Q .
If the graphene is used as the electrode in a metal-graphene-oxide capacitor and this capacitor is integrated with an inductor, changes in the quantum capacitance lead to a resonant frequency shift, Δf, of the resulting LRC resonator circuit. In order for this transduction mechanism to be utilized for gas sensing, the graphene must be exposed to the external environment, suggesting an inverted capacitor geometry with the graphene on top of the metal gate electrode. In addition, the capacitor dielectric must be sufficiently thin so that the quantum capacitance can significantly affect the overall capacitance of the system. Finally, the resonator must have high quality factor, Q, suggesting a multi-finger geometry in order to reduce the series resistance. We note that the transduction mechanism illustrated in Fig. 1 is fundamentally different than that of the graphene-based wireless sensor demonstrated in reference [21], where the resistance change of the graphene, functionalized to be sensitive to bacteria, was used to change the Q of an LRC circuit rather than the resonant frequency.

II. METHODS

A. Device Fabrication

The graphene varactors were fabricated by first preparing a substrate by depositing Si 3 N 4 followed by SiO 2 by plasma-enhanced CVD on a quartz substrate. The insulating quartz substrate minimizes parasitic capacitances associated with contact pads during high-frequency measurements. Device processing relied upon conventional photolithography techniques and was initiated by a reactive ion recess etch of the SiO 2 layer and subsequent electron-beam deposition of the local back-gate metal (Ti/Pd). An 8-nm-thick HfO 2 layer was deposited by ALD for gate insulation, and vias were patterned and dry etched through the HfO 2 layer to allow access to the gate pad. CVD-grown graphene was then transferred onto the patterned wafer.
The single-layer graphene was grown on a copper foil and spin-coated with PMMA. After baking, the graphene on the uncoated side of the foil was removed using an O 2 plasma etch. Next, the Cu was removed by using a FeCl 3 -based etch and rinsed multiple times in HCl and deionized water. Finally, the graphene layer, attached to the PMMA, was transferred onto the substrate using an aqueous transfer process and the PMMA removed using a solvent etch. The graphene was then patterned using an O 2 plasma to define the desired active device geometries. Ohmic contacts were formed by electron-beam evaporation of a Ti/Pd/Au (1 nm / 25 nm / 35 nm) metal stack. Finally, thick Ti/Al (10 nm / 300 nm) pad metallization was deposited to allow bond wires to be attached to the devices. Following device fabrication, the presence of single-layer graphene was verified using Raman spectroscopy. The final chip had numerous devices. All varactors had a gate length of 5 µm and were arranged in multi-finger geometries, with finger lengths of either 40 µm or 100 µm. The multi-finger design allows large capacitances to be obtained while maintaining low series resistance. This graphene-on-top geometry has the additional advantage that it allows the dielectric to be made extremely thin, a requirement in order to observe strong quantum capacitance tuning, since no nucleation layers are needed, as would be the case for HfO 2 deposition on graphene [31]. A diagram of the device design as well as an optical micrograph of a single graphene varactor are shown in Fig. 2. Fig. 2 also shows a diagram of the flow cell geometry used for the humidity measurements as well as the circuit diagram for the coupled-inductor oscillator used for the wireless transduction. The relative humidity in the cell was controlled by mixing known flow rates of water-saturated and dry air (100% and ~0% relative humidity, respectively).
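Assuming ideal mixing of the two streams at the same temperature and pressure, the resulting relative humidity follows from a simple flow-weighted average. The function names and flow units below are illustrative, not from the paper:

```python
def mixed_rh(q_wet, q_dry, rh_wet=100.0, rh_dry=0.0):
    """RH of an ideally mixed stream: flow-weighted average of the two lines."""
    total = q_wet + q_dry
    if total <= 0:
        raise ValueError("total flow must be positive")
    return (q_wet * rh_wet + q_dry * rh_dry) / total

def wet_fraction_for_target(rh_target, rh_wet=100.0, rh_dry=0.0):
    """Fraction of total flow to draw from the saturated line for a target RH."""
    return (rh_target - rh_dry) / (rh_wet - rh_dry)

print(mixed_rh(1.0, 1.0))            # -> 50.0 (equal wet/dry flows)
print(wet_fraction_for_target(75.0)) # -> 0.75
```

In practice the rotameter settings would be tuned against the external humidity calibrant described below, since real mixing is not perfectly ideal.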
Water-saturated air was produced by passing compressed air through a diffusing stone immersed in warm deionized water, while dry air was produced by passing through a chamber packed with anhydrous calcium sulfate as a drying agent. To prevent condensed droplets of water from entering the sample chamber, a condensation trap was included in the water-saturated line immediately before mixing the wet and dry streams. Desired relative humidity values were achieved by carefully controlling the ratio of wet and dry air using valves and monitoring flow rate with rotameters inserted in each line. As an external calibrant, the relative humidity within the sample chamber was also monitored using an Electro-Tech Systems Model 514 humidity controller. In all of the measurements, at no time did condensation appear on the chip, which, on separate samples, was observed to abruptly change the resonant frequency.

B. Humidity Sensing

After a 24 hour thermal bake at 380 K in vacuum to desorb water from the graphene surface, the sensor was immediately installed into the vapor chamber and the sensor inductor was aligned with a secondary read inductor (the read inductor did not include a Fe core) positioned on the exterior of the sample chamber. The read inductor was directly coupled to an Agilent 4291B impedance analyzer to measure the impedance and phase of the coupled inductor system. To improve the accuracy of the quantitative fits, the inductances and self-capacitances of the sense and read inductors were independently determined using an Agilent 4291B impedance analyzer. The measured inductance values of the read and sense inductors were found to be L 1 = 1.16 µH and L 2 = 645 nH, respectively, with self-capacitances of C S1 = 2.16 pF and C S2 = 2.30 pF, respectively. These values were used when performing all quantitative fits for the wireless measurements.

III. RESULTS AND DISCUSSION

A.
Graphene Varactor Performance

Before testing, the chip was mounted on a printed circuit board and five varactors wire-bonded in parallel in order to increase the total capacitance. Prior to measurement, the mounted chip was baked at 380 K in vacuum to remove adsorbed water. Capacitance-voltage (C-V) measurements were taken on the parallel wire-bonded varactors prior to removing from vacuum. The resulting 1 MHz C-V curve is shown in Fig. 3. The characteristic quantum capacitance minimum is clearly observed just above the zero bias point. The capacitance tuning range (C max /C min ) was found to be ~1.20. Fitting of the C-V curve to a theoretical model [13] allowed for determination of the following device characteristics. The extracted equivalent oxide thickness (EOT) for the 8-nm-thick HfO 2 gate oxide was 2.52 nm (corresponding to a relative permittivity of 12.3). The fit also revealed a residual temperature, T 0 , of 1500 K, where T 0 is related to the magnitude of the random potential disorder in the graphene. Furthermore, the area of the varactors was used as a fitting factor to account for tearing and delamination of the graphene in the active device area. The extracted value was A = 7975 µm². Additional detail of the quantum capacitance fitting procedure is described in the appendix. It is important to note that the C-V curve exhibits a steep slope near zero applied gate voltage. This condition is required to achieve high sensitivity during sensor operation. The quality factor, Q var , of the parallel varactors was also measured as a function of frequency, f, and these results are shown in the inset of Fig. 3. For the stand-alone varactors, Q var is defined as 1/(2πfR s C G ), where R s is the series resistance and C G is the varactor capacitance. The relatively low frequency at which Q var rolls off indicates that R s is higher than would be expected given the graphene mobilities and contact resistances typically measured using our fabrication process.
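The quality-factor definition and the sensor tank resonance can be sketched numerically. L 2 and C S2 below are the values reported in the text; the series resistance and total varactor capacitance are assumed, illustrative values (the paper does not quote them):

```python
import math

L2 = 645e-9    # sensor inductance (H), from the text
CS2 = 2.30e-12 # sensor-coil self-capacitance (F), from the text
RS = 50.0      # assumed varactor series resistance (ohm) -- illustrative
CG = 30e-12    # assumed total capacitance of five parallel varactors (F) -- illustrative

def varactor_q(f_hz, r_s=RS, c_g=CG):
    """Stand-alone varactor quality factor, Q = 1/(2*pi*f*Rs*Cg)."""
    return 1.0 / (2.0 * math.pi * f_hz * r_s * c_g)

def f_res(c_g=CG):
    """Resonance of the lossless sensor tank, f = 1/(2*pi*sqrt(L*(Cg+Cs)))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L2 * (c_g + CS2)))

f0 = f_res()
print(f"f0 = {f0/1e6:.1f} MHz, Q at f0 = {varactor_q(f0):.1f}")

# Small-signal sensitivity: a quantum-capacitance change dC shifts the
# resonance by df ~ -f0 * dC / (2 * (CG + CS2)), i.e. a kHz-scale shift
# per fraction of a pF, consistent with the shifts reported below.
dC = 0.1e-12  # +0.1 pF change in capacitance
df = -f0 * dC / (2.0 * (CG + CS2))
print(f"df for +0.1 pF: {df/1e3:.0f} kHz")
```

Note the 1/f roll-off of Q: the higher the measured series resistance, the lower the frequency at which the varactor Q collapses, which is the behavior described above for the inset of Fig. 3.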
This excess series resistance is believed to be associated with graphene tearing at the edges of the gate electrode and is expected to be minimized using a more sophisticated planarization process, such as chemical-mechanical polishing. Nevertheless, the observed Q var value was sufficient to perform the wireless sensing measurements described in the next section.

B. Wireless Humidity Sensing

To make a basic demonstration of the quantum capacitance-based sensing, the graphene varactor was tested as a humidity sensor. While many more technologically interesting analytes exist, water vapor sensing represents the simplest method to demonstrate the quantum capacitance-based transduction mechanism, which is the focus of this paper. While pristine graphene has been shown to be intrinsically insensitive to changes in relative humidity, the presence of polymeric residues resulting from the transfer and subsequent lithography of graphene has been shown to impart sensitivity to the graphene [15]. Moreover, the presence of defect sites and crystalline boundaries in CVD-grown graphene leads to oxygen-containing moieties on the graphene [32]. Such functionalities have previously been suggested as active sites which lead to the sensitivity of CVD graphene-based devices [33]. In the intended mode of operation, adsorbed water on the graphene surface increases the hole concentration in the already slightly p-type graphene [34]. The increasing hole concentration shifts the Fermi level further from the Dirac energy, increasing the capacitance and thus decreasing the resonant frequency of the LRC circuit. As an initial test of the sensors, the impedance phase vs. f for the external inductor was measured first in the dry condition, then in the humid condition, and again in dry air. Here, the "dry" state corresponds to ~1% RH, with the "humid" state occurring at RH ~97%.
In this initial test, the chamber RH was allowed to fully equilibrate under dry conditions before the measurements were taken, and the impedance phase vs. f was recorded at several time increments while changing RH. The phase vs. f curves taken at dry and humid conditions are shown in Fig. 4a. The minimum phase dip, which corresponds to the resonant frequency of the LRC sensor circuit, is clearly seen to shift to lower values under humid conditions and then returns to its original value in dry air. Fig. 4b also shows the measured impedance magnitude for the dry and humid conditions. To demonstrate the time response of the quantum capacitance sensor, a plot of the resonant frequency as a function of time is shown in Fig. 4c, while the RH vs. time plot measured using a commercial humidity sensor is shown in Fig. 4d. In Fig. 4c, two profiles are plotted which correspond to successive measurements of the graphene sensor on different days. The first profile was taken immediately after baking out in vacuum, while the second profile was performed after cycling the sensor between dry and humid conditions numerous times. In the first plot, it can be seen that the resonant frequency does not return to its original value after humidity cycling, but that the second curve does. The second profile showed a net downward shift in resonant frequency of approximately 400 kHz with respect to the initial humidity ramp. The time response of the resonant frequency follows an approximately exponential curve, and has a time response that is nearly equal to that of the commercial humidity sensor. It is speculated that the improved response observed in the second profile is a result of "seasoning" of the graphene, in which the first profile contains some amount of transients related to the freshly dehydrated surface that are later equilibrated after exposure to a humid environment. Specifically, the surface of the hafnium oxide gate dielectric is expected to become dehydrated during a vacuum bake-out.
Upon exposure to humid atmosphere, this surface is expected to again become hydrated [35]. Our results indicate that equilibration of the sensor is largely complete after 24 hours of exposure to atmosphere. Additionally, the sensor shows a steady response after 30 minutes in the second humidity cycle, indicating that the transients involved in the first cycle have been largely eliminated. It is also noted that no measurable difference in the response time of the graphene quantum capacitance sensor and the commercial humidity sensor was observed.

C. Effect of Concentration Cycling

To characterize both the concentration response and reproducibility of the sensor, three concentration-dependent resonant frequency profiles were measured, as summarized in Fig. 5a. The first profile followed a decreasing sequence from high to low concentration (Fig. 5b). Between each concentration, the humidity was brought to a minimum (~2% relative humidity) to track hysteretic behavior. The second profile followed an increasing sequence from low to high concentration (Fig. 5b). Finally, the third profile was taken such that the humidity concentration target was randomized, and the concentration sequence for this measurement is shown in Fig. 5c. It is notable that the resonant frequency shift as a function of concentration is roughly linear regardless of sweep direction, though a slight difference in the slopes corresponding to increasing and decreasing RH is apparent in Fig. 5a.

FIG. 5 (continued). For all measurements in (b) and (c), the RH was cycled back to the dry condition between each concentration value. The measured RH values using a commercial humidity sensor are shown by the gray bars, while the resonant frequency shifts of the graphene sensor are depicted using the symbols.

Furthermore, we note that the slope of the frequency shift vs.
concentration plot obtained from the randomized RH sequence is approximately the average of the slopes corresponding to the increasing and decreasing humidity sweeps. This indicates that a small but non-negligible hysteretic mechanism could still be at work that causes the frequency shift to be dependent on the direction of the concentration ramp. It is interesting to note that although the results in Fig. 5a show a linear dependence of the frequency shift on humidity, such a functional dependence is not necessarily expected, as noted originally in reference [14]. Rather, the precise functional dependence is expected to depend upon numerous factors, including the interaction of the adsorbed molecules on the graphene surface, the precise shape of the C-V profile and the initial "doping" in the graphene. In order to determine the precise operating conditions of our devices, we modeled the response of the sensors using the circuit impedance method described in reference [24] with a quantum capacitance model including random potential variations adapted from reference [13]. Our circuit model includes the effect of inductor self-resonance due to inter-winding capacitance. Fig. 6 shows the results of fitting the measured impedance phase data to the circuit model described in the appendix. Fig. 6a shows the measured phase dip under dry and humid conditions along with the modeled phase dip data. The only free fitting parameters were the sensor capacitance and resistance, the read inductor series resistance and the inductor coupling coefficients, while the coil inductance and self-capacitance values had been measured independently as described earlier. In total, eight relative humidity points were chosen for parameter extraction from the model, which allowed the estimation of the change in resistance and capacitance of the sensor circuit as a function of RH. The extracted resistance and capacitance values vs. RH are shown in Fig. 6b and Fig. 6c, respectively.
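The circuit-impedance approach described above can be sketched with the standard reflected-impedance relation for two inductively coupled loops, Z_in = R_i + jωL1 + (ωm)²/Z2. This is a minimal sketch only: all component values below are hypothetical, and the inter-winding (self-) capacitance that the full model includes is neglected for brevity.

```python
import cmath
import math

# Hypothetical component values (not the fitted device parameters).
L1, L2 = 1.0e-6, 1.0e-6   # read and sensor coil inductances, H
K = 0.16                  # coupling coefficient
R_I = 0.1                 # read-coil resistance, ohms
R_S = 20.0                # varactor series resistance, ohms
C_G = 10.0e-12            # varactor capacitance, F
M = K * math.sqrt(L1 * L2)  # mutual inductance

def z_in(f):
    """Read-coil input impedance: Z_in = R_i + j*w*L1 + (w*M)^2 / Z2."""
    w = 2.0 * math.pi * f
    z2 = R_S + 1j * w * L2 + 1.0 / (1j * w * C_G)  # sensor loop impedance
    return R_I + 1j * w * L1 + (w * M) ** 2 / z2   # reflected impedance

# The phase of z_in dips near the sensor resonance f0 = 1/(2*pi*sqrt(L2*C_G)).
freqs = [f * 1e5 for f in range(300, 800)]
phases = [cmath.phase(z_in(f)) for f in freqs]
f_dip = freqs[phases.index(min(phases))]
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L2 * C_G))
print(f_dip, f0)
```

When C_G increases (e.g., under humid conditions), f0 and hence the phase-dip frequency move downward, which is the readout mechanism exploited throughout this work.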
The capacitance is observed to decrease by roughly 10% over the range of vapor concentrations tested, while the resistance changed by < 1%. It is important to point out that if resistance changes were the primary transduction mechanism, these changes would mostly manifest as a change in the full-width half-maximum (FWHM) of the phase-dip signal, since the varactor resistance serves as the primary damping factor of the resonant circuit. Instead, a frequency shift is observed, which is indicative of capacitance modulation. Therefore, these results provide firm evidence that the fundamental sensing mechanism involved in these sensors is in fact due to the quantum capacitance modulation of the graphene varactor. However, it should be noted that the resistance extracted from the phase-dip measurements shows little change with increasing RH values, an unexpected result given previous studies on resistive graphene moisture sensors [15]. This discrepancy can be partially explained by the high series resistance in our devices, which would be expected to reduce the percent resistance change resulting from a shift in the carrier concentration relative to reference [15]. However, further study of the coincident resistance and capacitance changes in these sensors is still needed.

D. Equivalent Circuit Modeling

As a final demonstration of the quantum capacitance transduction mechanism, the known C-V characteristics shown in Fig. 3 were used to extract the quantum capacitance vs. RH, and these results are shown in Fig. 7. Using these values, it is observed that the humidity shifts the quantum capacitance between values of 3.5 μF/cm² and 4.9 μF/cm².
This information could be extremely useful in understanding the fundamental properties of surface adsorption onto graphene since, unlike resistance-based sensors, the quantum capacitance sensor provides a method to directly link the adsorbed molecular concentration to carrier concentration changes.

FIG. 6 (continued). The fitting parameters were the resistance and capacitance of the graphene varactor, the read inductor resistance and coupling coefficient between the two inductors. All other parameters were measured independently. Extracted (b) resistance and (c) capacitance of the graphene varactors vs. RH using the fitting procedure shown in (a).

IV. CONCLUSIONS

In conclusion, graphene vapor sensors that utilize the quantum capacitance effect as their principle of operation have been demonstrated. The sensors transduce a change in adsorbed water vapor concentration on the graphene surface into a shift in the resonant frequency of a resonant oscillator circuit. The sensors show a fast response to abrupt changes in humidity and further show a monotonic frequency shift with relative humidity that is reversible and stable, particularly after conditioning using repetitive humidity cycling. Our results suggest that graphene quantum capacitance wireless sensors can be utilized to realize passive sensors for detection of a wide range of chemical and biological analytes, provided that appropriate surface functionalization approaches can be developed.

V. APPENDIX

A. Quantum Capacitance Model

The varactor C-V characteristics in Fig. 3 were fit to a theoretical model assuming series-connected oxide and quantum capacitances. For the quantum capacitance model, a fitting procedure has been established that takes into account the random potential fluctuations that can be particularly prominent in CVD-grown graphene.
Using this model, the total varactor capacitance can be expressed as

C_G = A (1/c_ox + 1/c_Q)^(-1)    (1)

where A is the active area of the graphene, and c_ox and c_Q are the oxide capacitance and quantum capacitance per unit area, respectively. The oxide and quantum capacitance values can then be expressed as follows:

c_ox = 3.9 ε_0 / EOT    (2)

and

c_Q = [2 q^2 k T_eff / (π (ħ v_F)^2)] ln{2 [1 + cosh(E_F / (k T_eff))]}    (3)

Here, ε_0 is the permittivity of free space, EOT is the equivalent oxide thickness of the dielectric between the metal gate electrode and the graphene, q is the electronic charge, k is Boltzmann's constant, ħ is the reduced Planck's constant, v_F = 1.1 × 10^6 m/s is the Fermi velocity, and E_F is the Fermi energy relative to the Dirac point energy. T_eff is the effective temperature, and is determined using

T_eff = (T^2 + T_0^2)^(1/2)    (4)

where T is the sample temperature and T_0 is a fitting parameter intended to approximate the Dirac point "smearing" associated with random potential fluctuations [36].

B. Impedance Model for Wireless Measurements

The basic principle of the phase-dip measurement is as follows. The frequency-dependent input impedance for the coupled readout and sensor circuit shown in Fig. 2 follows from the transformer equations for the inductively coupled circuit. In these equations, Z_1 is the impedance of the read branch of the circuit, and Z_2 is the portion of the impedance of the sensor branch excluding the varactor elements. In addition, ω is the angular frequency, L_x is the inductance of coil x, C_x is the inter-winding capacitance of coil x, R_i is the resistance of the read coil, m = k(L_1 L_2)^(1/2) is the mutual inductance between the coils, and k is the coupling coefficient. The varactor series resistance and capacitance are denoted by R_s and C_G, respectively. When the sensor-side LRC circuit is at its resonant frequency, a plot of the phase of Z_1 vs.
frequency has a minimum. Sensing occurs when the varactor capacitance varies in response to an external stimulus, which changes the resonant frequency, and therefore the value of the phase-dip frequency. The fitting results are shown in Fig. 6, and for all fits, R_i and k were used as free fitting parameters, where values of R_i = 0.093 Ω and k = 0.16 were determined in all cases.

Manuscript received ________, 2013. This work was supported by the Minnesota Partnership for Biotechnology and Medical Genomics Decade of Discovery Initiative. This work also utilized the University of Minnesota Nanofabrication and Characterization Facilities, which receive partial support from the National Science Foundation. D. A. Deen, E. J. Olson, M. A. Ebrish, and S. J. Koester are with the Department of Electrical and Computer Engineering, University of Minnesota, 200 Union St. SE, Minneapolis, MN 55455 (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). D. A. Deen is now with Seagate in Bloomington, MN (e-mail: [email protected]).

FIG. 2. Circuit diagram for the sensing system utilized in this work (top left) along with an optical micrograph of a typical varactor utilized for these experiments (right) and a cross-sectional schematic of the varactor structure (bottom left). Areas that include graphene have been highlighted with transparent white boxes in the micrograph. For the actual sensing experiments, five varactors similar to the one shown above were wire-bonded in parallel.

FIG. 3. Measured and modeled capacitance vs. voltage characteristics of the graphene varactor utilized for sensing experiments. The device consisted of 5 multi-finger graphene varactors wire-bonded in parallel, with aggregate area estimated to be 7975 μm². The measurement frequency is 1 MHz. Inset: log-log plot of quality factor vs. frequency for the graphene varactors.
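Equations (1)-(4) of the appendix can be evaluated directly. The sketch below uses the EOT (2.52 nm), area (7975 μm²), and T_0 (1500 K) values quoted in the main text; all other quantities are physical constants, and the 0.3 eV Fermi-level excursion is an arbitrary illustrative choice rather than a measured operating point.

```python
import math

# Physical constants (SI units)
Q_E   = 1.602176634e-19      # electron charge, C
K_B   = 1.380649e-23         # Boltzmann constant, J/K
HBAR  = 1.054571817e-34      # reduced Planck constant, J*s
EPS_0 = 8.8541878128e-12     # vacuum permittivity, F/m
V_F   = 1.1e6                # graphene Fermi velocity, m/s

def c_ox(eot_m):
    """Eq. (2): oxide capacitance per unit area, F/m^2."""
    return 3.9 * EPS_0 / eot_m

def t_eff(t_k, t0_k):
    """Eq. (4): effective temperature including potential disorder."""
    return math.sqrt(t_k ** 2 + t0_k ** 2)

def c_q(e_f_j, t_eff_k):
    """Eq. (3): graphene quantum capacitance per unit area, F/m^2."""
    pref = 2.0 * Q_E ** 2 * K_B * t_eff_k / (math.pi * (HBAR * V_F) ** 2)
    return pref * math.log(2.0 * (1.0 + math.cosh(e_f_j / (K_B * t_eff_k))))

def c_total(e_f_j, eot_m, area_m2, t_k=300.0, t0_k=1500.0):
    """Eq. (1): series combination of oxide and quantum capacitance."""
    return area_m2 / (1.0 / c_ox(eot_m) + 1.0 / c_q(e_f_j, t_eff(t_k, t0_k)))

# Values quoted in the main text: EOT = 2.52 nm, A = 7975 um^2, T0 = 1500 K.
eot, area = 2.52e-9, 7975e-12
c_min = c_total(0.0, eot, area)        # near the Dirac point
c_far = c_total(0.3 * Q_E, eot, area)  # E_F = 0.3 eV from the Dirac point
print(c_min, c_far, c_far / c_min)     # tuning ratio > 1
```

With these inputs, c_Q at the Dirac point comes out near 3.6 μF/cm², consistent with the low end of the extracted quantum capacitance range quoted in Section III-D.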
FIG. 4. (a) Plot of external inductor impedance phase versus frequency for successive measurements in dry (1% RH), humid (97% RH) and dry air. (b) Plot of external inductor impedance magnitude for the first two dry and humid conditions in (a). (c) Resonant frequency shift vs. time for two successive measurements where the RH was switched from the dry to humid states. The first profile was taken immediately after baking out in vacuum, while the second profile was performed after cycling the sensor between dry and humid conditions numerous times. (d) RH vs. time plot measured using a commercial humidity sensor.

FIG. 5. (a) Dependence of resonant frequency shift vs. RH measured using three different concentration sequences: increasing, decreasing and random. The dashed line shows a linear fit including all three measurement sequences. (b) Measurement sequence for decreasing and increasing concentration-dependent measurements. (c) Measurement sequence for random concentration-dependent measurements.

FIG. 7. Plot of quantum capacitance vs. RH extracted from the total capacitance vs. RH shown in Fig. 6 and the theoretical fit of the C-V curve plotted in Fig. 3.

FIG. 6. (a) Measured phase dip under dry and humid conditions along with the results of modeling using the equivalent circuit shown in the inset.

Dr. Koester's current research involves investigations into the device applications of graphene, including novel sensors, spintronics, and optoelectronic devices. He has authored or coauthored more than 160 technical publications and conference presentations, and is the holder of 46 U.S. patents. He was the general chair of the 2009 Device Research Conference and is currently an associate editor of IEEE Electron Device Letters.

REFERENCES
[1] T. Fang, A. Konar, H. L. Xing et al., "Carrier statistics and quantum capacitance of graphene sheets and ribbons," Appl. Phys. Lett., vol. 91, no. 9, p. 092109, Aug 2007.
[2] S. Droscher, P. Roulleau, F. Molitor et al., "Quantum capacitance and density of states of graphene," Appl. Phys. Lett., vol. 96, no. 15, p. 152104, Apr 2010.
[3] J. L. Xia, F. Chen, J. L. Tedesco et al., "The transport and quantum capacitance properties of epitaxial graphene," Appl. Phys. Lett., vol. 96, no. 16, p. 162101, Apr 2010.
[4] H. L. Xu, Z. Y. Zhang, and L. M. Peng, "Measurements and microscopic model of quantum capacitance in graphene," Appl. Phys. Lett., vol. 98, no. 13, p. 133122, Mar 2011.
[5] J. L. Xia, F. Chen, J. H. Li et al., "Measurement of the quantum capacitance of graphene," Nature Nanotech., vol. 4, no. 8, pp. 505-509, Aug 2009.
[6] H. L. Xu, Z. Y. Zhang, Z. X. Wang et al., "Quantum Capacitance Limited Vertical Scaling of Graphene Field-Effect Transistor," ACS Nano, vol. 5, no. 3, pp. 2340-2347, Mar 2011.
[7] F. Giannazzo, S. Sonde, V. Raineri et al., "Screening Length and Quantum Capacitance in Graphene by Scanning Probe Microscopy," Nano Lett., vol. 9, no. 1, pp. 23-29, Jan 2009.
[8] S. A. Thiele, J. A. Schaefer, and F. Schwierz, "Modeling of graphene metal-oxide-semiconductor field-effect transistors with gapless large-area graphene channels," J. Appl. Phys., vol. 107, no. 9, p. 094505, May 2010.
[9] L. A. Ponomarenko, R. Yang, R. V. Gorbachev et al., "Density of States and Zero Landau Level Probed through Capacitance of Graphene," Phys. Rev. Lett., vol. 105, no. 13, p. 136801, Sep 2010.
[10] A. Hazeghi, J. A. Sulpizio, G. Diankov et al., "An integrated capacitance bridge for high-resolution, wide temperature range quantum capacitance measurements," Rev. Sci. Instrum., vol. 82, no. 5, p. 053904, May 2011.
[11] E. Pallecchi, A. C. Betz, J. Chaste et al., "Transport scattering time probed through rf admittance of a graphene capacitor," Phys. Rev. B, vol. 83, no. 12, p. 125408, Mar 2011.
[12] Z. Chen and J. Appenzeller, "Mobility extraction and quantum capacitance impact in high performance graphene field-effect transistor devices," in IEEE IEDM Tech. Digest, San Francisco, CA, 2008, pp. 509-512.
[13] M. A. Ebrish, H. Shao, and S. J. Koester, "Operation of multi-finger graphene quantum capacitance varactors using planarized local bottom gate electrodes," Appl. Phys. Lett., vol. 100, no. 14, p. 143102, Apr 2012.
[14] S. J. Koester, "High quality factor graphene varactors for wireless sensing applications," Appl. Phys. Lett., vol. 99, no. 16, p. 163105, Oct 2011.
[15] Y. P. Dan, Y. Lu, N. J. Kybert et al., "Intrinsic Response of Graphene Vapor Sensors," Nano Lett., vol. 9, no. 4, pp. 1472-1475, Apr 2009.
[16] S. Rumyantsev, G. X. Liu, M. S. Shur et al., "Selective Gas Sensing with a Single Pristine Graphene Transistor," Nano Lett., vol. 12, no. 5, pp. 2294-2298, May 2012.
[17] C. W. Chen, S. C. Hung, M. D. Yang et al., "Oxygen sensors made by monolayer graphene under room temperature," Appl. Phys. Lett., vol. 99, no. 24, p. 243502, Dec 2011.
[18] H. J. Yoon, D. H. Jun, J. H. Yang et al., "Carbon dioxide gas sensor using a graphene sheet," Sensors Actuators B: Chem., vol. 157, no. 1, pp. 310-313, Sep 2011.
[19] R. K. Joshi, H. Gomez, F. Alvi et al., "Graphene Films and Ribbons for Sensing of O2, and 100 ppm of CO and NO2 in Practical Conditions," J. Phys. Chem. C, vol. 114, no. 14, pp. 6610-6613, Apr 2010.
[20] Y. Y. Shao, J. Wang, H. Wu et al., "Graphene Based Electrochemical Sensors and Biosensors: A Review," Electroanalysis, vol. 22, no. 10, pp. 1027-1036, May 2010.
[21] M. S. Mannoor, H. Tao, J. D. Clayton et al., "Graphene-based wireless bacteria detection on tooth enamel," Nat. Comm., vol. 3, p. 763, Mar 2012.
[22] C. Son and B. Ziaie, "A wireless implantable passive microdosimeter for radiation oncology," IEEE Trans. Biomed. Eng., vol. 55, no. 6, pp. 1772-1775, Jun 2008.
[23] P. J. Chen, S. Saati, R. Varma et al., "Wireless Intraocular Pressure Sensing Using Microfabricated Minimally Invasive Flexible-Coiled LC Sensor Implant," IEEE J. Micr. Sys., vol. 19, no. 4, pp. 721-734, Aug 2010.
[24] R. Nopper, R. Niekrawietz, and L. Reindl, "Wireless Readout of Passive LC Sensors," IEEE Tran. Inst. Meas., vol. 59, no. 9, pp. 2450-2457, Sep 2010.
[25] Y. X. Huang, X. C. Dong, Y. M. Shi et al., "Nanoelectronic biosensors based on CVD grown graphene," Nanoscale, vol. 2, no. 8, pp. 1485-1488, 2010.
[26] Y. H. Kwak, D. S. Choi, Y. N. Kim et al., "Flexible glucose sensor using CVD-grown graphene-based field effect transistor," Biosens. Bioelectron., vol. 37, no. 1, pp. 82-87, Aug-Sep 2012.
[27] P. Labroo and Y. Cui, "Flexible graphene bio-nanosensor for lactate," Biosens. Bioelectron., vol. 41, pp. 852-856, Mar 2013.
[28] M. Pumera, A. Ambrosi, A. Bonanni et al., "Graphene for electrochemical sensing and biosensing," Trends Anal. Chem., vol. 29, no. 9, pp. 954-965, Oct 2010.
[29] R. Stine, S. P. Mulvaney, J. T. Robinson et al., "Fabrication, Optimization, and Use of Graphene Field Effect Sensors," Anal. Chem., vol. 85, no. 2, pp. 509-521, Jan 2013.
[30] X. S. Li, W. W. Cai, J. H. An et al., "Large-Area Synthesis of High-Quality and Uniform Graphene Films on Copper Foils," Science, vol. 324, no. 5932, pp. 1312-1314, Jun 2009.
[31] Y. Xuan, Y. Q. Wu, T. Shen et al., "Atomic-layer-deposited nanostructures for graphene-based nanoelectronics," Appl. Phys. Lett., vol. 92, no. 1, p. 013101, Jan 2008.
[32] K. A. Mkhoyan, A. W. Contryman, J. Silcox et al., "Atomic and Electronic Structure of Graphene-Oxide," Nano Lett., vol. 9, no. 3, pp. 1058-1063, Mar 2009.
[33] W. Y. Fu, C. Nef, O. Knopfmacher et al., "Graphene Transistors Are Insensitive to pH Changes in Solution," Nano Lett., vol. 11, no. 9, pp. 3597-3600, Sep 2011.
[34] P. L. Levesque, S. S. Sabri, C. M. Aguirre et al., "Probing Charge Transfer at Surfaces Using Graphene Transistors," Nano Lett., vol. 11, no. 1, pp. 132-137, Jan 2011.
[35] S. V. Ushakov and A. Navrotsky, "Direct measurements of water adsorption enthalpy on hafnia and zirconia," Appl. Phys. Lett., vol. 87, no. 16, p. 164103, Oct 2005.
[36] J. Martin, N. Akerman, G. Ulbricht, T. Lohmann, J. H. Smet, K. von Klitzing, and A. Yacoby, "Observation of electron-hole puddles in graphene using a scanning single-electron transistor," Nature Physics, vol. 4, pp. 144-148, 2008.
Measurement of magic-wavelength optical dipole trap by using the laser-induced fluorescence spectra of trapped single cesium atoms

Bei Liu, Gang Jin, Rui Sun, Jun He, and Junmin Wang

State Key Laboratory of Quantum Optics and Quantum Optics Devices, Shanxi University, Tai Yuan 030006, Shan Xi Province, People's Republic of China; Institute of Opto-Electronics, Shanxi University, Tai Yuan 030006, Shan Xi Province, People's Republic of China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Tai Yuan 030006, Shan Xi Province, People's Republic of China

Abstract: Based on the multi-level model, we have calculated light shifts for Zeeman states of hyperfine levels of the cesium (Cs) 6S1/2 ground state and 6P3/2 excited state. The magic-wavelength linearly-polarized optical dipole trap (ODT) for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition is experimentally constructed and characterized by using the laser-induced fluorescence spectra of trapped single Cs atoms. The magic wavelength is 937.7 nm, which produces almost the same light shift for the 6S1/2 F=4, mF=+4 ground state and the 6P3/2 F'=5, mF=+5 excited state with a linearly-polarized ODT laser beam. Compared to the undisturbed Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition frequency in free space, the differential light shift is less than 0.7 MHz in a linearly-polarized 937.7 nm ODT, which is less than 1.2% of the trap depth. We also discuss the influence of the trap depth and the bias magnetic field on the measurement results.

DOI: 10.1364/oe.25.015861
arXiv: 1706.06305 (https://arxiv.org/pdf/1706.06305v2.pdf)
Measurement of magic-wavelength optical dipole trap by using the laser-induced fluorescence spectra of trapped single cesium atoms Bei Liu State Key Laboratory of Quantum Optics and Quantum Optics Devices Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Institute of Opto-Electronics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Gang Jin State Key Laboratory of Quantum Optics and Quantum Optics Devices Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Institute of Opto-Electronics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Rui Sun State Key Laboratory of Quantum Optics and Quantum Optics Devices Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Institute of Opto-Electronics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Jun He State Key Laboratory of Quantum Optics and Quantum Optics Devices Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Institute of Opto-Electronics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Collaborative Innovation Center of Extreme Optics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Junmin Wang State Key Laboratory of Quantum Optics and Quantum Optics Devices Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Institute of Opto-Electronics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Collaborative Innovation Center of Extreme Optics Shanxi University Shan Xi Province030006Tai YuanPeople's Republic of China Measurement of magic-wavelength optical dipole trap by using the laser-induced fluorescence spectra of trapped single cesium atoms Light shiftSingle atomslaser-induced fluorescence spectraMagic wavelength Based on the multi-level model, we have calculated light shifts for Zeeman states of hyperfine levels of cesium (Cs) 6S1/2 ground state and 
6P3/2 excited state. The magic-wavelength linearly-polarized optical dipole trap (ODT) for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition is experimentally constructed and characterized by using the laser-induced fluorescence spectra of trapped single Cs atoms. The magic wavelength is 937.7 nm, which produces almost the same light shift for the 6S1/2 F=4, mF=+4 ground state and the 6P3/2 F'=5, mF=+5 excited state with a linearly-polarized ODT laser beam. Compared to the undisturbed Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition frequency in free space, the differential light shift is less than 0.7 MHz in a linearly-polarized 937.7 nm ODT, which is less than 1.2% of the trap depth. We also discuss the influence of the trap depth and the bias magnetic field on the measurement results. Introduction As a sort of non-classical light source, single-photon sources are important for quantum communication protocols, quantum cryptography protocols as well as linear optics quantum computing [1][2][3]. Furthermore, photons can be used to code quantum information. Single photon emission has been demonstrated for many different sources. Most single-photon sources are based on single-emitter quantum systems, such as single atoms [4][5][6][7], single ions [8], single quantum-dots [9][10], single molecules [11] and single N/V-centers in diamond [12]. A single atom trapped in an optical dipole trap (ODT) is, in principle, a candidate single-photon source. It emits a single photon during its spontaneous decay from an excited state to a ground state, and can't emit a second photon until it is re-excited. Some single photon applications require photons to be indistinguishable from one another. However, in an ODT, the trapping laser induces a light shift on the atom that shifts the transition frequency between the ground and the excited states.
This shift is position-dependent and time-dependent due to the thermal motion of the trapped atom, and it is directly connected to the mean kinetic energy of the atom. The thermal motion of the atom in the trap causes it to experience a varying differential light shift of the transition, which leads to broadening of the emission spectrum [13] and can thereby reduce the achievable two-photon interference contrast [14]. In order to eliminate the differential light shift, the ODT can be switched off during the excitation/emission processes [5]. However, with this method, the trapping lifetime is reduced, which leads to a lower repetition rate of single photons. The ac Stark shift of the trapping laser can also be eliminated in a blue-detuned dark ODT [15][16]. However, although it is possible to trap atoms this way, the micrometer-scale blue-detuned dark ODT generally requires a more complicated experimental setup and is not easy to implement. Another alternative method is constructing a magic-wavelength ODT, which can eliminate the differential light shift [17][18][19][20][21]. Using this method, the transition frequency and the emission of the photons are therefore the same as in free space. The specific wavelength of the trapping beam required to achieve zero differential light shift for the concerned transition is called the magic wavelength. Magic wavelengths in neutral atomic systems have been calculated and verified in several experiments [17][18][19][20][21]. In 2003, McKeever et al. [18] calculated the magic wavelength of the ODT laser for cesium (Cs) atoms, and constructed a magic-wavelength standing-wave ODT for trapping single atoms. In 2010, Phoonthong et al. [21] measured the differential light shift of the Cs (F=4) - (F'=3) transition as a function of the trapping wavelength. However, calculations of the magic wavelength indicate that the differential light shift is sensitive to the Zeeman state. Thus the trapped atoms can still experience a small light shift.
Specifically, considering the different mF states, we construct a magic-wavelength ODT for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition, which was also measured by using the laser-induced fluorescence (LIF) spectra of trapped single Cs atoms [22]. The fluorescence spectrum is obtained by exciting the single atoms with a σ+-polarized Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 probe pulse laser, and collecting the spontaneously emitted photons. In this paper, we report our experiment to eliminate the differential light shift of the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition by using the magic-wavelength ODT. The use of single atoms eliminates collision-induced spectral broadening and shifts. We measure the differential light shift of the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition as a function of the trapping laser wavelength and analyze the various mechanisms influencing the measurement result. Experimental Setup The schematic of the experimental setup is shown in Fig. 1(a). An external-cavity diode laser (ECDL) and tapered amplifier (TA) are used to construct a linearly polarized magic-wavelength ODT. The ECDL can be tuned between 930 and 940 nm. The ODT beam is strongly focused by a high numerical aperture (NA = 0.29) objective lens assembly mounted outside the ultra-high vacuum glass cell. The vacuum glass cell has external dimensions of 30×30×120 mm^3. At the focal point, the trapping beam has a 1/e^2 radius of 1.6 μm. The trap depth is about 3 mK for a laser power of about 18 mW. The LIF signal of the single trapped atom is also collected by the same objective lens and detected by the single-photon-counting module (SPCM, Perkin-Elmer SPCM-AQR15). The P7888 card (two-input multiple-event time digitizers, FAST Com Tech) is used to record the SPCM signal [6]. The quantization axis is defined by a 0.5 Gauss bias magnetic field along the z direction.
The probe beam is derived from the main laser system at 852 nm and locked using polarization spectroscopy. The frequency fluctuation of the probe laser is about ±100 kHz in 200 s [23]. The probe beam is σ+-polarized relative to the quantization axis using a Glan-Taylor prism and a quarter-wave plate, and its 1/e^2 waist is 12 μm [24]. In order to measure the differential light shift of the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 cycling transition, the frequency of the probe laser is adjusted by an acousto-optical modulator (AOM). Experiment results and discussions The focused trapping laser beam induces a light shift of the internal states of the Cs atoms. Thus the transition frequency between the ground and the excited states is shifted. Taking the multilevel structure of the atom into account, the light shift for an atom in a linearly polarized ODT is given by [18,25,26]:

$$U_{F,m_F} = \frac{3\pi c^{2} I(r)}{2} \sum_{J',F'} \frac{A_{J\to J'}}{\omega_{JJ'}^{3}}\,\frac{2\omega_{JJ'}}{\omega_{JJ'}^{2}-\omega^{2}}\,(2F+1)(2F'+1)(2J'+1) \begin{pmatrix} F' & 1 & F \\ -(m_F+\varepsilon) & \varepsilon & m_F \end{pmatrix}^{2} \begin{Bmatrix} F' & 1 & F \\ J & I & J' \end{Bmatrix}^{2} \qquad (1)$$

where I(r) is the intensity of the trap laser, ωJJ' is the transition frequency, ω is the frequency of the trap laser, and AJ→J' is the transition rate, also known as the Einstein A coefficient. ε characterizes the trap polarization: ε=0 for a linearly polarized ODT and ε=±1 for a circularly polarized ODT. The coefficients in round and curly brackets are the 3-j and 6-j symbols. Fig. 2 shows the light shift for each Zeeman level of the 6S1/2 (F=4) and 6P3/2 (F'=5) states. In the calculation, the trap depth is about 3 mK. All Zeeman levels of the 6S1/2 ground state are identically shifted, while the excited states are no longer degenerate. Fig. 2(a) shows that the magic wavelength for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition is around 937 nm, and that it is different for each Zeeman level. Fig. 2(b) shows the ratio of U to Ug when the ODT laser wavelength is set to 937 nm.
U is the light shift for the Zeeman levels of the 6P3/2 excited state and Ug is the light shift of the 6S1/2 ground state. For the 6P3/2 (F'=5) state, the Zeeman levels are split by as much as about 15% relative to the ground-state shift. We experimentally construct a magic-wavelength ODT for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition. In the experiment, the single-photon collection efficiency is about 2.1%, the temperature of the trapped atoms is about 60 μK, and the lifetime of the atoms in the ODT is about 3.5 s. The magic wavelength is obtained by measuring the differential light shift of the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition. The typical experimental sequence is shown in Fig. 3. The magneto-optical trap (MOT) is loaded from the background vapor for 2 s. After the MOT loading phase, the ODT beam is turned on. After a 25 ms MOT/ODT overlap and a 10 ms polarization gradient cooling (PGC) phase, the MOT is turned off, and the single atoms are trapped in the ODT [27]. Then a unidirectional σ+-polarized probe laser is used to obtain the LIF spectra of trapped single atoms. During the probing process, the quantization-axis magnetic field and the repumping laser are always on. In order to detect enough LIF photons while suppressing atom loss due to heating from the probing process, a gated probing/cooling procedure is used and repeated 500 times [6,26]. The single atom is probed for 100 μs and cooled for 900 μs. A low probe saturation parameter s = Iprobe/Isat ≈ 0.7 is used to avoid significant power broadening. Over a typical 500 ms probing/cooling procedure, the total number of scattered photons N(t)dt can be estimated by [28]:

$$N(t)\,dt = \eta\,\frac{\Gamma}{2}\,\frac{s}{1+s+4(\delta-\delta_{0})^{2}/\Gamma^{2}}\,dt \qquad (2)$$

where s is the saturation parameter, δ−δ0 is the detuning of the probe laser from the atomic resonance, Γ/2π = 5.2 MHz is the natural linewidth, and the total effective detection efficiency η is about 0.6%.
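As a quick numerical sanity check of Eq. (2), the sketch below (illustrative only, not code from the paper) evaluates the expected photon counts and the power-broadened linewidth using the values quoted above (s ≈ 0.7, Γ/2π = 5.2 MHz, η ≈ 0.6%, and 500 probe windows of 100 μs each):

```python
import numpy as np

Gamma = 2 * np.pi * 5.2e6   # natural linewidth (rad/s), Gamma/2pi = 5.2 MHz
s = 0.7                     # probe saturation parameter s = I_probe/I_sat
eta = 0.006                 # total effective detection efficiency (~0.6%)

def detected_rate(delta):
    """Detected photon rate (photons/s) at probe detuning delta (rad/s), Eq. (2)."""
    return eta * (Gamma / 2) * s / (1 + s + 4 * delta**2 / Gamma**2)

# Total probe exposure: 500 windows of 100 us = 50 ms of probing
t_probe = 500 * 100e-6
counts_on_resonance = detected_rate(0.0) * t_probe
print(round(counts_on_resonance))   # on the order of 2000 detected photons

# Power-broadened FWHM of the Lorentzian line: Gamma * sqrt(1 + s)
fwhm_MHz = 5.2 * np.sqrt(1 + s)
print(round(fwhm_MHz, 1))           # ~6.8 MHz
```

For these parameters the Lorentzian FWHM is about 6.8 MHz, comfortably inside the 80 MHz probe scan range used in the measurements below.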
In the detection process, the SPCM is gated to decrease the influence of the background signal. In order to get a large signal-to-noise ratio, we average over typically 1000 sequences. After averaging over many sequences, the profile of the fluorescence signal measured as a function of the probe detuning δ should be a Lorentzian profile. Fig. 3. Experimental sequence. After 10 ms PGC, the single atom is trapped in an ODT. The atom is probed for 100 μs and cooled for 900 μs. In the probing process, the quantization magnetic field is always on. This sequence is typically repeated 1000 times to improve the signal-to-noise ratio. Using the technique of LIF spectra of trapped single Cs atoms, we measure the value of the magic wavelength for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition. As shown in Fig. 4, the LIF spectra are measured for different trap wavelengths. In these measurements, the trap beam is linearly polarized and the trap depth is set to ~3 mK. The number of detected LIF photons depends on the power and the detuning of the probe laser beam. In Fig. 4(a), the intensity of the probe beam is held at s ~ 0.7 and the probe frequency is scanned over a range of 80 MHz by tuning the AOM driver frequency. The quantization magnetic field is about 0.5 Gauss. Each data point is an average over 1000 sequences for the same atom. The data were fitted with a Lorentzian profile. The differential light shift is δ = -16.7 MHz, -0.7 MHz, and 5.8 MHz for trap wavelengths of 932.7 nm, 937.7 nm, and 940.7 nm, respectively. In Fig.
4(b), we compare the experimental data for the differential light shift as a function of the trap wavelength to the theoretical value calculated from the multi-level model. The wavelength of zero differential light shift is identified as the magic wavelength. Our result confirms that the magic wavelength is about 937.7 nm. For this wavelength, we can implement a magic-wavelength ODT with almost zero differential light shift for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition with a linearly-polarized ODT laser beam. Compared to the undisturbed transition frequency in free space, the frequency shift is less than 0.7 MHz, which is about 1.2% of the ODT depth. We also measure the differential light shift for different trap depths when the trap wavelength is set to 937.7 nm. As shown in Fig. 5, experimental data are obtained by measuring the differential light shift of the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition. For all trap depths, the probe detuning is ~0 MHz. This result shows that the differential light shift does not depend on the trap depth in the 937.7 nm magic-wavelength ODT, which further verifies our measurement result. One major error of the magic-wavelength measurement comes from the power and frequency fluctuations of the probe laser. The uncertainty of the measurement can be reduced by reducing the frequency fluctuation of the probe laser. In addition, several other processes may limit the measurement uncertainty. Firstly, the light shift depends on the polarization of the trap laser beam.
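As a cross-check of the quoted zero crossing, a simple linear interpolation of the three measured shifts from Fig. 4 (an illustrative estimate, not from the paper; the theoretical curve is not linear, so this only brackets the result) locates the magic wavelength near 938 nm, consistent with the 937.7 nm point where the measured shift is already below 0.7 MHz:

```python
import numpy as np

# (trap wavelength in nm, measured differential light shift in MHz), from Fig. 4
wl = np.array([932.7, 937.7, 940.7])
shift = np.array([-16.7, -0.7, 5.8])

# Locate the sign change and interpolate linearly between the two bracketing points
i = int(np.where(np.diff(np.sign(shift)) != 0)[0][0])
w_magic = wl[i] - shift[i] * (wl[i + 1] - wl[i]) / (shift[i + 1] - shift[i])
print(round(w_magic, 1))   # ~938.0 nm
```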
Considering the polarization of the ODT laser beam, the polarization-dependent light shift can be derived [28], where ε is the unit-normalized polarization vector. ODT laser beams with different polarizations result in different light shifts, which influences the value of the magic wavelength. In the experiment, a Glan-Taylor prism, which has a typical extinction ratio of ~1×10^5:1, is used to create a purely linear polarization of the ODT laser beam. The second important contribution to the shifts of the atomic transition frequency is the Zeeman shift induced by bias magnetic fields. The 6S1/2 F=4, mF=+4 ground state experiences a magnetic shift of 0.37 MHz/Gauss and the 6P3/2 F'=5, mF=+5 excited state experiences a magnetic shift of 0.56 MHz/Gauss. In our experiment, the residual magnetic field is actively stabilized to values of B < 10 mG, so the main magnetic-field influence comes from the quantization-axis magnetic field. In Fig. 6, we measure the differential light shift of the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition as a function of the quantization-axis magnetic field. Experimental data are obtained by measuring LIF spectra of trapped single atoms. When the magnetic field is about 1.5 Gauss, the differential light shift is ~0 MHz. Fig. 6. Differential light shift versus the quantization-axis magnetic field. When the magnetic field is about 1.5 Gauss, the differential light shift is ~0 MHz. The error bars show the fitting errors of the LIF spectra of trapped single atoms. Conclusions The magic-wavelength ODT is valuable for precision measurement, frequency metrology, and coherent manipulation of quantum systems [29][30]. To conclude, we have constructed a magic-wavelength ODT for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 cycling transition.
In such an ODT, the ground state and the excited state have almost the same light shift, so the transition frequency and the emitted photons are the same as in the undisturbed case in free space. We have also presented a technique to measure the magic wavelength. By using the LIF spectra of single trapped atoms, the magic wavelength for the Cs 6S1/2 F=4, mF=+4 - 6P3/2 F'=5, mF=+5 transition has been verified experimentally. The measured value is influenced by the power and frequency fluctuations of the probe laser, the ODT polarization, and the Zeeman shift. Our experimental method can also be applied to measure the magic wavelength for other mF states. In the future, we will perform a two-photon interference experiment to analyze the indistinguishability of the single photons. Fig. 1. Experimental setup. The trapped single atoms are excited by a probe beam. The emitted photons are collected by an objective and then coupled into the SPCM and P7888 card for measurement. Fig. 2. Light shift for the 6S1/2 F=4, mF and 6P3/2 F'=5, mF states in a linearly polarized ODT. The shifts were calculated by taking the multi-level model into account. (a) The light shift as a function of the wavelength of the ODT laser. The ground-state shifts are homogeneous and the excited-state shifts are state-dependent. (b) In a 937 nm ODT, the ratio of U to Ug as a function of Zeeman state. U is the light shift for all the Zeeman states of the 6P3/2 (F'=3, 4, and 5) excited states and Ug is the light shift of the 6S1/2 (F=4) ground state. Fig. 4. Measurement of the magic wavelength. (a) LIF spectra of single trapped atoms.
Detected photon counts as a function of the detuning of the probe laser for different trap wavelengths. The trap depth is ~3 mK. Solid lines are Lorentzian fits. (b) The differential light shift versus the trap wavelength. The blue data are the experimental results; the black line is the theoretically expected value. The error bars show the fitting errors of the Lorentzian profile. Fig. 5. Differential light shift versus the trap depth. The trap wavelength is 937.7 nm. Each data point is obtained by measuring the LIF spectra of single trapped atoms. The differential light shift does not depend on the trap depth in a 937.7 nm ODT. The error bars show the fitting errors of the LIF spectra of trapped single atoms. Funding National Natural Science Foundation of China (NSFC projects nos. 61475091 and 11274213) and the National Key Research and Development Program of China (2017YFA0304502).
Grangier, "Controlled single-photon emission from a single trapped two-level atom," Science, 309, 454-456 (2005). Fiber-pigtailed optical dipole trap for single-atom trapping and single-photon generation. S Garcia, D Maxein, L Hohmann, J Reichel, R Long, Appl. Phys. Lett. 103114103S. Garcia, D. Maxein, L. Hohmann, J. Reichel, and R. Long, "Fiber-pigtailed optical dipole trap for single-atom trapping and single-photon generation," Appl. Phys. Lett., 103, 114103 (2013). Suppression of single-cesium-atom heating in a microscopic optical dipole trap for demonstration of an 852-nm triggered single-photon source. B Liu, G Jin, J He, J M Wang, Phys. Rev. A. 9413409B. Liu, G. Jin, J. He, and J. M. Wang, "Suppression of single-cesium-atom heating in a microscopic optical dipole trap for demonstration of an 852-nm triggered single-photon source," Phys. Rev. A, 94, 013409 (2016). Telecom-heralded single-photon absorption by a single atom. A Lenhard, M Bock, C Becher, Phys. Rev. A. 9263827A. Lenhard, M. Bock, and C. Becher, "Telecom-heralded single-photon absorption by a single atom," Phys. Rev. A, 92, 063827 (2015). A calcium ion in a cavity as a controlled singlephoton source. M Keller, B Lange, K Hayasaka, W Lange, H Walther, New J. Phys. 610095M. Keller, B. Lange, K. Hayasaka, W. Lange, and H. Walther, "A calcium ion in a cavity as a controlled single- photon source," New J. Phys., 6, 010095 (2004). Triggered single photons from a quantum dot. C Santori, M Pelton, G Solomon, Phys. Rev. Lett. 86C. Santori, M. Pelton, and G. Solomon, "Triggered single photons from a quantum dot," Phys. Rev. Lett., 86, 1502-1505 (2001). Generation, guiding and splitting of triggered single photons from a resonantly excited quantum dot in a photonic circuit. M Schwartz, U Rengstl, T Herzog, M Paul, J Kettler, S L Portalupi, M Jetter, P Michler, Opt. Express. 24M. Schwartz, U. Rengstl, T. Herzog, M. Paul, J. Kettler, S. L. Portalupi, M. Jetter, and P. 
Michler, "Generation, guiding and splitting of triggered single photons from a resonantly excited quantum dot in a photonic circuit," Opt. Express, 24, 3089-3094 (2016). Triggered source of single photons based on controlled single molecule fluorescence. C Brunel, B Lounis, P Tamarat, M Orrit, Phys. Rev. Lett. 83C. Brunel, B. Lounis, P. Tamarat, and M. Orrit, "Triggered source of single photons based on controlled single molecule fluorescence," Phys. Rev. Lett., 83, 2722-2725 (1999). Investigation of the silicon vacancy color center for quantum key distribution. Y Liu, P Siyushev, Y Y Rong, B T Wu, L P Mcguinness, F Jelezko, S Tamura, T Tanii, T Teraji, S Onoda, T Ohshima, J Isoya, T Shinada, H P Zeng, E Wu, Opt. Express. 23Y. Liu, P. Siyushev, Y. Y. Rong, B. T. Wu, L. P. McGuinness, F. Jelezko, S. Tamura, T. Tanii, T. Teraji, S. Onoda, T. Ohshima, J. Isoya, T. Shinada, H. P. Zeng, and E. Wu, "Investigation of the silicon vacancy color center for quantum key distribution," Opt. Express, 23, 32961-32967 (2015). Analysis of a single-atom dipole trap. M Weber, J Volz, K Saucke, C Kurtsiefer, H Weinfurter, Phys. Rev. A. 7343406M. Weber, J. Volz, K. Saucke, C. Kurtsiefer, and H. Weinfurter, "Analysis of a single-atom dipole trap," Phys. Rev. A, 73, 043406 (2006). Quantum interference between two single photons emitted by independently trapped atoms. J Beugnon, M P A Jones, J Dingjan, B Darquie, G Messin, A Browaeys, P Grangier, Nature. 440J. Beugnon, M. P. A. Jones, J. Dingjan, B. Darquie, G. Messin, A. Browaeys, and P. Grangier, "Quantum interference between two single photons emitted by independently trapped atoms," Nature, 440, 779-781 (2006). Trapping a single atom in a blue detuned optical bottle beam trap. P Xu, X D He, J Wang, M S Zhan, Opt. Lett. 35P. Xu, X. D. He, J. Wang, and M. S. Zhan, "Trapping a single atom in a blue detuned optical bottle beam trap," Opt. Lett., 35, 2164-2166 (2010). Crossed vortex bottle beam trap for single-atom qubits. 
G Li, S Zhang, L Isenhower, K Maller, M Saffman, Opt. Lett. 37G. Li, S. Zhang, L. Isenhower, K. Maller, and M. Saffman, "Crossed vortex bottle beam trap for single-atom qubits," Opt. Lett., 37, 851-853 (2012). Optical dipole trap without inhomogeneous ac stark broadening. J Y Kim, J S Lee, J H Han, D Cho, J. Korean Phys. Soc. 42J. Y. Kim, J. S. Lee, J. H. Han, and D. Cho, "Optical dipole trap without inhomogeneous ac stark broadening," J. Korean Phys. Soc., 42, 483-488 (2003). Stateinsensitive cooling and trapping of single atoms in an optical cavity. J Mckeever, J R Buck, A D Boozer, A Kuzmich, H.-C Nagerl, D M Stamper-Kurn, H J Kimble, Phys. Rev. Lett. 90133602J. McKeever, J. R. Buck, A. D. Boozer, A. Kuzmich, H.-C. Nagerl, D.M. Stamper-Kurn, and H. J. Kimble, "State- insensitive cooling and trapping of single atoms in an optical cavity," Phys. Rev. Lett., 90, 133602 (2003). Doubly magic conditions in magic-wavelength trapping of ultracold alkali-metal atoms. A Derevianko, Phys. Rev. Lett. 10533002A. Derevianko, "Doubly magic conditions in magic-wavelength trapping of ultracold alkali-metal atoms," Phys. Rev. Lett., 105, 033002 (2010). A state-insensitive, compensated nanofiber trap. C Lacroûte, K S Choi, A Goban, D J Alton, D Ding, N P Stern, H J Kimble, New J. Phys. 1423056C. Lacroûte, K. S. Choi, A. Goban, D. J. Alton, D. Ding, N. P. Stern, and H. J. Kimble, "A state-insensitive, compensated nanofiber trap," New J. Phys. 14, 023056 (2012). Characterization of a state-insensitive dipole trap for cesium atoms. P Phoonthong, P Douglas, A Wickenbrock, F Renzoni, Phys. Rev. A. 8213406P. Phoonthong, P. Douglas, A. Wickenbrock, and F. Renzoni, "Characterization of a state-insensitive dipole trap for cesium atoms," Phys. Rev. A, 82, 013406 (2010). Measurement of fluorescence emission spectrum of few strongly driven atoms using an optical nanofiber. M Das, A Shirasaki, K P Nayak, M Morinaga, F L Kien, K Hakuta, Opt. Express. 18M. Das, A. Shirasaki, K. P. Nayak, M. 
Morinaga, F. L. Kien, and K. Hakuta, "Measurement of fluorescence emission spectrum of few strongly driven atoms using an optical nanofiber," Opt. Express, 18, 17154-17164 (2010). Improvement of the signal-to-noise ratio of laser-inducedfluorescence photon-counting signals of single-atoms magneto-optical trap. J He, B D Yang, T C Zhang, J M Wang, J. Phys. D: Appl. Phys. 44135102J. He, B. D. Yang, T. C. Zhang and J. M. Wang, "Improvement of the signal-to-noise ratio of laser-induced- fluorescence photon-counting signals of single-atoms magneto-optical trap," J. Phys. D: Appl. Phys., 44, 135102 (2011). Amplification of a nanosecond laser pulse chain via dynamic injection locking of a laser diode. J He, G Jin, B Liu, J M Wang, Opt. Lett. 41J. He, G. Jin, B. Liu, J. M. Wang, "Amplification of a nanosecond laser pulse chain via dynamic injection locking of a laser diode," Opt. Lett., 41, 5724-5728 (2016). State-insensitive dichromatic opticaldipole trap for rubidium atoms: calculation and the dichromatic laser's realization. J M Wang, S L Guo, Y L Ge, Y J Cheng, B D Yang, J He, J. Phys. B: At. Mol. & Opt. Phys. 4795001J. M. Wang, S. L. Guo, Y. L. Ge, Y. J. Cheng, B. D. Yang, and J. He, "State-insensitive dichromatic optical- dipole trap for rubidium atoms: calculation and the dichromatic laser's realization," J. Phys. B: At. Mol. & Opt. Phys., 47, 095001 (2014). Nondestructive light-shift measurements of single atoms in optical dipole traps. C Y Shih, M S Chapman, Phys. Rev. A. 8763408C.Y. Shih and M. S. Chapman, "Nondestructive light-shift measurements of single atoms in optical dipole traps," Phys. Rev. A, 87, 063408 (2013). Efficient loading of a single neutral atom into an optical microscopic tweezer. J He, B Liu, W T Diao, J Y Wang, G Jin, J M Wang, Chinese Phys. B. 2443701J. He, B. Liu, W. T. Diao, J. Y. Wang, G. Jin, and J. M. Wang, "Efficient loading of a single neutral atom into an optical microscopic tweezer," Chinese Phys. B, 24, 043701 (2015). 
Optical dipole traps for neutral atoms. R Grimm, M Weidemuller, Y B Ovchinnikov, Adv. At. Mol. Opt. Phys. 42R. Grimm, M. Weidemuller, and Y. B. Ovchinnikov, "Optical dipole traps for neutral atoms," Adv. At. Mol. Opt. Phys., 42, 95-133 (2000). Measurement of magic wavelengths for the 40 Ca + clock transition. P L Liu, Y Huang, W Bian, H Shao, H Guan, Y B Tang, C B Li, J Mitroy, K L Gao, Phys. Rev. Lett. 114223001P. L. Liu, Y. Huang, W. Bian, H. Shao, H. Guan, Y. B. Tang, C. B. Li, J. Mitroy, and K. L. Gao, "Measurement of magic wavelengths for the 40 Ca + clock transition," Phys. Rev. Lett., 114, 223001 (2015). Quantum state engineering and precision metrology using state-insensitive light traps. J Ye, H J Kimble, H Katori, Science. 320J. Ye, H. J. Kimble and H. Katori, "Quantum state engineering and precision metrology using state-insensitive light traps," Science 320, 1734-1738 (2008).
[]
[ "Gravitational Wilson Lines in 3D de Sitter", "Gravitational Wilson Lines in 3D de Sitter" ]
[ "Alejandra Castro [email protected] \nInstitute for Theoretical Physics\nUniversity of Amsterdam\nScience Park 90494485, 1090 GLPostbus, AmsterdamThe Netherlands\n", "Philippe Sabella-Garnier \nLorentz Institute\nLeiden University\nNiels Bohrweg 22333-CALeidenThe Netherlands\n", "Claire Zukowski [email protected] \nInstitute for Theoretical Physics\nUniversity of Amsterdam\nScience Park 90494485, 1090 GLPostbus, AmsterdamThe Netherlands\n" ]
[ "Institute for Theoretical Physics\nUniversity of Amsterdam\nScience Park 90494485, 1090 GLPostbus, AmsterdamThe Netherlands", "Lorentz Institute\nLeiden University\nNiels Bohrweg 22333-CALeidenThe Netherlands", "Institute for Theoretical Physics\nUniversity of Amsterdam\nScience Park 90494485, 1090 GLPostbus, AmsterdamThe Netherlands" ]
[]
We construct local probes in the static patch of Euclidean dS 3 gravity. These probes are Wilson line operators, designed by exploiting the Chern-Simons formulation of 3D gravity. Our prescription uses non-unitary representations of so(4) ≃ su(2) L × su(2) R , and we evaluate the Wilson line for states satisfying a singlet condition. We discuss how to reproduce the Green's functions of massive scalar fields in dS 3 , the construction of bulk fields, and the quasinormal mode spectrum. We also discuss the interpretation of our construction in Lorentzian signature in the inflationary patch, via SL(2, C) Chern-Simons theory.
10.1007/jhep07(2020)202
[ "https://arxiv.org/pdf/2001.09998v1.pdf" ]
210,932,526
2001.09998
0380f897b281d19acb8220485f29e30d1da328e7
Gravitational Wilson Lines in 3D de Sitter January 29, 2020 27 Jan 2020 Alejandra Castro [email protected] Institute for Theoretical Physics University of Amsterdam Science Park 90494485, 1090 GLPostbus, AmsterdamThe Netherlands Philippe Sabella-Garnier Lorentz Institute Leiden University Niels Bohrweg 22333-CALeidenThe Netherlands Claire Zukowski [email protected] Institute for Theoretical Physics University of Amsterdam Science Park 90494485, 1090 GLPostbus, AmsterdamThe Netherlands We construct local probes in the static patch of Euclidean dS 3 gravity. These probes are Wilson line operators, designed by exploiting the Chern-Simons formulation of 3D gravity. Our prescription uses non-unitary representations of so(4) su(2) L × su(2) R , and we evaluate the Wilson line for states satisfying a singlet condition. We discuss how to reproduce the Green's functions of massive scalar fields in dS 3 , the construction of bulk fields, and the quasinormal mode spectrum. We also discuss the interpretation of our construction in Lorentzian signature in the inflationary patch, via SL(2, C) Chern-Simons theory. Introduction The Chern-Simons formulation of three-dimensional gravity seems more amenable to quantization than the more traditional metric formulation [1,2]. One advantage is that the gauge theory formulation makes evident the topological nature of Einstein's theory in three dimensions. Also, Chern-Simons theory has inherently holographic properties: upon specifying a gauge group and boundary conditions on a 3-manifold with a boundary, the Chern-Simons theory can be viewed as dual to a conformal theory living on the boundary [3][4][5]. These features have propelled the use of the Chern-Simons formulation as a computational tool in perturbative gravity.
However, this alternative formulation of 3D gravity comes with a cost: local observables that are intuitive in a metric formulation, such as distances, surfaces, volumes, and local fields, are seemingly lost in Chern-Simons theory. To reintroduce this intuition, Wilson lines present themselves as reasonable objects in Chern-Simons theory that could restore portions of our geometric and local intuition [6]. In the early stages, it was clear that a Wilson line anchored at the boundary would correspond to a conformal block in the boundary theory [5,7]; more recently this proposal has been made more precise and explicit for SL(2) Chern-Simons theory [8][9][10][11][12][13][14][15]. In the context of AdS 3 gravity, where the relevant gauge group is SO(2, 2), Wilson lines have been applied in a plethora of different contexts [16][17][18][19][20], with recent applications ranging from the computation of holographic entanglement entropy [21][22][23][24][25][26][27] to the probing of analytic properties of an eternal black hole [28,29]. Applications of Wilson lines in Chern-Simons theory include flat-space holography [30] and ultra-relativistic cases [31,32]. In the present work we will study SO(4) Chern-Simons theory on a Euclidean compact manifold. This theory can be interpreted as a gravitational theory with positive cosmological constant, i.e. Euclidean dS 3 gravity. This instance is interesting from a cosmological perspective, where Chern-Simons theory could provide insights into appropriate observables in quantum cosmology. It is also powerful, since there is an extensive list of exact results in Chern-Simons theory for compact gauge groups. Previous efforts that exploited this direction of Chern-Simons theory as a toy model for quantum cosmology include [33][34][35][36][37][38]. Our main emphasis is to interpret Wilson lines in SO(4) Chern-Simons theory as local probes for dS 3 gravity, which follows closely the proposal in [6] for SO(2, 2) Chern-Simons.
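The Wilson line probe discussed here can be illustrated with a minimal numerical sketch (an illustration of path ordering only, not the paper's construction): for an su(2)-valued connection in the unitary spin-1/2 representation, the path-ordered exponential is approximated by multiplying exponentials of short path segments. For a constant connection the ordered product collapses to a single exponential, which gives a consistency check:

```python
import numpy as np

# Pauli matrices; anti-Hermitian su(2) generators are T_a = -i * sigma_a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_exp(v):
    """Closed-form exp(sum_a v_a T_a) in the spin-1/2 representation."""
    theta = np.linalg.norm(v)
    if theta < 1e-15:
        return np.eye(2, dtype=complex)
    n_dot_sigma = sum(vi * si for vi, si in zip(np.asarray(v) / theta, sigma))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sigma

def wilson_line(A_of_t, n_steps=2000):
    """P exp(-int_0^1 A(t) dt): A_of_t(t) returns the 3 su(2) coefficients of A."""
    dt = 1.0 / n_steps
    W = np.eye(2, dtype=complex)
    for k in range(n_steps):
        t = (k + 0.5) * dt
        W = su2_exp(-np.asarray(A_of_t(t)) * dt) @ W  # later points act on the left
    return W

# Consistency check: for a constant connection the ordered product
# collapses to a single exponential.
A0 = np.array([0.3, -1.1, 0.7])
print(np.allclose(wilson_line(lambda t: A0), su2_exp(-A0)))  # True

# W is unitary here because SU(2) is compact and the representation is unitary.
W = wilson_line(lambda t: np.array([np.sin(3 * t), t**2, 0.5]))
print(np.allclose(W @ W.conj().T, np.eye(2)))  # True
```

The paper's probes instead use non-unitary so(4) representations and dress the endpoints with states; the path-ordering structure, however, is the same.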
The basic idea is as follows. We will consider a connection A valued in so(4), and a Wilson line stretching from a point x_i to x_f: W_R(x_i, x_f) = ⟨U_f| Pexp(−∫_{x_i}^{x_f} A) |U_i⟩. (1.1) There are two important ingredients in defining this object. First we need to select a representation R of so(4). This choice will encode the physical properties of the local probe, such as mass and spin. The second ingredient is to select the endpoint states |U_{i,f}⟩: the freedom in this choice encodes the gauge dependence of W_R(x_i, x_f). More importantly, their choice will allow us to relate W_R(x_i, x_f) to the Euclidean Green's function of a massive field propagating on S^3. And while our choices are inspired by the analogous computations in AdS 3 gravity, they stand on their own. We will motivate and introduce the ingredients needed to have an interesting interpretation of (1.1) using solely SO(4) Chern-Simons theory. The interpretation of our results in the Euclidean theory will have its limitations if they are not analytically continued to Lorentzian signature. For example, recognising whether the information contained in W_R(x_i, x_f) is compatible with causality necessitates a Lorentzian understanding of the theory. This is tied to the issue of bulk locality and reconstruction in de Sitter, which remains intriguing in cosmological settings. In the Chern-Simons formulation, the Lorentzian theory corresponds to a theory with gauge group SL(2, C). We will present the basics of how to discuss our results in SL(2, C) Chern-Simons theory, and their relation to the Euclidean theory. One interesting finding is that our choice of representation in SO(4) Chern-Simons naturally leads to quasinormal modes in the static patch of dS 3 when analytically continued.

Overview

In Sec. 2, we review the Chern-Simons formulation of Euclidean dS 3 (EdS 3 ) gravity, establishing our conventions along the way. In Sec.
3, we describe Wilson lines in SO(4) SU (2) × SU (2) Chern-Simons. We show how the Green's function on EdS 3 of a scalar field of given mass can be described by a Wilson line evaluated in a non-unitary representation of the algebra, which we construct in detail. These unusual representations of su(2) resemble the usual spin-l representation, with the important distinction that −1 < l < 0. And while it might be odd to treat l as a continuous (negative) parameter, these features will be key to recover local properties we attribute to dS 3 in Chern-Simons theory. In Sec. 4, we take this further and show how this description of the Wilson line can be used to define local states in the geometry. We present a map between states in the algebraic formulation and the value of a corresponding scalar pseudofield in the metric formulation, and we build an explicit position-space representation of the basis states. We also match the action of the generators of the algebra to the Killing vectors of the geometry. The local pseudofields constructed from the Wilson line continue to quasinormal modes in the static patch, and they are acted on by an sl(2, R) × sl(2, R) inherited from our representations. This can be contrasted to a similar sl(2, R) structure of the quasi-normal mode spectrum that was discovered and dubbed a "hidden symmetry" of the static patch in [39]. In Sec. 5, we discuss how to analytically continue our results to Lorentzian dS 3 gravity, which is described by an SL(2, C) Chern-Simons theory. We find that our exotic so(4) representations analytically continue to a highest-weight representation of an sl(2, R) × sl(2, R) slice of sl(2, C). In Sec. 6, we highlight our main findings and discuss future directions to further explore quantum aspects of dS 3 gravity. Finally, App. A collects some of our conventions for easy reference, and App. B reviews some basic facts about the metric formulation of dS 3 . In App. 
C, we give more details about how to construct an analytic continuation between the SO(4) and SL(2, C) Chern-Simons theories. 2 Chern-Simons formulation of Euclidean dS 3 gravity For the purposes of setting up notation and conventions we begin with a short review of Chern-Simons gravity, focusing on its relation to Euclidean dS 3 gravity. This is based on the original formulation of 3D gravity as a Chern-Simons theory [1,2]; and related work on Euclidean dS 3 in the Chern-Simons formulation are [35,38], although we warn the reader that conventions there might be different than ours. In App. B we provide a review of the metric formulation of dS 3 gravity. Consider Chern-Simons theory on M = S 3 with gauge group SO(4). This group manifestly splits into SO(4) SU (2) L × SU (2) R , and in terms of its Lie algebra we use generators L a for su(2) L andL a for su(2) R , a = 1, 2, 3. Our conventions are such that [L a , L b ] = i abc L c ,(2.1) and similarly for theL a ; we also set 123 ≡ 1. There is an invariant bilinear form given by the trace: we take Tr(L a L b ) = Tr(L aLb ) = 1 2 δ ab . (2.2) Indices in (2.1) are raised with δ ab . The SO(4) Chern-Simons action relevant for Euclidean dS 3 gravity is 4) and the individual actions are S E = S CS [A] − S CS [Ā] , (2.3) where A = A a µ L a dx µ ,Ā =Ā a µL a dx µ ,(2.S CS [A] = − k 4π M Tr A ∧ dA + 2 3 A ∧ A ∧ A ,(2.5) and similarly for S CS [Ā]. The relation to the first-order formulation of the Einstein-Hilbert action is as follows. The algebra that describes the isometries of Euclidean dS 3 is [J ab , P c ] = −δ ac P b + δ bc P a , [J ab , J cd ] = −δ ac J bd + δ bc J ad + δ ad J bc − δ bd J ac , [P a , P b ] = −ΛJ ab , (2.6) where Λ = 1 2 , and is the radius of the 3-sphere. Here P a and J ab are the generators of translations and rotations of the ambient R 4 , respectively. We also raise indices with δ ab . It is convenient to define the dual J a = 1 2 abc J bc , J ab = abc J c . 
(2.7) In relation to the su(2) generators, we identify L a = − i 2 (J a + P a ) ,L a = − i 2 (J a − P a ) . (2.8) The variables in the gravitational theory are the vielbein and spin connection, e a = e a µ dx µ , ω a = 1 2 a bc ω bc µ dx µ . (2.9) The vielbein is related to the metric as g µν = e a µ e b ν δ ab . We define the gauge field in terms of these geometrical variables as A = i ω a + 1 e a L a ,Ā = i ω a − 1 e a L a . (2.10) Using (2.10), the action (2.3) becomes S E = k 2π M e a ∧ (dω a − 1 2 abc ω b ∧ ω c ) − 1 6 2 abc e a ∧ e b ∧ e c ,(2.11) which reduces the Einstein-Hilbert action with positive cosmological constant given the identi- fication k = 4G 3 . (2.12) The equations of motion from (2.3) simply give the flatness condition, F = dA + A ∧ A = 0 ,F = dĀ +Ā ∧Ā = 0 ,(2.13) which are related to the Cartan and Einstein equation derived from (2.11) after using (2.10). The background we will mostly focus on is S 3 , which we will cast as ds 2 2 = dr 2 + cos 2 rdτ 2 + sin 2 rdφ 2 , (2.14) with (τ, φ) ∼ (τ, φ) + 2π(m, n) and m, n ∈ Z; see App. B.1 for further properties of this background. In the Chern-Simons language, the associated connections that reproduce the vielbein and spin connection are A = iL 2 dr + i (L 3 cos r + L 1 sin r) (dφ + dτ ) , A = −iL 2 dr + i (L 3 cos r − L 1 sin r) (dφ − dτ ) . (2.15) Note that we are using the same basis of su(2) generators for both A andĀ. This is convenient since we then can read off the metric as g µν = − 2 2 Tr A µ −Ā µ A ν −Ā ν . (2.16) The corresponding group elements that we will associate to each flat connection read 1 A = g L dg −1 L , g L = e −irL 2 e −i(φ+τ )L 3 , A =g −1 R dg R ,g R = e i(φ−τ )L 3 e −irL 2 . (2.17) This can be checked explicitly by using the following corollary of the Baker-Campbell-Hausdorff formula, e −iαLa L b e iαLa = cos(α)L b + sin(α) abc L c . 
(2.18) Wilson lines in SO(4) Chern-Simons A gauge-invariant observable in Chern-Simons theory is the Wilson loop operator, which in the Euclidean theory with gauge group SO(4) SU (2) L × SU (2) R reads W R (C) = Tr R Pexp − C A Pexp − CĀ , (3.1) where C is a closed loop in the 3-manifold M. Here R is a particular representation of the Lie algebra associated to the Chern-Simons gauge group. One of the challenges of the Chern-Simons formulation of 3D gravity is to build local probes in a theory that insists on being topological. Here we will design those probes by considering a Wilson line operator, i.e., we will be interested in W R (x i , x f ) = U f | Pexp − γ A Pexp − γĀ |U i . (3.2) The curve γ(s) is no longer closed but has endpoints x i , x f . This operator is no longer gaugeinvariant, which is reflected on the fact that we need to specify states at its endpoint, denoted as |U i , |U f . In the following we will discuss representations R of so(4), and suitable endpoint states, giving W R (x i , x f ) local properties we can naturally relate to the metric formulation. Our strategy to select the representation and the endpoint states is inspired by the proposal in [6,21], which is a prescription to use Wilson lines as local probes in AdS 3 gravity. The basic observation is to view W R (x i , x f ) as the path integral of a charged (massive) point particle. In this context the representation R parametrizes the Hilbert space for the particle, with the Casimir of R carrying information about the mass and spin (i.e., quantum numbers) of the particle [16,17,40]. With this perspective, our first input is to consider representations of so (4) that carry a continuous parameter that we can identify with the mass of particle. As we will show in the following, this requirement will force us to consider non-unitary representations of the group which we will carefully construct. 
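Before turning to the representations, the conventions of Sec. 2 can be sanity-checked numerically: the Baker-Campbell-Hausdorff corollary (2.18), and the statement that the connections (2.15) reproduce the round metric (2.14) through (2.16). The sketch below works in the spin-1/2 representation, L_a = σ_a/2, and sets the radius ℓ = 1; both are assumptions of the sketch, not choices made in the text.

```python
import numpy as np
from scipy.linalg import expm

# su(2) generators in the spin-1/2 representation: L_a = sigma_a / 2,
# satisfying [L_a, L_b] = i eps_{abc} L_c and Tr(L_a L_b) = delta_{ab}/2.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
L1, L2, L3 = s1 / 2, s2 / 2, s3 / 2

# Corollary (2.18) for a = 2, b = 3 (so eps_{231} = 1):
alpha = 0.7  # arbitrary angle
lhs = expm(-1j * alpha * L2) @ L3 @ expm(1j * alpha * L2)
rhs = np.cos(alpha) * L3 + np.sin(alpha) * L1
assert np.allclose(lhs, rhs)

# The connections (2.15) at fixed r, and the metric formula (2.16) with ell = 1:
r = 0.4
A = {'r': 1j * L2,
     'tau': 1j * (L3 * np.cos(r) + L1 * np.sin(r)),
     'phi': 1j * (L3 * np.cos(r) + L1 * np.sin(r))}
Abar = {'r': -1j * L2,
        'tau': -1j * (L3 * np.cos(r) - L1 * np.sin(r)),
        'phi': 1j * (L3 * np.cos(r) - L1 * np.sin(r))}
g = {mu: -0.5 * np.trace((A[mu] - Abar[mu]) @ (A[mu] - Abar[mu])).real
     for mu in A}
assert np.isclose(g['r'], 1.0)
assert np.isclose(g['tau'], np.cos(r) ** 2)
assert np.isclose(g['phi'], np.sin(r) ** 2)
```

The diagonal components come out as (1, cos²r, sin²r), matching (2.14).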
In the subsequent computations we will leave the connections A and Ā fixed, and quantize appropriately the point particle for our choice of R. From this perspective, W_R(x_i, x_f) captures how the probe is affected by a given background characterized by A and Ā. Here is where our choice of endpoint states will be crucial: our aim is to select states in R that are invariant under a subgroup of so(4). Selecting this subgroup appropriately will lead to a novel way of casting local fields in the Chern-Simons formulation of dS 3 gravity.

Non-unitary representations of so(4)

Since so(4) ≃ su(2)_L × su(2)_R, let us focus first on a single copy of su(2). Recall that in our conventions, the su(2) generators satisfy the algebra (2.1). The unique Casimir operator is the quadratic combination L^2 = L_1^2 + L_2^2 + L_3^2. (3.3) (Note that this definition of the Casimir discards the overall normalization of the bilinear form in (2.2).) We can build raising and lowering operators by defining L_± ≡ L_1 ± iL_2, L_0 ≡ L_3. (3.4) For unitary representations we would have that all of the L_a's are Hermitian. Here we will relax this condition and choose generators that are not necessarily Hermitian. In particular, a consistent choice for a non-unitary representation that respects the Lie algebra is to take L_1, L_2 to be anti-Hermitian and L_3 to be Hermitian, which results in L_±^† = −L_∓, L_0^† = L_0. (3.5) While it is not unique, this is the choice we will use to build a non-unitary representation. Notice that it is inconsistent to take all the generators to be anti-Hermitian, as this would violate the commutation relations (2.1). Our representation, despite its lack of unitarity, has to satisfy some minimal requirements which we will now discuss. We have a basis of vectors (states) that are joint eigenstates of L^2 and L_0. These are denoted |l, p⟩ with L^2 |l, p⟩ = c_2(l)|l, p⟩, (3.6) L_0 |l, p⟩ = (l − p)|l, p⟩. (3.7) Here l labels the representation, i.e.
controls the quadratic Casimir c 2 (l), and p labels the L 0 eigenvalue. Note that in a unitary representation we would use m = l − p, but we will find it more useful to use p as a label. We seek to build a representation such that the spectrum of L 0 is bounded (either from above or below), and that the norm squared of the states |l, p is positive. To achieve these requirements, we build a representation by introducing a highest weight state. We define this state as L 0 |l, 0 = l |l, 0 , L + |l, 0 = 0 . (3.8) This in particular implies that we will create states by acting with L − on |l, 0 , and hence a basis for eigenstates is schematically given by |l, p ∼ (L − ) p |l, 0 with p a positive integer. 3 Next we need to ensure that the norm of these states is positive; this will impose restrictions on the Casimir, and hence l. A useful identity in this regard is |L ± |l, p | 2 = − l, p| L ∓ L ± |l, p = −c 2 (l) + (l − p)(l − p ± 1) . (3.9) The minus sign in the first line comes from anti-Hermiticity in (3.5). In going from the first to the second line we used L ∓ L ± = L 2 − L 2 0 ∓ L 0 . The norm of L + |l, 0 vanishing gives c 2 (l) = l(l + 1) , (3.10) relating the label l with the Casimir of the representation. Positivity of the norm of the first descendant requires |L − |l, 0 | 2 = −2l > 0 , which clearly dictates that l is strictly negative. Any other state in the representation will be of the form |l, p = c p (L − ) p |l, 0 ,(3.11) where the normalization c p is adjusted such that l, p |l, p = δ p ,p , p = 0, 1, 2, · · · . (3.12) Demanding this relation leads to L + |l, p = − p(p − 2l − 1) |l, p − 1 , (3.13) L − |l, p = (p + 1)(p − 2l) |l, p + 1 . (3. 14) The fact that the roles of the raising and lowering operators appear flipped, in other words L + lowers and L − raises p, simply results from our convention in (3.6). If we had labelled states by their eigenvalue of L 0 they would raise and lower in the same way as the usual unitary sl(2, R) representations. 
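The relations (3.5)-(3.14) can be realized concretely by truncating the representation to its first N levels; away from the truncation edge, the su(2) algebra, the Hermiticity assignment (3.5), the Casimir (3.10), and the character (3.17) can all be checked numerically. The values l = −0.3 and the cutoffs below are arbitrary choices of this sketch.

```python
import numpy as np

l, N = -0.3, 12

# Matrix elements in the basis |l, p>, p = 0..N-1, following (3.7), (3.13), (3.14):
L0 = np.diag([l - p for p in range(N)]).astype(complex)
Lp = np.zeros((N, N), dtype=complex)
Lm = np.zeros((N, N), dtype=complex)
for p in range(1, N):
    Lp[p - 1, p] = -np.sqrt(p * (p - 2 * l - 1))   # minus sign of (3.13)
for p in range(N - 1):
    Lm[p + 1, p] = np.sqrt((p + 1) * (p - 2 * l))
L1 = (Lp + Lm) / 2
L2 = (Lp - Lm) / 2j

# Hermiticity pattern (3.5): L_1, L_2 anti-Hermitian, L_0 Hermitian
assert np.allclose(L1.conj().T, -L1)
assert np.allclose(L2.conj().T, -L2)
assert np.allclose(L0.conj().T, L0)

# su(2) algebra and Casimir c_2 = l(l+1) hold below the truncation edge
sub = np.s_[:N - 1, :N - 1]
assert np.allclose((L1 @ L2 - L2 @ L1 - 1j * L0)[sub], 0)
C = L1 @ L1 + L2 @ L2 + L0 @ L0
assert np.allclose(C[sub], (l * (l + 1) * np.eye(N))[sub])

# Character (3.17): a geometric series in the L_0 eigenvalues (l - p);
# alpha needs a negative imaginary part for convergence.
alpha = 0.9 - 0.2j
series = sum(np.exp(1j * alpha * (l - p)) for p in range(2000))
closed = np.exp(1j * alpha * (l + 1)) / (np.exp(1j * alpha) - 1)
assert np.isclose(series, closed)
```

The truncation only spoils the commutator and Casimir at the highest level p = N − 1, where L_− has been cut off, which is why the checks exclude that row and column.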
The minus sign in (3.13) is more fundamental. It was not present in highest weight representations of sl(2, R); here it is necessary for the action of L_± to be consistent with the su(2) commutation relations. (Footnote 4: normalization only determines L_±|l, p⟩ up to a phase.) In the unitary case, representations are finite-dimensional since there is an upper bound for p. Additionally, the Casimir is strictly positive, and l is constrained to be either integer or half-integer. These constraints all come from demanding the positivity of squared norms. For our non-unitary representations, relaxing the requirement of Hermiticity means that p is not bounded and the Casimir is not necessarily positive. Our choices also lead to a spectrum of L_0 unbounded from below, whose eigenstates are (3.11)-(3.12). We also note that the Casimir is allowed to be negative since l < 0; in particular, for the range −1 < l < 0 we have −1/2 < c_2 < 0. (3.15) Our representation has a well-defined character too. Suppose we have a group element M ∈ SU(2) which can be decomposed as M = V^{−1} e^{iαL_0} V. (3.16) Its character is simply given by Tr(M) = Σ_{p=0}^∞ ⟨l, p|e^{iαL_0}|l, p⟩ = e^{iα(l+1)}/(e^{iα} − 1). (3.17) Finally, notice that for a fixed Casimir there are actually two distinct representations labelled by the two solutions for l in (3.10). These solutions are l_± = −(1 ± √(1 + 4c_2))/2. (3.18) One representation has −1 < l_+ < −1/2 while the other has −1/2 < l_− < 0, and each of these representations will be coined as R_±. The role of R_± will become important later, when we compare the Wilson line to the Euclidean Green's function, and in the construction of local pseudofields. In particular, we will see that both representations are necessary to generate a complete basis of solutions for local fields in dS 3.

Singlet states

Returning to so(4) ≃ su(2)_L × su(2)_R, let's add a set of operators L̄_a with the same commutation relations as the unbarred ones and which commute with them: [L_a, L̄_b] = 0.
(3.19) In the following we will be interested in building a state |U , assembled from the non-unitary representations of su (2), that is invariant under a subset of the generators in so(4). These states, denoted singlet states, will serve as endpoint states which we will use to evaluate the Wilson line (3.2). This construction is motivated by the derivations for so(2, 2) sl(2, R) L × sl(2, R) R presented in [6]. Here we will review the derivation as presented there, adapted appropriately to so(4). Singlet states of so(4) can be constructed as follows. Consider a group element U ∈ SU (2), and define the rotated linear combination Q a (U ) = L a + D a a (U )L a , (3.20) where D a a corresponds to the adjoint action of the group; see App. A for our conventions. We define a state |U through its annihilation by Q a (U ), Q a (U ) |U = 0 . (3.21) In other words, |U is a state that is invariant under a linear combination of so(4) generators specified by Q a (U ). This equation is crucial: the inclusion of both copies of su(2) will ensure that the states |U will prevent a factorization in our observables, and will allow us to interpret our choices in the metric formulation. There are two interesting choices of |U for which it is useful to build explicit solutions to (3.21). We refer to our first choice as an Ishibashi state: it is defined by selecting a group element U = Σ Ish such that D k k (Σ Ish ) L k = Σ Ish L k Σ −1 Ish = −L −k ,(3.22) where we are using the basis (3.4), and therefore k = −, 0, +. The corresponding group element is Σ Ish = e π 2 (L + −L − ) = e iπL 2 . (3.23) The corresponding singlet state, i.e., Ishibashi state, is the solution to (L k −L −k ) |Σ Ish = 0 . (3.24) This equation has a non-trivial solution for the non-unitary representations built in Sec. 3.1. Consider the basis of states in (3.11)-(3.12) for each copy of su(2) of the form p,p a p,p |l, p ⊗ l ,p , (3.25) with coefficients a p,p , as an ansatz for |Σ Ish . 
The k = 0 condition in (3.24) sets l = l̄, and k = ± will give a_{p,p̄} = (−1)^p δ_{p,p̄}, up to an overall normalization independent of p. The resulting state is |Σ_Ish⟩ = Σ_{p=0}^∞ (−1)^p |l, p, p⟩, (3.26) where |l, p, p̄⟩ ≡ |l, p⟩ ⊗ |l, p̄⟩. The second choice will be coined the crosscap state. In this instance, we select U = Σ_cross such that D_k{}^{k'}(Σ_cross) L_{k'} = Σ_cross L_k Σ_cross^{−1} = −(−1)^k L_{−k}, (3.27) which leads to the group element Σ_cross = e^{iπ/2 (L_+ + L_−)} = e^{iπL_1}. (3.28) Using (3.27) in (3.21), the crosscap state satisfies (L_k − (−1)^k L̄_{−k}) |Σ_cross⟩ = 0, (3.29) and in terms of the non-unitary su(2) representations the solution to these conditions is |Σ_cross⟩ = Σ_{p=0}^∞ |l, p, p⟩. (3.30) In contrast to the Virasoro construction, it is important to emphasise that here we don't have an interpretation of (3.24) and (3.29) as a boundary condition of an operator in a CFT_2 as in [41,42]. We are using (and abusing) the nomenclature used there because of the resemblance of (3.24) and (3.29) with the CFT_2 conditions, and their close relation to the so(2, 2) states used in [6]. In this regard, it is useful to highlight some similarities and key differences in so(4) relative to so(2, 2). A similarity is that our choice to use p rather than the eigenvalue of L_0 to label the states in the non-unitary representation was precisely motivated to make the states match with those in sl(2, R). However, one difference is that the group elements (3.23) and (3.28) differ by a factor of i in the exponent compared to their sl(2, R) counterparts in [6]. Also we note that, unlike in the sl(2, R) case, the relative phase in the state now appears in the Ishibashi state rather than the crosscap state. This is due to the extra minus sign in (3.13). Another important property of the singlet states is their transformation under the action of SU(2) group elements. Consider G(L) ∈ SU(2)_L, and Ḡ(R^{−1}) ∈ SU(2)_R for each copy appearing in SO(4).
A simple manipulation shows that G(L)Ḡ(R −1 )Q a (U ) |U = D a a (L −1 )Q a (LU R)G(L)Ḡ(R −1 ) |U = 0 . (3.31) Thus we have G(L)Ḡ(R −1 ) |U = |LU R . (3.32) This identity will be used heavily in the following derivations. Wilson line and the Green's function We now come back to evaluating the Wilson line (3.2). We select as endpoints states |U i = |U f = |Σ ,(3.W R (x i , x f ) = Σ|G(L)Ḡ(R −1 ) |Σ , (3.35) where we identify G(L) = Pexp − γ A ,Ḡ(R −1 ) = Pexp − γĀ . (3.36) Given the properties of our singlet states, we can easily evaluate (3.35) as follows, W R (x i , x f ) = Σ|G(L)Ḡ(R −1 ) |Σ = Σ|G(LR) |Σ = ∞ p=0 l, p|G(LR)|l, p = e iα(l+1) e iα − 1 . (3.37) In the second line we used (3.32) to move the right group element R to the left, wherẽ R ≡ Σ R Σ −1 . (3.38) To obtain the third line in (3.37) we use the explicit form of the states given by (3.26) and (3.30), where both the Ishibashi and cross cap state report the same answer. Finally in the last equality we used the formula for the character in (3.17), where α in this case is defined via the equation LΣ R Σ −1 = V −1 e iαL 0 V , (3.39) i.e., assuming we can diagonalise the left hand side, α captures the eigenvalue of the group element in the inner product. The interpretation of (3.37) in the metric formulation of dS 3 gravity is interesting. First, we observe that for a pair of su(2) Chern-Simons connections, A = g L dg −1 L ,Ā = g −1 R dg R ,(3.40) we have G(L) = g L (x f )g L (x i ) −1 ,Ḡ(R −1 ) = g R (x f ) −1 g R (x i ) , (3.41) where we evaluated the path ordered integral for a path γ with endpoints (x i , x f ). For concreteness, we will make the choice g L = e −irL 2 e −i(φ+τ )L 3 ,g R ≡ Σ g R Σ −1 = e i(φ−τ )L 3 e −irL 2 ,(3.42) which for SU (2) L is the group element associated to S 3 in (2.17). But it is important to stress that, with some insight, we are specifyingg R rather than g R , since this is all we need at this stage to evaluate W R (x i , x f ). 
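As a cross-check of the singlet conditions just used, (3.24) and (3.29) can be verified numerically in the truncated non-unitary representation; with the conventions (3.13)-(3.14) they hold exactly on the truncated space. The values l = −0.3 and N = 10 are arbitrary choices of this sketch.

```python
import numpy as np

l, N = -0.3, 10

# Truncated ladder matrices in the basis |l, p>, p = 0..N-1 (eqs. (3.13)-(3.14))
L0 = np.diag([l - p for p in range(N)]).astype(complex)
Lp = np.zeros((N, N), dtype=complex)
Lm = np.zeros((N, N), dtype=complex)
for p in range(1, N):
    Lp[p - 1, p] = -np.sqrt(p * (p - 2 * l - 1))
for p in range(N - 1):
    Lm[p + 1, p] = np.sqrt((p + 1) * (p - 2 * l))
Id = np.eye(N, dtype=complex)

# Two-copy states on su(2)_L x su(2)_R; the barred copy uses the same matrices
ishibashi = sum((-1) ** p * np.kron(Id[p], Id[p]) for p in range(N))
crosscap = sum(np.kron(Id[p], Id[p]) for p in range(N))

# Ishibashi condition (3.24): (L_k - Lbar_{-k}) |Sigma_Ish> = 0 for k = +, 0, -
for L_left, L_right in [(Lp, Lm), (L0, L0), (Lm, Lp)]:
    op = np.kron(L_left, Id) - np.kron(Id, L_right)
    assert np.allclose(op @ ishibashi, 0)

# Crosscap condition (3.29): (L_k - (-1)^k Lbar_{-k}) |Sigma_cross> = 0
for L_left, L_right, sign in [(Lp, Lm, -1), (L0, L0, 1), (Lm, Lp, -1)]:
    op = np.kron(L_left, Id) - sign * np.kron(Id, L_right)
    assert np.allclose(op @ crosscap, 0)
```

The alternating sign (−1)^p in the Ishibashi state is exactly what compensates the extra minus sign in (3.13), as noted above.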
Using (3.42) we find that the solution for α in (3.39) is cos α 2 = cos(r f ) cos(r i ) cos(τ f − τ i ) + sin(r f ) sin(r i ) cos(φ f − φ i ) . (3.43) α, which labels the equivalence class of LΣRΣ −1 , can then be related to the geodesic distance between points (x i , x f ) on S 3 (see (B.31)): α = ±2Θ + 4πn , n ∈ Z ,(3.44) with n accounting for winding. As explained in App. B.2, the propagator of a scalar field of mass m in dS 3 can be written as G(Θ) = G h (Θ) + G 1−h (Θ) , G h (Θ) = a h e −2ihΘ e −2iΘ − 1 , (3.45) with a h = i 2π 1 1 − e −4πih , h = 1 + 1 − (m ) 2 2 . (3.46) Equations (3.37) and (3.45) lead us to conclude that if we pick a representation R = R + in (3.18) with l = −h then W R + (x i , x f ) = 1 a h G h (Θ) . (3.47) Similarly, picking instead a representation R = R − in (3.18), where now l = h − 1, leads to W R − (x i , x f ) = 1 a 1−h G 1−h (Θ) . (3.48) The full propagator can then be written as G(Θ) = a h W R + (x i , x f ) + a 1−h W R − (x i , x f ) . (3.49) R ± are the two possible representations with the same Casimir c 2 = h(h − 1) = − m 2 2 4 . We emphasize that, unlike in AdS 3 , we need to consider both of these representations to obtain the correct propagator. This is related to the fact that the de Sitter propagator is not simply given by the analytic continuation from AdS 3 due to differences in causal structures [43,44]. Moving away from the specificity of group elements (3.42), for any pair of flat connections (3.40), we will have that W R ± (x i , x f ) gives the G h,1−h (Θ) contribution to the Green's function between the points (x i , x f ) in the Euclidean space with metric g µν = − 2 2 Tr A µ − ΣĀ µ Σ −1 A ν − ΣĀ ν Σ −1 . (3.50) A proof of this statement, beyond the explicit computation done here for S 3 , follows step by step the derivations in [6] for so(2, 2) adapted to so(4). 
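Two ingredients of this identification can be checked independently: the right-hand side of (3.43) equals cos Θ for the geodesic distance on the unit S^3 (computed here from the standard embedding in R^4, an assumption consistent with App. B), and h as defined in (3.46) satisfies the Casimir relation c_2 = h(h − 1) = −(mℓ)²/4, with l = −h and l = h − 1 precisely the two roots R_± of (3.18).

```python
import numpy as np
import sympy as sp

# Geodesic distance on the unit S^3: embed (tau, r, phi) in R^4 and compare
# cos(Theta) = x_i . x_f with the right-hand side of (3.43).
def embed(tau, r, phi):
    return np.array([np.cos(r) * np.cos(tau), np.cos(r) * np.sin(tau),
                     np.sin(r) * np.cos(phi), np.sin(r) * np.sin(phi)])

ti, ri, pi_ = 0.2, 0.7, 1.1   # arbitrary test points
tf, rf, pf = 1.3, 0.4, 2.5
cosTheta = embed(ti, ri, pi_) @ embed(tf, rf, pf)
rhs = (np.cos(rf) * np.cos(ri) * np.cos(tf - ti)
       + np.sin(rf) * np.sin(ri) * np.cos(pf - pi_))
assert np.isclose(cosTheta, rhs)   # so alpha = +-2 Theta solves (3.43)

# The weight (3.46) and the Casimir relation below (3.49):
m, ell = sp.symbols('m ell', positive=True)
h = (1 + sp.sqrt(1 - (m * ell) ** 2)) / 2
assert sp.simplify(h * (h - 1) + (m * ell) ** 2 / 4) == 0
for l in (-h, h - 1):   # the labels of R_+ and R_-
    assert sp.simplify(l * (l + 1) - h * (h - 1)) == 0
```
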
The geometric role of our singlet states is now more clear: Σ is the group element that controls how the right connectionĀ acts as a left element relative to A, and vice-versa. These derivations also establish the gravitational Wilson line as a local probe of the Euclidean dS 3 geometry, and hence will allow us to investigate notions of locality in the Chern-Simons formulation of gravity. Local pseudofields from Wilson lines The aim of this section is to further extract local quantities from the gravitational Wilson line. We will focus on the background connections associated to the round 3-sphere for concreteness, and show how to build local pseudofields from the singlet states used in the previous section. We use the term "pseudofields" because while the objects we will build from a single irreducible representation R (either R + or R − ) are local, and behave in many ways like fields, both representations are needed to form a complete basis for local fields in dS 3 . Wilson line as an overlap of states W R (x i , x f ) = Σ|Ḡ(g R (x f ) −1 )G(g L (x f )) G(g L (x i ) −1 )Ḡ(g R (x i ))|Σ . (4.1) If our representation R used Hermitian generators, we would simply note that for unitary group elements, i.e., g −1 R = g † R , g −1 L = g † L , (4.2) we would have W R (x i , x f ) = U (x f )|U (x i ) with |U (x) = G(g L (x) −1 )Ḡ(g R (x))|Σ . However, our representation is non-unitary, and hence these manipulations require some care. Define the following state: |U (x) = G(g L (x) −1 )Ḡ(g R (x))|Σ ,(4.3) We will focus exclusively on the background introduced in (2.17). Because the representation we are using is non-unitary, we have 4) and the same relation for g R , which allow us to write the Wilson line as g L (τ, r, φ) † = g L (τ, −r, φ) −1 = g L (τ, r, φ + π) −1 e iπL 3 ,(4.W R (x i , x f ) = U (τ f , r f , φ f + π)|U (τ i , r i , φ i ) . (4.5) In this equality we used e iπL 3 e iπL 3 |Σ ∼ |Σ ,(4.6) since both singlet states are annihilated by Q a (Σ). 
Construction of local basis Having written W R (x i , x f ) as an overlap of states, we now can start the process of defining a local pseudofield from |U (x) . The most natural way to split (4.5) is as done in (4.3). Still this has its inherent ambiguities: in defining |U (x) we are splitting the cutting curve γ(s) at some midpoint x 0 , the choice of which is a gauge freedom at our disposal. More concretely, a general definition of the state should be |U (x) = G(g L (x 0 )g L (x) −1 )Ḡ(g R (x 0 ) −1 g R (x))|Σ (4.7) where we restored the dependence on this midpoint split. At this stage it is not clear to us that one choice of g L,R (x 0 ) is better than any other, so for sake of simplicity we will select g L,R (x 0 ) = 1, i.e. the identity element. Therefore we will be working with (4.3), and explore the local properties of |U (x) . First, we expand |U (x) in the eigenstate |l, p,p basis: |U (x) = ∞ p,p=0 Φ * p,p (x)|l, p,p , (4.8) which we can reverse as Φ p,p (x) = U (x)|l, p,p . (4.9) Φ p,p (x) will be our basis of local pseudofields that will support the local properties in |U (x) . To build this basis of eigenfunctions, we can translate the action of the generators L a on the basis vectors into the action of differential operators ζ a acting on Φ p,p . Specifically, we will find ζ a ,ζ a such that U (x)|L a |l, p,p = ζ a U (x)|l, p,p , (4.10) U (x)|L a |l, p,p =ζ a U (x)|l, p,p . (4.11) Using (3.13) and (3.14), the differential operators must therefore satisfy ζ + Φ p,p = − p(p − 2l − 1)Φ p−1,p (4.12) ζ − Φ p,p = (p + 1)(p − 2l)Φ p+1,p (4.13) ζ 0 Φ p,p = (l − p)Φ p,p ,(4.14) and similarly for the barred sector. It follows that Φ p,p satisfies the Casimir equation, where ∇ 2 = δ ab ζ a ζ b , and∇ 2 = δ abζ aζb . 5 Our strategy will be to build the differential operators for (ζ a ,ζ a ) based on (4.10)-(4.11), and then solve for Φ p,p (x) from the differential equations (4.12)-(4.15). ∇ 2 +∇ 2 Φ p,p (x) = 2l(l + 1)Φ p,p (x) . 
We will start by building the generators ζ a for Euclidean dS 3 . It is convenient to cast the state in (4.3) as |U (x) = G(g L (x) −1 )Ḡ(g R (x))|Σ = G(g L (x) −1g R (x) −1 )|Σ = e i(φ+τ )L 3 e 2irL 2 e −i(φ−τ )L 3 |Σ . (4.16) In the second line we moved all the group elements to left, as in (3.37), and in the third line we used (3.42). Next, consider the action of partial derivatives on Φ pp (x) = U (x)|l, p,p : ∂ + U (x)|l, p,p = −i U (x)|L 3 |l, p,p , 5 We will find that ∇ 2 +∇ 2 = − 1 2 ∇ 2 S 3 , where ∇ 2 S 3 is the ordinary Laplacian for EdS3. ∂ − U (x)|l, p,p = i cos(2r) U (x)|L 3 |l, p,p + i sin(2r) cos(θ + ) U (x)|L 1 |l, p,p − i sin(2r) sin(θ + ) U (x)|L 2 |l, p,p , ∂ r U (x)|l, p,p = 2i cos(θ + ) U (x)|L 2 |l, p,p + 2i sin(θ + ) U (x)|L 1 |l, p,p , (4.17) where we introduced the coordinates θ ± = φ ± τ , ∂ ± = ∂ ∂θ ± . (4.18) Inverting the relationship between ∂ a U (x)|l, p,p and U (x)|L a |l, p,p leads to 19) or, in terms of ζ ± = ζ 1 ± iζ 2 , ζ 1 = −i cos θ + sin (2r) (∂ − + cos (2r) ∂ + ) − i 2 sin θ + ∂ r , ζ 2 = i sin θ + sin (2r) (∂ − + cos (2r) ∂ + ) − i 2 cos θ + ∂ r , ζ 3 = i∂ + ,(4.ζ ± = −ie ∓iθ + (csc (2r) ∂ − + cot (2r) ∂ + ) ± 1 2 e ∓iθ + ∂ r ,(4.20) and ζ 0 = ζ 3 . These are simply three of the Killing vectors for S 3 , which together satisfy one copy of the su(2) algebra. To do the equivalent calculation for the barred sector, we should instead write |U (x) = G(g −1 L (x))Ḡ(g R (x))|Σ =Ḡ g R (x)Σ −1 g L (x)Σ |Σ =Ḡ Σ −1 g R (x)g L (x)Σ |Σ = Σ −1 e iθ −L3 e −2irL 2 e −iθ +L3 Σ|Σ . (4.21) This, after all, is the purpose of our definition of Σ: it lets us intertwine the two copies of su(2). Therefore, the exact action of Σ on group elements will affect the result of this calculation. 
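One can check symbolically that the operators (4.19) close into the su(2) algebra [ζ_a, ζ_b] = i ε_{abc} ζ_c. The sketch below acts on an arbitrary exponential test function, which suffices because the commutator of two first-order operators is again first-order, and a first-order operator annihilating all such exponentials vanishes identically.

```python
import sympy as sp

r, tp, tm = sp.symbols('r theta_plus theta_minus')
k1, k2, k3 = sp.symbols('k1 k2 k3')
f = sp.exp(k1 * r + k2 * tp + k3 * tm)   # generic exponential test function

# The Killing vectors (4.19), with theta_+- = phi +- tau as in (4.18):
def zeta1(g):
    return (-sp.I * sp.cos(tp) / sp.sin(2 * r) * (g.diff(tm) + sp.cos(2 * r) * g.diff(tp))
            - sp.I / 2 * sp.sin(tp) * g.diff(r))

def zeta2(g):
    return (sp.I * sp.sin(tp) / sp.sin(2 * r) * (g.diff(tm) + sp.cos(2 * r) * g.diff(tp))
            - sp.I / 2 * sp.cos(tp) * g.diff(r))

def zeta3(g):
    return sp.I * g.diff(tp)

# Cyclic check of [zeta_a, zeta_b] = i eps_{abc} zeta_c:
for za, zb, zc in [(zeta1, zeta2, zeta3), (zeta2, zeta3, zeta1), (zeta3, zeta1, zeta2)]:
    comm = za(zb(f)) - zb(za(f)) - sp.I * zc(f)
    assert sp.simplify(comm) == 0
```

The barred generators (4.25) satisfy the same algebra after the replacement θ_± → −θ_∓, r → −r noted in the text.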
We have two choices of Σ, given in (3.22) and (3.27), Σ cross = e iπL 1 , Σ Ish = e iπL 2 .(4.22) Working out the effect of the the Ishibashi state in (4.21) we find Σ −1 Ish e iθ −L3 e −2irL 2 e −iθ +L3 Σ Ish = e −iθ −L3 e −2irL 2 e iθ +L3 ,(4.23) in other words conjugation by Σ Ish flips θ ± → −θ ± while leaving r fixed. For the crosscap state we instead find Σ −1 cross e iθ −L3 e −2irL 2 e −iθ +L3 Σ cross = e −iθ −L3 e 2irL 2 e iθ +L3 ,(4.24) so that conjugation by Σ cross flips θ ± → −θ ± and in addition r → −r. From here on, the calculation to buildζ a is very similar to the unbarred case, but there will be differences depending on the choice of Σ. First, solving (5.29), for Σ = Σ Ish we find 25) or in terms ofζ ± =ζ 1 ± iζ 2 , ζ 1 = −i cos θ − sin 2r (∂ + + cos 2r∂ − ) − i 2 sin θ − ∂ r , ζ 2 = −i sin θ − sin 2r (∂ + + cos 2r∂ − ) + i 2 cos θ − ∂ r , ζ 3 = −i∂ − ,(4.ζ ± = −ie ±iθ − (csc 2r∂ + + cot 2r∂ − ) ∓ 1 2 e ±iθ − ∂ r ,(4.26) andζ 0 =ζ 3 . These are the three additional Killing vectors for S 3 , which are related to (4.19) by the replacement θ ± → −θ ∓ and r → −r. Together the generators ζ a satisfy su(2) L algebra, whilē ζ a correspond to the generators of the second su(2) R . Selecting Σ = Σ cross is not dramatically different: we will again obtain (4.25) with r → −r, and that flips the overall sign inζ 1,2 . Hence we will again find the second copy of Killing vectors obeying su(2) R ; the difference at this stage between the two singlet states is an orientation of r that does not affect the interpretation of (ζ a ,ζ a ) as the six Killing vectors for S 3 . Now we would like to find explicit expressions for Φ p,p . The procedure for either Σ Ish or Σ cross would produce the same special functions, with the difference being an overall normalization that depends on (p,p). For concreteness we will just focus on Σ Ish . 
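The conjugation properties (4.23) and (4.24) are group-element identities, so they can be verified directly in the spin-1/2 representation (a stand-in used only for this numerical sketch; the test values of the coordinates are arbitrary).

```python
import numpy as np
from scipy.linalg import expm

# spin-1/2 generators L_a = sigma_a / 2
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
L1, L2, L3 = s1 / 2, s2 / 2, s3 / 2

def M(tm, r, tp):
    # the group element appearing in (4.21): e^{i theta_- L3} e^{-2ir L2} e^{-i theta_+ L3}
    return expm(1j * tm * L3) @ expm(-2j * r * L2) @ expm(-1j * tp * L3)

tm, r, tp = 0.3, 0.5, 0.8
Sig_ish = expm(1j * np.pi * L2)     # (3.23)
Sig_cross = expm(1j * np.pi * L1)   # (3.28)

# (4.23): conjugation by Sigma_Ish flips theta_+- -> -theta_+-, with r fixed
lhs = np.linalg.inv(Sig_ish) @ M(tm, r, tp) @ Sig_ish
assert np.allclose(lhs, M(-tm, r, -tp))

# (4.24): conjugation by Sigma_cross flips theta_+- -> -theta_+- and r -> -r
lhs = np.linalg.inv(Sig_cross) @ M(tm, r, tp) @ Sig_cross
assert np.allclose(lhs, M(-tm, -r, -tp))
```
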
We can construct the pseudofields by first solving for a highest weight state Φ 0,0 , and then acting with (ζ − ) p and (ζ − )p on this solution to generate Φ p,p . This will give a position-space representation of our abstract states |l, p,p . The highest weight state satisfies The descendant states are then given by ζ 3 Φ 0,0 =ζ 3 Φ 0,0 = lΦ 0,0 , (4.27) ζ + Φ 0,0 =ζ + Φ 0,0 = 0 .Φ p,p (r, τ, φ) = c pp e −2ilτ cos 2l (r)e i(pθ + −pθ − ) tanp −p (r)Pp −p,−(2l+1) p 1 + 2 tan 2 (r) , c pp = (−1) p p!(p − (2l + 1))! p!(p − (2l + 1))! ,(4.30) where here P α,β n (x) is a Jacobi polynomial. These satisfy (4.12)-(4.14) and their barred ana-logues. Wavefunction for the singlet states Where does our singlet state |Σ sit on S 3 ? This question is ambiguous, since the answer depends on a choice of gauge. In the context of the discussion presented here, positions will depend on how one selects the midpoint in (4.7). Still it is instructive to answer it for the simple purpose of illustrating what our prior choices imply. Consider first the Ishibashi state |Σ Ish . To see the position of this state in S 3 , it is very clear that at r = 0, we have Φ p,p (τ, r = 0, φ) = (−1) p e −2iτ (l−p) δ p,p ,(4.31) which follows from (4.30). This is to be expected since p =p introduces a φ dependence which we know is absent at r = 0. Therefore, we can write |U (τ, r = 0) = p (−1) p e 2iτ (l−p) |l, p, p ,(4.32) which at τ = 0 is simply the Ishibashi state (3.26). Thus we see that our Ishibashi state lives at (r = 0, τ = 0). If we had constructed a basis of Φ p,p from the (ζ a ,ζ a ) obtained from the crosscap states rather than the Ishibashi states, we would have seen that the crosscap state sits at (r = 0, τ = 0). The wave function we would attribute to the Ishibashi state can also be explicitly calculated: Σ Ish |U (x) = (cos(Θ NPole ) − i sin(Θ NPole )) 2l+1 2i sin(Θ NPole ) = e −2ilΘ NPole 1 − e 2iΘ NPole . 
(4.33) where Θ NPole is the geodesic distance (B.31) between x and r = 0, τ = 0 -the North Pole of the three-sphere. 6 Still we stress that the values of τ and r are somewhat artificial. For instance, in (4.32) the crosscap state can be seen to be related to the Ishibashi state by a simple shift in τ . This is a reflection of the fact that there is considerable gauge freedom in how we describe solutions. Wick rotation and quasi-normal modes Before proceeding to discuss SL(2, C) Chern-Simons theory, i.e. the Lorentzian formulation of dS 3 gravity, it is instructive to interpret our Euclidean results in Lorentzian signature. We will simply now use a Wick rotation of the metric formulation to provide a first interpretation of our results. As described in App. B, the metric analytic continuation is implemented by taking t → −i τ . (4.34) The Wick-rotated Φ p,p in (4.30) are therefore Φ p,p (r, t, φ) = c pp e il(z−z) cos 2l (r)e i(pz−pz) tanp −p (r)Pp −p,−(2l+1) p 1 + 2 tan 2 (r) , c pp = (−1) p p!(p − (2l + 1))! p!(p − (2l + 1))! , (4.35) with z ≡ φ + it, andz ≡ φ − it. In terms of the more familiar hypergeometric functions and radial coordinate u ≡ sin(r), we have (using that Φ p,p = e 2i(p−p)φ Φp ,p ): The Wick rotation can also be used to simply obtain Lorentzian Killing vectors from (4.19) and (4.25). These can then be re-organized in an sl(2, C) representation in the following way: Φ ω,k (r, t, φ) = c pp ω+|k| 2 + l ω−|k| 2 + l (1 − u 2 ) −ω/2 u |k| e −ikφ e −ωt 2 F 1 |k| − ω 2 − l, |k| − ω 2 + l + 1; |k| + 1; u 2 , ω = p +p − 2l > 0 , k =p − p .−iζ 1 −→ τ →it/ H 1 = − cos z sin 2r ∂ + cos 2r∂ − 1 2 sin z ∂ r , −iζ 2 −→ τ →it/ H 2 = sin z sin 2r ∂ + cos 2r∂ − 1 2 cos z ∂ r ζ 3 −→ τ →it/ H 3 = i∂ , iζ 1 −→ τ →it/ H 1 = cosz sin 2r ∂ + cos 2r∂ + 1 2 sinz ∂ r , −iζ 2 −→ τ →it/ H 2 = − sinz sin 2r ∂ + cos 2r∂ + 1 2 cosz ∂ r , −ζ 3 −→ τ →it/ H 3 = i∂ . (4.37) The operators (H a ,H a ) have been normalised such that they form an sl(2, R) × sl(2, R) algebra. 
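The quasinormal interpretation can be verified directly: the lowest mode (p = p̄ = 0) of (4.35) is Φ_{0,0} ∝ e^{2lt} cos^{2l}(r), a purely decaying function for −1 < l < 0, and it solves the Klein-Gordon equation in the static metric (B.11) with 4l(l+1) = −m²ℓ². A sketch with ℓ = 1 (the sign of the exponent follows our orientation of Lorentzian time):

```python
import sympy as sp

t, r, phi, l = sp.symbols('t r phi l')

# Lowest quasinormal mode (p = pbar = 0): Phi ~ e^{-omega t} cos^{2l}(r), omega = -2l > 0
Phi = sp.exp(2*l*t)*sp.cos(r)**(2*l)

# Scalar wave operator for ds^2 = -cos^2(r) dt^2 + dr^2 + sin^2(r) dphi^2  (ell = 1)
box = (-sp.diff(Phi, t, 2)/sp.cos(r)**2
       + sp.diff(sp.cos(r)*sp.sin(r)*sp.diff(Phi, r), r)/(sp.cos(r)*sp.sin(r))
       + sp.diff(Phi, phi, 2)/sp.sin(r)**2)

# Klein-Gordon residual, with the mass fixed by 4 l (l + 1) = -m^2 ell^2
m2 = -4*l*(l + 1)
spot = (box - m2*Phi).subs({l: sp.Rational(-1, 3), t: 1,
                            r: sp.Rational(2, 5), phi: 0}).evalf()
```

The residual vanishes identically; `spot` evaluates it at a generic interior point for a sample l in (−1, 0).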
More importantly, these operators have a simple action on the quasinormal modes. We can see this explicitly by reorganizing the operators into the combinations H 0 = −H 3 , H ± = H 2 ∓ iH 1 , H 0 =H 3 ,H ± =H 2 ± iH 1 . (4.38) The quasinormal mode Φ 00 is a highest weight state of our representation, H + Φ 00 = 0 ,(4.39) while the rest of the quasinormal modes obey H 0 Φ pp = (h + p)Φ p,p , H + Φ pp = p(p + 2h + 1)Φ p−1,p , H − Φ pp = (p + 1)(p + 2h)Φ p+1,p ,(4.40) and similarly for the barred sector. In this expression we have h = −l, 7 and hence the modes The Wick rotation gives an interpretation for the algebraic structure of the quasi-normal mode spectrum of the static patch. Our construction resonates with [39], where it was noticed that the quasinormal modes had a "hidden" SL(2, R) symmetry, but the origin of this remained mysterious. A similar result was found in [46]. Finally, the quasinormal modes additionally satisfy the Casimir equation for our representations, ∇ 2 +∇ 2 Φ p,p (x) = 2h(h − 1)Φ p,p (x) .(4.41) where ∇ 2 = −η ab H a H b , and∇ 2 = −η abH aHb , so that ∇ 2 +∇ 2 = − 1 2 ∇ 2 dS 3 is the d'Alembertian on Lorentzian dS 3 . With the insight of the Wick rotation, the representation (4.40) will be our focus in the subsequent section as we study SL(2, C) Chern-Simons theory. Wilson lines in SL(2, C) Chern-Simons Everything we have discussed so far has been based on Euclidean dS 3 . In this section, we discuss how our construction can be translated to Lorentzian signature, guided by the properties of our representation under analytic continuation. Based on the Euclidean analysis, we will select a suitable representation of sl(2, C), and implement this choice for the inflationary patch of dS 3 . Chern-Simons formulation of Lorentzian dS 3 gravity We start from SL(2, C) Chern-Simons theory with action S CS [A] = is 4π Tr A ∧ dA + 2 3 A ∧ A ∧ A − is 4π Tr Ā ∧ dĀ + 2 3Ā ∧Ā ∧Ā , (5.1) with A,Ā ∈ sl(2, C), and complex parameter s. 
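The ladder relations of the form (4.40) close into sl(2, R) and reproduce the Casimir h(h−1) in each chiral copy. A quick abstract check, using the standard unitary normalization L_−|h,p⟩ = √((p+1)(p+2h)) |h,p+1⟩ and L_+|h,p⟩ = √(p(p+2h−1)) |h,p−1⟩ (this square-root convention is our assumption; the matrix elements displayed in the text may differ by a rescaling of the basis):

```python
import math

h = 0.7  # sample highest weight; any h > 0 exhibits the same algebra

def l0(p):        # L0 |h,p> = (h + p) |h,p>
    return h + p

def lower(p):     # L- : |h,p> -> |h,p+1>
    return math.sqrt((p + 1)*(p + 2*h))

def raise_(p):    # L+ : |h,p> -> |h,p-1>  (annihilates p = 0)
    return math.sqrt(p*(p + 2*h - 1))

max_err = 0.0
for p in range(30):
    LpLm = lower(p)*raise_(p + 1)                      # L+ L- |h,p>
    LmLp = raise_(p)*lower(p - 1) if p > 0 else 0.0    # L- L+ |h,p>
    # [L+, L-] = 2 L0 on every state
    max_err = max(max_err, abs(LpLm - LmLp - 2*l0(p)))
    # Casimir L0^2 - (L+ L- + L- L+)/2 = h(h - 1) on every state
    casimir = l0(p)**2 - 0.5*(LpLm + LmLp)
    max_err = max(max_err, abs(casimir - h*(h - 1)))
```

The same bookkeeping applies verbatim to the barred copy.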
The relation of (5.1) to Lorentzian dS 3 gravity was done in [47], and more recent discussions include [37,[48][49][50]. To build this gravitational interpretation, we expand the gauge fields over the generators L a ,L a of sl(2, C) as A = − iω a + 1 e a L a ,Ā = − iω a − 1 e a L a . (5.2) where the sl(2, C) generators can be related to the generators of so(1, 3) isometries as It is important to note that A andĀ are not independent variables. They are related by complex conjugation, and this relation depends on how we choose to relate L a toL a . For now it suffices to demand (5.5), which assures reality of the action (5.1), and we will constrain further the representation as we construct the appropriate probes. L a = i 2 (J a + i P a ) ,L a = i 2 (J a − i P a ) . Construction of probes in sl(2, C) As in the Euclidean case, we would like to build probes in SL ( L 0 = −L 3 , L ± = L 2 ∓ iL 1 , (5.8) with algebra [L 0 , L ± ] = ∓L ± , [L + , L − ] = 2L 0 . (5.9) The highest weight representation in this basis satisfies L 0 |h, p = (h + p)|h, p , L + |h, p = p(p + 2h + 1)|h, p − 1 , L − |h, p = (p + 1)(p + 2h)|h, p + 1 . For now, we take h to be a real parameter that controls the Casimir of the representation −η ab L a L b |h, p = (L 2 0 − L + L − − L − L + )|h, p = h(h − 1)|h, p . (5.11) Of course, we anticipate that this parameter will match h = 1+ √ 1−(m ) 2 2 (or the other solution which gives the same Casimir). In addition we demand the operators satisfy L † 0 = L 0 and L † ± = L ∓ ; this makes the representation unitary. For the barred sector we also select a highestweight representation of sl(2, R), which obeys L 0 |h,p = (h +p)|h,p , L + |h,p = p(p + 2h + 1)|h,p − 1 , L − |h,p = (p + 1)(p + 2h)|h,p + 1 . (5.12) The quadratic Casimir for this sector is −η abL aLb |h,p = 2h(h − 1)|h,p . (5.13) Singlet states in this case are defined in an analogous way as in Sec. 
3.1.1: we will consider two possible conditions (L k − (−1) kL −k ) |Σ cross = 0 , (L k −L −k ) |Σ Ish = 0 , (5.14) for k = 0, ±, and the solutions are |Σ Ish = ∞ p=0 |h, p, p , |Σ cross = ∞ p=0 (−1) p |h, p, p , (5.15) where the singlet condition sets h =h, and we are using |h, p,p ≡ |h, p ⊗ |h,p . There is a difference in that the (−1) p factor appears for the crosscap state rather than for Ishibashi. This results from the fact that (5.10) and (5.12) do not contain a minus sign. In this sense they more closely resemble the AdS 3 rather than EdS 3 versions. There is, however, a more important conceptual difference when we move to Lorentzian de Sitter. Recall that in EdS 3 the singlet states played a role in relating the two (barred and unbarred) copies of SU (2), which are initially independent; in the same way, here they allow us to relate two copies of SL(2, R). Since in SL(2, C) Chern-Simons theory the components A a andĀ a are related by complex conjugation to ensure the reality of the Einstein-Hilbert action, the choice of a singlet state additionally picks out a reality condition on the fields propagating on the background created by A andĀ. We can now evaluate the Wilson line. We are treating sl(2, C) as two copies of sl(2), as decomposed in (5.4), and hence we want to evaluate W R (x i , x f ) = Σ| Pexp − γ A Pexp − γĀ |Σ , (5.16) where we selected the endpoint states to be one of the singlet states in (5.15): |U i,f = |Σ . Writing this as group elements acting on each copy of sl(2) we have W R (x i , x f ) = Σ|G(L)Ḡ(R −1 ) |Σ = Σ|G(LR) |Σ = ∞ p=0 h, p|G(LR)|h, p = e ihα 1 − e iα , (5.17) where A = gdg −1 , L ≡ g(x f )g(x i ) −1 , A =ḡ −1 dḡ , R −1 ≡ḡ(x f ) −1ḡ (x i ) , (5.18) andR = Σ R Σ −1 . As before, we have defined α by assuming we can diagonalize the group element as L Σ R Σ −1 = V −1 e iαL 0 V . 
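The last equality in (5.17) is simply the geometric series over the highest-weight module, Σ_{p≥0} e^{i(h+p)α} = e^{ihα}/(1 − e^{iα}). A numerical sketch (the regulator Im α > 0, which makes the infinite sum converge, is our choice of sample point):

```python
import cmath

h = 0.3
alpha = 1.1 + 0.4j   # Im(alpha) > 0 regulates the infinite sum

# Character of the highest-weight module: sum over L0 eigenvalues h + p
series = sum(cmath.exp(1j*(h + p)*alpha) for p in range(400))
closed = cmath.exp(1j*h*alpha) / (1 - cmath.exp(1j*alpha))
err = abs(series - closed)
```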
(5.19) Other than the fact that we are using the states |h, p,p and generators L a associated to our unitary Lorentzian representation rather than the states |l, p,p and generators L a for the nonunitary Euclidean representation, everything proceeds as for the Euclidean case. In the end we can recognize that the Lorentzian Wilson line is just a character associated to our Lorentzian representations. Inflationary patch In this final portion we will consider the inflationary patch of dS 3 in order to illustrate our Lorentzian construction. The line element reads ds 2 2 = 1 η 2 −dη 2 + dwdw , (5.20) where η > 0, timelike past infinity is located at η → 0, and w = x + iy is a complex variable. See App. B.1 for a review of these coordinates. For the inflationary patch, we use the group elements g = e − iw η L + e log η L 0 ,g = e log η L 0 e iw η L − . (5.21) These give connections A = gdg −1 = − dη η L 0 + idw η L + ,Ā =g −1 dg = dη η L 0 + idw η L − . (5.22) In our conventions the Lorentzian metric is g µν = − 2 2 Tr A µ −Ā µ A ν −Ā ν , (5.23) where here we are using the same generators for barred and unbarred connections. It is easy to check this reproduces (5.20). As in the Euclidean case, we can define the local state from the group elements acting on the singlet state, |U (x) = G(g(x) −1 )Ḡ(ḡ(x))|Σ , = G(g(x) −1g (x) −1 )|Σ , (5.24) whereg = Σḡ Σ −1 . Evaluating this using the group elements (5.21), we find |U (x) = e − log η L 0 e iw η L + e − iw η L − e − log η L 0 |Σ . (5.25) Now we will construct local pseudofields from the states |U (x) . We follow an exactly analagous procedure to the EdS 3 case in Sec. 4.2, starting with expansion of the state over the states |h, p,p that form a basis for our unitary Lorentzian representations, |U (x) = ∞ p,p=0 Φ * p,p (x)|h, p,p . (5.26) Inverting this relation gives Φ p,p (x) = U (x)|h, p,p . 
(5.27) We can define a set of differential operators H a andH a as U (x)|L a |h, p,p = H a U (x)|h, p,p , (5.28) U (x)|L a |h, p,p =H a U (x)|h, p,p . Taking derivatives of the pseudofield Φ p,p (x) = U (x)|h, p,p , we find ∂ U (x)|h, p,p = i η 2 U (x)|L + |h, p,p − iw 2 η 2 U (x)|L − |h, p,p + 2w η 2 U (x)|L 0 |h, p,p , ∂ U (x)|h, p,p = −i U (x)|L − |h, p,p , ∂ η U (x)|h, p,p = 2iw η U (x)|L − |h, p,p − 2 η U (x)|L 0 |h, p,p ,(5.30) and from here we find H + = −i(η 2 ∂ + ηw∂ η +w 2∂ ) , H − = i∂ , H 0 = − η 2 ∂ η −w∂ . (5.31) These are three Killing vectors for the inflationary patch of dS 3 , whose boundary limits η → 0 give one (barred) set of conformal generators. The state |U (x) can be equivalently be written in terms of the barred sector as Restricting to the Ishibashi state for definiteness, we can follow a similar procedure and solve for the barred differential operators. We find |U (x) =Ḡ Σ −1 ḡ(x)g(x)Σ |Σ = Σ −1 e logH + = i(η 2∂ + ηw∂ η + w 2 ∂) , H − = −i∂ , H 0 = − η 2 ∂ η − w∂ . (5.35) Thus there is again a simple relation between the barred and unbarred differential operators. For the Ishibashi state the barred sector amounts to taking w ↔ −w. The procedure can be repeated for the crosscap state, and in that case we must take w ↔w. We obtain from this a second set of Killing vectors whose η → 0 limit matches onto the second (unbarred) set of conformal generators. Now we can build solutions that explicitly realize our unitary representations. The highest weight state satisfies H 0 Φ 0,0 =H 0 Φ 0,0 = hΦ 0,0 ,(5.36)H + Φ 0,0 =H + Φ 0,0 = 0 ,(5.37) and this equation is solved by Φ 0,0 (η, w,w) = U (x)|h, 0, 0 = η 2h (η 2 − ww) −2h . (5.38) We can again build the descendents by lowering starting from this highest weight state. For the case p >p we find Φ p,p (η, w,w) = b p,p η 2h+2n w p−p (η 2 − ww) −p−p−2h P (|p−p|,−p−p−2h) n 1 − 2ww η 2 , (5.39) where b p,p = i p (−i)p p!(p + 2h − 1)! p!(p + 2h − 1)! , n = 1 2 (p +p − |p −p|) . 
(5.40) For p̄ > p, the solution is Φ_{p,p̄}(η, w, w̄) = (−i)^p i^{p̄} Φ_{p̄,p}(η, w̄, w). The solutions are again Jacobi polynomials P^{α,β}_n(x); however, in this case n depends nontrivially on both quantum numbers p, p̄. Just like the static patch quasinormal modes, these are eigenfunctions satisfying (4.40) and they solve the Klein-Gordon equation (4.41) in inflationary coordinates. Restricting to w = w̄ = 0 at finite η, the solution for the pseudofield reduces to Φ_{p,p̄}(η, 0, 0) = η^{−2(p+h)} δ_{p,p̄} , (5.42) which at η = 1 is simply the Ishibashi state, (5.15). Thus we see that our Lorentzian Ishibashi state lives at w = w̄ = 0, η = 1. By going to embedding coordinates (B.14), it is easy to see that, up to analytic continuation, this is the same bulk point as r = 0, τ = 0 where the Ishibashi state was located in static coordinates. Of course, once again we note that there is nothing special about that point: it is simply the product of various gauge choices we made along the way. Finally we turn to the Wilson line, which can be evaluated directly as W_R(x_i, x_f) = ⟨Σ|G(g(x_f)g(x_i)^{-1}) Ḡ(g̃(x_f)^{-1} g̃(x_i))|Σ⟩ = ⟨Σ|G(g(x_f)g(x_i)^{-1} g̃(x_i)^{-1} g̃(x_f))|Σ⟩ . (5.43) Using (5.19) and the explicit inflationary group elements (5.21), we can solve for the parameter α describing the eigenvalue of the group element. We find cos(α/2) = [η_i² + η_f² − (w_f − w_i)(w̄_f − w̄_i)] / (2η_i η_f) . (5.44) The right hand side is again just the invariant distance, but now in inflationary coordinates (see App. B.2). This is directly analogous to our analysis of the Euclidean case, where α was related to the invariant distance in Hopf coordinates. We again have α = ±2Θ + 4πn , n ∈ Z . (5.45) We can now relate the Wilson line to a Green's function. Recall that the Lorentzian Wilson line was equal to a character of our representation, W_R(x_i, x_f) = e^{ihα}/(1 − e^{iα}) . (5.46) Using (5.45), we can convert this to a function of the invariant distance. 
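One can also check directly that the highest-weight pseudofield (5.38) satisfies the Klein-Gordon equation in the inflationary metric (5.20), with m²ℓ² = 4h(1−h) following from h = (1 + √(1 − (mℓ)²))/2. A sketch with ℓ = 1 and w = x + iy:

```python
import sympy as sp

eta, x, y, h = sp.symbols('eta x y h', positive=True)

# Highest-weight pseudofield (5.38), with w = x + i y and ell = 1
Phi = eta**(2*h) * (eta**2 - x**2 - y**2)**(-2*h)

# Scalar wave operator for ds^2 = (-d eta^2 + dx^2 + dy^2)/eta^2
box = (-eta**2*sp.diff(Phi, eta, 2) + eta*sp.diff(Phi, eta)
       + eta**2*(sp.diff(Phi, x, 2) + sp.diff(Phi, y, 2)))

m2 = 4*h*(1 - h)     # from h = (1 + sqrt(1 - m^2))/2
spot = (box - m2*Phi).subs({h: sp.Rational(7, 10), eta: sp.Rational(13, 10),
                            x: sp.Rational(1, 5), y: sp.Rational(2, 5)}).evalf()
```

The sample point is chosen with η² > ww̄ so that the pseudofield is real and nonsingular there.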
After again defining a_h = (i/2π) · 1/(1 − e^{−4πih}) , (5.47) we find that taking the irreducible representation R_+ with h = (1 + √(1 − (mℓ)²))/2 leads to W_{R_+}(x_i, x_f) = (1/a_h) G_h(Θ) . (5.48) As in the Euclidean case given by (3.49), to obtain the Green's function (B.30) it is necessary to use both representations R_± with highest weight h and 1 − h, G(Θ) = a_h W_{R_+}(x_i, x_f) + a_{1−h} W_{R_−}(x_i, x_f) . (5.49) Discussion In this last section we highlight our main findings and discuss some interesting future directions. Singlet states in 3D de Sitter. The singlet states we constructed in Sec. 3 take the form given in (6.1), where |l, p, p̄⟩ = |l, p⟩ ⊗ |l, p̄⟩ are basis vectors of a non-unitary representation of su(2). One of the consequences tied to selecting this unconventional representation is that we have a continuous parameter that we can identify with the mass of a particle: we take −1 < l < 0, and its relation to the mass is 4l(l + 1) = −m²ℓ². Although our discussion is limited to masses in the range 0 < m²ℓ² < 1, our approach should be easily extendable to allow for arbitrary positive values of m²ℓ². We expect this would require building non-unitary representations of su(2) that resemble the continuous series in sl(2, R). These singlet states are very reminiscent of the description of bulk local states in AdS. In [51][52][53], it was shown that a bulk field configuration at the centre of AdS corresponds to a crosscap state in the CFT. While there are certainly similarities between the two stories (emphasized by our choice of terminology for singlet states), there are also some notable differences. In the context of AdS/CFT, the crosscap states are states in the full Virasoro algebra, not just the global sl(2, R) × sl(2, R) subalgebra. Furthermore, the CFT can be seen to set some bulk properties naturally through boundary conditions. These properties provide an external source for choices that otherwise seem arbitrary. 
For example, we found no obvious physical difference between the Ishibashi and crosscap states, because we had the freedom to relabel algebra generators. In AdS, these generators have an independent physical meaning in the boundary CFT that must be matched, hence the statement that the point at the origin must be a crosscap state rather than an Ishibashi state. We also performed an analytic continuation and considered singlet states in the Lorentzian case, where, for illustration, we focused on the inflationary patch of Lorentzian dS_3. To describe gravity in Lorentzian de Sitter we were led to consider SL(2, C) Chern-Simons theory. In this context, the choice of singlet state led to a natural reality condition for the SL(2, C) Chern-Simons gauge fields. Lorentzian Wilson lines had a direct interpretation in terms of unitary sl(2, R) representations that we motivated using an analytic continuation of our Euclidean su(2) representations. Since the inflationary patch has a large amount of apparent symmetry, it would also be interesting to repeat our analysis for less symmetric bulks such as Kerr-dS_3 [54]. Bulk reconstruction in 3D de Sitter. The comparison to AdS/CFT naturally raises the question of bulk reconstruction. Consider our Lorentzian results for the inflationary patch. We now have an expression for pseudofields |U(x)⟩ in terms of an abstract basis of states |h, p, p̄⟩ that mimics the discussion in AdS. And while a dS/CFT correspondence [55][56][57] is far from established, suppose for the sake of argument that we take seriously the idea that our states |h, p, p̄⟩ can be described as operators in a putative CFT, in other words that there is a state-operator correspondence that maps our states to operators inserted at the origin w, w̄ = 0: |h, p, p̄⟩ = O(0, 0)|0⟩ . Then the Ishibashi state |Σ_Ish⟩ = Σ_{p=0}^∞ |h, p, p⟩ (6.2) can be expressed as |Σ_Ish⟩ = Σ_{p=0}^∞ [Γ(2h)/(Γ(p + 1)Γ(p + 2h))] H_{−1}^p H̄_{−1}^p O(0, 0)|0⟩ . 
(6.3) On the other hand, the Ishibashi state can be thought of as being localized at a particular bulk point, as seen in (5.42). This suggests that we can obtain pseudofields at arbitrary bulk points by acting on both sides of (6.3) with sl(2, R) generators. On the bulk side, this could be interpreted as diffeomorphisms that move the point while on the boundary side there is a natural interpretation in terms of conformal transformations. Thus, we are led to ask: is there then an analogue of the HKLL procedure [58,59], where local fields in de Sitter can be thought of as a smearing of states on in a region of a lower-dimensional surface? And is there an implementation of that procedure in Chern-Simons theory? To answer these questions, it is useful to compare to the existing literature on bulk reconstruction in de Sitter. A smearing function for the inflationary patch was constructed in [60], and further developments include [46,61,62]. Restricting to d = 2, the result is that a local scalar field Φ of mass m in the inflationary patch of dS 3 can be represented as Φ(η, w,w) = |w w |<η 2 dw dw Γ(∆) πΓ(∆ − 1) η 2 − |w w | η ∆−2 O + (w + w ,w +w ) + Γ(2 − ∆) πΓ(1 − ∆) η 2 − |w w | η −∆ O − (w + w ,w +w ) . (6.4) In de Sitter it was crucial to keep the contributions from not only a scalar operator O + with scaling dimension ∆ + = ∆ = 1 + √ 1 − m 2 2 dual to Φ, but also the shadow operator O − with scaling dimension ∆ − = 2 − ∆ = 1 − √ 1 − m 2 2 . Here it is necessary to have these two contributions for the two-point function of the field to reproduce the correct Green's function, (B.30), which differs substantially from AdS. The difference is related to the fact that the Euclidean Green's function we use for de Sitter is not simply the analytic continuation of the AdS Green's function, which would violate microcausality [43]. In our language the two terms come from considering the two representations with a fixed Casimir, with l = −h and l = h − 1. 
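The Γ-function ratio appearing in (6.3) is the familiar descendant normalization: with the unit-norm ladder convention L_−|h,p⟩ = √((p+1)(p+2h)) |h,p+1⟩, acting p times on the highest-weight state produces the factor √(Γ(p+1)Γ(p+2h)/Γ(2h)), whose inverse then appears in the expansion. A sketch checking the closed form of that product (the square-root convention is our assumption about the normalization of |h, p, p̄⟩):

```python
import math

h = 0.65   # sample weight, h > 0
ok = True
for p in range(12):
    # product of ladder matrix elements sqrt((k+1)(k+2h)) for k = 0 .. p-1
    prod = 1.0
    for k in range(p):
        prod *= math.sqrt((k + 1)*(k + 2*h))
    # closed form sqrt(Gamma(p+1) Gamma(p+2h) / Gamma(2h))
    closed = math.sqrt(math.gamma(p + 1)*math.gamma(p + 2*h)/math.gamma(2*h))
    ok = ok and math.isclose(prod, closed, rel_tol=1e-12)
```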
Other than this subtlety, and assuming the existence of a state-operator correspondence for the states in our representations, the computation of the contribution to a bulk local field for each set of operators in terms of smearing functions proceeds exactly analogously to the Poincaré case considered in [63]. All that is needed is to express the singlet state, translated to a point in the bulk, in terms of differential operators acting on CFT operators. This can then be converted into an integral representation in terms of smearing functions. There is, however, a need for a more fundamental understanding of the role of O_+ and O_− and their implications in dS quantum gravity. Exact results in Chern-Simons theory. Chern-Simons theory on S^3, with a compact gauge group, is exactly solvable using the techniques of non-abelian localization [64]. In particular, the Wilson loop expectation value can be computed exactly in this context [40, 65]. This suggests an extension of our semiclassical Euclidean results to a full quantum computation. There are two crucial differences in our approach that prevent us from applying exact results directly. The first is that we consider Wilson line operators rather than loops, which means that our probes are not gauge invariant. Additionally, we compute the Wilson line for infinite-dimensional (and consequently non-unitary) rather than finite-dimensional representations of su(2). The choice of this peculiar representation is in fact intricately linked to the non-gauge invariance of the Wilson lines, as we required infinite-dimensional representations to construct the singlet states describing the endpoints. In the semiclassical version these limitations did not end up presenting an obstruction to a generalization as in [6, 21], and so it would be interesting to implement techniques of localization to construct and quantify our Wilson line as a quantum operator. 
It would be especially interesting to see if the quantization of the Wilson line sheds light on the necessity in de Sitter of using two representations R ± , which from the CFT standpoint led us to consider an additional set of shadow operators. We saw that these were necessary in our framework to generate the complete set of quasinormal modes for de Sitter, and they are also crucial to reproduce the correct Green's function from a smearing function representation of a bulk local field. Moving beyond kinematics, one might hope that a quantization would help us define a Hilbert space that incorporates both representations and gives a definition for their overlap. A Conventions In this appendix we collect some basic conventions related to the Lie group SU (2) and its algebra. For the algebra we use generators L a andL a , a = 1, 2, 3, and we have Indices are raised with δ ab . In the fundamental representation of su (2), we have L a = 1 2 σ a with the Pauli matrices given by σ 1 = 0 1 1 0 , σ 2 = 0 −i i 0 , σ 3 = 1 0 0 −1 . (A.3) To make an explicit distinction between the group and the algebra, we denote G(M ) as group element, and L a are the algebra generators as specified above. The general group action is given by where D s are the elements in the adjoint representation of su (2). As expected for any group, we also have G(M −1 )L a G(M ) = D a a(G(M 1 )G(M 2 ) = G(M 1 M 2 ) , D b a (M 1 )D c b (M 2 ) = D c a (M 1 M 2 ) . (A.5) B Metric formulation of dS 3 gravity B.1 Coordinates and patches Three-dimensional de Sitter is easily understood in terms of its embedding in four-dimensional Minkowski space: − (X 0 ) 2 + (X 1 ) 2 + (X 2 ) 2 + (X 3 ) 2 = 2 . (B.1) Global dS 3 corresponds to the following parametrization, which covers the whole space-time: X 0 = sinh(T / ) , X 1 = cosh(T / ) cos ψ , X 2 = cosh(T / ) sin ψ cos φ , X 3 = cosh(T / ) sin ψ sin φ , (B.2) with ψ and φ the polar and azimuthal coordinates of a two-sphere of unit radius. 
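The su(2) conventions collected in (A.1)-(A.3) above can be verified at once in the fundamental representation. A quick numerical sketch:

```python
import numpy as np

# Pauli matrices; L_a = sigma_a / 2 as in (A.3)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
L = [s/2 for s in sigma]

def eps(a, b, c):
    # Levi-Civita symbol for indices 0, 1, 2 (epsilon_{123} = 1 in the text's labels)
    return (a - b)*(b - c)*(c - a)/2

max_err = 0.0
for a in range(3):
    for b in range(3):
        # (A.1): [L_a, L_b] = i eps_{abc} L_c
        comm = L[a] @ L[b] - L[b] @ L[a]
        target = sum(1j*eps(a, b, c)*L[c] for c in range(3))
        max_err = max(max_err, np.abs(comm - target).max())
        # (A.2): Tr(L_a L_b) = delta_{ab} / 2
        max_err = max(max_err, abs(np.trace(L[a] @ L[b]) - (0.5 if a == b else 0.0)))
```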
The metric is then ds 2 = −dT 2 + 2 cosh 2 (T / ) dψ 2 + sin 2 (ψ)dφ 2 . (B.3) The global time coordinate T , which has an infinite range, can be conformally rescaled: tan(σ) ≡ sinh(T / ) , − π 2 < σ < π 2 . (B.4) After this rescaling, the metric is ds 2 = 2 cos 2 σ −dσ 2 + dψ 2 + sin 2 ψdφ 2 . (B.5) With the metric in this form, it is easy to draw the Penrose diagram in Fig. 1. Another useful parametrization of embedding coordinates is the following: Constant t (orange) and r (or u, purple) slices on the static patch are shown on the static patch, with r = 0 at the North Pole and r increasing to π 2 at the horizon. Constant η ≥ 0 (orange) and x (for y = 0, purple) slices are shown on the inflationary patch, with η → 0 + corresponding to negative timelike infinity and increasing to +∞ at the horizon. X 0 = 2 − u 2 sinh(t/ ) , X 1 = 2 − u 2 cosh(t/ ) , X 2 = u cos φ , for which the metric can be written as ds 2 = − 1 − u 2 2 dt 2 + du 2 1 − u 2 2 + u 2 dφ 2 . (B.7) This is the static patch of dS 3 . It has the advantage of making a timelike Killing vector manifest, at the cost of covering only a portion of the whole manifold. We can see which portion by relating the two parametrizations: u = cosh(T / )| sin(ψ)| = sin(ψ) cos(σ) , sinh 2 (t/ ) = sinh 2 (T / ) 1 − cosh 2 (T / ) sin 2 (ψ) = sin 2 (σ) cos 2 (σ) − sin 2 (ψ) . (B.8) In particular, for the embedding coordinates to be real, we need 0 ≤ u ≤ . u = 0 corresponds to ψ = 0 and u = corresponds to σ = ± ψ − π 2 , so that these coordinates cover the left wedge of the Penrose diagram (or the right wedge, but not both if the coordinates are to be singlevalued). Trajectories of constant u or t are shown in Fig. 1a. A simple coordinate redefinition brings us to the coordinates used in the main text: u = sin(r) . 
(B.9) The embedding coordinates then take the form X 0 = cos(r) sinh(t/ ) , X 1 = cos(r) cosh(t/ ) , X 2 = sin(r) cos(φ) , X 3 = sin(r) sin(φ) , (B.10) and the metric is ds 2 = − cos 2 (r)dt 2 + 2 dr 2 + 2 sin 2 (r)dφ 2 . (B.11) It is instructive to go to Euclidean time in these coordinates: t → −iτ , 9 which leads to ds 2 2 = cos 2 (r)dτ 2 + dr 2 + sin 2 (r)dφ 2 , (B.12) and X 1 = cos(r) cos(τ ) , X 2 = sin(r) cos(φ) , X 3 = sin(r) sin(φ) , X 4 = cos(r) sin(τ ) , (B.13) where we've defined X 4 = iX 0 . These coordinates are simply the Hopf coordinates for a threesphere embedded in R 4 . Avoiding a conical singularity near r = π 2 requires that τ ∼ τ + 2π, from which we can read off the inverse temperature of the horizon: β = 2π . Another parametrization of dS 3 gives coordinates on the inflationary patch: X 0 = η 2 − 1 − x 2 − y 2 2η X 1 = η 2 + 1 − x 2 − y 2 2η X 2 = x η X 3 = y η . (B. 14) The metric in these coordinates is ds 2 2 = −dη 2 + dx 2 + dy 2 η 2 . (B.15) With 0 < η < ∞, these coordinates cover half of the space-time, with η −1 = cos ψ | cos σ| − tan σ. η = 0 + corresponds to σ = − π 2 (i.e. negative timelike infinity) and η → +∞ to σ + ψ → π 2 − . This is shown in Fig. 1b. B.2 Geodesics and Green's functions in dS 3 We now write down the propagator for a scalar field in the static patch of three-dimensional de Sitter. We can exploit the symmetry of the system to write the wave equation in terms of the geodesic distance between two points. This is easier to do in Euclidean signature, where we consider S 3 described by embedding coordinates X i given by equation (B.13). The only invariant quantity we can write out of two vectors X i and Y i is X · Y . In fact, the geodesic distance between two points is simply Θ = arccos X · Y 2 . (B.16) The Euclidean propagator obeys: ∇ 2 G(X, Y ) − m 2 G(X, Y ) = δ (3) (X − Y ) . (B.17) The propagator can only depend on coordinates through the quantity χ = cos(Θ). 
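As a check of the parametrizations above, one can verify symbolically that the static-patch embedding (B.10) lies on the hyperboloid (B.1) and that pulling back the ambient Minkowski metric reproduces (B.11). A sketch:

```python
import sympy as sp

t, r, phi = sp.symbols('t r phi', real=True)
ell = sp.symbols('ell', positive=True)

# Static-patch embedding (B.10)
X = [ell*sp.cos(r)*sp.sinh(t/ell),
     ell*sp.cos(r)*sp.cosh(t/ell),
     ell*sp.sin(r)*sp.cos(phi),
     ell*sp.sin(r)*sp.sin(phi)]

# Hyperboloid constraint (B.1): -X0^2 + X1^2 + X2^2 + X3^2 = ell^2
constraint = sp.simplify(-X[0]**2 + X[1]**2 + X[2]**2 + X[3]**2 - ell**2)

# Pull back the ambient Minkowski metric to (t, r, phi)
coords, eta4 = (t, r, phi), [-1, 1, 1, 1]
g = sp.Matrix(3, 3, lambda i, j: sp.simplify(
    sum(eta4[k]*sp.diff(X[k], coords[i])*sp.diff(X[k], coords[j]) for k in range(4))))

# Expected static metric (B.11): ds^2 = -cos^2(r) dt^2 + ell^2 dr^2 + ell^2 sin^2(r) dphi^2
expected = sp.diag(-sp.cos(r)**2, ell**2, ell**2*sp.sin(r)**2)
metric_ok = (g - expected).applyfunc(sp.simplify) == sp.zeros(3, 3)
```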
This implies ∇ 2 G(X, Y ) = ∇ 2 G(χ) = 1 sin 2 (Θ) d dΘ (sin(Θ)) 2 dG(χ) dΘ = (1 − χ 2 ) d 2 G dχ 2 − 3χ dG dχ . (B.18) Therefore, the homogeneous version of the wave equation is (1 − χ 2 ) d 2 G dχ 2 − 3χ dG dχ − (m ) 2 G(χ) = 0 . (B.19) This has solutions of the following form: G(χ) = c 1 P 1 − (m ) 2 − 1 2 , 1 2 ; χ (χ 2 − 1) 1/4 + c 2 Q 1 − (m ) 2 − 1 2 , 1 2 ; χ (χ 2 − 1) 1/4 , (B.20) where P and Q are associated Legendre polynomials. These associated Legendre polynomials simplify precisely when the second argument is 1/2: P (x, 1/2, cos(Θ)) = 2 π sin(Θ) cos x + 1 2 Θ , Q(x, 1/2, cos(Θ)) = − π 2 sin(Θ) sin x + 1 2 Θ . In three dimensions, we know that a properly normalized Green's function has a short-distance divergence that goes as − 1 4πd = − 1 4π , so this fixes A = − 1 4π . (B.24) Therefore, the Euclidean propagator is Choosing the value of the integration constant C corresponds to picking a particular vacuum. G(Θ) = i 4π (1 + iC) e 2ihΘ 1 − e 2iΘ − (1 − iC) e −2ihΘ 1 − e −2iΘ , The natural choice is C = cot(2hπ). This removes the singularity at Θ = π, in other words the singularity that appears on the lightcone of antipodal points. It is convenient to define Finally, note that if we want to analytically continue back to Lorentzian signature, we should take τ → it/ . In that case, we notice that | cos(Θ)| > 1 whenever points are timelike separated, in which case i Θ corresponds to the proper time between the points. When the points are spacelike separated, Θ remains the proper length between them. In terms of inflationary coordinates (B.14), the invariant distance is cos(Θ) = η 2 i + η 2 f − (∆x) 2 − (∆y) 2 2η i η f (B.32) = 1 + (∆η) 2 − (∆x) 2 − (∆y) 2 2η i η f , (B.33) where the distinction between spacelike-separated and timelike-separated points is manifest. 
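The exponential building blocks of (B.25) do solve the homogeneous equation (B.19): using the Laplacian (B.18) written in terms of Θ, the combination e^{2ihΘ}/(1 − e^{2iΘ}) is annihilated by ∇² − m² when m²ℓ² = 4h(1 − h) = 1 − (2h − 1)². A numerical sketch away from the short-distance singularity:

```python
import sympy as sp

Theta, h = sp.symbols('Theta h')

# Candidate building block of the propagator (B.25)
G = sp.exp(2*sp.I*h*Theta) / (1 - sp.exp(2*sp.I*Theta))

# (B.18): Laplacian on S^3 acting on a function of the geodesic distance Theta
lapG = sp.diff(sp.sin(Theta)**2 * sp.diff(G, Theta), Theta) / sp.sin(Theta)**2

m2 = 4*h*(1 - h)     # m^2 ell^2 = 1 - (2h - 1)^2
spot = (lapG - m2*G).subs({h: sp.Rational(9, 20),
                           Theta: sp.Rational(7, 10)}).evalf()
```

The same holds with h → 1 − h, which is the origin of the second, C-dependent term in (B.25).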
C Analytic continuation in the Chern-Simons formulation Here, we provide more details on how to construct an analytic continuation between Euclidean and Lorentzian signature from the Chern-Simons perspective. The analytic continuation from Euclidean to Lorentzian signature is most easily understood in terms of these generators, which are simply related to rotations and boosts in embedding space. The Euclidean Chern-Simons action, (2.3) and (2.5), can be written in unsplit form as S E = − k 4π Tr A ∧ dA + 2 3 A ∧ A ∧ A , (C.1) where the gauge field is expanded in terms of the generators of Euclidean so(4) isometries as A = e a P a + ω a J a , One possibility is given by the following: J 1 = iP 1 J 2 = iJ 2 J 3 = −P 3 P 1 = J 1 P 2 = P 2 P 3 = iJ 3 (C.9) Under this map, the SO(4) bilinear form Tr(J a P b ) = −δ ab , Tr(J a J b ) = 2 Tr(P a P b ) = 0 (C.10) gets taken to Tr(J a P b ) = −i η ab , Tr(J a J b ) = 2 Tr(P a P b ) = 0 . (C.11) While the map we have constructed can be viewed as a map between real algebras, (C.11) is not an invariant bilinear form for real SO(3, 1). Indeed, the unique invariant bilinear form for SO(3, 1) is given by J a , J b = η ab , P a , P b = −Λη ab , J a , P b = 0 , (C.12) rather than (C.11). In the Chern-Simons formulation for gravity one typically chooses a Tr(J a P a ) bilinear form for a reason, as the Chern-Simons theory defined using (C.12) does not reduce to Einstein gravity (see [2]). It is for this reason that we have considered a complexification to SL(2, C). While the real SO(3, 1) algebra does not split as in (2.8), the complexification does split and therefore admits multiple bilinear forms, not only (C.12) but also (C.11). With the map defined above, the bilinear form for the barred sector has the wrong sign: Tr(L aLb ) = + 1 2 η ab . (C.13) We can flip the sign while simultaneously multiplying the barred action by a minus sign. 
Combined with an analytic continuation of the Chern-Simons coupling, s = ik , (C.14) this takes us from (C.1) to the SL(2, C) Chern-Simons action, (5.1). For a compact group like su(2) all unitary representations are finite dimensional and labelled by a fixed (half-)integer, the spin. To introduce a continuous parameter, we need to build representations that are more analogous to the infinite-dimensional, highest-weight representations of sl(2, R). This forces us to consider non-unitary representations, nevertheless a natural choice to make contact with local fields in dS_3, as we will show. Φ_{0,0}(r, τ, φ) = ⟨U(x)|l, 0, 0⟩ = e^{−2ilτ} cos^{2l}(r) . Instead of oscillating in time, these functions are now purely decaying. In fact, the Φ_{ω,k} are exactly (up to normalization) the quasi-normal modes of dS_3 [39, 45]. As discussed in Sec. 3.2, given a scalar field of mass m, there are two representations R_± that have the same Casimir: one with l = −h and one with l = h − 1. These two representations have different characters (and thus Wilson lines), and both are needed to obtain the full Green's function: G(Θ) = a_h W_{R_+} + a_{1−h} W_{R_−}. Each choice of l matches one of the two distinct sequences of quasi-normal modes in dS_3. This reinforces the idea that both representations are needed to describe a bulk scalar field. The Φ_{p,p̄} characterize highest-weight representations of sl(2) with Casimir h(h − 1). Furthermore, the (anti-)Hermitian properties of the su(2) generators L_{0,±} in (3.5), combined with the map in (4.37), dictate that the generators H_{0,±} have the usual Hermiticity properties. This makes the representations unitary when organized in terms of the sl(2, R) basis. 
(5.6) This reduces to the Einstein-Hilbert action with positive cosmological constant given the identification

    s = 4G_3 ∈ ℝ .   (5.7)

… SL(2, C) Chern-Simons theory via the Wilson line operator (3.2). The most natural choice is to simply implement the discrete highest-weight representation we inferred in Sec. 4.2.2 from the Euclidean theory. For a further motivation of this choice using an analytic continuation of the SO(4) and SL(2, C) Chern-Simons theories, see App. C. In the language of the SL(2, C) Chern-Simons theory, we will build this representation by using the sl(2) generators. We have initially kept the state Σ arbitrary. Using the definitions (3.22) and (3.27) for the Ishibashi and crosscap states through their action on generators,

    e^{log η L_0} |Σ⟩_Ish = e^{−log η L_0} e^{…} ,
    e^{log η L_0} |Σ⟩_cross = e^{−log η L_0} e^{iw η L_+} e^{−iw η L_−} e^{−log η L_0} .   (5.34)

    [L_a, L_b] = i ε_{abc} L^c ,   (A.1)

with ε_{123} ≡ 1. For the invariant bilinear form, we take

    Tr(L_a L_b) = (1/2) δ_{ab} .   (A.2)

    Ḡ(M^{−1}) L_a Ḡ(M) = D_a{}^{a'}(M) L_{a'} ,   (A.4)

Figure 1: (Colour online) Penrose diagram of three-dimensional de Sitter space. Horizontal lines are slices of constant global time T (or σ), which correspond to 2-spheres. ψ is the polar angle on that sphere, so that each point on the diagram is a circle of radius sin ψ. Vertical lines are slices of constant ψ. The top and bottom of the diagram are asymptotic timelike infinity, and the left and right edges are the North and South poles of the 2-spheres at each instant in global time.

    … (m')² Θ − C sin(1 − (m')²) Θ .   (B.22)

Short distances correspond to χ approaching 1 from below, in other words taking Θ ∼ …. In that regime, we get G(Θ = …) = A … + · · · .
(B.23) The last line is manifestly the Green's function in the Euclidean vacuum of dS_3 [43]. We can write Θ explicitly in terms of the Hopf coordinates (B.13):

    cos(Θ) = cos(r_i) cos(r_f) cos(Δτ) + sin(r_i) sin(r_f) cos(Δφ) .   (B.31)

    [J_a, J_b] = −ε_{abc} J_c ,   (C.3)
    [J_a, P_b] = −ε_{abc} P_c ,   (C.4)
    [P_a, P_b] = −Λ ε_{abc} J_c ,   (C.5)

and indices are raised with δ^{ab}. To construct an analytic continuation, we need to find a map to generators of the so(1, 3) algebra of isometries of Lorentzian dS_3,

    [J̄_a, J̄_b] = ε_{abc} J̄_c ,   (C.6)
    [J̄_a, P̄_b] = ε_{abc} P̄_c ,   (C.7)
    [P̄_a, P̄_b] = −Λ ε_{abc} J̄_c ,   (C.8)

with indices raised by η^{ab}.

Until now, we have described the Wilson line W_R(x_i, x_f) as the diagonal matrix element of an operator in a singlet state, as done in (3.35). For the purpose of building local probes, we want to rewrite this operator as a suitable overlap between states. From (3.41) we can write (3.35) as

To summarize: the singlet states we constructed in Sec. 3 take the form

    |Σ⟩ = Σ_{p,p̄} a_{p,p̄} |l, p, p̄⟩ ,   (6.1)

Footnotes: The notation g_R here will be justified and explained in Sec. 3.2. / This follows from the commutation relation between L_± and L_0. / This corresponds to the North Pole of the S² time slices for Euclidean time τ = 0. It is a point on Penrose diagrams, not a line. / We are focusing here on R_+ for notational simplicity. Analogous results with h → 1 − h can be obtained for R_−, which has l = h − 1. / A similar discussion regarding representations of sl(2, C) is discussed in [46]. One difference is that the authors take sl(2, C) ∼ su(1, 1) × su(1, 1). / Our Lorentzian metric has a mostly-+ signature. This fixes t → −iτ rather than t → +iτ in order to ensure that the equations of motion minimize the Hamiltonian rather than maximize it. / The factor of … is there to make the interpretation of τ as an angular coordinate manifest.

Acknowledgements

References

[1] A. Achucarro and P. K. Townsend, A Chern-Simons Action for Three-Dimensional anti-De Sitter Supergravity Theories, Phys. Lett. B180 (1986) 89.
[2] E. Witten, (2+1)-Dimensional Gravity as an Exactly Soluble System, Nucl. Phys. B311 (1988) 46.
[3] G. W. Moore and N. Seiberg, Taming the Conformal Zoo, Phys. Lett. B220 (1989) 422-430.
[4] S. Elitzur, G. W. Moore, A. Schwimmer and N. Seiberg, Remarks on the Canonical Quantization of the Chern-Simons-Witten Theory, Nucl. Phys. B326 (1989) 108-134.
[5] H. L. Verlinde, Conformal Field Theory, 2-D Quantum Gravity and Quantization of Teichmuller Space, Nucl. Phys. B337 (1990) 652.
[6] A. Castro, N. Iqbal and E. Llabrés, Wilson lines and Ishibashi states in AdS_3/CFT_2, JHEP 09 (2018) 066 [1805.05398].
[7] E. Witten, Quantum field theory and the Jones polynomial, Commun. Math. Phys. 121 (1989) 351.
[8] K. B. Alkalaev and V. A. Belavin, Classical conformal blocks via AdS/CFT correspondence, JHEP 08 (2015) 049 [1504.05943].
[9] E. Hijano, P. Kraus, E. Perlmutter and R. Snively, Semiclassical Virasoro blocks from AdS_3 gravity, JHEP 12 (2015) 077 [1508.04987].
[10] K. B. Alkalaev and V. A. Belavin, Monodromic vs geodesic computation of Virasoro classical conformal blocks, Nucl. Phys. B904 (2016) 367-385 [1510.06685].
[11] M. Besken, A. Hegde, E. Hijano and P. Kraus, Holographic conformal blocks from interacting Wilson lines, JHEP 08 (2016) 099 [1603.07317].
[12] A. L. Fitzpatrick, J. Kaplan, D. Li and J. Wang, Exact Virasoro Blocks from Wilson Lines and Background-Independent Operators, JHEP 07 (2017) 092 [1612.06385].
[13] M. Besken, E. D'Hoker, A. Hegde and P. Kraus, Renormalization of gravitational Wilson lines, JHEP 06 (2019) 020 [1810.00766].
[14] Y. Hikida and T. Uetoko, Conformal blocks from Wilson lines with loop corrections, Phys. Rev. D97 (2018), no. 8 086014 [1801.08549].
[15] E. D'Hoker and P. Kraus, Gravitational Wilson lines in AdS_3, 1912.02750.
[16] E. Witten, Topology Changing Amplitudes in (2+1)-Dimensional Gravity, Nucl. Phys. B323 (1989) 113.
[17] S. Carlip, Exact Quantum Scattering in (2+1)-Dimensional Gravity, Nucl. Phys. B324 (1989) 106.
[18] B. Skagerstam and A. Stern, Topological Quantum Mechanics in (2+1)-dimensions, Int. J. Mod. Phys. A5 (1990) 1575.
[19] P. de Sousa Gerbert, On spin and (quantum) gravity in (2+1)-dimensions, Nucl. Phys. B346 (1990) 440-472.
[20] C. Vaz and L. Witten, Wilson loops and black holes in (2+1)-dimensions, Phys. Lett. B327 (1994) 29-34 [gr-qc/9401017].
[21] M. Ammon, A. Castro and N. Iqbal, Wilson Lines and Entanglement Entropy in Higher Spin Gravity, JHEP 1310 (2013) 110 [1306.4338].
[22] J. de Boer and J. I. Jottar, Entanglement Entropy and Higher Spin Holography in AdS_3, JHEP 1404 (2014) [1306.4347].
[23] A. Castro and E. Llabrés, Unravelling Holographic Entanglement Entropy in Higher Spin Theories, JHEP 03 (2015) 124 [1410.2870].
[24] J. de Boer, A. Castro, E. Hijano, J. I. Jottar and P. Kraus, Higher spin entanglement and W_N conformal blocks, JHEP 07 (2015) 168 [1412.7520].
[25] A. Hegde, P. Kraus and E. Perlmutter, General results for higher spin Wilson lines and entanglement in Vasiliev theory, JHEP 01 (2016) 176.
[26] B. Chen and J.-q. Wu, Higher spin entanglement entropy at finite temperature with chemical potential, JHEP 07 (2016) 049 [1604.03644].
[27] A. Bhatta, P. Raman and N. V. Suryanarayana, Holographic Conformal Partial Waves as Gravitational Open Wilson Networks, JHEP 06 (2016) 119 [1602.02962].
[28] A. Castro, N. Iqbal and E. Llabrés, Eternal Higher Spin Black Holes: a Thermofield Interpretation, JHEP 08 (2016) 022 [1602.09057].
[29] M. Henneaux, W. Merbis and A. Ranjbar, Asymptotic dynamics of AdS_3 gravity with two asymptotic regions, 1912.09465.
[30] R. Basu and M. Riegler, Wilson Lines and Holographic Entanglement Entropy in Galilean Conformal Field Theories, Phys. Rev. D93 (2016), no. 4 045003 [1511.08662].
[31] A. Castro, D. M. Hofman and N. Iqbal, Entanglement Entropy in Warped Conformal Field Theories, JHEP 02 (2016) 033 [1511.00707].
[32] T. Azeyanagi, S. Detournay and M. Riegler, Warped Black Holes in Lower-Spin Gravity, Phys. Rev. D99 (2019), no. 2 026013 [1801.07263].
[33] S. Carlip, The Sum over topologies in three-dimensional Euclidean quantum gravity, Class. Quant. Grav. 10 (1993) 207-218 [hep-th/9206103].
[34] E. Guadagnini and P. Tomassini, Sum over the geometries of three manifolds, Phys. Lett. B336 (1994) 330-336.
[35] M. Banados, T. Brotz and M. E. Ortiz, Quantum three-dimensional de Sitter space, Phys. Rev. D59 (1999) 046002 [hep-th/9807216].
[36] M.-I. Park, Symmetry algebras in Chern-Simons theories with boundary: Canonical approach, Nucl. Phys. B544 (1999) 377-402 [hep-th/9811033].
[37] T. R. Govindarajan, R. K. Kaul and V. Suneeta, Quantum gravity on dS(3), Class. Quant. Grav. 19 (2002) 4195-4205 [hep-th/0203219].
[38] A. Castro, N. Lashkari and A. Maloney, A de Sitter Farey Tail, Phys. Rev. D83 (2011) 124027 [1103.4620].
[39] D. Anninos, S. A. Hartnoll and D. M. Hofman, Static Patch Solipsism: Conformal Symmetry of the de Sitter Worldline, Class. Quant. Grav. 29 (2012) 075002 [1109.4942].
[40] C. Beasley, Localization for Wilson Loops in Chern-Simons Theory, Adv. Theor. Math. Phys. 17 (2013), no. 1 1-240 [0911.2687].
[41] N. Ishibashi, The Boundary and Crosscap States in Conformal Field Theories, Mod. Phys. Lett. A4 (1989) 251.
[42] J. L. Cardy, Boundary conformal field theory, hep-th/0411189.
[43] R. Bousso, A. Maloney and A. Strominger, Conformal vacua and entropy in de Sitter space, Phys. Rev. D65 (2002) 104039 [hep-th/0112218].
[44] D. Harlow and D. Stanford, Operator Dictionaries and Wave Functions in AdS/CFT and dS/CFT, 1104.2621.
[45] A. Lopez-Ortega, Quasinormal modes of D-dimensional de Sitter spacetime, Gen. Rel. Grav. 38 (2006) 1565-1591 [gr-qc/0605027].
[46] A. Chatterjee and D. A. Lowe, dS/CFT and the operator product expansion, Phys. Rev. D96 (2017), no. 6 066031 [1612.07785].
[47] E. Witten, Quantization of Chern-Simons Gauge Theory With Complex Gauge Group, Commun. Math. Phys. 137 (1991) 29-66.
[48] J. Fjelstad, S. Hwang and T. Mansson, CFT description of three-dimensional Kerr-de Sitter space-time, Nucl. Phys. B641 (2002) 376-392 [hep-th/0206113].
[49] V. Balasubramanian, J. de Boer and D. Minic, Notes on de Sitter space and holography, Class. Quant. Grav. 19 (2002) 5655-5700 [hep-th/0207245]. [Annals Phys. 303 (2003) 59].
[50] J. Cotler, K. Jensen and A. Maloney, Low-dimensional de Sitter quantum gravity, 1905.03780.
[51] H. Verlinde, Poking Holes in AdS/CFT: Bulk Fields from Boundary States, 1505.05069.
[52] M. Miyaji, T. Numasawa, N. Shiba, T. Takayanagi and K. Watanabe, Continuous Multiscale Entanglement Renormalization Ansatz as Holographic Surface-State Correspondence, Phys. Rev. Lett. 115 (2015), no. 17 171602 [1506.01353].
[53] Y. Nakayama and H. Ooguri, Bulk Locality and Boundary Creating Operators, JHEP 10 (2015) 114 [1507.04130].
[54] S. Deser and R. Jackiw, Three-Dimensional Cosmological Gravity: Dynamics of Constant Curvature, Annals Phys. 153 (1984) 405-416.
[55] A. Strominger, The dS/CFT correspondence, JHEP 10 (2001) 034 [hep-th/0106113].
[56] E. Witten, Quantum gravity in de Sitter space, hep-th/0106109.
[57] J. M. Maldacena, Non-Gaussian features of primordial fluctuations in single field inflationary models, JHEP 05 (2003) 013 [astro-ph/0210603].
[58] A. Hamilton, D. N. Kabat, G. Lifschytz and D. A. Lowe, Local bulk operators in AdS/CFT: A Boundary view of horizons and locality, Phys. Rev. D73 (2006) 086003 [hep-th/0506118].
[59] A. Hamilton, D. N. Kabat, G. Lifschytz and D. A. Lowe, Holographic representation of local bulk operators, Phys. Rev. D74 (2006) 066009 [hep-th/0606141].
[60] X. Xiao, Holographic representation of local operators in de Sitter space, Phys. Rev. D90 (2014), no. 2 024061 [1402.7080].
[61] D. Sarkar and X. Xiao, Holographic Representation of Higher Spin Gauge Fields, Phys. Rev. D91 (2015), no. 8 086004 [1411.4657].
[62] D. Anninos, F. Denef, R. Monten and Z. Sun, Higher Spin de Sitter Hilbert Space, JHEP 10 (2019) 071 [1711.10037].
[63] K. Goto and T. Takayanagi, CFT descriptions of bulk local states in the AdS black holes, 1704.00053.
[64] C. Beasley and E. Witten, Non-Abelian localization for Chern-Simons theory, J. Diff. Geom. 70 (2005), no. 2 183-323 [hep-th/0503126].
[65] C. Beasley, Remarks on Wilson Loops and Seifert Loops in Chern-Simons Theory, AMS/IP Stud. Adv. Math. 50 (2011) 1-17 [1012.5064].
[]
[ "X-ray Hardness Evolution in GRB Afterglows and Flares: Late Time GRB Activity Without N H Variations", "X-ray Hardness Evolution in GRB Afterglows and Flares: Late Time GRB Activity Without N H Variations" ]
[ "Nathaniel R Butler \nSpace Sciences Laboratory\nTownes Fellow\nUniversity of California\n94720-7450BerkeleyCAUSA\n\nAstronomy Department\nUniversity of California\n445 Campbell Hall94720-3411BerkeleyCAUSA\n", "Daniel Kocevski \nAstronomy Department\nUniversity of California\n445 Campbell Hall94720-3411BerkeleyCAUSA\n" ]
[ "Space Sciences Laboratory\nTownes Fellow\nUniversity of California\n94720-7450BerkeleyCAUSA", "Astronomy Department\nUniversity of California\n445 Campbell Hall94720-3411BerkeleyCAUSA", "Astronomy Department\nUniversity of California\n445 Campbell Hall94720-3411BerkeleyCAUSA" ]
[]
We show that the X-ray and γ-ray spectra of Swift GRBs and their afterglows are consistent with the emission characteristic of an expanding, relativistic fireball. The classical afterglow due to the impact of the fireball on the external medium is often not observed until one to several hours after the GRB. Focusing on GRBs 061121, 060614, and 060124, but generalizing to the full (>50 Msec XRT exposure) Swift sample up to and including GRB 061210, we show that the early emission in >90% of early afterglows has a characteristic νF ν spectral energy E peak which likely evolves from the γ-rays through the soft X-ray band on timescales of 10 2 − 10 4 s after the GRB. The observed spectra are strongly curved when plotted with logarithmic axes and have often been incorrectly fitted in other studies with a time-varying soft X-ray absorption. The spectral evolution inferred from fitting instead models used to fit GRBs demonstrates a common evolution-a powerlaw hardness intensity correlation and hard to soft evolution-for GRBs and the early X-ray afterglows and X-ray flares. Combined with studies of short timescale variability, our findings indicate a central engine active for longer than previously suspected. The GRB spectra are observed to become very soft at late times due to an intrinsic spectral evolution and due to the surprising faintness of some afterglows. We discuss models for the early X-ray emission.
10.1086/518023
[ "https://arxiv.org/pdf/astro-ph/0612564v3.pdf" ]
14,567,979
astro-ph/0612564
028612cca177b27b38b4760317a8a0df55a7ee27
X-ray Hardness Evolution in GRB Afterglows and Flares: Late Time GRB Activity Without N_H Variations

arXiv:astro-ph/0612564v3 10 Mar 2007

Nathaniel R. Butler (Townes Fellow, Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450, USA; Astronomy Department, University of California, 445 Campbell Hall, Berkeley, CA 94720-3411, USA)
Daniel Kocevski (Astronomy Department, University of California, 445 Campbell Hall, Berkeley, CA 94720-3411, USA)

Submitted to ApJ

Subject headings: gamma rays: bursts - supernovae: general - X-rays: general

We show that the X-ray and γ-ray spectra of Swift GRBs and their afterglows are consistent with the emission characteristic of an expanding, relativistic fireball. The classical afterglow due to the impact of the fireball on the external medium is often not observed until one to several hours after the GRB. Focusing on GRBs 061121, 060614, and 060124, but generalizing to the full (>50 Msec XRT exposure) Swift sample up to and including GRB 061210, we show that the early emission in >90% of early afterglows has a characteristic νF_ν spectral energy E_peak which likely evolves from the γ-rays through the soft X-ray band on timescales of 10^2 - 10^4 s after the GRB. The observed spectra are strongly curved when plotted with logarithmic axes and have often been incorrectly fitted in other studies with a time-varying soft X-ray absorption. The spectral evolution inferred from fitting instead models used to fit GRBs demonstrates a common evolution (a powerlaw hardness-intensity correlation and hard to soft evolution) for GRBs and the early X-ray afterglows and X-ray flares. Combined with studies of short timescale variability, our findings indicate a central engine active for longer than previously suspected.
The GRB spectra are observed to become very soft at late times due to an intrinsic spectral evolution and due to the surprising faintness of some afterglows. We discuss models for the early X-ray emission.

Introduction

The Swift satellite (Gehrels et al. 2004) and its X-ray Telescope (Burrows et al. 2005b) have opened a new window into the early lives of γ-ray bursts (GRBs) and their afterglows. We see a complex array of behaviors, many of which appear to directly conflict with (e.g., O'Brien et al. 2006; Panaitescu et al. 2006; Willingale et al. 2006) the well tested internal-external shock GRB and afterglow model (Rees & Mészáros 1994; Sari & Piran 1997; Sari, Piran, & Narayan 1998; Wijers & Galama 1999). In this "fireball" model, the GRB is produced via collisions of shells in a relativistic outflow, and an afterglow arises later as the ejecta sweep up and heat the surrounding medium. The Swift afterglows exhibit dramatic flaring, rapidly decaying prompt emission tails, and typically a broad plateau phase until t ≈ 10^4 s (e.g., Nousek et al. 2006). Early afterglow observations prior to Swift (e.g., Frontera et al. 2000) suggested instead a ∼10 s duration burst rapidly gone and replaced by the fading afterglow emission. How these observations are to be reconciled and what mechanisms produce the early afterglow emission are key open questions.

Particularly intriguing, several recent studies fit the Swift X-ray Telescope (XRT) data and infer a time variable soft X-ray absorption (Starling et al. 2005; Rol et al. 2006; Campana et al. 2006c). This would imply that the early afterglow is stripping electrons from a dense shell of light-element-rich material located R ≲ 1 pc from the GRB, which was not already fully ionized by the GRB. It is difficult to detect such an effect because of the strong spectral evolution common in the early afterglows (e.g., Vaughan et al. 2006; Butler 2007a, "Paper I").
A changing column density N_H cannot easily be separated from intrinsic afterglow spectral evolution, given the narrow XRT bandpass. If the early X-ray spectra exhibit log-log curvature like that of GRBs, which have νF_ν spectral turnovers at a characteristic energy E_peak (e.g., Preece et al. 2000; Kaneko et al. 2006), then evolution in the curvature could be mistaken for variations in N_H. As we discuss below, plots of early XRT spectra do show strong log-log curvature and an inferred E_peak which typically passes in time through the X-ray band. This produces a changing X-ray hardness, which we observe to correlate with the flux. A close analogy can be found in the spectral evolution of GRBs observed by the Burst and Transient Source Experiment (BATSE; Fishman et al. 1999). A characteristic feature of these spectra and light curves is a hard-to-soft evolution in time and a powerlaw hardness-intensity correlation (Golenetskii et al. 1983; Kargatis et al. 1995; Norris et al. 1996; Fenimore et al. 1995; Fenimore, Madras, & Nayakshin 1996). The recent refined study of Borgonovo & Ryde (2001) measures a powerlaw relation between the characteristic energy E_peak and the bolometric flux F_bol valid for >57% of GRB pulses, E_peak ∝ F_bol^{0.5±0.2}. We observe a consistent correlation in the soft, early-time XRT data. In Paper I, we present evidence for this outlier population of extremely soft afterglows in the first year of Swift XRT afterglow data. Although they were identified via an automated search for spectral lines, the spectra are also fitted well by models containing multiple continuum components. Below and in Butler & Kocevski (2007), we explore further the phenomenology associated with this soft emission. We demonstrate that GRB-like behavior is present in the first t ≲ 1 hour of >90% of the afterglows and is especially prominent during the flaring.
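A hardness-intensity correlation of the Borgonovo & Ryde type is, operationally, just a linear fit in log-log space. The sketch below (our illustration, not the actual analysis code of this paper; the flux and E_peak values are synthetic) recovers the powerlaw index from a pulse decay obeying E_peak ∝ F^0.5:

```python
import numpy as np

def hardness_intensity_slope(flux, epeak):
    """Least-squares slope of log10(E_peak) versus log10(flux)."""
    x = np.log10(flux)
    y = np.log10(epeak)
    # slope = cov(x, y) / var(x) for a simple linear regression
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Synthetic pulse decay obeying E_peak ∝ F^0.5 (illustrative values only)
flux = np.logspace(-8, -6, 50)           # erg cm^-2 s^-1
epeak = 300.0 * (flux / 1e-6) ** 0.5     # keV
print(round(hardness_intensity_slope(flux, epeak), 3))  # → 0.5
```

With real, noisy time-resolved spectra one would instead fit with measurement errors on both axes, but the log-log slope is the quantity being compared to 0.5 ± 0.2.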
In two cases, thanks to Burst Alert Telescope (BAT) triggers on bright precursors, X-ray emission coincident in time with the classical GRB is detected and can be shown to have quite similar properties to the highly time-variable emission at later times. This is strong evidence, to be combined with the short timescale variability studies (e.g., Burrows et al. 2005a; Falcone et al. 2006; Pagani et al. 2006; Kocevski & Butler 2007), tying the flare and early afterglow emission to the GRB central engine.

Data Reduction

Our automated pipeline at U. C. Berkeley downloads the Swift data in near real time from the Swift Archive and quicklook site. We use the calibration files from the 2006-04-27 BAT and XRT database release. The additional automated processing described below is done uniformly for all afterglows via custom IDL scripts. The final data products are available for general consumption.

The XRT suffers from a significant number of bad or unstable pixels and columns. Two central detector columns were lost due to a micro-meteorite strike. For the early afterglows (t ≲ 10^3 s), when the satellite initially points the XRT at the source without the precise localization information needed to offset from the bad columns, a large and time-dependent fraction of the flux can be lost. In order to produce accurate light curves and properly normalized spectra, it is necessary to accurately determine the position centroid and to precisely track the loss of source and background flux due to the bad detector elements on short (∼few second) timescales.

Photon Counting (PC) Mode Light Curves

We begin by projecting the data in the 0.5-8.0 keV band from each PC mode followup observation onto a tangent plane centered at the source position quoted by the XRT Team. In raw coordinates, we reject all pixels with more than six counts and also containing more signal than contained in the surrounding 8 pixels summed.
Using the aspect solution file (*sat*.fits), we determine the satellite pointing for each detection frame. We then map the bad pixels in raw detector coordinates determined by xrtpipeline and by our algorithm onto the sky on a frame-by-frame basis. This is used to generate exposure maps for the full observation and as a function of time. Using the full exposure map, we determine the afterglow position centroid (see Butler 2007b) to fix the source extraction region. We consider a 16 pixel radius source extraction region, surrounded by an annular background extraction region of outer radius 64 pixels. Running wavdetect (see Butler 2007b), we then determine the positions of field sources in the image. We mask out the regions corresponding to the field sources from the source and background extraction regions. Also, using the Point Spread Function (PSF) model (swxpsf20010101v003.fits) at 1.3 keV, we determine the level of residual field source contamination in the source extraction region (typically negligible) for later subtraction.

Initially ignoring pileup, we extract the source and background counts for each good time interval of data acquisition. The fraction of lost signal and the scale factor relating the background in the source and background extraction regions is determined for each extraction using the time-dependent exposure map. Assuming these exposure corrections for the entirety of each time interval, we subdivide the counts in each interval so that a fixed signal-to-noise of 3 is achieved.

In order to check and to account for pileup, we perform a coarse Bayesian blocking (Scargle 1998), with a strong prior weight against adding a new segment (e^{-50}). Using the maximum observed count rate in each segment thus determined, we find the minimum aperture necessary to reduce the source signal to levels where pileup is negligible. The coarse blocking results in a small number of regions (typically 2-3) of differing inner extraction radius for an afterglow.
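The coarse Bayesian blocking can be sketched as Scargle-style dynamic programming over binned counts, with a Poisson block fitness N log(N/T) and a fixed penalty per extra block (here 50, echoing the e^{-50} prior weight). This is our minimal illustration under those assumptions, not the pipeline's actual implementation:

```python
import numpy as np

def coarse_blocks(counts, dt=1.0, ncp_prior=50.0):
    """Bayesian-blocks style segmentation of binned Poisson counts.

    Returns the left edges (bin indices) of the optimal blocks found by
    dynamic programming; ncp_prior is the log-prior penalty per block.
    """
    counts = np.asarray(counts, float)
    n = len(counts)
    csum = np.concatenate(([0.0], np.cumsum(counts)))
    best = np.zeros(n)       # best total fitness ending at bin r
    last = np.zeros(n, int)  # start index of the last block ending at r
    for r in range(n):
        # fitness of a candidate last block spanning bins j..r: N log(N/T)
        N = csum[r + 1] - csum[: r + 1]
        T = dt * (r + 1 - np.arange(r + 1))
        with np.errstate(divide="ignore", invalid="ignore"):
            fit = np.where(N > 0, N * np.log(N / T), 0.0)
        total = fit - ncp_prior
        total[1:] += best[:r]
        last[r] = np.argmax(total)
        best[r] = total[last[r]]
    # backtrack the change points
    edges, r = [], n
    while r > 0:
        edges.append(last[r - 1])
        r = last[r - 1]
    return sorted(edges)

# A rate jump from 5 to 50 counts per bin should yield one change point
rate = [5] * 20 + [50] * 20
counts = np.random.default_rng(0).poisson(rate)
print(coarse_blocks(counts))  # expect a change point near bin 20
```

The strong penalty keeps the number of segments small (here two blocks for a single step), matching the "typically 2-3 regions" behavior described above.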
We assume pileup is important for count rates > 0.5 cps (see also Nousek et al. 2006). The light curves are verified to transition smoothly across regions of different inner extraction region radius. Using the time intervals and pileup corrected apertures thus determined, we rebin the data to a signal-to-noise of 3 and recalculate the exposure correction for each time interval. The final time regions and exposure corrections define our temporal extraction regions for the extraction of light curves in different energy bands and for the extraction of spectra below.

Windowed Timing (WT) Mode Light Curves

Our reduction of the WT mode data closely parallels our PC mode reduction, except that it is more natural to extract the WT mode data in raw detector coordinates than in sky coordinates as done above for the PC mode data. This is due to the readout mode; detector pixels are summed in RAW-Y and the resulting data are in column (RAW-X) format. Summing the data from each WT mode followup, we reject any RAW-X columns containing a > 10σ count rate relative to the background, after first ignoring pixels in the 16 pixel source extraction region. We also reject any RAW-X columns containing 100 times more signal than the highest neighboring column (or > 100 if the neighbors contain no signal). Using the sky image determined from the PC mode data and the satellite aspect, we project the background onto the RAW-X axis and form a background mask for the 64 pixel outer radius and 16 pixel inner radius extraction region. We do not allow masking of the pixels within the central 16 pixel source region. If the source is bright (> 10^3 cps), we recenter the source and background apertures. Small aspect shifts of ∼1 pixel are not uncommon between the PC and WT mode data and must be accounted for. We determine the exposure corrections as for the PC mode data, but also adjusting the PSF model for the WT mode summing of RAW-Y pixels.
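Choosing the "minimum aperture" for a piled-up segment amounts to excising the PSF core until the remaining annulus rate falls below the pileup threshold. The sketch below assumes a King-profile PSF of the general XRT form; the RC and BETA values are illustrative placeholders, not the numbers from the calibration files:

```python
import numpy as np

# King-profile PSF, psf(r) ∝ [1 + (r/rc)^2]^(-beta); parameters are
# illustrative assumptions, not the XRT calibration values.
RC, BETA = 5.1, 1.6  # core radius in pixels, powerlaw slope

def king_enclosed(r):
    """Fraction of PSF counts enclosed within radius r (analytic integral,
    normalized to unity at infinite radius, valid for beta > 1)."""
    return 1.0 - (1.0 + (r / RC) ** 2) ** (1.0 - BETA)

def min_inner_radius(total_rate, limit, r_outer=16.0):
    """Smallest inner exclusion radius (in 0.5 pixel steps) such that the
    count rate in the annulus [r_in, r_outer] stays below `limit`."""
    for r_in in np.arange(0.0, r_outer, 0.5):
        rate = total_rate * (king_enclosed(r_outer) - king_enclosed(r_in))
        if rate < limit:
            return r_in
    return r_outer

# A faint source needs no core exclusion; a bright one does
print(min_inner_radius(0.3, 0.5))   # → 0.0
print(min_inner_radius(5.0, 0.5))   # nonzero inner radius
```

The same logic applies in PC and WT modes, only with different rate limits (0.5 cps versus 150 cps) because of the very different frame times.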
We note that our exposure corrections account for source signal contained in the background region. We determine a pileup correction as above, but with a limiting source count rate of 150 cps (see also Nousek et al. 2006).

PC and WT Mode Spectra

Spectral response files are generated using the xrtmkarf task for each time interval of interest. Our invocation of the task ignores the exposure maps calculated above, determining the energy dependence of the source extraction assuming only the inner and outer source and background regions. We then adjust the normalization of the resulting Ancillary Response File (ARF) to account for the actual loss in flux (0.5-8.0 keV) on a pixel-by-pixel basis, using the divided time intervals and associated exposure corrections determined above. The spectra are fit in ISIS. For each spectral bin, we require a S/N of 3.5. We define S/N as the background-subtracted number of counts divided by the square root of the sum of the signal counts and the variance in the background. As done in Paper I, we restrict our attention to time-resolved spectra containing 500 or more counts or to spectra formed by grouping two or more of the 500 counts spectra. We fit the PC and WT mode data over the 0.3-10.0 keV range, also accounting for the systematic calibration uncertainties of ∼ 3%. In WT mode, we allow the detector gain to vary by ±80 eV.

BAT Light Curves and Spectra

We establish the energy scale and mask weighting for the BAT data by running the bateconvert and batmaskwtevt tasks. The mask weighting removes flux from background sources. Spectra and light curves are extracted with the batbinevt task, and response matrices are produced by running batdrmgen. We apply the systematic error corrections to the low-energy BAT spectral data as suggested by the BAT Digest website, and fit the data using ISIS. The spectral normalizations are corrected for satellite slews using the batupdatephakw task.
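The Band et al. (1993) model used for the joint BAT+XRT fits described below is a smoothly broken powerlaw. A minimal sketch in the E_peak parametrization (valid for α > −2 and β < −2, so that E_peak is a true νF_ν peak) is given here; the 100 keV pivot and the default parameter values are illustrative conventions, not fitted values.

```python
import math

def band(E, alpha=-1.0, beta=-2.5, Epeak=100.0, A=1.0):
    """Band et al. (1993) photon spectrum N(E), with E in keV and a
    100 keV pivot, parametrized by the nuFnu peak energy Epeak."""
    E0 = Epeak / (2.0 + alpha)   # e-folding energy of the low-energy segment
    Eb = (alpha - beta) * E0     # energy where the two segments join smoothly
    if E < Eb:
        return A * (E / 100.0) ** alpha * math.exp(-E / E0)
    return (A * ((alpha - beta) * E0 / 100.0) ** (alpha - beta)
            * math.exp(beta - alpha) * (E / 100.0) ** beta)
```

The prefactor on the high-energy branch is chosen so that the two segments match in both value and slope at the break energy.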
For GRB 060124 below, BAT spectral fits are performed on the mask-tagged light curve data in four channels, assuming the on-axis response and also accounting for the systematic error.

The Joint BAT+XRT Spectra of Three Events

There are two bright events in the XRT sample which overlap in time entirely with what would commonly be thought of as the prompt phase of GRB emission. The observations by the XRT were made possible by a bright precursor just minutes prior to each GRB observed in the BAT, on which the BAT triggered. We therefore have both BAT and XRT data for each event, GRB 060124 and GRB 061121. We also discuss the bright event GRB 060614, which has excellent XRT coverage due to an early, rapid spacecraft slew. Figure 1 displays spectral fits to a selected set of time-resolved intervals in each event. The best-fit model parameters are given in Table 1. The time evolution of these parameters is presented and discussed in detail in the next three subsections.

GRB 060124

Swift-BAT triggered and located the precursor to GRB 060124, allowing the XRT to slew and begin simultaneous observations 106s later (Holland et al. 2006). This event is also discussed in Romano et al. (2006b). The 0.3-10.0 keV light curve is plotted in Figure 2. There are two prominent peaks. As shown in the background (lighter two shades of gray), the time profile in the soft XRT channel (0.3-1.3 keV) is broader than that in the hard channel (1.3-10.0 keV). The BAT light curve shows even narrower time structure and resolves the broad first XRT peak into at least 3 sub-peaks. The light curve after the flare (t > 10^4 s) and extending to 22 days is well fit by a powerlaw t^−1.32±0.01 (χ²/ν = 535.2/465). We group the XRT data into ≳ 500 counts spectra and fit powerlaws (Figure 3, left). Each fit is statistically acceptable, with a reduced χ² of order unity.
The photon index Γ is observed to decrease in time, although with modulation that correlates with the X-ray flux and with N_H (see the explanation in Section 5.1). At late times (t > 10^4 s), the N_H values asymptote to the blue, dashed curve (N_H = 2.3 ± 0.2 × 10^21 cm^−2) plotted in the figure. To study the time-varying log-log curvature, we jointly fit the BAT and XRT data using the Band et al. (1993) model. Here, we choose extraction regions which allow for a BAT signal-to-noise of 20 or higher. We also fix the column density N_H to the late-time value. The model fit is actually a progression of fits of nested models (e.g., Protassov et al. 2002), from the simplest powerlaw model, to a powerlaw times exponential model, to the smoothly broken powerlaw Band model. Each more complex model has one additional degree of freedom. We accept or reject the more complex model at each stage by requiring ∆χ² > 2.706 (i.e., 90% confidence). If the data are acceptably fit by only the powerlaw model, we quote a limit on E_peak using either the exponential times powerlaw model (for Γ < 2) or the constrained Band formalism (Sakamoto et al. 2003; for Γ > 2). In order that E_peak correspond to a peak in the νF_ν spectrum, we require the low energy index α > −2 and the high energy index β < −2. After finding that the fits were consistent with α < 0, as also found for BATSE GRBs (Preece et al. 2000; Kaneko et al. 2006), we included this as a constraint to derive the tightest error bounds on the other model parameters. As shown in Figure 3 (right), the data are better fit (> 90% confidence) with the Band model in most of the time regions. The peak energy rises and declines with each of the four prominent light curve pulses. For each pulse, we present powerlaw fits to the E_peak declines. The rises are not well measured, as is also typically the case for BATSE bursts (e.g., Kocevski, Ryde, & Liang 2003).
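The nested-model progression described above (powerlaw, then exponential times powerlaw, then Band, accepting each additional parameter only when ∆χ² > 2.706) amounts to the following selection rule. The χ² values here are stand-ins for actual forward-folded fits, and the model names are ours.

```python
# Delta chi-square threshold for one extra parameter at 90% confidence.
DELTA_CHI2 = 2.706

def select_model(chi2_by_model):
    """Walk a nested-model sequence ordered from simplest to most
    complex (one extra parameter per step) and keep a more complex
    model only while each step improves chi^2 significantly."""
    best_name, best_chi2 = chi2_by_model[0]
    for name, chi2 in chi2_by_model[1:]:
        if best_chi2 - chi2 > DELTA_CHI2:  # significant improvement
            best_name, best_chi2 = name, chi2
        else:
            break  # stop at the first non-significant step
    return best_name
```

For example, `select_model([("powerlaw", 60.0), ("cutoffpl", 55.0), ("band", 54.0)])` keeps the cutoff powerlaw: the first step improves χ² by 5.0, but the second only by 1.0.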
Prior to t ≲ 800s, the observed spectrum corresponds mostly to the low energy portion of the Band model spectrum, except episodically at the flare troughs, where E_peak enters the X-ray band. These time regions are also those of highest N_H in Figure 3 (left). The third pulse decline exhibits a strong evolution in both α and E_peak. After t ∼ 800s the observed spectrum corresponds to the high energy portion of the model spectrum, and E_peak has transited the X-ray band. Figure 1 (middle) plots the νF_ν spectrum at 3 time epochs. Motivated by the watershed event GRB 060218 (Paper I; Campana et al. 2006a), we also attempt to fit the X-ray curvature using a powerlaw plus blackbody model. The fits to the X-ray data alone are provocative and show a smooth temperature decline after each of the two major pulses. However, the fits are statistically unacceptable when we also attempt to account for the BAT data. This is also true for the GRB 061121 spectra discussed in the next subsection. This should be taken as a caveat also to the powerlaw plus blackbody fits presented for the XRT data in Paper I, where the derived blackbody temperature variation may instead reflect E_peak variations. We note, however, that the X-ray spectra of the unusual GRB 060218 burst and afterglow are better fit by a blackbody plus powerlaw than by a Band model (Paper I). We do not consider the possibility of two powerlaws and a blackbody for the bursts discussed here.

GRB 061121

Swift-BAT triggered on and began observing the precursor to GRB 061121 55s prior to the XRT slew and the onset of the main GRB event (Page et al. 2006). The Swift team designated this event a "Burst of Interest" (Gehrels et al. 2006) due to the rare simultaneous detections in the BAT and XRT bands and at longer wavelengths. As shown in Figure 4, the γ-ray and X-ray light curves show multiple peaks, with most of the prominent time structure apparent in only the γ-ray band.
In Figure 3 (top), we show the results of powerlaw and Band model fits to the 061121 data. The data are not of as high signal-to-noise in the X-ray band as the 060124 data; however, many of the same trends are apparent. There is a hard-to-soft evolution apparent in the powerlaw photon index and a correlation between the photon index and N_H. The Band model photon index goes from the low energy side to the high energy side once E_peak has crossed the X-ray band. E_peak also appears to rise and fall with flaring prior to 80s. The νF_ν spectrum is plotted at two epochs in Figure 1 (top). For the Band fits, we use the late-time (t > 10^4 s) N_H = 2.5 ± 0.3 × 10^21 cm^−2. XMM data for this event beginning after t ≈ 6hrs show powerlaw fits consistent with our late-time fits. In particular, N_H = 1.71^+0.03_−0.02 × 10^21 cm^−2, consistent with our late-time N_H at the 2σ level and well below the mean early-time value. XMM and XRT data generally agree well with respect to the late-time N_H determinations (e.g., Moretti et al. 2006).

GRB 060614

The GRB 060614 (Parsons et al. 2006) afterglow fades rapidly as a powerlaw from the prompt emission, with no flaring (Figure 5). There is excellent BAT+XRT coverage during the prompt tail emission lasting to t ∼ 150s. We observe weak N_H-Γ correlated modulations during the rapidly fading tail, which would imply an N_H that decreases in time, reaching the value marked by the dashed line in Figure 3 (left) by t = 10^4 s. However, the Band model fits show an E_peak which passes through the X-ray band without requiring a varying N_H. Extrapolating backward through the prompt emission, the best fit decay also fits two E_peak limits derived for the BAT-only prompt emission. Figure 1 (bottom) plots the νF_ν spectrum at two epochs. Expressed in terms of the flux F_XRT as measured by the XRT rate, E_peak ∝ F_XRT^−0.72±0.03. The low-energy photon index α also appears to evolve in time after the main GRB emission.
Hardness Plots for GRBs 061121, 060124 and 060614

It will be useful below to see how the spectral evolution in the early X-ray light curves of GRBs 061121, 060124 and 060614 impacts the X-ray hardness ratio. We define this as the ratio of counts in the 1.3-10.0 keV band to the counts in the 0.3-1.3 keV band. The average hardness ratio (HR) for most afterglows is 1. Figure 6 shows, in nine panels, the hardness and rate time profiles for GRBs 061121, 060124, and 060614. The middle panels (looking top to bottom) show the X-ray light curve fit using an extension of the Bayesian blocks algorithm (Scargle 1998) to piecewise logarithmic data. The rate and hardness data are fit jointly, allowing the minimum number of powerlaw segments such that χ²/ν ∼ 1. The fits to the rate and hardness are plotted in the top and middle panels, indexed according to time. The hardness tracks the flux and moves along roughly parallel tracks. In the bottom panels, the flux in both XRT bands (top panel) and the hardness (bottom panel) are plotted for each powerlaw segment. During the decline phase of each pulse, the hardness scales as the square root of the rate for GRBs 061121 and 060124. For GRB 060614, the hardness and flux track as found above for E_peak and flux. Each pulse in GRB 060124 peaks at roughly the same time, independent of energy band. There is, however, a hardness rise during the flux rise because the hard band increases more rapidly. There is also a modest overall hard-to-soft trend throughout the light curve. The hardness plot does not capture the strong spectral variations between 500 and 600s in GRB 060124, which are apparent from the broad band fits (Figure 3, middle) and occur mostly for E_peak above the XRT bandpass. The time dependences of E_peak during this region and later are given in the figure. The E_peak dependence can also be given in terms of the flux F, in order to sidestep the problem of unknown start time.
For all but the last flare, where we use the XRT count rate, we use the BAT 15-350 keV count rate for the flux. For pulses 1-4, we find E_peak ∝ F_BAT^−3.6±1.7, F_BAT^−1.8±0.5, F_BAT^−0.3±0.1, and F_XRT^−1.2±0.2, respectively. In the bottom panels of Figure 6, we show that the hardness can be described by the square root of the observed flux, as is common for GRBs at higher energies observed with BATSE (e.g., Borgonovo & Ryde 2001; Ryde & Petrosian 2002; Kocevski, Ryde, & Liang 2003; Ryde 2005; Section 5). For GRB 061121, the hardness plots show an initial hardening followed by a decrease in the hardness which scales well with the square root of the X-ray rate. There may be broad pulses on top of the decline, although these have only a minor impact on the hardness. GRB 060614 appears mostly to exhibit a secular decline in both flux and hardness, corresponding to the fading tail of the prompt emission. For each GRB, the hardness plot captures the E_peak evolution in general terms. Both HR and E_peak decrease during rate declines at a similar power of the rate. It is apparently not possible to cleanly separate evolution of α from evolution of E_peak (if it is possible at all) given the hardness alone. From Figure 3 (top right) and Figure 3 (middle right), and also from time-resolved spectral studies of many GRBs (Section 5), these parameters tend to evolve simultaneously.

Example Spectra for 4 Other Early Afterglows

Most early X-ray afterglows have a low signal-to-noise or no coincident detection by the BAT. It is possible to derive E_peak values or limits for these early on, given the BAT data. Late-time E_peak determinations from the X-ray data typically show values in or passing through the XRT band after one to several minutes. The spectral evolution of one such event, GRB 060714 (Krimm et al. 2006), is shown in Figure 7. The hardness plot (Figure 7, middle) allows for a finer time sampling of the spectral evolution.
The hardness (likely also E_peak) rises and declines with the flux along the same track in the hardness-rate plane as two flares take place. The column density (not plotted) is a factor of two larger in the time interval 140-170s than outside that interval, indicating an E_peak passage. There are a handful of examples with higher signal-to-noise XRT observations. The GRB 060526 (Campana et al. 2006b) afterglow exhibits time-correlated Γ-N_H variations and a corresponding rapid then smooth decline of E_peak through the XRT band (Figure 8, left). The initial GRB pulse (t < 9.4s) is well fit by a simple powerlaw (α = −1.6 ± 0.2, χ²/ν = 16.83/16), and we derive E_peak > 80 keV (90% confidence). The flare at t ∼ 250s is detected by the BAT as well, and we use the BAT data to obtain the best Band model fits. The Band model photon indices are poorly measured. The composite flare and decline are shown in Figure 9. The hardness evolves similarly to the best-fit E_peak values. The very bright afterglow of GRB 060729 (Grupe et al. 2006a) continues to be detected 4.5 months after the GRB. The GRB emission is over by t ∼ 130s in the BAT. We find E_peak > 50 keV (90% confidence). After t > 100s in the XRT, there is a rapid decline, interrupted by a flare or rise at 160s (Figure 9). Time-correlated N_H-Γ variations and an E_peak passage through the X-ray band are similar to those discussed above (Figure 8). We observe that E_peak declines with the X-ray rate as F_XRT^−0.4±0.1 both before and after the mild flare at t ∼ 180s. There is also a possibly significant decline in β with time. The hardness declines by an order of magnitude, reaching a minimum at t ∼ 250s, and then increases to the late-time (t > 10^3 s) value. Note that no clear coincident change is present in the rate plot. The hardness plot demonstrates that the late-time emission is spectrally different from the early emission and that its onset occurs at t ∼ 250s.
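The hardness ratio used throughout (counts in the 1.3-10.0 keV band over counts in the 0.3-1.3 keV band) and the powerlaw HR-rate scaling quoted above can be computed as in the following sketch; the helper names are ours, and the least-squares slope is a simple stand-in for the per-segment fits performed on the Bayesian-blocks decompositions.

```python
import math

def hardness_ratio(soft, hard):
    """HR: counts in the hard band (1.3-10.0 keV) over the soft band (0.3-1.3 keV)."""
    return hard / soft

def loglog_slope(rates, hrs):
    """Least-squares slope of log HR versus log rate, i.e. the index p
    in HR proportional to rate**p (p near 0.5 during pulse declines)."""
    xs = [math.log(r) for r in rates]
    ys = [math.log(h) for h in hrs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Applied to segments where the hardness truly follows the square root of the rate, `loglog_slope` returns 0.5.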
Modest but clear N_H-Γ variations are seen for GRB 060904B (Grupe et al. 2006b). The prompt emission (t < 8.3s) has E_peak = 125^+135_−30 keV. E_peak transits the X-ray band cleanly (Figure 8). The hardness evolution shows the usual time dependence in the declining tail of the flare (Figure 9). E_peak decays versus the rate as F_XRT^−0.7±0.2. The emission for GRB 060929 (Markwardt et al. 2006) at t < 13s exhibited E_peak > 75 keV. The X-ray flare peaking at t ∼ 550s is weakly detected by the BAT. In the XRT, there is a clear softening trend (Figure 9), likely N_H-Γ variations, and an E_peak declining through the X-ray range (Figure 8). E_peak drops with the X-ray rate as F_XRT^−0.6±0.1. The hardness reaches a minimum at t = 630 ± 10s.

Discussion

Global Sample Properties

In terms of the spectral evolution properties, we see no apparent difference between the fading tails of flare-like X-ray emission and the rapid X-ray declines often observed to trail flaring in the BAT (e.g., Tagliaferri et al. 2005; Barthelmy et al. 2005; Cusumano et al. 2006; Vaughan et al. 2006). Indeed, based solely on timing properties, many of the rapid declines also appear to have superimposed flaring (e.g., 060729, Figure 9; 061121, Figure 6). The rapid declines are thought to be the fading tail of the prompt emission (Panaitescu et al. 2006; Yamazaki et al. 2006; Lazzati & Begelman 2006; Zhang et al. 2006), and the X-ray flares are thought to be due to later central engine activity (Ioka et al. 2005; Fan & Wei 2005). We observe a clear distinction between the spectra measured before the light curve plateau and after the start of the plateau; only the late spectra exhibit a tight clustering with Γ ≈ 2 (Figure 10; Paper I; Butler & Kocevski 2007). Figure 11 shows what we expect to measure from powerlaw fits to a time-evolving Band model spectrum.
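A zeroth-order version of the Figure 11 exercise can be sketched by computing the mean log-log slope of a Band spectrum across the 0.3-10.0 keV band as a crude stand-in for the fitted photon index Γ. The parameter choices α = −1 and β = −3 match the simulation; the full forward-folded fitting (and the absorption component) is not reproduced here.

```python
import math

def band(E, alpha=-1.0, beta=-3.0, Epeak=1.0):
    """Unnormalized Band photon spectrum in the Epeak parametrization."""
    E0 = Epeak / (2.0 + alpha)
    Eb = (alpha - beta) * E0
    if E < Eb:
        return (E / 100.0) ** alpha * math.exp(-E / E0)
    return (((alpha - beta) * E0 / 100.0) ** (alpha - beta)
            * math.exp(beta - alpha) * (E / 100.0) ** beta)

def effective_photon_index(Epeak, Elo=0.3, Ehi=10.0):
    """Mean log-log slope of N(E) across the XRT band: a crude stand-in
    for the photon index Gamma a powerlaw fit would return."""
    return -(math.log(band(Ehi, Epeak=Epeak)) -
             math.log(band(Elo, Epeak=Epeak))) / math.log(Ehi / Elo)
```

With E_peak well above the band the slope is close to −α; once E_peak has passed below the band it reaches exactly −β, and intermediate E_peak values give intermediate, smoothly steepening indices.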
As E_peak enters the X-ray band, the spectral curvature, as would be seen on a plot with logarithmic axes, increases, and the inferred X-ray column density increases linearly with an increasing inferred photon index Γ. This occurs despite the fact that only E_peak changes in the simulation. Figure 12 suggests that the effect is common in the XRT data (Section 5.2). Figure 13 (left) shows that the flares (Table 2) and rapid X-ray declines exhibit significant hardness-intensity and hardness-fluence correlations which match closely the correlations observed for GRBs (Section 5.3 below). For GRBs it is common to observe finer time structure at higher energies as compared to low energies (Norris et al. 1996; Fenimore et al. 1995; Fenimore, Madras, & Nayakshin 1996). Pulses tend to be narrower, fade more rapidly, and evolve more strongly spectrally at high energies. Consistent with this, the X-ray flares (and also the rapid declines) appear longer (8 ± 1% on average, Figure 14, left) and with smoother time structure (e.g., Figure 2) at softer energies. This can be understood as the effect of E_peak evolving into the X-ray band, which allows the X-ray emission to be observed for longer (e.g., Kocevski, Ryde, & Liang 2003, and Section 5.4). Although it is difficult to see by eye, there is also evidence for a 25 ± 5% increase in the flare rise time with decreasing X-ray energy band (Figure 14, right). This is close to the expected pulse broadening fraction from an extrapolation of the GRB behaviour, 1 − (1.3/0.5)^−0.4 ≈ 30%, where 0.5 and 1.3 keV are used as approximate lower bandpass energies. Given the possibility that resolved γ-ray flares are blurred together in the X-ray band (e.g., Figure 2), however, it is not clear how meaningful this apparent consistency is.

The Physical X-ray Column Density Does Not Vary

The time-resolved XRT afterglows are well fit by absorbed powerlaws at all epochs (see also Paper I).
Prior to a characteristic hardness variation turn-off time T_H ≈ 10^2 − 10^4 s, which we discuss for a large sample of bursts in Butler & Kocevski (2007), there is strong evolution in both the best-fit photon indices and the best-fit column densities N_H. After this time, the quantities typically do not vary. To fit more complicated models to the early-time afterglows, we have found it necessary to jointly fit the BAT and XRT data (when possible) and to tie the column density to the value measured at late time. The late-time value is typically not the Galactic value. Band model fits are able to account for both the BAT and XRT emission without a time-variable column density (see also Falcone et al. 2006). The ubiquitous hardness evolution appears to be best understood in terms of an evolving E_peak, as we discuss in detail below. Several studies have recently claimed a decreasing N_H based on fits to the XRT data (Starling et al. 2005; Rol et al. 2006; Campana et al. 2006c; GRBs 050730, 050716, and 050904, respectively). Each study presents a coarsely time-resolved set of spectral fits, which demonstrate a higher N_H at early times. This is an artificial feature that we observe in fits to most Swift early afterglows. It is especially clear in the brightest afterglows, which often sample the declining tail of the prompt emission. For each of the 3 bursts with claimed N_H variations (e.g., Figure 15), a fine time-scale spectral analysis reveals an N_H which both increases and decreases in time (following Γ and the flux). Observed drops in N_H ≳ 10^21 cm^−2 (or ≳ 10^22 cm^−2 in the rest-frame) on timescales of 10 − 100s are challenging enough, but drops and increases and drops again on these timescales are unphysical. We strongly caution against taking the early N_H values at face value. Measurements of N_H at t ≲ 10^4 s will be artificially high.
Also, although we cannot rule out variations in all cases, they are not required by the data, and they are also not the simplest interpretation of the data. Firm measurements of N_H variability will require finely time-sampled broadband data (e.g., ultraviolet, X-ray, and γ-ray data) to disentangle the effects of the evolving Band model spectrum from the soft X-ray photoelectric absorption. For those fitting XRT spectra, we recommend measuring N_H at late times (t ≳ 10^4 s) or performing joint fits at different time intervals with a single N_H parameter shared between multiple spectra. Fits should also be performed jointly with the BAT data when possible. Fine time resolution is essential when testing variable N_H; it is not sufficient to fit exponential times powerlaw models or Band models (e.g., Rol et al. 2006; Campana et al. 2006c) with coarse time resolution. The hardness ratio can be utilized to diagnose cases where inferred N_H values are likely to vary artificially.

Is the Early XRT Emission the Same as Prompt GRB Emission?

We have shown for seven events that the early X-ray spectra require a fit model which has also been shown to reliably fit all GRBs (e.g., Preece et al. 2000; Kaneko et al. 2006; Frontera et al. 2000; Sakamoto et al. 2003). The need for such a model is also clear from hardness variations (see also Butler & Kocevski 2007) and time-correlated N_H-Γ variations observed for even low signal-to-noise afterglows, which demonstrate a characteristic increase in spectral curvature in the XRT band. In cases where E_peak is well measured, or using HR when E_peak is poorly measured, we observe a hard-to-soft evolution and a strong hardness-intensity correlation, also commonly seen in GRB pulses. Our correlation can be described as a hardness which tracks the flux to a power 0.43 ± 0.07. From the Band model fits, our best-fit E_peak-F relation index is 0.7 ± 0.2 (Table 3).
A closely consistent powerlaw relation exists for most GRB pulses, also with a large scatter in observed values (Golenetskii et al. 1983; Kargatis et al. 1995). The scatter is apparently minimized for bolometric measures of flux (Borgonovo & Ryde 2001), yielding E_peak ∝ F_bol^0.5±0.2. The fact that we have observed a consistent relation can be turned around to imply GRB-like emission with E_peak ≈ 1 keV, typically. That Swift observes bright X-ray flares appears to be a consequence of this and also due to the surprising fact that the afterglow is faint at these times. It is interesting to speculate that there may be bright optical flares due to internal shocks at times of several hours after some GRBs with faint afterglows. The typical E_peak values for the XRT are two orders of magnitude below the mode of the BATSE distribution (Preece et al. 2000; Kaneko et al. 2006). As we discuss below, some of the soft E_peak values may be due to viewing effects of delayed emission with an intrinsically higher E_peak. However, the soft flare emission implies intrinsic spectral evolution or soft late central engine activity which would extend the BATSE E_peak distribution. Our derived values for α are poorly constrained, but likely consistent with the BATSE distribution. Finally, it is remarkable that very soft emission is observed in a few cases, extending the distribution in β to very low values < −6 (Figures 8 and 10; GRBs 050714B and 050822, discussed in Paper I).

Interpretation of the Spectral Variations

Although intrinsic spectral evolution is likely also present, most of the softening trend and hardness-intensity correlation in GRB pulses is attributed to the so-called "curvature effect" (Fenimore, Madras, & Nayakshin 1996; Sari & Piran 1997; Norris 2002; Ryde & Petrosian 2002; Kocevski, Ryde, & Liang 2003; Qin et al. 2004; Qin & Lu 2005; Shen, Song, & Li 2005). This is also the widely-accepted explanation for the rapid-decline X-ray tails of the prompt emission (Nousek et al.
2006; Zhang et al. 2006; Panaitescu 2007). Derivations from first principles of the curvature effect on the observed spectra can be found in Granot, Piran, & Sari (1999) and Woods & Loeb (1999). If we imagine a spherical emitting shell at radius R that emits as a delta function in time at t_0, the spectral flux F_E scales with the Doppler factor δ as F_E ∝ F_E'[Eδ]/δ². Here, δ ≡ (1 − β_c cos θ)/(1 − β_c) ≈ 1 + γ²θ², where θ is the viewing angle to emitting material off the line of sight. The photons from larger angles will be delayed in time: t − t_0 = (1 + z)(δ − 1)(1 − β_c)R/(cβ_c) ≈ (1 + z)θ²R/(2c) ∝ δ. For a powerlaw spectrum F_E ∝ E^(1−|α|), the observed flux declines in time as a powerlaw (t − t_0)^−|b|, with |b| = 1 + |α| and no hardness evolution (Kumar & Panaitescu 2000). For a Band spectrum, we see either the low energy index α or the high energy index β or some average of the two, depending on the location of E_peak with respect to the bandpass. E_peak will decline as (t − t_0)^−1. When E_peak is in the band, the νF_ν turnover implies −α_eff ≈ 1 − 2, and we expect to see a powerlaw hardness-intensity correlation E_peak ∝ F^(0.3−0.5). Larger values of the index are favored observationally, because they correspond to a higher flux. We will observe the hardness (which our simulations show to scale linearly with E_peak for a range of Band model parameters) to correlate approximately linearly with the fluence. Departures from this expected behavior will occur for emitting shells of different shape, for an inhomogeneous emitting surface, for non-instantaneous emission, or if intrinsic spectral evolution dominates. Also, the measured flux decay in time is a strong function of the assumed t_0 (e.g., Liang et al. 2006). Our best-fit HR-F relation index (Figure 13) and our average E_peak-F relation index (Section 5.3; Table 3) are consistent with those expected in this simple picture.
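Collected for reference, the curvature-effect relations invoked in this discussion (same symbols as in the text, for a thin shell at radius R emitting a delta function in time at t_0) read:

```latex
\begin{align*}
  \delta &\equiv \frac{1-\beta_c\cos\theta}{1-\beta_c}
          \;\approx\; 1+\gamma^2\theta^2,\\
  t - t_0 &= (1+z)\,\frac{(\delta-1)(1-\beta_c)R}{c\,\beta_c}
           \;\approx\; (1+z)\,\frac{\theta^2 R}{2c},\\
  F_E(t) &\propto \frac{F_{E'}[E\delta]}{\delta^{2}},\qquad
  E_{\rm peak}\propto (t-t_0)^{-1},\\
  F(t) &\propto (t-t_0)^{-|b|},\qquad |b| = 1+|\alpha|
  \;\;\text{for}\;\; F_E\propto E^{\,1-|\alpha|}.
\end{align*}
```

The second line makes explicit why the time delay is linear in δ: photons from larger viewing angles θ arrive later and are seen with smaller Doppler boost, so the peak energy declines inversely with elapsed time.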
Spectral variations are not inconsistent with the curvature effect, contrary to the recent suggestion by Zhang, Liang, & Zhang (2006). Rather, they facilitate a higher-order test of the curvature effect, and allow us to confirm the curvature effect in a way that shows the X-ray phenomenology to closely parallel the γ-ray phenomenology. Moreover, the scatter in our HR-F relation (Figure 13, left) is less than that found for time-index-energy-index relations (Nousek et al. 2006; Panaitescu 2007), which assume powerlaw X-ray spectra. The mean time index for the E_peak decays in Table 3 is −1.4 ± 0.6, consistent with unity in absolute value. This indicates that our choice to associate t_0 with the start of the flare or pulse is roughly correct, in agreement with the findings of Liang et al. (2006). Although we see evidence that later flares often have lower E_peak in the same event with multiple flares (e.g., Figure 3), we do not see a correlation between t_0 in Table 3 and E_peak just after that time. We have observed two cases of α evolution (Figure 3, right) which accompany the E_peak evolution. Due to the proximity of E_peak to the bottom of the XRT passband and also due to the possibility of a modestly incorrectly measured N_H, these cases should be interpreted cautiously. This evolution, or that observed for β (Section 5.3), cannot be accounted for by the curvature effect and must be intrinsic. In most cases, the X-ray light curve is simply declining early on (possibly with weak flaring superimposed), and we observe approximately secularly declining E_peak and HR values. In a handful of cases where multiple flares follow a GRB (e.g., Figure 6, 060124; and Figure 7, 060714), the hardness tracks the flux both upward and downward. Because the brightest case (060124 in Figure 3, right) also shows upward and downward E_peak trends, we believe this behavior is likely responsible for the HR evolution.
The parallel or overlapping tracks observed here for bursts with multiple flares on the HR-F diagram are also seen for GRB pulses (Borgonovo & Ryde 2001).

Conclusions

We have measured the spectral evolution properties for GRBs and afterglows in the Swift sample, taken prior to and including GRB 061210. We have established similar spectral evolution properties between the X-ray emission coincident with two GRBs (060124 and 061121) and the X-ray emission in the rapid declines following several GRBs and in 27 flares occurring 10^2 − 10^3 s after their GRBs. Indirectly from absorbed powerlaw fits which show a time-variable N_H, and directly from Band model fits, we have derived constraints on the νF_ν spectrum peak energy E_peak. We observe this quantity to evolve in time and to typically cross the XRT bandpass during the early X-ray afterglow. Because the X-ray hardness changes little for Band spectra with E_peak outside the bandpass, the strong hardness variations we observe in >90% of Swift early afterglows (Butler & Kocevski 2007) imply E_peak ≈ 1 keV, typically. We observe this evolution in data taken in both the WT and PC modes (e.g., 050607 and 050714B) and following both long-duration and short-duration (e.g., 050724 and 051227) GRBs. The hardness ratio and E_peak values scale with the flux as would be expected from the relativistic viewing effects of an expanding fireball. This implies that the true variability timescale is even shorter than that measured from the observed flare durations. Because the late flares are typically softer than the GRB emission, and because the Band model α and β parameters also appear to evolve in some cases, there is likely an intrinsic evolution of the fireball. If the flares are due to shells moving out with lower bulk Lorentz factor or at larger radii than for the prompt emission, we may expect to see differences in the time properties of flares observed at different epochs.
This will be explored in a separate paper (Kocevski & Butler 2007). If the evolution is occurring on longer timescales at later times, when the sensitive XRT is observing, the early X-ray afterglows would provide a unique test-bed for theories explaining GRBs, the emission mechanisms, and possibly the progenitors. The internal shocks must be active after 10^3 s and must be able to produce emission with E_peak ≈ 1 keV and very soft β ≲ −6 (see also Zhang et al. 2006). Especially relevant to the Gamma-ray Large Area Telescope (GLAST), electrons energized by the X-ray flares may Compton upscatter photons at larger radii or in the external shock to the γ-rays.

N. R. B. gratefully acknowledges support from a Townes Fellowship at U. C. Berkeley Space Sciences Laboratory and partial support from J. Bloom and A. Filippenko. D. K. acknowledges financial support through the NSF Astronomy & Astrophysics Postdoctoral Fellowships under award AST-0502502. This work was conducted under the auspices of a DOE SciDAC grant (DE-FC02-06ER41453), which provides support to J. Bloom's group. Special thanks to the Swift team for impressively rapid public release and analysis of the XRT data. Thanks to J. Bloom and the U. C. Berkeley GRB team for comments on the manuscript and several useful conversations. We thank an anonymous referee for a very useful and critical reading of the manuscript.

Fig. 1.-Selected νF_ν spectra from GRBs 061121, 060124, and 060614, demonstrating the Band model fits to a time-varying spectral curvature (as seen in plots with logarithmic axes) and E_peak evolution. [In-panel labels: XRT, BAT; E_peak = 8.6 ± 1.2 keV (t = 97-111s); E_peak = 1.1 ± 0.1 keV (t = 237-297s); 060614.] The X-ray data are corrected for photoelectric absorption using the best-fit late-time values of N_H in Figure 3 (left). The softest spectrum in the middle panel is divided by a factor of ten for legibility. The counts spectra are jointly fit by forward folding the Band model through the instrument response matrices.
For the spectral fits (Table 1), the BAT data are not binned as shown here. The shaded regions in the background depict the X-ray light curves in two energy bands (0.3-1.3 keV and 1.3-10.0 keV) and in the hard X-ray/γ-ray bands of BAT (15-100 keV and 100-350 keV). The background light curves are each denoised and normalized to their peak intensity. The harder regions are darker. The sub-panel shows the early and late XRT light curve. and 060614. Time-correlated N H -Γ variations in the left plots are better modelled by spectral models with time-evolving E peak 's in the right plots. The N H values peak when E peak ≈ 1 keV. The powerlaw fits are performed for only the X-ray data, whereas the Band fits (actually nested powerlaw then exponential times powerlaw then Band fits, as described in the text) apply to the Xray and γ-ray data. Trends in the Band model parameters, when observed, are fitted and presented in Table 3 and in the text. These time variations are given relative to the approximate pulse start times. Galactic column densities are taken from Dickey & Lockman (1990) The Band fits use the late time N H values plotted in the left panels, derived from X-ray fits at t > 10 4 s. (a plateau at t ∼ 200s followed by a decline beginning at t ∼ 3000s) is plotted in the sub-panel. The XRT light curve has been multiplied by 5 to bring it above the BAT light curve. Table 3 and in the text. See also Figure 3. In the Band model plots, α values which appear to be above and outside of the plotted range are those which reach and remain at the paramater bound α = 0 (see Section 3.1). Fig. 11.-Powerlaw fits to high signal-to-noise data (10 4 counts, 0.3-10.0 keV) simulated from a Band spectral model with α = −1 and β = −3. Each fit is statistically acceptable (χ 2 /ν ∼ 1). With the passage of the νF ν peak energy E peak , the best-fit photon index Γ steepens smoothly. 
An artificial increase in the inferred X-ray column density N H linearly proportional to Γ is observed for peak energies E peak in the XRT bandpass. The effect is present, with larger N H error bars, for spectra with few counts. Table 2, also shown in Figure 13, demonstrate a significant positive correlation between the column density parameter N H (observed minus Galactic) and the photon index Γ. Although these quantities are correlated for a given spectrum, we do not expect a correlation at different times for the same event (see below) or at any time for separate events as found here. This is evidence tying the X-ray flares to an excess spectral curvature at X-ray wavelengths. Table 2 (also Figure 12), the hardness ratio (HR), defined as the ratio of counts in the 1.3-10.0 keV band to the counts in the 0.3-1.3 keV band, correlates strongly (Kendall's τ K = 0.6) with the count rate (0.3-10.0 keV), following roughly a powerlaw relationship (left plot). There is a consistent and long known relation valid for a majority of pulses seen in GRBs (Golenetskii et al. 1983;Kargatis et al. 1995;Ford et al. 1995;Borgonovo & Ryde 2001). The hardness also correlates strongly with the fluence (right plot), as is also the case for GRBs (Liang & Kargatis 1996;Ryde 2005). That is, the hardness evolves more rapidly when the flares are brighter. (Table 2). The flare T 90 durations (left plot) and rise times (right plot) are systematically longer in the soft X-ray channel (left plot), by 8 ± 1% and 25 ± 5%, respectively. Norris et al. (1996); Fenimore et al. (1995); Fenimore, Madras, & Nayakshin (1996) discuss similar properties of GRB pulses. Note.-The quoted errors correspond to the 90% confidence region. The "Signif." column refers to the fit improvement significance relative to a simple powerlaw model, determined from a ∆χ 2 test. The quoted fluxes are unabsorbed. Fig. 2 . 2-The light curve for GRB 060124. The X-ray data (0.3-10.0 keV) are plotted in black. Photon Index Fig. 3 . 
3-Powerlaw (left panels) and Band model (right panels) fits to the GRBs 061121, 060124, Fig. 4 . 4-The hard X-ray BAT and XRT light curves for GRB 061121. The late-time light curve Fig. 5 . 5-The hard X-ray BAT and XRT light curves for GRB 060614. The late-time light curve (a plateau at t ∼ 200s followed by a decline beginning at t ∼ 4000s) is plotted in the sub-panel. - Fig. 9 .Fig 9-Hardness plots for GRBs 060526, 060729, 060904B, and 060929. (top plots) Hardness versus rate fits, indexed as a function of time, showing evolution along roughly parallel tracks. (middle plots) The X-ray light curve and fit (red curve) as source of the time indexing. (bottom plots) The X-ray light curve in each band (hard is red, soft is black) for each time segment and the hardness during each time segment. This hardness is well fit during the declines by the rate to a power close to 0.5 (dotted red curves). See also Figure 6. . 10.-As also discussed in Paper I, there is an outlier population of very soft Swift XRT afterglow time regions with respect to the majority population clustering near photon index Γ ∼ 2. Photon Index Fig . 12.-Time integrated spectral fits to the flares in Fig . 13.-During the decline phase of the X-ray flares from Fig . 14.-Timing statistics for the bright flares in Fig. 15 . 15-We believe N H variations are an incorrect explanation for the spectral evolution in the flaring, high−z GRB 050904. These data are coarsely grouped into three time intervals byCampana et al. (2006c) and fit to show a time-decreasing X-ray column density. At finer time resolution (left plot), we see that the N H parameter decreases toward the late time value before and after an unphysical increase. The maximum N H corresponds to E peak in the XRT band (right plot). The hardness during this period tracks the flux to the 0.6 ± 0.2 power (see also, Figure X inButler & Kocevski 2007b), consistent with a Band model spectrum evolving via the curvature-effect (Section 5.4). 
http://swift.gsfc.nasa.gov/docs/heasarc/caldb/swift/docs/xrt/spie05 romano.pdf 6 http://swift.gsfc.nasa.gov/docs/heasarc/caldb/swift/docs/xrt/xrt bias.pdf 7 http://swift.gsfc.nasa.gov/docs/swift/analysis/bat digest.html5 Table 2 : 227 Bright XRT FlaresGRB Time Region [s] GRB Time Region [s] 050502B 400-1200 050712 150-300 050730 130-300 050730 300-600 050730 600-800 050822 410-650 050904 350-600 051117A 1250-1725 051117A 800-1250 060111A 200-500 060124 300-650 060124 650-900 060204B 100-270 060204B 270-450 060210 100-165 060210 165-300 060210 350-450 060418 83-110 060607A 93-130 060607A 220-400 060714 100-125 060714 125-160 060714 160-230 060729 156-300 060904A 250-600 060904A 600-1000 060904B 140-300 Table 3 : 3E peak Evolution PropertiesGRB t • [s] Time Index Flux Index Data Points Fit 060124 510 −2.9 ± 1.3 −3.6 ± 1.7 2 060124 555 −2.2 ± 0.3 −1.8 ± 0.5 3 060124 567 −0.8 ± 0.1 −0.3 ± 0.1 4 060124 685 −2.2 ± 0.4 −1.2 ± 0.2 3 060526 240 −1.2 ± 0.1 −1.0 ± 0.1 5 060614 0 −2.1 ± 0.1 −0.72 ± 0.03 19 060729 75 −2.0 ± 0.5 −0.4 ± 0.1 5 060729 155 −0.7 ± 0.2 −0.4 ± 0.1 4 060904B 140 −1.3 ± 0.3 −0.7 ± 0.2 5 060929 470 −1.1 ± 0.2 −0.6 ± 0.1 7 Notes: Changes in the best-fit E peak with time are relative to the start t • . The start time is somewhat arbitrary, based on the approximate start of each pulse (or flare). Fig. 7.-Plots of the hardness and E peak evolution for flares after GRB 060714. The Band fits allow only a coarse time resolution, whereas the hardness study demonstrates a fine timescale changes in the spectrum which track the flux across flares. E peak evolution from Band model fits to the BAT and XRT data (top plot). Typical values for the photon indices (α,β) are given. Hardness versus rate fit (middle and bottom plots), indexed as a function of time, showing parallel evolution tracks. Time Since BAT Trigger [s] Time Since BAT Trigger [s]Fig. 8.-Powerlaw (left panels) and Band model (right panels) fits to the GRBs 060526, 060729, 060904B, and 060929. 
Time-correlated N H -Γ variations in the left plots are fit by spectral models with time-evolving E peak 's in the right plots. Trends in the Band model parameters, when observed, are fitted and presented in8 -7 -6 -5 -4 -3 XRT Rate (mag) 1 Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV] 0 1 2 3 4 5 6 7 8 9 10 60 80 100 120 140 160 180 200 Time Since BAT Trigger [s] -2 -4 -6 -8 XRT Rate (mag) 0 1 2 3 4 5 6 7 8 9 10 -7 -6 -5 -4 -3 -2 XRT Rate (mag) 1 Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV] 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 300 400 500 600 700 800 900 Time Since BAT Trigger [s] -2 -4 -6 -8 XRT Rate (mag) 0 1 2 3 4 5 6 7 8 910 11 12 1314 15 161718 19 20 21 22 1 10 100 1000 XRT Rate [1/s] 0.3-1.3 keV 1.3-10.0 keV 300 400 500 600 700 800 900 Time Since BAT Trigger [s] 1 Hardness Ratio Rate 1/2 -8 -7 -6 -5 -4 -3 XRT Rate (mag) 1 Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV] 0 1 2 3 4 5 6 7 8 9 10 11 12 100 200 300 400 500 Time Since BAT Trigger [s] -2 -4 -6 -8 XRT Rate (mag) 0 1 2 3 4 5 6 7 8 9 10 11 12 1 10 100 1000 XRT Rate [1/s] 100 200 300 400 500 Time Since BAT Trigger [s] 1 Hardness Ratio -2.0±3.2 0.71±0.05 0.4±0.1 1 10 100 1000 10000 XRT Rate [1/s] 0.3-1.3 keV 1.3-10.0 keV 60 80 100 120 140 160 180 200 Time Since BAT Trigger [s] 1 10 Hardness Ratio Rate 1/2 061121 060124 060614 Fig. 6.-The hardness evolution in GRBs 061121, 060124, and 060614. (top plots) Hardness versus rate fit, indexed as a function of time, showing evolution along roughly parallel tracks. (middle plots) The X-ray light curve and fit (red curve) as source of the time indexing. (bottom plots) The X-ray light curve in each band (hard is red, soft is black) for each time segment and the hardness during each time segment. This is well fit during the declines by the square-root of the rate (dotted red line) in GRBs 061121 and 060124 and by a power close to the square root of the rate for GRB 060614. 
0.1 1 10 100 1000 0 50 100 150 200 250 E peak [keV] Observer Frame Time Since BAT Trigger -15.365s [s] α ~ -1.5 β ~ -2.5 BAT XRT -6 -5 -4 -3 -2 XRT Rate (mag) 1 Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV] 0 1 2 3 4 5 6 7 8 9 10 11 100 150 200 250 Time Since BAT Trigger [s] -2 -3 -4 -5 -6 -7 XRT Rate (mag) 01 2 3 4 5 6 7 8 9 10 11 -5 -4.5 -4 -3.5 -3 -2.5 -2 -1.5 -1 Photon Index -Γ 0 2 4 6 8 10 12 14 140 160 180 200 220 240 260 280 300 N H [10 21 cm -2 ] Time Since BAT Trigger [s] Galactic Late-Time -5 -4 -3 -2 -1 0 Photon Index α β 1 10 140 160 180 200 220 240 260 280 300 E peak [keV] [t-140s] -1.3±0.3 -2.5 -2 -1.5 -1 Photon Index -Γ 0 1 2 3 4 5 6 7 500 550 600 650 700 750 800 N H [10 21 cm -2 ] Time Since BAT Trigger [s] Galactic Late-Time -3 -2.5 -2 -1.5 -1 -0.5 0 Photon Index α β 1 10 500 550 600 650 700 750 800 E peak [keV] Time Since BAT Trigger [s] [t-470s] -1.1±0.2 -3.5 -3 -2.5 -2 -1.5 -1 -0.5 0 Photon Index α β 1 10 240 260 280 300 320 340 360 380 400 E peak [keV] Time Since BAT Trigger [s] [t-240s] -1.2±0.1 -4 -3.5 -3 -2.5 -2 -1.5 -1 -0.5 Photon Index -Γ 0 2 4 6 8 240 260 280 300 320 340 360 380 400 N H [10 21 cm -2 ] Time Since BAT Trigger [s] Galactic Late-Time -5 -4.5 -4 -3.5 -3 -2.5 -2 Photon Index -Γ 0 1 2 3 4 5 6 7 8 140 160 180 200 220 240 260 280 300 N H [10 21 cm -2 ] Time Since BAT Trigger [s] Galactic Late-Time -5 -4 -3 -2 -1 0 Photon Index -(1.5±1.2)log[t-75s] α β 1 10 140 160 180 200 220 240 260 280 300 E peak [keV] [t-75s] -2.0±0.5 [t-155s] -0.7±0.2 GRB 060526 GRB 060729 GRB 060904B GRB 060929 Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV] Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV] Hardness [Cts 0.3-1.3 keV / Cts 1.3-10 keV]-7 -6 -5 -4 -3 -2 XRT Rate (mag) 1 0 1 2 3 4 5 6 7 8 9 10 11 12 250 300 350 400 450 Time Since BAT Trigger [s] -2 -3 -4 -5 -6 -7 -8 XRT Rate (mag) 0 12 3 4 5 6 7 8 9 10 11 12 1 10 100 1000 XRT Rate [1/s] 250 300 350 400 450 Time Since BAT Trigger [s] 1 Hardness Ratio 0.4±0.6 0.7±0.2 2.1±0.5 0.2±0.1 -8 -7 -6 -5 -4 -3 XRT Rate 
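Several of the captions above use the 1.3-10.0 keV to 0.3-1.3 keV counts ratio as a tracer of E_peak crossing the XRT bandpass. A toy calculation with the Band function makes the point: the ratio varies strongly while E_peak is in or near the band, and saturates once E_peak is well above it. This is an illustrative sketch only (the normalization, α = −1, β = −3, and the integration grid are our assumptions here, not the paper's fitting setup):

```python
import math

def band_photon_flux(E, alpha=-1.0, beta=-3.0, Epeak=5.0):
    """Band et al. (1993) photon spectrum in photons/keV, arbitrary normalization."""
    Ec = Epeak / (2.0 + alpha)       # e-folding energy of the low-energy component
    Ebreak = (alpha - beta) * Ec     # junction of the low- and high-energy segments
    if E < Ebreak:
        return E ** alpha * math.exp(-E / Ec)
    # High-energy powerlaw, matched continuously at Ebreak
    return Ebreak ** (alpha - beta) * math.exp(beta - alpha) * E ** beta

def band_counts(E1, E2, Epeak, n=2000):
    """Photon counts in the band [E1, E2] keV (midpoint rule)."""
    h = (E2 - E1) / n
    return sum(band_photon_flux(E1 + (i + 0.5) * h, Epeak=Epeak) for i in range(n)) * h

def hardness(Epeak):
    """XRT-style hardness ratio: 1.3-10.0 keV counts over 0.3-1.3 keV counts."""
    return band_counts(1.3, 10.0, Epeak) / band_counts(0.3, 1.3, Epeak)

for Ep in (0.5, 1.0, 3.0, 10.0):
    print(f"Epeak = {Ep:5.1f} keV  ->  hardness = {hardness(Ep):.2f}")
```

Consistent with the text, the ratio changes by an order of magnitude as E_peak moves from 0.5 to 10 keV, but only at the few-percent level once E_peak is far above the bandpass (both bands then sample the same powerlaw segment).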
Table 1. Selected Band or Powerlaw*Exponential Model Spectral Fits

GRB     Time [s]  α                    β                  E_peak [keV]        0.3-10 keV Flux [10^-9 erg cm^-2 s^-1]  χ²_ν (ν)    Signif.
060124  569-600   −1.23 ± 0.04         ...                108 (+∞, −22)       26.8 ± 1.0                              1.21 (154)  5.9σ
060124  400-569   −1.04 ± 0.03         −2.0 (+0.0, −0.1)  27.2 (+3.3, −1.6)   8.6 (+0.2, −0.1)                        1.01 (410)  10^−77
060124  720-770   −0.3 (+0.3, −0.6)    −2.2 ± 0.1         1.3 ± 0.2           5.8 (+0.3, −0.2)                        1.08 (158)  10^−12
061121  60-90     −1.12 (+0.01, −0.02) ...                270 (+∞, −40)       55.3 (+1.0, −1.3)                       1.07 (270)  9.3σ
061121  126-140   −0.0 (+0.0, −0.9)    −2.4 (+0.1, −0.2)  0.95 (+0.05, −1.0)  3.7 ± 0.2                               1.16 (117)  3.9σ
060614  97-111    −0.7 ± 0.1           −2.4 ± 0.1         8.6 ± 1.2           59 (+3, −2)                             0.86 (169)  10^−98
060614  237-297   −1.2 ± 0.2           −2.8 (+0.2, −0.3)  1.1 ± 0.1           3.2 ± 0.1                               0.97 (246)  10^−27

http://space.mit.edu/CXC/ISIS/

References

Band, D. L., et al. 1993, ApJ, 413, 281
Barthelmy, S. D., et al. 2005, Nature, 438, 994
Borgonovo, L., & Ryde, F. 2001, ApJ, 548, 770
Burrows, D. N., et al. 2005a, Science, 309, 1833
Burrows, D. N., et al. 2005b, Space Sci. Rev., 120, 165
Butler, N. R. 2007a, ApJ, 656, 1001 (Paper I)
Butler, N. R. 2007b, AJ in press, astro-ph/0611031
Butler, N. R., et al. 2007, in preparation
Butler, N. R., & Kocevski, D. 2007, in preparation
Campana, S., et al. 2006, Nature, 442, 1008
Campana, S., et al. 2006b, GCN #5162
Campana, S., et al. 2006c, ApJL accepted, astro-ph/0611305
Cusumano, G., et al. 2006, Nature, 440, 164
Dickey, J. M., & Lockman, F. J. 1990, ARAA, 28, 215
Fan, Y. Z., & Wei, D. M. 2005, MNRAS, 364, L42
Falcone, A., et al. 2006, ApJ, 641, 1010
Fenimore, E. E., et al. 1995, ApJ, 448, L101
Fenimore, E. E., Madras, C. D., & Nayakshin, S. 1996, ApJ, 473, 998
Fishman, G. J., et al. 1989, in Proc. GRO Science Workshop (Greenbelt: NASA GSFC), 2-39
Ford, L. A., et al. 1995, ApJ, 439, 307
Frontera, F., et al. 2000, ApJS, 127, 59
Gehrels, N., et al. 2004, ApJ, 611, 1005
Gehrels, N., et al. 2006, GCN #5839
Golenetskii, S. V., et al. 1983, Nature, 306, 451
Granot, J., Piran, T., & Sari, R. 1999, ApJ, 513, 679
Grupe, D., et al. 2006a, GCN #5365
Grupe, D., et al. 2006b, GCN #5505
Holland, S. T., et al. 2006, GCN #4570
Ioka, K., et al. 2005, ApJ, 631, 429
Kaneko, Y., et al. 2006, ApJS, 166, 298
Kargatis, V. E., et al. 1995, Ap&SS, 231, 177
Kocevski, D., Ryde, F., & Liang, E. 2003, ApJ, 596, 389
Kocevski, D., & Butler, N. R. 2007, in preparation
Kocevski, D., et al. 2007, in preparation
Krimm, H. A., et al. 2006, GCN #5311
Kumar, P., & Panaitescu, A. 2000, ApJ, 541, L51
Lazzati, D., & Begelman, M. C. 2006, ApJ, 641, 972
Liang, E., & Kargatis, V. 1996, Nature, 381, 49
Liang, E. W., et al. 2006, ApJ, 646, 351
O'Brien, D. P., et al. 2006, ApJ, 647, 1230
Markwardt, C. B., et al. 2006, GCN #5654
Moretti, A., et al. 2006, A&A, 451, 777
Norris, J. P., et al. 1996, ApJ, 459, 393
Norris, J. P. 2002, ApJ, 579, 386
Nousek, J. A., et al. 2006, ApJ, 642, 389
Pagani, C., et al. 2006, ApJ, 645, 1315
Page, K. L., et al. 2006, GCN #5823
Panaitescu, A., et al. 2006, MNRAS, 366, 1357
Panaitescu, A. 2007, MNRAS submitted, astro-ph/0612170
Parsons, A. M., et al. 2006, GCN #5252
Preece, R. D., et al. 2000, ApJS, 126, 19
Protassov, R., et al. 2002, ApJ, 571, 545
Qin, Y.-P., et al. 2004, ApJ, 617, 439
Qin, Y.-P., & Lu, R. J. 2005, MNRAS, 362, 1085
Rees, M. J., & Mészáros, P. 1994, ApJ, 430, L93
Rol, E., et al. 2006, MNRAS accepted, astro-ph/0611554
Romano, P., et al. 2006a, A&A, 450, 59
Romano, P., et al. 2006b, A&A, 456, 917
Ryde, F., & Petrosian, V. 2002, ApJ, 578, 290
Ryde, F. 2005, A&A, 429, 869
Sakamoto, T., et al. 2003, ApJ, 602, 875
Sari, R., & Piran, T. 1997, ApJ, 485, 270
Sari, R., Piran, T., & Narayan, R. 1998, ApJ, 497, L17
Scargle, J. D. 1998, ApJ, 504, 405
Shen, R. F., Song, L. M., & Li, Z. 2005, MNRAS, 362, 59
Starling, R. L. C., et al. 2005, A&A, 442, 21
Tagliaferri, G., et al. 2005, Nature, 436, 985
Vaughan, S., et al. 2006, ApJ, 638, 920
Willingale, R., et al. 2006, astro-ph/0612031
Wijers, R. A. M. J., & Galama, T. J. 1999, ApJ, 523, 177
Woods, E., & Loeb, A. 1999, ApJ, 523, 187
Yamazaki, R., et al. 2006, MNRAS, 369, 311
Zhang, B., et al. 2006, ApJ, 642, 354
Zhang, B.-B., Liang, E. W., & Zhang, B. 2006, astro-ph/0612246
GEODESICS ORBITING A SINGULARITY

Daniel Grieser, Jørgen Olsen Lye

arXiv:2304.02895, https://export.arxiv.org/pdf/2304.02895v1.pdf

Abstract. We study the behaviour of geodesics on a Riemannian manifold near a generalized conical or cuspidal singularity. We show that geodesics entering a small neighbourhood of the singularity either hit the singularity or approach it to a smallest distance δ and then move away from it, winding around the singularity a number of times. We study the limiting behaviour δ → 0 in the second case. In the cuspidal case the number of windings goes to infinity as δ → 0, and we compute the precise asymptotic behaviour of this number. The asymptotics have explicitly given leading term determined by the warping factor that describes the type of cuspidal singularity. We also discuss in some detail the relation between differential and metric notions of conical and cuspidal singularities.
Figure 1. Illustrations of Theorem C for a conical metric f(r) = r (above) and a cuspidal metric f(r) = r² (below). The right hand pictures show the product space (0, R) × Y while the left hand pictures illustrate the geometric situation. In all cases, R = 1.5 and δ = 0.3. The r resp. z direction is upwards. Only the downward moving part of the geodesic is shown in the cuspidal case.

Introduction

Geodesics in Riemannian manifolds are among the most fundamental objects of differential geometry. Besides their intrinsic interest as locally shortest curves, or as trajectories of a free particle, they are used to define normal coordinates, which are of great utility. Also, they play an essential role in studying solutions of the wave equation, whose singularities propagate along geodesics (see [Hör71], for instance). In this paper we study the behaviour of geodesics near an isolated conical or cuspidal singularity of a Riemannian space.
The precise definition of such singularities is given below, but for a first illustration of our results consider the example of the surface in R³ generated by rotating the curve x = z², z ≥ 0, around the z-axis, see the bottom left picture in Figure 1. It has a cuspidal singularity at the origin p = 0. Consider the geodesics entering the neighbourhood U = {z < 1} of p at any given point. It is quite obvious that one of these will hit p after a finite time. Any other geodesic will move downward, reach a lowest point at z = δ say, then move up again, and finally leave U (this can be seen using the classical Clairaut integral, for example, but we will prove it in greater generality). Inside U this geodesic will wind around p a number of times, as illustrated in Figure 1. In this special case, our results (see Theorem C) imply that the number of windings is asymptotic to (C/4π) δ⁻¹ as δ → 0, with C = ∫_{−π/2}^{π/2} √(cos ϑ) dϑ ≈ 2.4.

Our setting and results are more general than this example in the following ways: we allow more general profile functions x = f₀(z) instead of z², including f₀(z) = z and f₀(z) = e^{−1/z}, for example. We also consider the natural generalization of rotation surfaces to product manifolds (0, R) × Y in any dimension with warped product metrics, where the cross section Y is any closed Riemannian manifold (generalizing Y = S¹ in the surface case, where f₀ roughly corresponds to the warping function), and we also allow perturbations of such warped products. See below for the precise setting. Our Main Theorems A, B, C are stated in Subsection 1.2 below. They can be roughly summarized as follows.

Theorem 1 (Rough summary of all results). Consider a space with a conical or cuspidal singularity p, with cross section Y. Any geodesic γ in a neighbourhood of p is either radial, i.e. it hits the singularity, or it approaches p, to a smallest distance δ, and then moves away from it, winding around p.
For a winding geodesic γ with small δ we have:
• The distance of γ to p behaves as for a radial geodesic up to an error of order δ.
• The Y-component of γ closely follows a geodesic in Y.
• The length of this geodesic in Y (generalizing the number of windings in the case Y = S¹) is asymptotic (as δ → 0) to C_f/f'(δ), for a constant C_f only depending on f.

1.1. Setting. We now give a rough definition of the spaces that we consider in this paper. We refer to Section 2 for the precise definitions of a Riemannian space with isolated conical or cuspidal singularity along with technical assumptions. The key point is that the metric has generalized warped product form near the singularity, which we define as follows.

Definition 2. Let Y be a compact manifold and R > 0. A generalized warped product metric on (0, R) × Y is a smooth Riemannian metric of the form, with r the coordinate on (0, R),

(1) g = dr² + f(r)² h_r, h_r = h(r, y) dy²

where f : (0, R) → (0, ∞) is continuously differentiable and h_r is a Riemannian metric on Y for each r ∈ [0, R). The function f is called the warping function. If h does not depend on r then g is called a warped product metric.

Of course, in (1) the factor f² could be incorporated into h, but we will be interested in the case where f(r) → 0 as r → 0 while h stays non-degenerate. In this case distances between points (r, y) and (r', y') tend to zero as r → 0, r' → 0, so geometrically one may complete the metric at r = 0 by adding a single point, which we call the singularity. We call the resulting metric space including the singularity a Riemannian space with isolated singularity. Figure 1 illustrates the idea.

Definition 3. In the setting of Definition 2, assume the function f extends differentiably to [0, ∞), with f(0) = 0. If f'(0) > 0 then we speak of a conical singularity, while if f'(0) = 0 then we speak of a cuspidal singularity.

1.2. Results.
We give a detailed description of the geodesics in the pointed neighbourhood (0, R) × Y of the conical/cuspidal singularity. We write geodesics as

γ : I → (0, R) × Y, γ(t) = (r(t), y(t)), I ⊂ R an interval,

and assume unit speed, |γ̇|_g ≡ 1, i.e. ṙ² + f(r)² |ẏ|² ≡ 1, where a dot always indicates the derivative with respect to t and |ẏ| is the length with respect to h_r. First, we have the following dichotomy.

Theorem A (Radial and winding geodesics). Let g be a generalized warped product metric on (0, R) × Y as in (1). Then any geodesic is of one of the following types.
radial: y is constant, and ṙ is constant equal to 1 or −1.
winding: ẏ(t) ≠ 0 for all t.
If the warping function satisfies (6) (e.g. in the case of an isolated conical or cuspidal singularity) and if (8) is satisfied, then any maximal winding geodesic γ : I → (0, R) × Y satisfies in addition:
(a) The function t → r(t) is strictly convex.
(b) γ has finite length, i.e. I = (T₋, T₊) with T± ∈ R.
(c) γ enters and leaves at r = R, i.e. r(t) → R as t → T±.
In the warped product case, t → y(t) is a time reparametrised geodesic of (Y, h).

In the sequel, all geodesics will be assumed to be maximal, i.e. their domain I cannot be enlarged. We now give a more precise description of the winding geodesics γ = (r, y). By (a) and (c) above, r assumes its minimum at a unique time, which we may and will always take to be t = 0. We write δ = r(0) for the minimum. Since geodesics are uniquely determined by their initial point and direction, we get a parametrisation

(2) (0, R) × SY → {winding geodesics}, (δ, y₀, v₀) → γ_{δ,y₀,v₀}

where SY = {(y, v) ∈ TY : |v|_{h₀} = 1} is the unit tangent bundle and γ_{δ,y₀,v₀} is the maximal geodesic starting at (δ, y₀) in direction (0, v₀). The next two theorems describe the behaviour of winding geodesics as δ → 0. Theorem B describes how the radial component r_δ behaves, including a comparison theorem.
Theorem C describes the Y-component y_δ, in particular the asymptotics of its length.

Theorem B (Radial component of geodesics). Let (X, d) be a Riemannian space with an isolated conical or cuspidal singularity, and assume R is small enough so that (8) is satisfied.
(a) Let γ_δ = (r_δ, y_δ) : I_δ → (0, R) × Y, δ ∈ (0, R), be any family of unit speed geodesics, with min_t r_δ(t) = r_δ(0) = δ > 0, with maximal interval of existence I_δ. As δ → 0 we have I_δ → (−R, R) (i.e. the endpoints of I_δ converge to ±R) and

r_δ(t) → |t| for all t ∈ (−R, R).

(b) In the case of a warped product with convex warping function f, the following comparison principle holds. If γ, γ̄ are two unit speed geodesics which both reach their lowest point at t = 0 and r(0) < r̄(0), then r(t) < r̄(t) for all time.

In fact, we have precise error estimates for the convergence in (a), see Lemma 14. Note that combining (a) and (b) we obtain r(t) > |t| for all t in the warped product case, for any winding geodesic. This is not obvious even for a conical surface.

Theorem C (Angular component of geodesics). Assume the setup of Theorem B, and let γ_δ = (r_δ, y_δ) be a family of geodesics as in part (a) of that theorem. Assume that the warping function f satisfies the non-oscillation condition (7).
(a) The length of the Y-projection of γ_δ, ℓ(y_δ) := ∫_{I_δ} |ẏ_δ(t)|_h dt, satisfies

(3) ℓ(y_δ) ∼ C_f/f'(δ), where C_f := ∫_{−π/2}^{π/2} F⁻¹(cos ϑ) dϑ

with F defined in (7).
(b) Suppose that y_δ(0) → y₀ and f(δ) ẏ_δ(0) → v₀ as δ → 0 for some (y₀, v₀) ∈ SY. Let ỹ : Ĩ → Y be the unit speed geodesic in (Y, h₀) satisfying ỹ(0) = y₀, ỹ'(0) = v₀, where Ĩ = (−π/2, π/2) in the conical case with f'(0) = 1, and Ĩ = R in the cuspidal case. Then y_δ converges after unit speed reparametrisation to ỹ, uniformly on compact subsets of Ĩ.
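For the model family f(r) = r^k over Y = S¹, the Clairaut integral lets one evaluate the constant in (3) in closed form: substituting cos ϑ = f(δ)/f(r) and letting δ → 0 gives C_{r^k} = ∫_{−π/2}^{π/2} cos(ϑ)^{1−1/k} dϑ. This is our own model-case computation, not an excerpt from the paper, but it reproduces both values quoted here: C_{r¹} = π (conical case) and C_{r²} ≈ 2.4 (the cusp from the introduction). A quick numerical check:

```python
import math

def C_k(k, n=200_000):
    """Midpoint-rule evaluation of C_{r^k} = integral over (-pi/2, pi/2)
    of cos(theta)^(1 - 1/k) d(theta)."""
    h = math.pi / n
    return sum(math.cos(-math.pi / 2 + (i + 0.5) * h) ** (1.0 - 1.0 / k)
               for i in range(n)) * h

print(C_k(1))  # conical case: equals pi
print(C_k(2))  # cusp f(r) = r^2: about 2.3963, the "2.4" of the introduction
print(C_k(3))  # steeper cusps give smaller constants (but 1/f'(delta) blows up faster)
```

The constants decrease monotonically from π (k = 1) toward 2 as k → ∞, while the factor 1/f'(δ) = 1/(k δ^{k−1}) in (3) diverges faster, so the winding length still grows without bound for every cusp.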
More explicitly, the statement in (b) is: Define rescaled time τ by τ(0) = 0 and

(4) dτ/dt = |ẏ_δ|.

Then for Ĩ_δ = τ(I_δ) and ỹ_δ : Ĩ_δ → Y defined by ỹ_δ(τ(t)) = y_δ(t) we have, as δ → 0,

Ĩ_δ → Ĩ, and ỹ_δ → ỹ uniformly on compact subsets of Ĩ.

The asymptotic length in (3) behaves as follows, as δ → 0:

C_f/f'(δ) → π/f'(0) (conical case), C_f/f'(δ) → ∞ (cuspidal case).

This shows that parts (a) and (b) of the theorem are consistent, since |Ĩ_δ| = ℓ(y_δ). In particular, Theorem C(b) implies that every compact segment of a geodesic on Y (having length less than π/f'(0) in the conical case) is the uniform limit of the Y-parts of some family (γ_δ) of geodesics on X.

Remark 4.
(1) It is worth stressing that the constant on the right hand side of (3) only depends on f, and not on h, the dimension, or any detailed description of γ_δ. We discuss the function F in more detail in Section 5, including bounds and several examples.
(2) In the case of Y = S¹ of length 2π, Equation (3) means that a geodesic which almost hits the singularity winds roughly (C_f/2π) · (1/f'(δ)) times around the singularity before leaving. See Figure 1 for a couple of illustrations.
(3) The assumed convergence of the initial conditions of y_δ in part (b) of Theorem C is always satisfied (after passing to subsequences) by compactness.

1.3. Methods, outline of proofs. We formulate the geodesic equations as a Hamiltonian system of first order ODEs. In the warped product case, the equations for r completely decouple and depend only on f. The motion in the r-direction then determines the speed in the Y-direction via the unit speed condition. This allows us to study the asymptotic behaviour quite explicitly. In the generalized warped case, the equations of motion almost decouple. In particular, the leading order behaviour turns out to be the same as in the warped case, and we derive bounds which imply that the generalized warped case has the same asymptotic behaviour as the warped one.
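The rotation-surface example from the introduction can be checked against this explicit warped-product analysis. For the exact warped product dr² + r⁴ dφ² (i.e. f(r) = r², Y = S¹ of length 2π), the Clairaut relation f(r)² φ̇ = f(δ) reduces the winding count to a one-dimensional integral; the change of variables cos ϑ = f(δ)/f(r) gives Δφ = 2∫₀^{ϑ_R} dϑ/f'(r(ϑ)). This is a numerical sketch of our own (R = 1.5 as in Figure 1), not code from the paper:

```python
import math

def windings(delta, R=1.5, n=200_000):
    """Winding number of the geodesic with closest approach delta for the
    warped product dr^2 + r^4 dphi^2 (model cusp f(r) = r^2).
    After the substitution cos(theta) = (delta/r)^2 in the Clairaut
    integral, 1/f'(r(theta)) = sqrt(cos(theta)) / (2*delta)."""
    theta_R = math.acos((delta / R) ** 2)
    h = theta_R / n
    integral = sum(math.sqrt(math.cos((i + 0.5) * h)) for i in range(n)) * h
    return integral / (2 * math.pi * delta)   # Delta_phi / (2 pi)

C = 2.3963  # integral of sqrt(cos) over (-pi/2, pi/2)
for delta in (0.1, 0.01, 0.001):
    print(delta, windings(delta) * delta, C / (4 * math.pi))
```

The product (winding number) · δ settles to C/(4π) ≈ 0.19 as δ → 0, matching the asymptotics claimed for the introductory example.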
The structure of the paper is as follows. We start by describing the geometry of the spaces we work with in some more detail in Section 2. We proceed by stating the geodesic equations as a Hamiltonian system in Section 3. Here we also introduce several useful variables and deduce a Clairaut-like relation. Section 4 is the heart of the paper, where we analyse the equations of motion and prove Theorems A, B and C. Section 5 deals with the constant C_f appearing in the length asymptotics (3), including explicit computations of it and the necessity of assuming convexity and (7). Section 6 relates our notion of conical/cuspidal spaces to metric-free definitions of such spaces. Here we also present some families of spaces for which our assumptions hold.

1.4. Related work and further remarks. Our results are new only in the cuspidal case but give a unified treatment of the conical and cuspidal cases. The conical case was treated by Melrose and Wunsch in [MeWu04, Definition 1.4, Lemma 1.5] in the context of wave propagation, after early work by Stone [Sto82]. The first author discusses conical metrics in detail in [Gri11]. The family of geodesics hitting the singularity was analysed for a more general class of k-cuspidal (i.e. f(r) = r^k, k ≥ 2) metrics - those arising from differential k-cuspidal singularities, see Subsection 6.2 and in particular (45) - by Grandjean and the first author [GrGr15]. Previously, Bernig and Lytchak [BL07] obtained first order information for geodesics on general real algebraic sets X ⊂ R^n, by showing that any geodesic reaching a singular point p of X in finite time must have a limit direction at p. For more on warped products and geodesics, one can consult [BiO'N69, Section 7] and [O'Ne83, Chapter 7].

2. Generalized Warped Product Geometry

We start by being a bit more precise about the family of metrics h_r in our definition of generalized warped products.

Assumption 5 (Regularity assumption).
The metrics h_r in the generalized warped product metric of Definition 2 are uniformly positive definite and depend uniformly C¹ on r. These uniformity conditions are equivalent to
(5) −2ch ≤ ∂_r h ≤ 2ch for all r ∈ (0, R),
for some constant c ≥ 0. This implies that the family extends to a continuous family of metrics on r ∈ [0, R]. Next, we introduce singular Riemannian spaces whose singularities are modelled on generalized warped products as defined above.
Definition 6. A metric space (X, d) is called a Riemannian space with an isolated conical or cuspidal singularity at p ∈ X if
• X \ {p} is a smooth manifold, and the metric d is induced by a Riemannian metric g_X on X \ {p};
• there is a neighbourhood U of p in X, a number R > 0 and a compact manifold Y, and a continuous map β : [0, R) × Y → U which sends {0} × Y to p and restricts to an isometry ((0, R) × Y, g) → (U \ {p}, g_X), where g is a generalized warped product metric of the form (1).
See Section 6 for examples, in particular for the proof that surfaces of revolution as indicated above fit into this framework, and for the relation of this notion of conical/cuspidal singularity to other natural such notions.
Remark 7. • As remarked above, the warping function f is not uniquely determined by the metric g: Replacing f by af and h by a^{−2}h, where a is positive and C¹ on [0, R), yields the same metric. This is the only freedom, by the conditions on h. Thus, if we call two warping functions equivalent if they differ by such a factor a, then g determines the equivalence class [f] of f, and it makes sense to speak of a singularity of type [f].
• If f vanishes to finite order k at zero and is C^{k+1} on [0, R) then it is equivalent to f_k(r) = r^k.
• We use convexity of f in several arguments. However, if f is non-convex but there is an equivalent function af which is convex then it follows that Theorems A, B(a) hold verbatim and Theorem C holds with f replaced by af. For instance, this is true in the C² conical case.
Note that convexity of f implies f′(r) > 0 for r > 0. The example of Section 5.3 demonstrates that the asymptotics of Theorem C can change without convexity. Convexity of f implies that the angle θ between a geodesic and the level sets {r} × Y increases in r, see (20). See also Remark 11 for an interpretation of the convexity of f in terms of curvature. Some classes of functions f to have in mind are
• f(x) = x^α for α ∈ [1, ∞),
• f(x) = exp(−(ln(1/x))^µ) for µ ∈ [1, ∞),
• f(x) = exp(−1/x^β) for β ∈ (0, ∞).
(The first and second family overlap when α = 1 = µ. All three families are convex for small values of x. More precisely, the second family is convex as long as y = ln(1/x) satisfies µy^µ − y − (µ − 1) ≥ 0, and the third family is convex when x ≤ (β/(β+1))^{1/β}. Both can be achieved by shrinking R.)
We will identify U \ {p} with (0, R) × Y. The map β should be thought of as a generalization of polar coordinates: For X = R² the map β : [0, ∞) × S¹ → R², (r, φ) → (r cos φ, r sin φ) is an isometry over r > 0 for the metric dr² + r² dφ² on (0, ∞) × S¹ (where S¹ = R/2πZ) and the Euclidean metric on R² \ {0}. So 0 ∈ R² can be considered a conical singularity in this sense, and the same is true for any smooth point of a Riemannian manifold (with Y equal to the sphere). The space [0, ∞) × Y is sometimes called the blow-up of X in p, and the map β the blow-down map. Note that, while g is a Riemannian metric in r > 0, it is only positive semi-definite at r = 0, with any two points at r = 0 having distance zero with respect to g. This reflects the fact that the map β crunches all points at r = 0 to the single point p. The warping function determines the 'speed' at which the crunching happens as r → 0. Also, note that the form (1) of the metric implies that r is the distance to the singularity p.
Remark 8. There is no unique answer to the question what the 'correct' definition of a Riemannian space with an isolated singularity is.
While our notion of singularity is quite general in terms of f and h_r, it is somewhat restrictive in requiring that there are no mixed terms in the metric (1), i.e. that coordinates r, y can be chosen so that the lines y = const are perpendicular to the hypersurfaces r = const. See Section 6.2.2 for a natural example where this is not satisfied. We leave it for future work to analyse the geodesic flow for some of these more general metrics.
For Theorem C, we need f to satisfy the following condition:
Assumption 9 (Non-oscillation condition). For the inverse function F := f^{−1} the limit
(7) F(σ) := lim_{ε→0} F′(σε)/F′(ε)
exists for σ ∈ [1, ∞), locally uniformly in σ.
In the generalized warped case we also assume a smallness condition (8) on the perturbation; of course this can be achieved by shrinking R. The condition (8) is used to prove (20) below, which in turn gets used implicitly several times.
Remark 11. The small perturbation condition along with the convexity of f has an interpretation in terms of the mean curvature of the level sets of r: A standard computation shows that the mean curvature vector of {r} × Y ⊂ (0, R) × Y is H = H_0 ∂_r where
H_0 = −dim(Y) f′(r)/f(r) − ½ Tr_h(∂_r h).
By the bound (5), we can bound H_0 as
−dim(Y)(f′(r)/f(r) + c) ≤ H_0 ≤ −dim(Y)(f′(r)/f(r) − c).
The convexity of f and f(0) = 0 imply f′(r)/f(r) ≥ 1/r ≥ 1/R (see Lemma 13), so the small perturbation condition implies a negative upper bound for the scalar mean curvature, H_0 ≤ −(dim(Y)/R)(1 − Rc) < 0. We will not use this geometric interpretation directly.
Geodesic equations
In this section we analyse the geodesic equations for generalized warped product metrics of the type (1), but without extra conditions on the warping function f. 3.1. Review of the Hamiltonian approach. We use the Hamiltonian description of the geodesic flow.
Recall that this means that (constant speed) geodesics on a Riemannian manifold (Y, h) are the projections to Y of curves in the cotangent bundle T * Y , which are the integral curves of the Hamiltonian vector field for the Hamilton function H h : T * Y → R, H h (y, η) = 1 2 |η| 2 hy , η ∈ T * y Y where by h y we also denote the metric on T * y Y dual to the metric h y on T y Y . In coordinates, |η| 2 hy = h ij (y)η i η j where (h ij ) is the inverse matrix of (h ij ). Often we write simply |η| h or |η|. The Hamilton vector field is, in coordinates, X h = ∂H h ∂η ∂ y − ∂H h ∂y ∂ η so its integral curves, also called lifted geodesics, are solutions t → (y(t), η(t)) of the system of differential equationṡ y = ∂H h ∂η (y, η)η = − ∂H h ∂y (y, η) . Geodesics are then the y(t) parts of solutions of this system. It is a basic fact that the Hamiltonian function is constant along integral curves, i.e. H h (y(t), η(t)) = const. Note that ∂H h ∂η (y, η) = η where η → η , T * Y → T Y is the isomorphism induced by the metric. In coordinates, (η ) i = h ij η j . So for a geodesic t → y(t) its lift is the curve (y(t), η(t)) where η (t) =ẏ(t), and this explains the relationship of the Hamiltonian approach to geodesics (with lift in the cotangent bundle) to the more standard approach where the lift is the curve (y(t),ẏ(t)) in the tangent bundle. 3.2. Geodesic equations for generalized warped product metrics. We apply this general discussion with Y replaced by (0, R) × Y , with the metric g in (1). Points are denoted (r, y), and the dual cotangent variables are denoted (ξ, η), so ξ ∈ R, η ∈ R n−1 in coordinates. The Hamilton function is (9) H g = 1 2 ξ 2 + |η| 2 h f (r) 2 As mentioned above, H g is constant along integral curves. We will always consider unit speed geodesics, i.e. integral curves lying on the hypersurface H g = 1 2 , or (10) 1 = ξ 2 + |η| 2 h f (r) 2 We calculate the Hamilton vector field for H g . 
Since H_g depends on y, η only through |η|²_h we have
(11) X_g = X_rad + (1/f(r)²) X_h
where X_rad = (∂H_g/∂ξ)∂_r − (∂H_g/∂r)∂_ξ governs the radial motion and X_h is the Hamilton vector field of H_h, hence governs the motion in the Y directions. Recall that the latter depends parametrically on r since h does. We first observe that X_g is tangential to the submanifold {η = 0} since the ∂_η coefficient, which is −f(r)^{−2} ∂|η|²_{h_y}/∂y, vanishes at η = 0. This implies that any integral curve of X_g satisfies either η(t) = 0 for all t or η(t) ≠ 0 for all t. The Hamilton equations for ṙ and ẏ are
ṙ = ∂H_g/∂ξ = ξ,  ẏ = ∂H_g/∂η = (1/f(r)²) ∂H_h/∂η = (1/f(r)²) η♯.
From this we deduce the first part of Theorem A: Geodesics with η ≡ 0 have constant y, and ṙ = ξ ≡ ±1 by the unit speed condition (10), so they are radial. All other geodesics have ẏ(t) ≠ 0 for all t, so they are winding. We now consider winding geodesics, i.e. η(t) ≠ 0 for all t, and derive the full Hamilton equations for them. Calculating
∂H_g/∂r = (|η|²_h/f(r)²) (−f′(r)/f(r) + ∂_r|η|/|η|)
and using (10) we get
(12) X_rad = ξ∂_r + (1 − ξ²) (f′(r)/f(r) − ∂_r|η|/|η|) ∂_ξ.
We write ∂_r|η|/|η| in small font to emphasize that it is to be considered as a small perturbation (it vanishes in the warped product case). Correspondingly, the lifted geodesics are solutions t → (r(t), y(t), ξ(t), η(t)) of the system
(13) ṙ = ξ,  ξ̇ = (1 − ξ²)(f′(r)/f(r) − ∂_r|η|/|η|),
(14) ẏ = f(r)^{−2} ∂_η H_h,  η̇ = −f(r)^{−2} ∂_y H_h.
By (10) the variable ξ is constrained to lie in [−1, 1], so we may introduce a new variable θ (i.e. coordinate on the hypersurface H_g = 1/2) by
ξ = sin θ,  θ ∈ [−π/2, π/2].
This will simplify the calculations below. The equations (13) then turn into
(13') ṙ = sin θ,  θ̇ = (f′(r)/f(r) − ∂_r|η|/|η|) cos θ,
and the unit speed condition (10) turns into
(15) f(r) cos θ = |η|.
Note that ṙ = sin θ, |γ̇| = 1 implies that θ is the angle between γ̇ and the 'circle of latitude' {r} × Y. 3.3. The warped product case; Clairaut's integral.
In the warped product case, i.e. where h does not depend on r, two simplifications happen: the vector field X_rad is independent of η since the term ∂_r|η| vanishes, and the vector field X_h is independent of r. This means that lifted geodesics t → (r(t), θ(t), y(t), η(t)) are given as follows:
(1) (r(t), θ(t)) solves the system (13');
(2) (y(t), η(t)) is an integral curve for the Hamilton vector field X_h, but with time reparametrised using the factor f(r)^{−2}, with r = r(t) obtained in step 1.
Here we use that the vector field X_h and the time-dependent vector field (1/f(r(t))²) X_h on Y have the same integral curves except for time reparametrisation. We also see that, along each integral curve,
(16) f(r) cos θ = const,
either by direct calculation from (13') or using (15) and the fact that |η| is constant along integral curves for X_h. The relation (16) completely determines the solutions of the (r, θ) system up to time parametrisation. The expression on the left of (16) is known as Clairaut's integral in the context of surfaces of revolution.
3.4. Estimates for the deviation from the warped product case. If h depends on r then the 'Clairaut integral' f(r) cos θ is not constant along geodesics, but we can estimate its variation. By (15) the Clairaut integral equals |η|, so we consider this quantity. Note that |η| = |η|_h depends on r through h. We recall that Assumption 5 says
(5') −2ch ≤ ∂_r h ≤ 2ch
for a constant c ≥ 0.
Lemma 12. With c as above, we have for any η ∈ T*Y \ {0}:
(17) −c ≤ ∂_r|η|/|η| ≤ c.
Furthermore, along any lifted geodesic,
(18) |d/dt log |η|| ≤ c |ṙ|.
Proof. We multiply (5') by h^{−1} from left and right to get −2ch^{−1} ≤ ∂_r h^{−1} ≤ 2ch^{−1} and therefore −2c|η|² ≤ ∂_r|η|² ≤ 2c|η|². This implies (17). Lifted geodesics are integral curves of X_g, and along such a curve we have
d/dt H_h = (∂_y H_h)ẏ + (∂_η H_h)η̇ + (∂_r H_h)ṙ = (∂_r H_h)ṙ
using (14), and this gives d/dt |η| = (∂_r|η|) sin θ, from which we deduce (18) using (17).
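The warped-product picture can be checked numerically. The following sketch (ours, not from the paper; the function name `integrate_warped` is an invention) integrates the radial system (13') with ∂_r|η| = 0 by a standard RK4 step for the cone f(r) = r, and verifies that Clairaut's integral f(r) cos θ stays constant and that r(t) matches the explicit conical solution √(t² + δ²).

```python
import math

def integrate_warped(f, fprime, delta, t_end, n_steps=20_000):
    """RK4 for the warped-product radial system (13') with h independent of r:
    r' = sin(theta), theta' = (f'(r)/f(r)) cos(theta), r(0) = delta, theta(0) = 0."""
    dt = t_end / n_steps
    r, theta = delta, 0.0

    def rhs(r, theta):
        return math.sin(theta), fprime(r) / f(r) * math.cos(theta)

    for _ in range(n_steps):
        k1r, k1t = rhs(r, theta)
        k2r, k2t = rhs(r + 0.5 * dt * k1r, theta + 0.5 * dt * k1t)
        k3r, k3t = rhs(r + 0.5 * dt * k2r, theta + 0.5 * dt * k2t)
        k4r, k4t = rhs(r + dt * k3r, theta + dt * k3t)
        r += dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        theta += dt / 6 * (k1t + 2 * k2t + 2 * k3t + k4t)
    return r, theta

# Cone f(r) = r, delta = 0.1: Clairaut (16) gives r cos(theta) == f(delta),
# and the explicit solution is r(t) = sqrt(t^2 + delta^2).
r, theta = integrate_warped(lambda r: r, lambda r: 1.0, 0.1, 1.0)
print(r * math.cos(theta))  # stays at f(delta) = 0.1 up to integration error
```

For other convex f the same routine applies; only the closed-form comparison is special to the cone.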
Proofs of the main theorems
In this section we use the results from the previous section in combination with the properties (6) of f to prove Theorems A, B, and C. First, we supplement the estimates in Lemma 12 by estimates that use the special properties of f.
Lemma 13. A warping function f as in (6) satisfies
(19) f′(r)/f(r) ≥ 1/r.
Furthermore, if R < 1/c (with c satisfying (5)) then along any lifted winding geodesic we have
(20) θ̇ ≥ (1/r − c) cos θ > 0.
Proof. Convexity of f and f(0) = 0 give f(r) ≤ r f′(r), which is (19). Next, (13'), (19) and (17) imply θ̇ ≥ (1/r − c) cos θ. Also cos θ > 0 for winding geodesics, which gives the second claim.
Proof of Theorem A. By the Clairaut-like relation (15), we see that cos θ ≠ 0 for winding geodesics. From ṙ = sin θ and (20) we get r̈ = θ̇ cos θ > 0, so r is strictly convex. Then (b), (c) follow easily since geodesics exist as long as r < R. The statement about reparametrisation in the warped product case was explained in Section 3.3.
For the proof of Theorem B it is useful to introduce two new variables ρ and u (i.e. functions on (0, R) × Y, resp. its cotangent bundle), defined by the identities:
(21) ρ = f(r),  ρ sin θ = f(u).
We will use the following bounds on the variation of η and r along a geodesic.
Lemma 14. Assume R < 1/c, with c defined in (5). Let γ_δ be a family of geodesics as in Theorem B(a). Then
(22) f(δ) e^{−c(r_δ−δ)} ≤ |η_δ| ≤ f(δ) e^{c(r_δ−δ)},
(23) (1 − Cδ)|t| ≤ r_δ(t) ≤ |t| + δ
hold for all time t and all δ. Here C = c e^{cR}.
Proof of Lemma 14. We will drop all δ-subscripts, writing r = r_δ and so on. We may assume t > 0. Then θ > 0 since θ(0) = 0 and θ̇ > 0 by (20). We first prove (22). We rewrite (18) as |d/dt log |η|| ≤ c sin θ = c ṙ, so −c ṙ ≤ d/dt log |η| ≤ c ṙ. We integrate this from t = 0 and exponentiate. Using r(0) = δ and |η(0)| = f(δ) (from (15), since θ(0) = 0) we get (22). We now prove (23). The unit speed condition implies |ṙ(t)| ≤ 1, so r(t) − δ = r(t) − r(0) = ∫₀^t ṙ(s) ds ≤ t.
This gives the upper bound in (23) For the lower bound we first consider the warped product case, for which c = 0, hence C = 0, so we need to show (24) |t| ≤ r δ (t). Differentiating f (u) = f (r) sin θ in t we get From u(0) = 0 we now get t ≤ u(t) ≤ r(t) as claimed. Now we prove the lower bound in (23) for the generalized warped product case. In fact, we will prove the stronger lower bound (27) 1 − C f (δ) f (δ) |t| ≤ r δ (t) which implies (23) by (19). We use (25) again, but now we have the extra summand − ∂r|η| |η| cos θ inθ (see (13')), which equals − |∂rη| f (r) by (15). Therefore, (26) is replaced bẏ uf (u) = f (r) − ∂ r |η| cos θ = f (r) 1 − ∂ r |η| f (r) cos θ This implies (27) by the same argument as in the warped case, provided 1 − ∂ r |η| f (r) cos θ ≥ 1 − C f (δ) f (δ) . 14 Finally, this inequality follows from f (r) ≥ f (δ) (since r ≥ δ and f is increasing), cos θ ≤ 1, and ∂ r |η| ≤ Cf (δ). The latter inequality follows from (17) and (22), with C = ce cR . Proof of Theorem B. By symmetry, it suffices to consider the t > 0 part of the geodesics. Then θ > 0 since θ(0) = 0 andθ > 0. We write I δ = (T δ − , T δ + ). By Theorem A part (b) and (c), we have r δ (T δ ± ) := lim t→T δ ± r δ (t) = R. By Lemma 14, we therefore have (1 − Cδ)|T δ ± | ≤ R ≤ |T δ ± | + δ, and sending δ → 0 gives T δ ± δ→0 − − → ±R. Also r δ (t) → |t| follows from (23). It remains to prove (b) in the warped product case. So let γ = (r, y) and γ = (r, y) be two geodesics with δ = r(0), δ = r(0) and δ < δ. We write u and u for the function defined in (21) associated to r and r respectively. By the Clairaut-like relation (16) and the definition of u (21) we have (28) f (r) 2 = f (u) 2 + f (δ) 2 and similarly for f (r). Since ∂ r h = 0, we may use (26) to find (29) d dt f (u) = f (r). and similarly for f (u). The convexity of f then implies d dt (f (u) − f (u)) ≥ 0 for all t such that r(t) ≥ r(t). 
Assume for contradiction that there is a maximal t 0 < T δ such that r(t) > r(t) on [0, t 0 ). By continuity, r(t 0 ) = r(t 0 ). By (29), we therefore have f (u(t 0 )) − f (u(t 0 )) = t 0 0 f (r(t)) − f (r(t)) dt ≥ 0. Inserting this into (28), we find f (r(t 0 )) 2 − f (δ) 2 = f (u(t 0 )) 2 ≥ f (u(t 0 )) 2 = f (r(t 0 )) 2 − f (δ) 2 which rearranges to yield f (r(t 0 )) 2 − f (r(t 0 )) 2 ≥ f (δ) 2 − f (δ) 2 > 0. This shows r(t 0 ) > r(t 0 ), contradicting the maximality of t 0 . Remark 15. The proof of the comparison theorem might seem a bit indirect, using f (u(t)) instead of r(t). The reason being that it is in general false that r(t) − r(t) is increasing. Indeed, for the conical case f (r) = r, one can solve the equation of motion explicitly to find r(t) = √ t 2 + δ 2 , and r(t) − r(t) is strictly decreasing but never 0 when δ > δ. The auxiliary function f (u(t)) is in this conical case simply f (u(t)) = t = f (u(t), so f (u(t)) − f (u(t)) = 0 is increasing. Proof of the first part of Theorem C. We will use the coordinates ρ = f (r) and sin θ = ξ again. We will drop the δ-subscripts on all the variables. Since (20) saysθ > 0, we can and will parametrise using θ instead of t. From the equation of motion (14), and the Clairaut-like relation (16), we have (30) |ẏ| = cos(θ) ρ (we write |ẏ| = |ẏ| h := |ẏ| h r δ (t) ). Recall that F is the inverse function of f . Using the equation of motion (13'), ∂ r = f (r)∂ ρ and 1 f (r) = F (ρ) we can write the measure as (31) |ẏ|dt = F (ρ) 1 − ρ∂ ρ log |η|) dθ . We again start by assuming h does not depend on r, since the argument becomes more transparent in this case. In this case the formula (31) simplifies to the elegant |ẏ|dt = F (ρ) dθ, and one can use the Clairaut-like relation (16) to find ρ = ρ 0 cos(θ) , where ρ 0 := f (δ). Hence (32) f (δ)|ẏ|dt = 1 F (ρ 0 ) |ẏ|dt = F ρ 0 cos θ F (ρ 0 ) dθ. When f is convex, F is decreasing, and the right hand integrand is bounded by 1. 
So by the dominated convergence theorem, lim δ→0 f (δ) T + T − |ẏ|dt = π/2 −π/2 lim ρ 0 →0 F ρ 0 cos θ F (ρ 0 ) dθ = π/2 −π/2 F 1 cos θ dθ = C f . This proves part (a) when h is independent of r. For the general case, we return to (31). Using the bounds (17) again results in F (ρ) 1 + cρF (ρ) dθ ≤ |ẏ|dt ≤ F (ρ) 1 − cρF (ρ) dθ . Since F is concave and F (0) = 0, we have ρF (ρ) ≤ F (ρ), thus (33) F (ρ) 1 + cF (ρ) dθ ≤ |ẏ|dt ≤ F (ρ) 1 − cF (ρ) dθ . We therefore need bounds on both sides of (33). Recall that ρ is a function of δ and θ. We consider its behaviour as δ → 0 for fixed θ. The bounds (22) say, since |η| = ρ cos θ, (34) f (δ) cos θ e −c(r−δ) ≤ ρ ≤ f (δ) cos θ e c(r−δ) , so lim δ→0 ρ(θ) = 0 for every θ ∈ (− π 2 , π 2 ). But then also r = F (ρ) δ→0 − − → 0. Hence, using (34) and f (δ) = ρ 0 we get lim δ→0 ρ ρ 0 cos θ = 1. Since, per assumption, lim →0 F ( σ) F ( ) = F(σ) and the convergence is uniform on compacts, we have lim δ→0 F (ρ) F (ρ 0 )(1 ± cF (ρ)) = lim δ→0 F ρ ρ 0 cos θ · ρ 0 cos θ F (ρ 0 ) = F 1 cos θ . Both the upper and lower bounds in (33) are bounded functions, hence integrable. By the dominated convergence theorem again, we find π/2 −π/2 F 1 cos θ dθ ≤ lim δ→0 f (δ) (y δ ) ≤ π/2 −π/2 F 1 cos θ dθ. This proves part (a) in general. We now turn to the second part of Theorem C. We will rescale the time τ according to (4), dτ dt = |ẏ δ | = |η| f (r(t)) 2 . subject to τ (0) = 0. We also introduce the rescaled angular momentum variable (35) η(t) := η(t) f (δ) , which by (14) implies η(0) = f (δ)ẏ(0). We recall our convention for the time-rescaled variables,ỹ (τ (t)) := y(t),η(τ (t)) := η(t). For most of the proof, we will not write out the δ-subscript on y,ỹ, η,η. We will use a slash for τ -derivatives and a dot˙for t−derivatives. Recall that we assumẽ η(0) = η(0) = f (δ)ẏ(0) → v 0 as δ → 0. Proof of the second part of Theorem C. 
By the unit speed parametrisation, the length of the time interval is the same as the length of the Y -component of the geodesic ∆τ = T δ + T δ − dτ dt dt = (y δ ). By the first part of Theorem C, this has the asymptotic behaviour (36) ∆τ ∼ C f f (δ) . The equations of motion (14) get rescaled by dt dτ and read η (τ ) =η (t) f (δ)|ẏ| = − ∂ y |η| 2 2|η|ỹ =η |η | . (37) We stress thatη is both a rescaled and time-reparametrised quantity. We first observe that the bound (22) tells us C ≤ |η| ≤ C uniformly in τ and δ, where C > 0. Furthermore, since h is C 1 in the y-directions and uniformly non-degenerate down to r = 0, we can find a constantc ≥ 0 such that ∂ y |η| 2 ≤ 2c|η| 2 , hence, by the unit speed condition and equation of motion (37) respectively, d(ỹ δ (τ ),ỹ δ (σ)) ≤ τ σ |ỹ δ (υ)| dυ ≤ |τ − σ|, and |η δ (τ ) −η δ (σ)| = τ ση δ (υ) dυ ≤C|τ − σ|. This shows that the familiesỹ δ ,η δ are uniformly (in τ and δ) Lipschitz continuous. Let K ⊂Ĩ δ be any compact subset. By the Arzelà-Ascoli theorem, there is a subsequence δ n → 0 such that (ỹ δn ,η δn ) → (ỹ,η) in C(K; T * Y ). Since we can rewrite the differential equations (37) as integral equations, the limit (ỹ,η) satisfies the same equations, with respect to the metric h 0 and with initial conditionỹ(0) = y 0 andη(0) = v 0 . By the same argument, any subsequence of (ỹ δ ,η δ ) has a convergent subsequence, whose limit must be the same (ỹ,η) since it satisfies the same initial conditions and same equations. By a standard argument, this implies the convergence of (ỹ δ ,η δ ) to (ỹ,η). Discussion 5.1. The function F. An assumption we make is the existence of the function F(σ) = lim →0 F (σ ) F ( ) , and we would like to discuss this function a bit. Since f is convex and increasing, F is concave and F is positive and decreasing. So we get a priori bounds 0 ≤ F(σ) ≤ 1, and one can improve the lower bound as follows. Lemma 16. Assume f satisfies (6) and F exists. 
Then
(38) 1/σ ≤ F(σ) ≤ 1 and 2 ≤ C_f ≤ π, where C_f = ∫_{−π/2}^{π/2} F(1/cos ϑ) dϑ.
Proof. By L'Hôpital's rule, lim_{ε→0} F(σε)/F(ε) = σ lim_{ε→0} F′(σε)/F′(ε) = σ F(σ). Now F is increasing and σ ≥ 1, so F(σε)/F(ε) ≥ 1, which implies σ F(σ) ≥ 1. Integration yields the inequalities for C_f.
The upper bound in (38) is achieved by f(x) = x, the lower bound by f(x) = exp(−1/x^β) for any β > 0. Below are some classes of functions f satisfying the requirements we impose, and the corresponding F.
• f(x) = x^α for some α ≥ 1. Then F(σ) = σ^{1/α − 1}. The conical case α = 1 corresponds to C_f = π.
• f(x) = exp(−(ln(1/x))^µ) for some µ ≥ 1. Then F(σ) = σ^{−1} for µ > 1 and F(σ) = 1 for µ = 1.
• f(x) = exp(−1/x^β) for some β > 0. Then F(σ) = σ^{−1} and C_f = 2.
A few comments about the list are in order. Note how the result for the third family is the same as the α → ∞ limit of the first family. The second family shows how F might depend discontinuously on the parameters of f. It is also of interest that the second and third family give the same F when µ > 1, even though they have different decay rates.
5.2. Example of F failing to exist. The idea is to add a fast but suitably small oscillation to F = f^{−1}, preventing convergence. We perturb a family F(x) tending to 0 as x^α for α ∈ (0, 1).
Lemma 17. Let α ∈ (0, 1) and c ≥ ((2−α)/(1−α)) · ((1+α)/α). Define F(x) = x^α (c + sin(log x)). Then F is positive, increasing and concave, but lim_{ε→0} F′(σε)/F′(ε) fails to exist unless log(σ) = 2nπ for some n ∈ N₀.
Proof. We compute
F′(x) = x^{α−1} (αc + α sin log x + cos log x),
F″(x) = x^{α−2} (α(α−1)c + (α² − α − 1) sin log x + (2α − 1) cos log x).
F′(x) > 0 clearly holds if c > (1+α)/α, which is true since c ≥ ((2−α)/(1−α)) · ((1+α)/α). Similarly, since α² − α − 1 < 0 and |2α − 1| < 1,
F″(x) ≤ x^{α−2} (α(α−1)c − (α² − α − 1) + 1) ≤ 0
if c ≥ ((2−α)/(1−α)) · ((1+α)/α). To see that the limit fails to exist, assume it did. Then, by L'Hôpital's rule, the limit lim_{ε→0} F(σε)/F(ε) would also exist.
But
F(σε)/F(ε) = σ^α (c + sin log(σε))/(c + sin log ε) = σ^α (c + sin(log ε + log σ))/(c + sin log ε),
and the existence of the limit as ε → 0 is equivalent to the existence of the limit
lim_{x→∞} (c + sin(x + y))/(c + sin x)
for all y = log(σ) ≥ 0. Consider the sequence x_n = π/2 + nπ. Then
(c + sin(x_n + y))/(c + sin x_n) = (c + (−1)^n cos y)/(c + (−1)^n),
and this converges only if cos y = 1, i.e. y = log(σ) ∈ 2πN₀. The converse follows from F′(σε)/F′(ε) = σ^{α−1} whenever log(σ) ∈ 2πN₀.
We formulate a small question concerning F.
Question 1. Assume f satisfies (6), and additionally, that lim_{x→0} f(x)/x^α = 0 for all α ≥ 1. Does F exist and is F(σ) = 1/σ?
5.3. Why convexity? One possible extension of our work is to relax the assumptions on f. One could for instance ask what happens when f is concave instead of convex. What one will notice then is that the bounds on F are no longer true, and C_f might very well be infinite. When f(x) = x^α with 0 < α < 1, F(σ) = σ^{(1−α)/α} is unbounded, and C_f is infinite for α ≤ 1/2. The border case α = 1/2 can be explicitly solved in the warped case. The formula (32) holds for arbitrary positive and increasing f, hence also for f(x) = √x. Here F′(ρ) = 2ρ, so
f′(δ) ℓ(y_δ) = ∫_{−Θ_δ}^{Θ_δ} dθ/cos θ, where cos Θ_δ = √(δ/R).
The integral is explicitly computable, and the result is
f′(δ) ℓ(y_δ) = 2 log((√R + √(R − δ))/√δ).
The essential point here is that the length behaves as f′(δ) ℓ(y_δ) ∼ log δ^{−1}, so the asymptotic behaviour has become more complicated.
Metrics on isolated singularities
Singular spaces typically arise in one of the following ways: • as subsets of smooth manifolds M , e.g.
as solution sets {x ∈ M : F (x) = 0} of (systems of) equations, where F : M → N is a smooth map to a manifold N and singularities may arise at points p where the differential dF |p is not surjective; an important example are algebraic varieties; • as quotients of smooth spaces by non-free group actions; important classes of examples are orbifolds and moduli spaces. These spaces are often equipped with natural metrics. For example, complex projective algebraic varieties carry metrics induced by the Fubini-Study metric, and the Riemannian moduli space carries a natural metric, which has (non-isolated) cuspidal singularities. However, in this section we carefully distinguish non-metric and metric aspects of singularities. The discussion is summarized in Subsection 6.3. 6.1. Singularities of type [s]. We focus on singular spaces arising as subsets of manifolds and only consider isolated singularities which are described by a profile (or 'shrinking') function s of a single variable, in the sense described below. First, we define: Definition 18. Let M be a manifold and X ⊂ M . Let p ∈ X. We say that p is an isolated singularity of X if X 0 = X \ {p} is a submanifold of M and p lies in the closure of X 0 . We define a profile function to be a C 1 function s : [0, ε) → [0, ∞) for some ε > 0 which satisfies s(0) = 0 , s(z) > 0 for z > 0 . We call two profile functions s,s equivalent ifs = as near 0, where a is positive and C 1 on some interval [0, ). Denote the equivalence class of s by [s]. Let us say that two curves γ, γ in M with γ(0) = γ(0) = p are s-tangent at p if |γ(t) − γ(t)| = O(s(t)) near t = 0 in one (hence any) local coordinate system. If s(z) = z k , k ∈ N, then this corresponds to tangency of order k − 1. Roughly, we will say that X has a singularity of type s at p if it is a union of curves which are pairwise s-tangent at p, but not tangent to higher order (i.e. also |γ(t) − γ(t)| ≥ cs(t) for a constant c > 0). 6.1.1. The cuspidal case. 
We call a profile function s cuspidal if s (0) = 0. For a cuspidal profile function s the s-blow-down map β s is defined by (39) β s : [0, ε) × R N −1 → [0, ε) × R N −1 , (z, u) → (z, s(z)u) It shrinks the hyperplane at height z by the factor s(z). In particular the boundary plane z = 0 is collapsed to the origin. Definition 19. Let M be a manifold and X ⊂ M . Let p ∈ X, and let s be a cuspidal profile function. We say that X has a cuspidal singularity of type [s] at p if there definition can be restated by saying that X is resolved by blowing up p. The map β s in (39), with s(z) = z, then is β conic written in one of the projective coordinate systems. See [Mel96] or [Gri01] for details on this. If s(z) = z k then the map β s is the blow-down map for a quasi-homogeneous blow-up (in projective coordinates; see [Gri17], [KoMe15]), so for general s we have defined a generalized notion of quasi-homogeneous blow-up. The other remarks above carry over to the conical case. In particular, this notion of conical singularity is differential, not metric. Definition 20. Let (M, g M ) be a smooth Riemannian manifold and X ⊂ M . Assume that X has an isolated singularity at p ∈ X. The induced metric for X, denoted g X , is the Riemannian metric on X 0 = X \ {p} obtained by restriction of g M to T X 0 . The induced distance on X, denoted d X , is the distance function on X defined by g X . That is, if q, q ∈ X then d X (q, q ) = inf (γ) where the infimum is taken over all curves γ : (0, 1) → X 0 with lim t→0 γ(t) = q, lim t→1 γ(t) = q . This is sometimes called the intrinsic distance on X, and is to be distinguished from the extrinsic distance, which is the restriction of the distance function d M (defined on M × M by g M ) to X × X. To understand the geometry of (X, g), e.g. the behaviour of geodesics near p, it is useful to have a normal form of the metric. 
Specifically, we ask under which conditions the induced distance makes X a Riemannian space with conical or cuspidal singularity at p as in Definition 6. This turns out to be rather subtle even for spaces with cuspidal or conical singularity as defined above. 6.2.1. Warped products. In order to get a warped product metric on X (rather than just a generalized one) one expects to have to impose rather rigid conditions. We will consider the case where M = R N with the Euclidean metric, and where the resolutioñ X (defined using standard coordinates (z, x) on R N ) is a product [0, ε) × Y . Proposition 21. Let X ⊂ R N have an isolated singularity at 0. Let g X be the metric on X 0 induced by the Euclidean metric on R N , and d X the induced distance. (a) (Cuspidal case) If X has a cuspidal singularity of type [s], and if, in (40), (42) Y z = Y ∀z where Y ⊂ R N −1 is contained in the unit sphere centered at 0 then g X is a warped product metric. The warping function is determined by (43) f (r(z)) = s(z) where r(z) = z 0 1 + (s (v)) 2 dv , and h is the metric on Y induced by the Euclidean metric on R N −1 . Also, f (r) ∼ s(r) as r → 0, and f is convex iff s is convex, so (X, d X ) is a Riemannian space with an isolated cuspidal singularity in this case. 23 (b) (Conical case) If X has a conical singularity, and if (44) Y z = Y ∀z for some Y ⊂ S N −1 then g X is a warped product metric with warping function f (r) = r, so (X, d X ) is a Riemannian space with an isolated conical singularity. The metric h is the metric on Y induced by the standard metric on S N −1 . Note that in the cuspidal case there is an additional condition on the cross section Y , while in the conical case there isn't. See also the remark after Theorem 23. A class of examples is surfaces of revolution with profile function s. Here N = 3 and Y is the unit circle in R 2 . The idea of the proof is that r(z) is arc length along the curve z → (z, s(z)u) for any fixed u with |u| = 1. 
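Formula (43) can be illustrated numerically. The sketch below (an illustration of ours; `arc_length` is a made-up name) computes r(z) = ∫₀^z √(1 + s′(v)²) dv by a midpoint rule for the cuspidal profile s(z) = z², and checks that r(z) ~ z as z → 0, which is why f(r) ∼ s(r) in Proposition 21(a).

```python
import math

def arc_length(s_prime, z, n=50_000):
    """r(z) = \\int_0^z sqrt(1 + s'(v)^2) dv, as in formula (43), midpoint rule."""
    h = z / n
    return sum(math.sqrt(1.0 + s_prime((i + 0.5) * h) ** 2) * h for i in range(n))

sp = lambda v: 2.0 * v  # derivative of the cuspidal profile s(z) = z^2
for z in (0.1, 0.01, 0.001):
    r = arc_length(sp, z)
    # r(z)/z -> 1 as z -> 0, so f(r) = s(z) is asymptotic to s(r)
    print(z, r / z)
```

Since s′(0) = 0 in the cuspidal case, the integrand tends to 1 at the lower limit, which is exactly the mechanism behind r(z) ∼ z.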
In the cuspidal case the unit sphere condition is needed in order to ensure that mixed terms vanish, i.e. that the curves u = const. are orthogonal to the level sets z = const. Proof. (a) We calculate the induced Riemannian metric on X 0 = X \ {0}. First, we write the Euclidean metric on R N in coordinates z, u, i.e. we pull it back under the map β s , see (39). The differential of s(z)u i is d(s(z)u i ) = s(z) du i + s (z)u i dz , so we get β * s (g eucl ) = dz 2 + N −1 i=1 (d(su i )) 2 = (1 + (s ) 2 |u| 2 ) dz 2 + 2ss dz N −1 i=1 u i du i + s 2 N −1 i=1 du 2 i where s = s(z) etc. Now we restrict to Y . From Y ⊂ S N −2 we have that |u| 2 is constant equal to 1 on Y . In particular, the mixed term, which is ss dz d(|u| 2 ), vanishes when pulled back to Y , so we get g X 0 = (1 + (s ) 2 ) dz 2 + s 2 h . With r defined in (43) we have dr = 1 + (s ) 2 dz and s(z) = f (r). So in r, u coordinates the metric is dr 2 + f (r) 2 h as claimed. From s (0) = 0 we get r(z) ∼ z as z → 0, hence f (r) ∼ s(r) as r → 0. Finally, one calculates f (r(z)) = s (z) 1 + (s (z)) 2 and since the function z → r(z) as well as the function σ → σ √ 1+σ 2 (and hence also their inverses) are strictly increasing, it follows that f is increasing iff s is increasing, so f is convex iff s is convex. (b) It is well-known that the Euclidean metric on R N reads β * conic (g eucl ) = dr 2 + r 2 g S N −1 in polar coordinates (where we write r instead of z in (41)). SinceX is β −1 (X) = [0, R) × Y in polar coordinates, it follows that g = dr 2 + r 2 h. 6.2.2. Generalized warped products. If we use more general metrics on M near p or do not impose the product type condition Y z = Y ∀z then we cannot expect the induced metric on X to have warped product structure. We discuss what is known in the conical and k-cuspidal case, i.e. f (r) = r k for some k ∈ N, k ≥ 2. First, in the conical case one always gets a generalized warped product, as shown by Melrose and Wunsch. Theorem 22 ( [MeWu04]). 
Let $X \subset M$ have a conical singularity as defined above. Let $g_M$ be a Riemannian metric on $M$ and $d_X$ the induced distance function on $X$. Then $(X, d_X)$ is a Riemannian space with an isolated conical singularity at $p$.

Theorem 22 is proven in two steps: First one uses normal polar coordinates in $M$, centred at $p$, to write the induced metric as $g = dr^2 + r^2 h$ on $\tilde X = [0,R) \times Y$, where all $Y_z$ are identified with a fixed $Y$ and $h$ is a smooth 2-tensor on $\tilde X$ restricting to a metric on $\{0\} \times Y$. However, this is more general than (1) since $h$ may involve $dr^2$ terms and also mixed $dr\,dy$ terms. In a second, more difficult step one proves that one can change coordinates so that these terms are removed, i.e. $g$ takes the form (1). For this one shows that for each $q \in Y$ there is a unique geodesic in $(0,R) \times Y$ hitting the boundary at $(0,q)$, and that these geodesics together define a fibration $\Phi : [0,R') \times Y \to U$ of a neighbourhood $U \subset \tilde X$ of $r = 0$. Then $\Phi$ is the desired coordinate system.

The $k$-cuspidal case was analysed by Grandjean and Grieser in [GrGr15]. Let $X$ be given as in (40) with $s(z) = z^k$, $k \ge 2$, and let $g_M$ be any Riemannian metric on $M$. First, [GrGr15, Proposition 7.3] says that the coordinates $(z,x)$ can be chosen so that the induced metric on $X$, pulled back to the resolution $\tilde X$, has the form
$$(45)\quad g = (1 + S(z,u)\,z^{2k-2})\,dz^2 + z^{2k} h$$
for a smooth function $S$ and a smooth 2-tensor $h$ on $\tilde X$ restricting to a metric $h_0$ on $\partial\tilde X = Y_0 \times \{0\}$. The function $S_0 := S(0,\cdot)$ on $Y_0$ is, up to additive constants, an invariant of the singularity and metric, see [GrGr15, Lemma 2.2]. In particular, if $S_0$ is not constant then the metric cannot be put in generalized warped product form by any choice of coordinates on $\tilde X$. However, if $S_0$ is constant then an argument using geodesics hitting $p$, similar to the argument in [MeWu04], shows:

Theorem 23 (Theorem 1.2 + Remark 3.4 in [GrGr15]). Suppose $X$ has a $k$-cuspidal singularity as discussed above.
If the function $S_0$ defined above is constant then the metric on $X$ is a generalized warped product metric with warping factor $f(r) = r^k$.

For example, if $M = \mathbb{R}^N$ with the Euclidean metric then the function $S_0$ on $Y_0 \subset \mathbb{R}^{N-1}$ is simply $S_0(u) = |u|^2$. This shows that the additional condition on $Y$ in (42) is necessary, up to a constant factor.

6.3. Summary of Section 6. We distinguish differential and metric notions of conical/cuspidal singularities. The differential notion refers to subsets $X$ of a manifold $M$ with isolated singularity $p \in X$. It is based on the idea of $s$-tangency of curves, where $s$ is a profile function. The metric notion is based on the idea of generalized warped product metrics. It is related to the differential notion as follows. Given a Riemannian metric $g_M$ on $M$, the induced metric $g_X$ on $X$ is
• conical if $X$ has a conical singularity in the differential sense,
• cuspidal if $X$ has a cuspidal singularity in the differential sense and if it satisfies additional requirements in its relation to $g_M$, like the constancy of the function $S_0$ in Theorem 23.
Thus, in the cuspidal case, the metric notion is more restrictive than the differential notion.

We also mention that there are other notions of conical singularity in the literature. For example, the notion of corner domain introduced by Dauge in [Dau88] refers to subsets of $\mathbb{R}^N$ which arise from 'straight' conical spaces as in Proposition 21(b) by local diffeomorphisms of the ambient space $\mathbb{R}^N$. This is more special than our notion of conical singularity given in Subsection 6.1.2. (However, corner domains also include non-isolated singularities such as those arising from the base $Y$ having corner domain singularities itself.)
…a generalized warped product metric as in (1), where $f$ satisfies the following conditions: $f$ extends to a continuously differentiable map $f : [0,R) \to [0,\infty)$ which satisfies
$$(6)\quad f(0) = 0, \quad f \text{ convex.}$$
If $f'(0) > 0$ then we speak of a conical singularity, while if $f'(0) = 0$ then we speak of a cuspidal singularity.

…and the convergence is uniform on compact subsets. This condition is satisfied for the examples above. See Lemma 17 for a family of cuspidal examples where it is not satisfied. Furthermore, we need (except in Section 3) to limit the variation of $h_r$ over the interval $(0,R)$: Assumption 10 (Small perturbation condition). The constant $c$ in (5) …

Proof. (19) follows from $f(0) = 0$ and the convexity of $f$: $f(r) = \int_0^r f'(s)\,ds \le r f'(r)$. The equation (13') for $\dot\theta$, along with
$$(25)\quad \dot u\, f'(u) = \dot r\, f'(r)\sin\theta + \dot\theta\, f(r)\cos\theta,$$
which by virtue of $\dot r = \sin\theta$, $\dot\theta = \frac{f'(r)}{f(r)}\cos\theta$ simplifies to
$$(26)\quad \dot u\, f'(u) = f'(r).$$
Now $f(u) \le f(r)$ by (21), so $u \le r$ since $f$ is strictly increasing. Then $f'(u) \le f'(r)$ since $f'$ is increasing. So (26) implies $\dot u \ge 1$.

6.2. Metrics on singularities of type [s]. It is natural to consider metrics on $X$ which are induced by smooth metrics on $M$.

…Furthermore, along any lifted geodesic we have
$$(18)\quad \frac{d}{dt}|\eta| \le c\,|\eta|\,|\sin\theta|.$$
Estimate (18) quantifies the fact that lifted geodesics are tangent to $\{\eta = 0\}$. The notation $h(r,y)\,dy^2$ is supposed to be suggestive of the coordinate representation of $h_r$, which is $h_{ij}(r,y)\,dy^i\,dy^j$. We use the Einstein summation convention.

…there are coordinates $(z,x) : U \to \mathbb{R} \times \mathbb{R}^{N-1}$ on a neighbourhood $U \subset M$ of $p$, mapping $p$ to the origin, in terms of which
$$(40)\quad X \cap U = \beta_s(\tilde X), \qquad \tilde X := \{(z,u) : z \in [0,\varepsilon),\ u \in Y_z\},$$
where $\beta_s$ is the $s$-blow-down map and $Y_z \subset \mathbb{R}^{N-1}$ are closed submanifolds varying smoothly³ with $z \in [0,\varepsilon)$. We call $\tilde X$ a resolution of $X$. The curve $z \mapsto (z,0)$ is called the axis of the singularity.
We also call the germ of $X \subset M$ at $p$ a singularity of type $[s]$.

These singularities are natural generalizations of surfaces of rotation in $\mathbb{R}^3 = \mathbb{R} \times \mathbb{R}^2$ with profile function $s$, for which $Y_z = S^1 \subset \mathbb{R}^2$ for each $z$. Compared to isolated singularities of algebraic sets they allow more flexibility in that $s$ can vanish to infinite order at zero, but are also more restricted since the blow-down map shrinks each point $u \in Y_z$ by the same factor $s(z)$. For example, the set $\{(z,x,y) : \dots\}$; see [GrGr07] for more on this.

The singularity type is an equivalence class $[s]$ because equivalent profile functions define the same class of singularities: if $\tilde s = as$ then $(s, (Y_z))$ defines the same $X$ as $(\tilde s, (a(z)^{-1} Y_z))$.

It follows from the cusp condition $s'(0) = 0$ that a cuspidal singularity $(X,p)$ has a well-defined tangent direction at $p$, which is a ray in $T_pM$. This is the set of tangents at $p$ of curves in $X$ starting at $p$, and is equal to the tangent cone in the sense of [BL07], for instance.

Note that this notion of singularity type is purely differential, i.e. there is no metric involved. It is also natural in the sense that if the condition holds in one coordinate system then it holds in any other having the same axis.⁴

6.1.2. Conical singularities. Conical singularities can be defined in essentially the same way, where the profile function is $s(z) = z$ or equivalent to this. However, while cuspidal singularities always lie in a half space since they have a unique tangent direction at $p$, it would be unnatural to require this for a cone. Therefore, we define a conical singularity in the same way as in Definition 19, except that we replace the map $\beta_s$ above by the polar coordinates map (41) and take a smooth family of closed submanifolds $Y_z \subset S^{N-1}$. The space $[0,\infty) \times S^{N-1}$ is called the blow-up of $\mathbb{R}^N$ at the point $0$ and $\beta_{\mathrm{conic}}$ the blow-down map.

³ That is, $Y_z = \iota(Y,z)$ for a fixed manifold $Y$, with $\iota : Y \times [0,\varepsilon) \to \mathbb{R}^{N-1}$ smooth and $\iota(\cdot,z)$ an embedding for each $z$.
Equivalently, $\tilde X$ is a $p$-submanifold of $\mathbb{R}^{N-1} \times [0,\infty)$, see [Mel96] or [Gri01].

⁴ We don't make use of this. Here is a sketch of the proof (we assume here for simplicity that everything is smooth): We need to show that any diffeomorphism $\Phi : \mathbb{R}^N \to \mathbb{R}^N$ which fixes the $z$-axis pointwise lifts to a diffeomorphism under $\beta_s$, locally near zero. By the argument in [Mel08, Proof of Lemma 2] this reduces to showing that the vector fields generating these diffeomorphisms lift to smooth vector fields under $\beta$. These vector fields are spanned over $C^\infty$ by the vector fields $x_i\partial_{x_j}$ and $x_i\partial_z$ for $i,j = 1,\dots,N-1$. A simple calculation shows that $\beta_*(U_{ij}) = x_i\partial_{x_j}$ for $U_{ij} = u_i\partial_{u_j}$ and $\beta_*(Z_i) = x_i\partial_z$ for $Z_i = su_i\partial_z - s'u_i\sum_j u_j\partial_{u_j}$, so $U_{ij}, Z_i$ are the desired smooth lifts. A fine point is that the axis is only determined by $X$ up to a perturbation of order $s$.

References
[BL07] Andreas Bernig and Alexander Lytchak, Tangent spaces and Gromov-Hausdorff limits of subanalytic spaces, J. Reine Angew. Math. 608, pp. 1-15, 2007.
[BiO'N69] Richard L. Bishop and Barrett O'Neill, Manifolds of Negative Curvature, Transactions of the American Mathematical Society 145, pp. 1-49, 1969.
[Dau88] Monique Dauge, Elliptic boundary value problems on corner domains. Smoothness and asymptotics of solutions, Springer, 1988.
[GrGr07] Vincent Grandjean and Daniel Grieser, Geodesics on singular surfaces, in Oberwolfach Reports, volume 4-3, Workshop 'Analysis and Geometric Singularities', European Math. Soc., 2007.
[GrGr15] Vincent Grandjean and Daniel Grieser, The exponential map at a cuspidal singularity, Journal für die reine und angewandte Mathematik, Volume 2018, Issue 736, 2015.
[Gri01] Daniel Grieser, Basics of the b-calculus, in J. Gil, D. Grieser, and M. Lesch, editors, Approaches to Singular Analysis, Advances in Partial Differential Equations, pp. 30-84, Birkhäuser, 2001.
[Gri11] Daniel Grieser, A natural differential operator on conic spaces, Discrete Contin. Dyn. Syst., Dynamical systems, differential equations and applications, 8th AIMS Conference, Suppl. Vol. I, pp. 568-577, 2011.
[Gri17] Daniel Grieser, Scales, blow-up and quasimode constructions, in A. Girouard, D. Jakobson, M. Levitin, N. Nigam, I. Polterovich, and F. Rochon, editors, Geometric and Computational Spectral Theory, volume 700 of Contemp. Math., pp. 207-266, AMS, 2017. arXiv:math.SP/1607.04171.
[Hör71] Lars Hörmander, On the existence and the regularity of solutions of linear pseudo-differential equations, Enseignement Math. (2) 17, pp. 99-163, 1971.
[KoMe15] Chris Kottke and Richard B. Melrose, Generalized blow-up of corners and fiber products, Trans. Amer. Math. Soc. 367(1), pp. 651-705, 2015.
[Mel96] Richard B. Melrose, Differential analysis on manifolds with corners, book in preparation, http://www-math.mit.edu/~rbm/book.html, 1996.
[Mel08] Richard B. Melrose, Real blow ups: Introduction to analysis on singular spaces, notes for lectures at MSRI, http://www-math.mit.edu/~rbm/InSisp/InSiSp.html, 2008.
[MeWu04] Richard B. Melrose and Jared Wunsch, Propagation of singularities for the wave equation on conic manifolds, Inventiones mathematicae 156, pp. 235-299, 2004.
[O'N83] Barrett O'Neill, Semi-Riemannian Geometry, Academic Press, 1983.
[Sto82] David A. Stone, The exponential map at an isolated singular point, Number 256 in Memoirs of the American Mathematical Society, American Mathematical Society, 1982.

Mathematisches Institut, Universität Oldenburg. Email address: [email protected]
Institut für Differentialgeometrie, Leibniz Universität Hannover. Email address: [email protected]
arXiv:2304.02646
Wafer-scale Graphene Electro-absorption Modulators Fabricated in a 300mm CMOS Platform

Chenghan Wu, Steven Brems, Didit Yudistira, Daire Cott, Alexey Milenin, Kevin Vandersmissen, Aran-Txa Maestre, Alba Centeno, Amaia Zurutuza, Joris Van Campenhout, Cedric Huyghebaert, Dries Van Thourhout, Marianna Pantouvaki

C. Wu, Prof. D. Van Thourhout - Photonics Research Group, Department of Information Technology, Ghent University - imec, Technologiepark-Zwijnaarde 15, 9052 Gent, Belgium
C. Wu, Dr. S. Brems, Dr. D. Yudistira, Dr. D. Cott, Dr. A. Milenin, Dr. K. Vandersmissen, Dr. J. Van Campenhout, Dr. C. Huyghebaert, Prof. D. Van Thourhout, Dr. M. Pantouvaki - Imec, Kapeldreef 75, 3001 Leuven, Belgium
Dr. A. Maestre, Dr. A. Centeno, Dr. A. Zurutuza - Graphenea Semiconductor SLU, San Sebastian, Spain

Keywords: CMOS-compatible, graphene, photonics, integration, electro-absorption modulator

Abstract: Graphene-based devices have shown great promise for several applications. For graphene devices to be used in real-world systems, it is necessary to demonstrate competitive device performance, repeatability of results, reliability, and a path to large-scale manufacturing with high yield at low cost. Here, we select single-layer graphene electro-absorption modulators as a test vehicle and establish their wafer-scale integration in a 300mm pilot CMOS foundry environment. A hardmask is used to shape graphene, while tungsten-based contacts are fabricated using the damascene approach to enable CMOS-compatible fabrication. By analyzing data from hundreds of devices per wafer, the impact of specific processing steps on the performance could be identified and optimized. After optimization, a modulation depth of 50 ± 4 dB/mm is demonstrated on 400 devices measured using a 6 V peak-to-peak voltage. The electro-optical bandwidth is up to 15.1 ± 1.8 GHz for 25µm-long devices. The results achieved are comparable to lab-based record-setting graphene devices of similar design and CVD graphene quality. By demonstrating the reproducibility of the results across hundreds of devices, this work resolves the bottleneck of graphene wafer-scale integration. Furthermore, CMOS-compatible processing enables co-integration of graphene-based devices with other photonics and electronics building blocks on the same chip, and high-volume low-cost manufacturing.

Introduction

Given its exceptional electrical and photonic properties [1,2,3,4], graphene has attracted considerable attention in recent years. Its unique band structure and resulting broadband absorption spectrum, spanning from the ultraviolet to far-infrared [5], make graphene particularly promising for optoelectronic applications, while its large mobility [6,7] can be exploited in high-speed communications [8]. However, its atomic layer thickness also limits the interaction strength with light. This constraint can be overcome by integrating graphene with silicon photonics: by integrating graphene on a sub-micron scale waveguide and leveraging the evanescent field coupling, the interaction between the 2D material and the light travelling through the waveguide can be enhanced. With this approach, integrated optoelectronic devices exhibiting ultrafast response and outstanding performance have been reported in recent years. [9,10,11,12,13]

Silicon and silicon nitride based photonic integrated circuits are now being considered as a core technology for future optical interconnects, high-performance computing, light detection and ranging (Lidar) and sensing. [14,15] They can be fabricated at low cost and in large volume with high yield utilizing the existing infrastructure of the complementary metal-oxide-semiconductor (CMOS) industry. [16,17] Graphene provides a number of advantages in terms of CMOS compatibility. Firstly, graphene itself is a CMOS-compatible material that can be grown by chemical vapor deposition (CVD) using wafer-scale tools [18,19].
Numerous research studies have been conducted on the large-scale growth of high-quality graphene. [20,21,22] It has also been shown that graphene can be integrated by transfer onto almost any substrate as long as the surface is sufficiently flat, either in a single step using wafer-size graphene layers [23,24] or in multiple steps using smaller patches to cover the entire wafer [25]. This enables the integration of high-quality graphene on a silicon photonics platform in a straightforward manner. Moreover, while early demonstrations typically used doped silicon waveguides for controlling graphene's electronic properties, more recent configurations consist of two graphene layers separated by a dielectric gate oxide that can be implemented on any type of waveguide, such as for example silicon nitride waveguides, therefore greatly enhancing its flexibility and eliminating the need for Si ion implantation. Finally, graphene transfer has a low thermal budget and can be performed in both the front- or back-end-of-line (FEOL or BEOL) in a CMOS process flow, which is advantageous for the co-integration with other silicon photonics modules.

In earlier work both waveguide-integrated graphene modulators [26,27,28] and photodetectors [29,30,31] were shown. Most of these demonstrations used small coupons or a non-scalable graphene supply. Recently, some promising results starting from 6" wafers [25,32] and using scalable CVD-grown graphene were reported. Through systematic inline metrology, the quality of graphene was monitored at each stage of the process, at wafer-scale, [25] bringing graphene-based photonics close to an industrially viable platform. However, none of this work was carried out using fully CMOS-compatible integration technology. The primary issues are the lithography process, the graphene encapsulation and the graphene contacts.
At the moment, electron-beam (e-beam) lithography and lift-off-based contact metallization are mostly used, but these are not compatible with high-volume industrial manufacturing. [33] Standard photolithography utilizing a mask is preferred to enable high throughput and to keep the process cost-effective, while typically a damascene process, involving via etching, metal filling and planarization, is used for realizing contacts in CMOS fabs. An effective capping layer that stabilizes and protects graphene during further processing is another critical module that needs to be established using CMOS infrastructure. Therefore, the development of new and robust modules, adhering to the strict contamination requirements of CMOS fabs, is required for the scalable wafer-level integration of graphene-based optoelectronic devices.

In this paper, we develop a wafer-scale integration process for realizing graphene-based photonics devices in a 300 mm CMOS pilot line. As a test vehicle we choose an electro-absorption modulator (EAM) consisting of a doped silicon waveguide with a single layer of graphene integrated on top of a gate oxide, resulting in a graphene-oxide-semiconductor configuration. The basic process flow [34] is outlined in Figure 1 and consists of defining doped waveguides, including their planarization (Figure 1a-f). In particular, we study and optimize three critical steps in this overall flow: the planarization step before graphene transfer (study 1), encapsulation of the graphene layer (study 2) and contacting the graphene layer using a damascene process (study 3). Following optimization of these steps, hundreds of devices demonstrate performance comparable to that of lab-based devices with similar design and graphene quality [35]. The reproducible and robust integration route developed in this paper lays the groundwork for scaling also other graphene-based photonics devices and promoting their industrial adoption.
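For context, the electro-absorption mechanism behind such a modulator is graphene's gate-tunable interband absorption: light of photon energy hν is absorbed unless the Fermi level is shifted beyond |E_F| > hν/2 (Pauli blocking). This background is standard graphene-EAM physics rather than a statement from the text above; as a hedged illustration, the sketch below computes the Fermi-level threshold for an assumed telecom operating wavelength of 1550 nm.

```python
# Back-of-the-envelope Pauli-blocking threshold for a graphene EAM.
# Assumption (not stated in the paper excerpt): operation at 1550 nm.
H_C_EV_NM = 1239.84  # h*c in eV*nm (CODATA value, rounded)

def pauli_blocking_threshold_eV(wavelength_nm: float) -> float:
    """Fermi-level shift |E_F| above which interband absorption of photons
    at the given wavelength is suppressed: |E_F| > (h*c/lambda) / 2."""
    photon_energy_eV = H_C_EV_NM / wavelength_nm
    return photon_energy_eV / 2.0

if __name__ == "__main__":
    thr = pauli_blocking_threshold_eV(1550.0)
    print(f"photon energy: {2 * thr:.3f} eV, threshold |E_F|: {thr:.3f} eV")
```

At 1550 nm the photon energy is about 0.8 eV, so the gate must shift the Fermi level by roughly 0.4 eV to switch the graphene from absorbing to transparent, which sets the required gate voltage swing.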
Results and Discussion

Fab-level integration and optimization. The integration flow started from 300 mm silicon-on-insulator (SOI) wafers with a 220 nm crystalline silicon layer and a 2 µm buried oxide (BOX). Standard 193 nm immersion lithography was used for patterning the silicon waveguides with a nominal width of 500 nm. One side of the waveguide was only partially etched to create a rib structure, allowing for electrical contacting through a 70 nm silicon slab layer. Afterwards, we utilized a standard chemical mechanical polishing (CMP) process, stopping on the SiN hardmask, a process also typically used in CMOS fabrication for shallow trench isolation (STI). [36] Before removing the hardmask, we performed an oxide etch-back with diluted HF in an effort to lower the step height induced by the SiN mask removal. However, this approach typically results in a topography of a few nanometers locally at the edge of the waveguides. As graphene is a monolayer material, it is highly susceptible to its environment, and a few nanometers of step height can already affect its properties and eventually device uniformity and yield across a 300 mm wafer. Therefore, we examined the impact of an extra CMP step designed to minimize the topography of the wafer prior to wafer-scale graphene transfer. After the hardmask removal, an additional oxide layer was deposited using a PECVD process; the resulting cross-sections are compared in Figure 2b and c. These images were taken after the full device fabrication. As indicated by the arrow in Figure 2c, the wafer with the additional CMP module has a more uniform and smooth oxide surface, especially near the waveguide edge. In contrast, the wafer with the conventional CMP module exhibits a discernible step at the side of the waveguide and a greater variation in the gate oxide thickness, which could result in larger strain in the graphene layer and a non-uniform electric field.
This will be elaborated further when discussing the results of Raman measurements and the electro-optical performance in the following sections. Next, a 5 nm gate oxide was thermally grown on top of the waveguides. Three implantation steps were carried out to minimize the contact and sheet resistance of the Si layers without considerably increasing the optical loss in the waveguides. Then, a commercial company, Graphenea, grew a 6-inch graphene layer by chemical vapor deposition (CVD) and transferred it to the middle of a 300-mm wafer using a semi-dry technique, as shown in Figure 2d. In this process, the graphene layer on its copper catalyst is attached to a polymer substrate, which allows the copper catalyst to be etched away using a standard FeCl3 wet etching method. After the etching, several consecutive ultra-pure DI water and acidic rinses were used to minimize Fe contamination. The graphene interface was then dried with an N2 flow. When the graphene layer was dry, a dry lamination method was used to transfer the graphene onto the target wafers: the polymer/graphene stack was laminated at a pressure above 1 bar and a temperature of 150 °C. Finally, the remaining protective polymer layer was removed by a wet solvent process.

Given graphene's self-passivating properties [37,38,39], it is difficult to directly deposit a dielectric on its surface. Commonly, a seeding layer is used to achieve homogeneous oxide deposition. Here we used a low-temperature, surface-physisorption-based 'soak' method with trimethylaluminium (TMA) as the precursor to carefully deposit a dielectric seeding layer. The actual Al2O3 capping layer was then deposited using an atomic layer deposition (ALD) process. To investigate the impact of the capping layer uniformity on the performance of the final devices, a second study was defined at this stage, whereby the soaking time was varied.
Figure 2e shows a top-down scanning electron microscope (SEM) image after the PEALD deposition when a short soaking time was used. This image shows a large number of distinct voids in the Al2O3 layer. These voids could potentially lead to unintentional etching of the graphene layer during subsequent processing steps. Figure 2f, where a longer soaking time was applied, exhibits superior Al2O3 coverage of the graphene, and the number of voids is significantly reduced. Only a few wrinkles generated during graphene growth and transfer remain visible. Overall, by optimizing the coverage of the capping layer, we expect to reduce the impact of later integration steps on the graphene layer, achieving better device yield. After deposition of the Al2O3 layer, a SiO2 layer is deposited, also using a PEALD process, which is then patterned using DUV lithography and dry etching. Following the resist strip, the oxide layer is used as a hardmask to pattern the Al2O3 and graphene stack. Careful control of these steps is critical to avoid etching into the underlying silicon waveguides and is made possible through the use of high-end tools typical of a CMOS foundry. After graphene patterning, a pre-metal dielectric (PMD) is deposited and planarized by CMP, following a standard CMOS flow. Finally, the contacts to both the graphene and the doped silicon layers are defined. The latter are fabricated first by etching contact holes using reactive ion etching (RIE), which are then filled using a CMOS Ti/TiN/W damascene metallization process. A similar damascene process was used for contacting the graphene layer. This is very different from most other work reported in the literature, where typically a lift-off process is used to define top contacts on graphene [25,35]. Although this provides a low-cost and simple method for contact fabrication, it is not compatible with industrial CMOS process flows, where damascene processes are preferred as they offer higher yield and uniformity.
As selectively stopping the via etching process directly on top of the graphene layer would be very challenging, we chose to over-etch the oxide layer and create edge contacts. Recent reports indicate that such edge contacts can offer lower contact resistance [40,41]. The contact holes of 250 nm diameter were patterned using DUV lithography and transferred into the PMD oxide by dry etching, selectively stopping on the Al2O3 capping layer. This step was then followed by resist stripping and etching of the Al2O3 and graphene layers, stopping in the underlying SiO2 layer. Etching of graphene creates fresh dangling bonds, which can form strong covalent bonds with the metal [42,43] that is subsequently deposited. However, with increasing time elapsed between the etching and metal deposition steps, these dangling bonds can bind with atmospheric water and oxygen and be passivated, hindering the formation of good contacts and increasing the resistance. The latter is detrimental to the high-speed response of the devices, as they are RC-limited. To study this effect in more detail, we kept this time delay as short as possible for all wafers except one, for which we introduced an intentional gap of two days between the two steps, as illustrated in Figure 2g. Finally, the integration flow was completed with a conventional Cu-oxide metal-1 module. The final cross-section of the device is shown in Figure 2h. In this TEM image, the graphene layer is located below the Al2O3 capping layer. Notably, although 6-inch graphene currently limits the number of available devices, the CMOS-compatible modules developed in this paper provide a 300 mm platform to scale up graphene-based photonic devices. Table 1 summarizes the complete design of experiment (DoE) defined to study the effects of planarization, soaking time and contact module optimization.
The results from the four wafers in this DoE, labelled wafers A, B, C and D, will be discussed in the following sections.

Raman characterization

Before any electro-optical measurement, the graphene quality was checked by Raman spectroscopy. Figure 3 summarizes the most relevant results, focusing on the effect of the extra planarization step by comparing wafer B (standard CMP) and wafer D (extra CMP). The measurements were carried out after completion of the full process flow, through the dielectric stacks of the metal-1 and PMD modules. From Figure 3a, the defect peak (D) is negligible for both wafers, confirming that the proposed integration process does not result in significant degradation of the graphene quality. After fitting the G and 2D peaks of spectra taken at different locations within the wafer with a single Lorentzian, their relative position is mapped in Figure 3b. The black and red lines, with slopes of 2.745 [44] and 0.722 [45], represent the effect of biaxial strain and doping, respectively. The results indicate that the doping level of graphene varies from 6 to 10 × 10¹² cm⁻² after the integration process. Wafer B suffers from more tensile strain (up to 0.14%) compared to wafer D (up to 0.07%). Both wafers exhibit a similar amount of compressive strain, which could be explained by the deposition of the Al2O3 capping layer [46,47]. Figure 3c shows the full width at half maximum (FWHM) of the 2D peak, with median values of 40 and 35 cm⁻¹ for wafers B and D, respectively. These results verify that the smoother surface provided by introducing the extra CMP step reduces strain effects and better preserves the quality of the graphene layer. [48]

EO static performance of inline EAMs

The EAMs are designed for operation in the C-band with transverse electric (TE) polarization and are coupled to an external laser source via grating couplers. In order to highlight the broadband nature of graphene, the wavelength was swept from 1530 nm to 1580 nm for devices with four distinct device lengths.
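The strain-doping separation in Figure 3b can be reproduced numerically: any measured (G, 2D) peak pair is decomposed into components along the strain trajectory (slope 2.745) and the doping trajectory (slope 0.722) by solving a 2×2 linear system. A minimal sketch of that decomposition; the reference peak positions for pristine graphene used here (1581.6 and 2676.9 cm⁻¹) are illustrative assumptions, not values quoted in this work:

```python
import numpy as np

# Trajectory slopes d(omega_2D)/d(omega_G) from the cited literature:
SLOPE_STRAIN = 2.745   # biaxial strain [44]
SLOPE_DOPING = 0.722   # carrier doping [45]

def decompose_raman_shift(omega_g, omega_2d, ref_g=1581.6, ref_2d=2676.9):
    """Split a measured (G, 2D) peak position into G-shift components
    (cm^-1) attributable to strain and to doping via a 2x2 linear solve."""
    shift = np.array([omega_g - ref_g, omega_2d - ref_2d])
    # Columns are unit G-shifts along the strain and doping trajectories.
    basis = np.array([[1.0, 1.0],
                      [SLOPE_STRAIN, SLOPE_DOPING]])
    strain_g, doping_g = np.linalg.solve(basis, shift)
    return strain_g, doping_g
```

As a sanity check, a point displaced purely along the strain line decomposes into a zero doping component, which is how the trajectories in Figure 3b separate the two effects.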
This wavelength range is restricted by the response of the grating couplers. The inset of Figure 4a depicts representative transmission spectra for all device lengths considered. By comparing with a neighboring straight waveguide without a modulator, the loss from the grating coupler can be excluded and the wavelength-dependent insertion loss (IL) can be determined. Figure 4a summarizes the IL for the unbiased devices measured across 17 dies of wafer D. The solid line represents the median value, while the band reflects the 25th to 75th percentiles for each active length. Next, we defined the normalized IL by comparing the peak transmission values of each curve and dividing by the device length, to capture the wafer-to-wafer variation in performance. Figure 4b shows a histogram of the normalized insertion loss for all four wafers. The mean values for wafers A, B, C, and D are 89 ± 7, 85 ± 12, 87 ± 7, and 87 ± 8 dB/mm, respectively. The comparable distributions in all four wafers suggest that graphene is transferred and patterned uniformly on each wafer, despite local variations in CVD graphene quality. Table 2 provides a summary of the loss measurement data. To evaluate the electro-optical (EO) response, a DC bias is then supplied to the devices. Figure 4c shows a typical transmission response curve, measured at 1550 nm wavelength and normalized with respect to a straight waveguide. The red line represents the median value obtained from four hundred 75 µm-long devices measured on wafer D, whereas the black lines are simulation results generated by a commercial solver (Lumerical) using three different graphene scattering rates. In the simulation, we set the doping levels of the silicon waveguide and graphene layer at 1.5 × 10¹⁸ cm⁻³ and 1 × 10¹³ cm⁻², respectively. We noticed that the curves generated by the simulation need a 1 dB downward shift in transmission and a −1.5 V shift in voltage to match the experimental results well.
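The normalized insertion loss is the peak-transmission IL divided by the active length; fitting IL against length across the four device lengths yields the same quantity while rejecting length-independent contributions. A sketch using the wafer D median values quoted above:

```python
import numpy as np

lengths_um = np.array([25.0, 50.0, 75.0, 100.0])
il_db = np.array([2.1, 4.4, 6.8, 8.8])   # wafer D median IL per length

# Per-device normalization: dB divided by active length in mm.
norm_il = il_db / (lengths_um * 1e-3)    # dB/mm

# Linear fit: the slope is the propagation loss of the active section;
# the intercept captures any length-independent contribution.
slope_db_per_um, intercept_db = np.polyfit(lengths_um, il_db, 1)
loss_db_per_mm = slope_db_per_um * 1e3
```

The fitted slope (90 dB/mm for these medians) lands inside the reported 87 ± 8 dB/mm distribution, and the intercept is near zero, consistent with the grating-coupler loss having been excluded.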
The voltage adjustment can be explained by fixed charges inside the gate oxide, while the additional loss could originate from residues remaining after graphene transfer. The minimum transmission occurs at a negative voltage, indicating p-type doping of the graphene. Figure 4d shows the wavelength-dependent extinction ratio (ER) for a 6 V peak-to-peak drive voltage. The solid line and shaded range indicate the median and 5-95 percentile for each of wafer D's four different active lengths. We clearly observe that the EO response is consistently broadband and that the ER scales uniformly with device length, resulting in median values of 1.3, 2.5, 3.8, and 5.0 dB for 25, 50, 75, and 100 µm-long devices, respectively, at 1550 nm wavelength. To compare the DC performance between wafers, the modulation depth (MD), defined as the ER normalized by the active length, is calculated. The difference in performance and uniformity between the wafers is visualized by the cumulative distribution function (CDF) shown in Figure 4e. The mean and standard deviation values of the MD are 32 ± 13, 39 ± 4, 49 ± 2, and 50 ± 4 dB/mm for wafers A, B, C, and D, respectively. The CDF curves in Figure 4e lead to three conclusions. (1) Although the maximum MD of wafer A and wafer B are comparable, wafer B has substantially lower variability. We ascribe this enhancement to the improved coverage of the capping layer, which minimizes the impact of the subsequent graphene integration processes. Overall, a longer soaking time and the resulting more uniform capping layer increased device yield by more than 20 percent and decreased the within-wafer standard deviation of the MD. (2) Comparing wafer B (standard CMP) and wafers C and D (extra CMP) shows that the improved planarization boosts the modulation depth by 25%.
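Since the MD is simply the ER divided by the active length, and the wafer comparison uses an empirical CDF, both reduce to a few lines. A sketch using the wafer D median ER quoted above (the per-device MD samples for the CDF are hypothetical):

```python
import numpy as np

def modulation_depth(er_db, length_um):
    """Extinction ratio normalized by active length, in dB/mm."""
    return er_db / (length_um * 1e-3)

def empirical_cdf(samples):
    """Sorted sample values and their cumulative probabilities."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

# Wafer D median: ER of 3.7 dB for a 75 um device -> ~49 dB/mm.
md_75 = modulation_depth(3.7, 75.0)

# Hypothetical per-device MD samples for a CDF curve as in Figure 4e.
x, p = empirical_cdf([48.1, 50.3, 49.5, 51.0, 47.9])
```

A steep empirical CDF corresponds to low within-wafer variability, which is how the curves in Figure 4e distinguish wafers C and D from wafer A.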
As indicated previously when discussing the Raman results, the smoother surface of wafers C and D reduces strain effects and better preserves the graphene material quality, resulting in a larger ER within the same voltage range. In addition, the homogeneous gate oxide provides a constant electric field and uniform tuning of the graphene Fermi level, resulting in a steeper modulation response. (3) Finally, comparing wafers C and D, we can conclude that the DC performance is unaffected by the time delay introduced in the contact module, since both wafers exhibit a nearly identical CDF. Figure 4f depicts a wafer mapping of the modulation depth MD, with black dashed circles indicating the area where graphene was transferred. We measured devices on dies within a circular area with a 75 mm radius from the center of the wafer. Both wafers C and D exhibit excellent uniformity across 17 dies and 400 tested devices. On average, a modulation depth MD = 50 dB/mm is recorded, which is comparable to lab-based champion devices employing similar CVD graphene [35]. Wafers A and B, on the other hand, clearly exhibit poorer uniformity and performance, which we attribute to the lower quality of the graphene capping and planarization, as discussed before. Table 2 summarizes the results for the extinction ratio and modulation response for all four wafers.

EO dynamic performance of inline EAMs

We performed S-parameter measurements to assess the frequency response of the devices. An RF small-signal ranging from 100 MHz to 30 GHz was applied to the graphene modulators. A DC bias of 1 V was selected to ensure modulation at the slope of the transmission curve. Figure 5a shows a representative result for a 25 µm-long device of wafers B, C and D. The 3 dB bandwidth for the wafer C device is 3.8 GHz, evidently much lower than for the other two devices (15.3 and 16.1 GHz for wafers B and D, respectively). Figure 5b shows the statistics for all devices measured.
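The EO bandwidths quoted here are read off the small-signal S21 trace as the frequency where the response falls 3 dB below its low-frequency value. A sketch of that extraction using linear interpolation between measurement points, demonstrated on a synthetic first-order roll-off (the 15 GHz pole is an illustrative value, not a fit to our data):

```python
import numpy as np

def bandwidth_3db(freq_ghz, s21_db):
    """Frequency where the response drops 3 dB below its first point."""
    rel = np.asarray(s21_db) - s21_db[0]
    idx = int(np.argmax(rel < -3.0))          # first sample below -3 dB
    if idx == 0:
        raise ValueError("response never crosses -3 dB")
    f1, f2 = freq_ghz[idx - 1], freq_ghz[idx]
    r1, r2 = rel[idx - 1], rel[idx]
    # Linear interpolation between the two bracketing points.
    return f1 + (-3.0 - r1) * (f2 - f1) / (r2 - r1)

# Synthetic first-order response with a 15 GHz pole, 100 MHz to 30 GHz.
f = np.linspace(0.1, 30.0, 300)
s21 = 10.0 * np.log10(1.0 / (1.0 + (f / 15.0) ** 2))
bw = bandwidth_3db(f, s21)
```

For a single-pole response the extracted value sits just below the pole frequency, since 10·log10(1/2) is −3.01 dB rather than exactly −3 dB.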
These reveal that wafer C, for which a delay was introduced between the contact etch and the metallization process, consistently has a lower EO bandwidth than the other two wafers for all four device lengths. This suggests that the time delay during fabrication hinders good bonding between metal and graphene, resulting in a higher contact resistance. This will be discussed further in the next section. For wafer D, median values of 15.3, 14.3, 12.4, and 11.3 GHz are measured for 25, 50, 75, and 100 µm-long devices, respectively, comparable with lab-based hero devices with similar design and graphene quality. To understand this length dependence better and get more insight into these devices, we further analyzed the S11 response for the wafer C and D devices. Since the dynamic response of our graphene modulator is primarily limited by the electrical RC constant [35], we continue our analysis by fitting the S11 response to the equivalent circuit model shown in the inset of Figure 5c. The graphene-oxide-silicon (GOS) structure can be considered as a lumped device with a capacitance C_gos. The total resistance R_gos of the device includes both the contact and sheet resistances of the silicon and graphene. R_si, C_ox, and C_m are parasitic components, representing the resistance of the substrate, the capacitance of the buried oxide layer and the capacitance of the metal pad, respectively. Figure 5c shows the S11 response for a 25 µm-long device of wafer D, along with the result of the fitting process. From these, the total resistance R_gos and capacitance C_gos can be determined. Figure 5d summarizes the extracted device capacitance for wafer D, which served as the basis for this analysis. As anticipated, C_gos scales linearly with device length, resulting in wafer median values of 27, 62, 104, and 140 fF for devices that are 25, 50, 75, and 100 µm long, respectively. When evaluating wafer C, the range of the capacitance was constrained to match the results obtained from wafer D.
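The fitting described above matches the measured S11 to the impedance of the equivalent circuit. A simplified sketch of the model's reflection coefficient, keeping only the dominant series R_gos-C_gos branch and dropping the parasitic R_si, C_ox and C_m elements (that simplification is mine, not the paper's full model):

```python
import numpy as np

def s11_series_rc(freq_hz, r_gos_ohm, c_gos_f, z0=50.0):
    """Reflection coefficient of a series RC load on a z0 line."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z = r_gos_ohm + 1.0 / (1j * w * c_gos_f)   # series R + C impedance
    return (z - z0) / (z + z0)

# 75 um device, wafer D median fit values: R_gos ~ 49 Ohm, C_gos ~ 102.8 fF.
gamma = s11_series_rc(np.logspace(8, 10.5, 50), 49.0, 102.8e-15)
```

In a full fit the parasitic branch would be added in parallel and the model evaluated against the measured complex S11 with a least-squares optimizer; the limits are still instructive: at low frequency the capacitor blocks and |S11| approaches 1, while at high frequency the capacitor shorts and S11 approaches (R − 50)/(R + 50).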
This constrained the fit to reasonable values and prevented unrealistic outcomes. Figure 5e shows a wafer median value for the resistance R_gos of 711, 271, and 146 Ω for wafer C and 263, 84, and 47 Ω for wafer D, for 25, 50, and 75 µm-long devices, respectively. The smaller resistance in wafer D confirms that the limited time between the oxide etch and the metal deposition better preserves the graphene contact quality, resulting in a larger EO bandwidth. Lastly, we recalculated the electrical bandwidth of the devices based on the fitting results. Wafer D's intrinsic RC bandwidth, considering only R_gos and C_gos, attains wafer median values of 22, 31, 32, and 30 GHz for devices measuring 25, 50, 75, and 100 µm in length, respectively. However, when the 50 Ω load resistance of the vector network analyzer (VNA) is considered, the calculated values are reduced to 19, 19, 16 and 13 GHz. These values are close to the final calculation, which takes into account all the other parasitic components (C_ox and R_si). Figure 5f summarizes the calculation for the 75 µm-long device, showing the intrinsic 3 dB bandwidth, 1/(2π R_gos C_gos), extracted from the S11 measurements, the effect of the 50 Ω load resistance, the effect of the parasitics, and finally the measured electro-optical 3 dB bandwidth. Table 3 provides this information for the other lengths. In general, the final electrical bandwidth derived from the S11 data is close to our experimentally measured EO bandwidth, demonstrating the accuracy of our equivalent circuit model. Following the discussion above, the EO bandwidth of our SLG EAM devices is mainly limited by the RC constant. Reducing the capacitance and resistance of the devices is key towards realizing a high-speed EAM. In recent work, large-area single-crystal graphene with 7.3 × 10³ cm² V⁻¹ s⁻¹ mobility [22] and extraordinarily low contact resistance (23 Ω at room temperature) using a Ti-graphene edge contact configuration [40] have been demonstrated, which would allow for devices with lower sheet and contact resistance in the future. Reducing the capacitance, on the other hand, is not as straightforward. Reducing the capacitor area or increasing the equivalent oxide thickness (EOT) of the gate oxide will both result in a lower capacitance but lead to a trade-off between bandwidth, modulation efficiency and drive voltage. Modulation efficiency and speed should be balanced for modulators driven at CMOS-compatible voltages (below 2 V for conventional CMOS circuitry). A possible solution to this conundrum is to enhance the mode interaction with graphene. By improving the interaction of light with graphene using TM polarization [26,28] or by constructing a double-layer structure [9,11,13,27], graphene-based modulators can modulate light effectively even for shorter devices. Our high-yield wafer-scale integration approach is ideal for the systematic exploration of these potential device architectures.
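The bandwidth cascade in Figure 5f follows directly from f_3dB = 1/(2πRC): the intrinsic value uses R_gos alone, and the driver-loaded value adds the 50 Ω source resistance of the VNA in series. A quick check against the 75 µm median fit values (R_gos = 49 Ω, C_gos = 102.8 fF); small deviations from Table 3 are expected because the table reports per-device medians:

```python
import math

def f3db_ghz(r_ohm, c_farad):
    """First-order RC 3 dB bandwidth in GHz."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad) / 1e9

R_GOS, C_GOS, R_LOAD = 49.0, 102.8e-15, 50.0

bw_intrinsic = f3db_ghz(R_GOS, C_GOS)           # ~31.6 GHz (Table 3: 32.2)
bw_driven = f3db_ghz(R_GOS + R_LOAD, C_GOS)     # ~15.6 GHz (Table 3: 15.8)
```

This also makes the resistance argument quantitative: because the 50 Ω source adds directly to R_gos, halving the device resistance helps most for the short devices where R_gos is largest.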
Conclusions

To summarize, we have demonstrated the integration of single-layer graphene electro-absorption modulators in a CMOS fabrication environment. Damascene contacts and hardmask lithography were used to build the wafer-scale devices in accordance with industry standards. Three critical processing steps were also studied in this work to determine their effect on device performance. We discovered that the surface flatness has a significant impact on the graphene quality and the electric field homogeneity, both of which affect the modulation depth of the final device. Furthermore, a uniform capping layer reduces the impact of later integration steps on the graphene layer, resulting in increased device yield. Finally, the time delay involved in constructing the damascene contacts affects the contact resistance and the 3 dB bandwidth of the EAMs. After optimizing these three critical processing steps and implementing a CMOS-compatible dedicated integration approach, the device yield exceeds 95% with loss, extinction ratio, and 3 dB bandwidth values comparable to CVD graphene devices previously demonstrated in the lab [35]. We anticipate that the knowledge presented in this study can be extended and applied to a sophisticated building-block library of graphene-based optoelectronic devices that includes modulators, photodetectors, and sensors. This work will underpin the industrial adoption of graphene-based photonic devices, paving the way for next-generation datacom and telecommunications applications.

Figure 1: Proposed integration flow.
(a) Waveguide patterning, surface planarization, and Si implantation steps; (b) wafer-scale graphene transfer; (c) graphene encapsulation; (d) graphene patterning and damascene contacts to p++ Si; (e) graphene damascene contacts; (f) final Cu metal lines.

Figure 2: (a) Comparison of the step height remaining after surface planarization. Within 170 measured devices, the mean and standard deviation values of the step height are 4.3 ± 4.1 nm and 0.5 ± 0.3 nm for the standard CMP process and the process with the extra CMP step, respectively. Cross-sectional TEM images taken at the waveguide edge for the wafers with (b) the standard CMP and (c) the extra CMP module. The standard planarization process results in a considerably higher remaining step and a non-uniform gate oxide thickness. (d) Top-down image of a 300 mm wafer with 6-inch graphene transferred at the center. Impact of soaking time: representative top-down SEM images of wafers with (e) short and (f) long soaking times. Red arrows indicate the voids on top of the surface. The wrinkles in the graphene layer, also visible in the pictures, are induced during graphene growth and transfer. (g) Cross-sectional device scheme and description of the study on graphene contacts. (h) Cross-sectional TEM of the final device.

Figure 3: (a) Representative Raman spectra for wafers B and D after the full integration process.
(b) Position and (c) FWHM of the 2D peak as a function of the position of the G peak. The black and red lines in Figure 3b are the theoretical trajectories indicating the effect of doping and biaxial strain, respectively. The black dot represents unstrained and undoped graphene.

Figure 4: (a) Insertion loss as a function of wavelength for 25, 50, 75, and 100 µm-long devices in wafer D. The solid lines indicate the median value while the shaded areas show the 25-75 percentile. The inset shows representative transmission spectra of unbiased devices with different lengths. (b) Histogram of the normalized insertion loss for all four wafers. (c) Normalized transmission of 75 µm-long devices of wafer D as a function of applied bias. The red solid line shows the median value of the experimental results for 400 devices; the black dashed lines represent simulation results for three different scattering rates. (d) Extinction ratio as a function of wavelength for 25, 50, 75, and 100 µm-long devices (wafer D). The solid lines represent the median value while the shaded areas show the 5-95 percentile of the results. (e) Cumulative distribution function and (f) wafer mapping of the modulation depth at 1550 nm wavelength for all four wafers.

Figure 5: (a) Representative S21 response and (b) box plots of the extracted EO bandwidth for wafers B, C and D at a DC bias of 1 V. The inserted table gives the median value for each device length and each wafer. (c) Representative S11 response and fitting results. The inset shows the equivalent circuit model of our structure, where R_si, C_ox, R_gos, C_gos and C_m represent the silicon resistance, oxide capacitance, GOS resistance, GOS capacitance and metal capacitance, respectively. (d) Box plots of the extracted GOS capacitance C_gos for wafer D. (e) Box plots of the extracted GOS resistance R_gos for 25, 50, and 75 µm-long devices in wafers C and D. R² values for the fits were larger than 0.9 and 0.98, respectively. The table in the inset shows the median values.
(f) Bandwidth estimated from the fitting results, and bandwidth measured from S21, for wafer D. The equations used to calculate these values are shown inside the figure.

Table 1: DoE summary of the four wafers reported in this paper.

DoE                        Wafer A        Wafer B        Wafer C        Wafer D
Surface planarization      Standard STI   Standard STI   Extra CMP      Extra CMP
Encapsulation soaking      Short          Long           Long           Long
Contact metal deposition   No delay       No delay       2 days delay   No delay

Table 2: Summary of the static performance for all four wafers with four different active lengths. The units of IL (ER) and normalized IL (modulation depth) are dB and dB/mm, respectively.

Wafer   IL-25µm     IL-50µm     IL-75µm     IL-100µm    Normalized IL   Observed devices
A       2.2 ± 0.2   4.4 ± 0.3   6.6 ± 0.3   8.8 ± 0.2   89 ± 7          144
B       2.4 ± 1.1   4.2 ± 0.5   6.3 ± 1.4   8.1 ± 0.5   85 ± 12         155
C       2.0 ± 0.7   4.1 ± 0.5   6.3 ± 0.5   8.5 ± 0.5   87 ± 7          408
D       2.1 ± 0.4   4.4 ± 0.7   6.8 ± 1.5   8.8 ± 1.7   87 ± 8          400

Wafer   ER-25µm     ER-50µm     ER-75µm     ER-100µm    MD              Observed devices
A       0.9 ± 0.3   1.6 ± 0.7   2.0 ± 1.1   2.9 ± 1.6   31 ± 14         100
B       1.0 ± 0.2   2.0 ± 0.2   3.1 ± 0.3   4.1 ± 0.4   41 ± 5          155
C       1.2 ± 0.1   2.5 ± 0.1   3.7 ± 0.2   4.9 ± 0.1   49 ± 2          408
D       1.2 ± 0.2   2.5 ± 0.1   3.7 ± 0.3   5.0 ± 0.2   50 ± 4          400
Table 3: Summary of the outcomes from the S-parameter analysis for wafer D with four different active lengths.

S-parameter outcome     Unit   25 µm         50 µm         75 µm          100 µm
Measured EO BW          GHz    15.1 ± 1.8    14.1 ± 1.4    12.6 ± 0.9     11.2 ± 0.7
Fitting result: C_gos   fF     26.7 ± 1.5    62.1 ± 3.7    102.8 ± 4.7    139.2 ± 7.9
Fitting result: R_gos   Ω      280 ± 61      86 ± 18       49 ± 6         38 ± 4
Intrinsic BW            GHz    22.0 ± 3.1    30.6 ± 4.0    32.2 ± 0.4     30.5 ± 2.6
Intrinsic + driver      GHz    18.5 ± 2.2    19.0 ± 1.3    15.8 ± 0.3     13.1 ± 0.8
Final estimated BW      GHz    16.8 ± 2.0    17.2 ± 1.2    14.5 ± 0.4     12.2 ± 0.7
Observed devices               29            29            32             28

Acknowledgements
We acknowledge funding from the EU Horizon 2020 research and innovation program under grant agreement no. 881603 (Graphene Flagship Core3) and from imec's industry affiliation R&D program on Optical I/O. We thank H.C. Tsai and S. Sergeant for their assistance with the Raman measurements.

Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.

References
[1] A. C. Neto, F. Guinea, N. M. Peres, K. S. Novoselov, A. K. Geim, Reviews of Modern Physics 2009, 81, 109.
[2] K. F. Mak, M. Y. Sfeir, Y. Wu, C. H. Lui, J. A. Misewich, T. F. Heinz, Physical Review Letters 2008, 101, 196405.
[3] F. Xia, H. Yan, P. Avouris, Proceedings of the IEEE 2013, 101, 1717.
[4] F. Bonaccorso, Z. Sun, T. Hasan, A. Ferrari, Nature Photonics 2010, 4, 611.
[5] R. R. Nair, P. Blake, A. N. Grigorenko, K. S. Novoselov, T. J. Booth, T. Stauber, N. M. Peres, A. K. Geim, Science 2008, 320, 1308.
[6] L. Banszerus, M. Schmitz, S. Engels, J. Dauber, M. Oellers, F. Haupt, K. Watanabe, T. Taniguchi, B. Beschoten, C. Stampfer, Science Advances 2015, 1, e1500222.
[7] P. Zomer, S. Dash, N. Tombros, B. van Wees, Applied Physics Letters 2011, 99, 232104.
[8] M. Romagnoli, V. Sorianello, M. Midrio, F. H. Koppens, C. Huyghebaert, D. Neumaier, P. Galli, W. Templ, A. D'Errico, A. C. Ferrari, Nature Reviews Materials 2018, 3, 392.
[9] H. Agarwal, B. Terrés, L. Orsini, A. Montanaro, V. Sorianello, M. Pantouvaki, K. Watanabe, T. Taniguchi, D. Van Thourhout, M. Romagnoli, et al., Nature Communications 2021, 12, 1.
[10] S. Goossens, G. Navickaite, C. Monasterio, S. Gupta, J. J. Piqueras, R. Pérez, G. Burwell, I. Nikitskiy, T. Lasanta, T. Galán, et al., Nature Photonics 2017, 11, 366.
[11] C. T. Phare, Y.-H. Daniel Lee, J. Cardenas, M. Lipson, Nature Photonics 2015, 9, 511.
[12] S. Schuler, J. E. Muench, A. Ruocco, O. Balci, D. van Thourhout, V. Sorianello, M. Romagnoli, K. Watanabe, T. Taniguchi, I. Goykhman, et al., Nature Communications 2021, 12, 1.
[13] M. A. Giambra, V. Sorianello, V. Miseikis, S. Marconi, A. Montanaro, P. Galli, S. Pezzini, C. Coletti, M. Romagnoli, Optics Express 2019, 27, 20145.
[14] G. T. Reed, G. Mashanovich, F. Y. Gardes, D. Thomson, Nature Photonics 2010, 4, 518.
[15] D. Thomson, A. Zilkie, J. E. Bowers, T. Komljenovic, G. T. Reed, L. Vivien, D. Marris-Morini, E. Cassan, L. Virot, J.-M. Fédéli, et al., Journal of Optics 2016, 18, 073003.
[16] A. Rahim, T. Spuesens, R. Baets, W. Bogaerts, Proceedings of the IEEE 2018, 106, 2313.
[17] P. P. Absil, P. De Heyn, H. Chen, P. Verheyen, G. Lepage, M. Pantouvaki, J. De Coster, A. Khanna, Y. Drissi, D. Van Thourhout, et al., in Silicon Photonics X, volume 9367, SPIE, 2015, 166-171.
[18] K. Verguts, B. Vermeulen, N. Vrancken, K. Schouteden, C. Van Haesendonck, C. Huyghebaert, M. Heyns, S. De Gendt, S. Brems, The Journal of Physical Chemistry C 2016, 120, 297.
[19] M. E. Ramon, A. Gupta, C. Corbet, D. A. Ferrer, H. C. Movva, G. Carpenter, L. Colombo, G. Bourianoff, M. Doczy, D. Akinwande, et al., ACS Nano 2011, 5, 7198.
[20] K. Verguts, Y. Defossez, A. Leonhardt, J. De Messemaeker, K. Schouteden, C. Van Haesendonck, C. Huyghebaert, S. De Gendt, S. Brems, ECS Journal of Solid State Science and Technology 2018, 7, M195.
[21] H. Zhou, W. J. Yu, L. Liu, R. Cheng, Y. Chen, X. Huang, Y. Liu, Y. Wang, Y. Huang, X. Duan, Nature Communications 2013, 4, 1.
[22] M. Wang, M. Huang, D. Luo, Y. Li, M. Choe, W. K. Seong, M. Kim, S. Jin, M. Wang, S. Chatterjee, et al., Nature 2021, 596, 519.
[23] F. Qing, Y. Zhang, Y. Niu, R. Stehle, Y. Chen, X. Li, Nanoscale 2020, 12, 10890.
[24] Y. Lee, S. Bae, H. Jang, S. Jang, S.-E. Zhu, S. H. Sim, Y. I. Song, B. H. Hong, J.-H. Ahn, Nano Letters 2010, 10, 490.
[25] M. A. Giambra, V. Miseikis, S. Pezzini, S. Marconi, A. Montanaro, F. Fabbri, V. Sorianello, A. C. Ferrari, C. Coletti, M. Romagnoli, ACS Nano 2021, 15, 3171.
[26] M. Liu, X. Yin, E. Ulin-Avila, B. Geng, T. Zentgraf, L. Ju, F. Wang, X. Zhang, Nature 2011, 474, 64.
[27] M. Liu, X. Yin, X. Zhang, Nano Letters 2012, 12, 1482.
[28] Y. Hu, M. Pantouvaki, J. Van Campenhout, S. Brems, I. Asselberghs, C. Huyghebaert, P. Absil, D. Van Thourhout, Laser & Photonics Reviews 2016, 10, 307.
[29] A. Pospischil, M. Humer, M. M. Furchi, D. Bachmann, R. Guider, T. Fromherz, T. Mueller, Nature Photonics 2013, 7, 892.
[30] X. Gan, R.-J. Shiue, Y. Gao, I. Meric, T. F. Heinz, K. Shepard, J. Hone, S. Assefa, D. Englund, Nature Photonics 2013, 7, 883.
[31] X. Wang, Z. Cheng, K. Xu, H. K. Tsang, J.-B. Xu, Nature Photonics 2013, 7, 888.
[32] D. Schall, C. Porschatis, M. Otto, D. Neumaier, Journal of Physics D: Applied Physics 2017, 50, 124004.
[33] V. Mišeikis, C. Coletti, Applied Physics Letters 2021, 119, 050501.
[34] C. H. Wu, S. Brems, D. Yudistira, D. Cott, A. Milenin, K. Vandersmissen, A. Maestre, A. Centeno, J. Van Campenhout, C. Huyghebaert, et al., in 2021 Symposium on VLSI Circuits, IEEE, 2021, 1-2.
[35] C. Alessandri, I. Asselberghs, S. Brems, C. Huyghebaert, J. Van Campenhout, D. Van Thourhout, M. Pantouvaki, Japanese Journal of Applied Physics 2020, 59, 052008.
[36] K. Seshan, Handbook of Thin Film Deposition, William Andrew, 2012.
[37] X. Wang, S. M. Tabakman, H. Dai, Journal of the American Chemical Society 2008, 130, 8152.
[38] K. Kim, H.-B.-R. Lee, R. W. Johnson, J. T. Tanskanen, N. Liu, M.-G. Kim, C. Pang, C. Ahn, S. F. Bent, Z. Bao, Nature Communications 2014, 5, 1.
[39] B. Karasulu, R. H. Vervuurt, W. M. Kessels, A. A. Bol, Nanoscale 2016, 8, 19829.
[40] H.-Y. Park, W.-S. Jung, D.-H. Kang, J. Jeon, G. Yoo, Y. Park, J. Lee, Y. H. Jang, J. Lee, S. Park, et al., Advanced Materials 2016, 28, 864.
[41] S. Lee, H. Choi, I. Moon, H. Shin, K. Watanabe, T. Taniguchi, W. J. Yoo, Advanced Electronic Materials 2022, 8, 2101169.
[42] J. A. Robinson, M. LaBella III, K. A. Trumbull, X. Weng, R. Cavelero, T. Daniels, Z. Hughes, M. Hollander, M. Fanton, D. Snyder, ACS Nano 2010, 4, 2667.
[43] A. Meersha, H. B. Variar, K. Bhardwaj, A. Mishra, S. Raghavan, N. Bhat, M. Shrivastava, in 2016 IEEE International Electron Devices Meeting (IEDM), IEEE, 2016, 5-3.
[44] A. Das, S. Pisana, B. Chakraborty, S. Piscanec, S. K. Saha, U. V. Waghmare, K. S. Novoselov, H. R. Krishnamurthy, A. K. Geim, A. C. Ferrari, A. K. Sood, Nature Nanotechnology 2008, 3, 210.
[45] J. E. Lee, G. Ahn, J. Shim, Y. S. Lee, S. Ryu, Nature Communications 2012, 3, 1.
[46] J. A. Robinson, M. LaBella, M. Zhu, M. Hollander, R. Kasarda, Z. Hughes, K. Trumbull, R. Cavalero, D. Snyder, Applied Physics Letters 2011, 98, 053103.
[47] L. Zheng, X. Cheng, D. Cao, Z. Wang, C. Xia, Y. Yu, D. Shen, Applied Physics Letters 2014, 104, 023112.
[48] J. A. Robinson, M. Wetherington, J. L. Tedesco, P. M. Campbell, X. Weng, J. Stitt, M. A. Fanton, E. Frantz, D. Snyder, B. L. VanMil, G. G. Jernigan, R. L. Myers-Ward, C. R. Eddy, D. K. Gaskill, Nano Letters, 9, 2873.
Myers-Ward, C. R. Eddy, D. K. Gaskill, Nano Letters 2009, 9, 8 2873.
[]
[ "MetaPAD: Meta Pa ern Discovery from Massive Text Corpora", "MetaPAD: Meta Pa ern Discovery from Massive Text Corpora" ]
[ "Meng Jiang \nDepartment of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA\n", "Jingbo Shang \nDepartment of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA\n", "Taylor Cassidy \nComputational & Information Sciences Directorate\nArmy Research Laboratory\nAdelphiMDUSA\n", "Xiang Ren \nDepartment of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA\n", "Lance M Kaplan \nComputational & Information Sciences Directorate\nArmy Research Laboratory\nAdelphiMDUSA\n", "Timothy P Hanra Y [email protected] \nComputational & Information Sciences Directorate\nArmy Research Laboratory\nAdelphiMDUSA\n", "Jiawei Han [email protected] \nDepartment of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA\n" ]
[ "Department of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA", "Department of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA", "Computational & Information Sciences Directorate\nArmy Research Laboratory\nAdelphiMDUSA", "Department of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA", "Computational & Information Sciences Directorate\nArmy Research Laboratory\nAdelphiMDUSA", "Computational & Information Sciences Directorate\nArmy Research Laboratory\nAdelphiMDUSA", "Department of Computer Science\nUniversity of Illinois Urbana-Champaign\nILUSA" ]
[]
Mining textual patterns in news, tweets, papers, and many other kinds of text corpora has been an active theme in text mining and NLP research. Previous studies adopt a dependency parsing-based pattern discovery approach. However, the parsing results lose rich context around entities in the patterns, and the process is costly for a corpus of large scale. In this study, we propose a novel typed textual pattern structure, called meta pattern, which is extended to a frequent, informative, and precise subsequence pattern in certain context. We propose an efficient framework, called MetaPAD, which discovers meta patterns from massive corpora with three techniques: (1) it develops a context-aware segmentation method to carefully determine the boundaries of patterns with a learnt pattern quality assessment function, which avoids costly dependency parsing and generates high-quality patterns; (2) it identifies and groups synonymous meta patterns from multiple facets: their types, contexts, and extractions; and (3) it examines type distributions of entities in the instances extracted by each group of patterns, and looks for appropriate type levels to make discovered patterns precise. Experiments demonstrate that our proposed framework discovers high-quality typed textual patterns efficiently from different genres of massive corpora and facilitates information extraction. [Figure 1 caption fragment: Synonymous group of meta patterns (on "person:age") by segmentation, pattern grouping, and adjusting type level. (b) MetaPAD finds meta patterns consisting of both entity types and data types like $D. It also adjusts the type level for appropriate granularity.]
10.1145/3097983.3098105
[ "https://arxiv.org/pdf/1703.04213v2.pdf" ]
15,764,969
1703.04213
12a45e195899f166c8b64abc76189e667b497509
MetaPAD: Meta Pattern Discovery from Massive Text Corpora

Meng Jiang, Jingbo Shang, Xiang Ren, Jiawei Han (Department of Computer Science, University of Illinois Urbana-Champaign, IL, USA); Taylor Cassidy, Lance M. Kaplan, Timothy P. Hanratty (Computational & Information Sciences Directorate, Army Research Laboratory, Adelphi, MD, USA)

Mining textual patterns in news, tweets, papers, and many other kinds of text corpora has been an active theme in text mining and NLP research. Previous studies adopt a dependency parsing-based pattern discovery approach. However, the parsing results lose rich context around entities in the patterns, and the process is costly for a corpus of large scale. In this study, we propose a novel typed textual pattern structure, called meta pattern, which is extended to a frequent, informative, and precise subsequence pattern in certain context.
We propose an efficient framework, called MetaPAD, which discovers meta patterns from massive corpora with three techniques: (1) it develops a context-aware segmentation method to carefully determine the boundaries of patterns with a learnt pattern quality assessment function, which avoids costly dependency parsing and generates high-quality patterns; (2) it identifies and groups synonymous meta patterns from multiple facets: their types, contexts, and extractions; and (3) it examines type distributions of entities in the instances extracted by each group of patterns, and looks for appropriate type levels to make discovered patterns precise. Experiments demonstrate that our proposed framework discovers high-quality typed textual patterns efficiently from different genres of massive corpora and facilitates information extraction.

[Figure 1 caption fragment: Synonymous group of meta patterns (on "person:age") by segmentation, pattern grouping, and adjusting type level. (b) MetaPAD finds meta patterns consisting of both entity types and data types like $D. It also adjusts the type level for appropriate granularity.]

1 INTRODUCTION

Discovering textual patterns from text data is an active research theme [5,8,12,14,29], with broad applications such as attribute extraction [13,31,33,34], aspect mining [9,18,20], and slot filling [40,41]. Moreover, a data-driven exploration of efficient textual pattern mining may also have strong implications on the development of efficient methods for NLP tasks on massive text corpora. Traditional methods of textual pattern mining have made large pattern collections publicly available, but very few can extract arbitrary patterns with semantic types. Hearst patterns like "NP such as NP, NP, and NP" were proposed and widely used to acquire the hyponymy lexical relation [17]. TextRunner [5] and ReVerb [12] are blind to the typing information in their lexical patterns; ReVerb constrains patterns to verbs or verb phrases that end with prepositions.
NELL [8] learns to extract noun-phrase pairs based on a fixed set of prespecified relations with entity types like country:president → $C × $P. One interesting exception is the SOL patterns proposed by Nakashole et al. in PATTY [29]. PATTY relies on the Stanford dependency parser [10] and harnesses the typing information from a knowledge base [4,6,30] or a typing system [21,28] (see Figure 1). First, a good typed textual pattern should be of informative, self-contained context. The dependency parsing in PATTY loses the rich context around the entities, such as the word "president" next to "Barack Obama" in sentence #1, and "president" and "prime minister" in #2 (see Figure 1(a)). Moreover, the SOL patterns are restricted to the dependency path between two entities but do not represent data types like $D for "55" (see Figure 1(b)) and $M $D $Y. Furthermore, the parsing process is costly: its complexity is cubic in the length of the sentence [24], which is too costly for news and scientific corpora that often have long sentences. We expect an efficient textual pattern mining method for massive corpora. Second, synonymous textual patterns are expected to be identified and grouped for handling pattern sparseness and for aggregating their extractions for extending knowledge bases and question answering. As shown in Figure 1, country:president and person:age are two synonymous pattern groups. However, the process of finding such synonymous pattern groups is non-trivial. Multi-faceted information should be considered: (1) synonymous patterns should share the same entity types or data types; (2) even for the same entity (e.g., Barack Obama), one should allow it to be grouped and generalized differently (e.g., ⟨United States, Barack Obama⟩ vs. ⟨Barack Obama, 55⟩); and (3) shared words (e.g., "president") or semantically similar contextual words (e.g., "age" and "-year-old") may play an important role in synonymous pattern grouping.
PATTY does not explore the multi-faceted information at grouping synonymous patterns, and thus cannot aggregate such extractions. Third, the entity types in the textual patterns should be precise. In different patterns, even the same entity can be typed at different type levels. For example, the entity "Barack Obama" should be typed at a fine-grained level ($P) in the patterns generated from sentences #1-2, and it should be typed at a coarse-grained level ($P) in the patterns from sentences #3-4. However, PATTY does not look for appropriate granularity of the entity types. In this paper, we propose a new typed textual pattern called meta pattern, which is defined as follows.

Definition (Meta Pattern). A meta pattern refers to a frequent, informative, and precise subsequence pattern of entity types (e.g., $P, $P, $C) or data types (e.g., $D, $M, $Y), words (e.g., "politician", "age") or phrases (e.g., "prime minister"), and possibly punctuation marks (e.g., ",", "("), which serves as an integral semantic unit in certain context.

We study the problem of mining meta patterns and grouping synonymous meta patterns. Why mine meta patterns and group them into synonymous meta pattern groups? Because mining and grouping meta patterns into synonymous groups may facilitate information extraction and turning unstructured data into structures. For example, given a sentence from a news corpus, "President Blaise Compaoré's government of Burkina Faso was founded …", if we have discovered the meta pattern "president $P's government of $C", we can recognize and type new entities (i.e., type "Blaise Compaoré" as a $P and "Burkina Faso" as a $C), which previously requires human expertise on language rules or heavy annotations for learning [27]. If we have grouped the pattern with synonymous patterns like "$C president $P", we can merge the fact tuple ⟨Burkina Faso, president, Blaise Compaoré⟩ into the large collection of facts of the attribute type country:president.
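To make this extraction use-case concrete, here is a minimal sketch (our illustration, not the authors' code; the token layout and helper name are hypothetical) of matching one meta pattern against a typed token sequence:

```python
def match_pattern(pattern, tokens):
    """Slide `pattern` over `tokens`; on a match, return the surface
    forms bound to the pattern's type slots (tokens starting with '$').
    `tokens` is a list of (surface_form, type_tag_or_None) pairs."""
    n, m = len(tokens), len(pattern)
    for start in range(n - m + 1):
        bindings = []
        for p, (word, typ) in zip(pattern, tokens[start:start + m]):
            if p.startswith("$"):        # a type slot, e.g. "$P"
                if typ != p:
                    break
                bindings.append(word)
            elif p != word:              # a literal context word
                break
        else:
            return bindings
    return None

# Typed rendering of: "President Blaise Compaoré's government of Burkina Faso ..."
sentence = [("president", None), ("Blaise Compaoré", "$P"), ("'s", None),
            ("government", None), ("of", None), ("Burkina Faso", "$C"),
            ("was", None), ("founded", None)]
pattern = ["president", "$P", "'s", "government", "of", "$C"]
print(match_pattern(pattern, sentence))   # ['Blaise Compaoré', 'Burkina Faso']
```

Aggregated over a massive corpus, such matches yield fact tuples like ⟨Burkina Faso, president, Blaise Compaoré⟩ for the attribute type country:president.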
To systematically address the challenges of mining meta patterns and grouping synonymous patterns, we develop a novel framework called MetaPAD (Meta PAttern Discovery). Instead of working on every individual sentence, our MetaPAD leverages massive sentences in which redundant patterns are used to express attributes or relations of massive instances. First, MetaPAD generates meta pattern candidates using efficient sequential pattern mining, learns a quality assessment function of the pattern candidates with a rich set of domain-independent contextual features for intuitive ideas (e.g., frequency, informativeness), and then mines the quality meta patterns by assessment-led context-aware segmentation (see Sec. 4.1). Second, MetaPAD formulates the grouping process of synonymous meta patterns as a learning task, and solves it by integrating features from multiple facets including entity types, data types, pattern context, and extracted instances (see Sec. 4.2). Third, MetaPAD examines the type distributions of entities in the extractions from every meta pattern group, and looks for the most appropriate type level that the patterns fit. This includes both top-down and bottom-up schemes that traverse the type ontology for the patterns' preciseness (see Sec. 4.3).
The major contributions of this paper are as follows: (1) we propose a new definition of typed textual pattern, called meta pattern, which is more informative, precise, and efficient in discovery than the state-of-the-art SOL pattern; (2) we develop an efficient meta-pattern mining framework, MetaPAD, of three components: generating quality meta patterns by context-aware segmentation, grouping synonymous meta patterns, and adjusting entity-type levels for appropriate granularity in the pattern groups; and (3) our experiments on three datasets of different genres (news, tweets, and a biomedical corpus) demonstrate that MetaPAD not only generates high-quality patterns but also achieves significant improvement over the state-of-the-art in information extraction.

2 RELATED WORK

In this section, we summarize existing systems and methods that are related to the topic of this paper. TextRunner [5] extracts strings of words between entities in a text corpus, and clusters and simplifies these word strings to produce relation-strings. ReVerb [12] constrains patterns to verbs or verb phrases that end with prepositions. However, the methods in the TextRunner/ReVerb family generate patterns of frequent relational strings/phrases without entity information. Another line of work, open information extraction systems [3,23,37,39], is supposed to extract verbal expressions for identifying arguments. This is less related to our task of discovering textual patterns. Google's Biperpedia [14,15] generates E-A patterns (e.g., "A of E" and "E's A") from users' fact-seeking queries (e.g., "president of united states" and "barack obama's wife") by replacing the entity with "E" and the noun-phrase attribute with "A".
ReNoun [40] generates S-A-O patterns (e.g., "S's A is O" and "O, A of S,") from a human-annotated corpus (e.g., "Barack Obama's wife is Michelle Obama" and "Larry Page, CEO of Google") on a pre-defined subset of the attribute names, by replacing the entity/subject with "S", the attribute name with "A", and the value/object with "O". However, the query logs and annotations are often unavailable or expensive. Furthermore, query log word distributions are highly constrained compared with ordinary written language. So most of the S-A-O patterns like "S A O" and "S's A O" will generate noisy extractions when applied to a text corpus. Textual pattern learning methods [38], including the above, are blind to the typing information of the entities in the patterns; the patterns are not typed textual patterns. NELL [8] learns to extract noun-phrase pairs from a text corpus based on a fixed set of prespecified relations with entity types. OntExt [26] clusters pattern co-occurrences for the noun-phrase pairs for a given entity type at a time and does not scale up to mining a large corpus. PATTY [29] was the first to harness the typing system for mining relational patterns with entity types. We have extensively discussed the differences between our proposed meta patterns and PATTY's SOL patterns in the introduction: meta pattern candidates are efficiently generated by sequential pattern mining [2,32,42] on a massive corpus instead of dependency parsing on every individual sentence; meta pattern mining adopts a context-aware segmentation method to determine where a pattern starts and ends; and meta patterns are not restricted to words between entity pairs but are generated by pattern quality estimation based on four criteria (frequency, completeness, informativeness, and preciseness), grouped into synonymous sets, and with type levels adjusted for appropriate granularity.
3 META PATTERN DISCOVERY

3.1 Preprocessing: Harnessing Typing Systems

To find meta patterns that are typed textual patterns, we apply efficient text mining methods for preprocessing a corpus into a fine-grained typed corpus as input in three steps as follows (see Figure 2): (1) we use a phrase mining method [22] to break down a sentence into phrases, words, and punctuation marks, which finds more real phrases (e.g., "barack obama", "prime minister") than the frequent n-grams from frequent itemset mining in PATTY; (2) we use a distant supervision-based method [35] to jointly recognize entities and their coarse-grained types (i.e., $P, $L, and $O); (3) we adopt a fine-grained typing system [36] to distinguish 113 entity types of a 2-level ontology (e.g., $P, $C, and $C); we further use a set of language rules to have 6 data types: $D, $D U (e.g., "percent", "%", "hundred", "thousand", "million", "billion", "trillion"), $D R (e.g., "first", "1st", "second", "2nd", "44th"), $M, $D, and $Y. Now we have a fine-grained, typed corpus consisting of the same kinds of tokens as defined in the meta pattern: entity types, data types, phrases, words, and punctuation marks. All the tools are publicly available on GitHub.

3.2 The Proposed Problem

Problem (Meta Pattern Discovery). Given a fine-grained, typed corpus of massive sentences C = [. . . , S, . . .], where each sentence is denoted as S = t_1 t_2 . . . t_n in which t_k ∈ T ∪ P ∪ M is the k-th token (T is the set of entity types and data types, P is the set of phrases and words, and M is the set of punctuation marks), the task is to find synonymous groups of quality meta patterns. A meta pattern mp is a subsequential pattern of the tokens from the set T ∪ P ∪ M. A synonymous meta pattern group is denoted by MPG = [. . . , mp_i, . . . , mp_j, . . .], in which each pair of meta patterns, mp_i and mp_j, are synonymous.

What is a quality meta pattern? Here we take the sentences as sequences of tokens.
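For illustration (our toy example, with simple regexes standing in for the paper's language rules), a preprocessed sentence becomes a token sequence over T ∪ P ∪ M:

```python
import re

def type_data_tokens(tokens):
    """Map raw tokens to data-type tags with simple rules; entity tokens
    (here "$P") are assumed to be tagged already by the upstream typing
    system via distant supervision."""
    typed = []
    for tok in tokens:
        if re.fullmatch(r"(19|20)\d\d", tok):
            typed.append("$Y")          # a year like 2009
        elif re.fullmatch(r"\d+", tok):
            typed.append("$D")          # a bare digit like 55
        else:
            typed.append(tok)           # word, phrase, or punctuation
    return typed

tokens = ["$P", ",", "55", ",", "took", "office", "in", "2009", "."]
print(type_data_tokens(tokens))
# ['$P', ',', '$D', ',', 'took', 'office', 'in', '$Y', '.']
```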
Previous sequential pattern mining algorithms mine frequent subsequences satisfying a single metric, the minimum support threshold (min_sup), in a transactional sequence database [2]. However, for text sequence data, the quality of our proposed textual pattern, the meta pattern, should be evaluated, similarly to phrase mining [22], with four criteria as illustrated below.

Example. The quality of a pattern is evaluated with the following criteria (the former pattern has higher quality than the latter):
Frequency: "$D R president of $C" vs. "young president of $C";
Completeness: "$C president $P" vs. "$C president", and "$P's wife, $P" vs. "$P's wife";
Informativeness: "$P's wife, $P" vs. "$P and $P";
Preciseness: "$C president $P" vs. "$L president $P", "$P's wife, $P" vs. "$P's wife, $P", and "population of $L" vs. "population of $C".

What are synonymous meta patterns? The full set of frequent sequential patterns from a transaction dataset is huge [2], and the number of meta patterns from a massive corpus is also big. Since there are multiple ways to express the same or similar meanings in a natural language, many meta patterns may share the same or nearly the same meaning. Examples have been given in Figure 1. Grouping synonymous meta patterns can help aggregate a large number of extractions of different patterns from different sentences. And the type distribution of the aggregated extractions can help us adjust the meta patterns in the group for preciseness.

Figure 3 presents the MetaPAD framework for Meta PAttern Discovery. It has three modules. First, it develops a context-aware segmentation method to determine the boundaries of the subsequences and generate the meta patterns of frequency, completeness, and informativeness (see Sec. 4.1). Second, it groups synonymous meta patterns into clusters (see Sec. 4.2). Third, for every synonymous pattern group, it adjusts the levels of entity types for appropriate granularity to have precise meta patterns (see Sec. 4.3).
4 THE METAPAD FRAMEWORK

4.1 Generating meta patterns by context-aware segmentation

Pattern candidate generation. We adopt the standard frequent sequential pattern mining algorithm [32] to look for pattern candidates that satisfy a min_sup threshold. In practice, one can set a maximum pattern length ω to restrict the number of tokens in the patterns. Different from syntactic analysis of very long sentences, our meta pattern mining explores pattern structures that are local but still of wide context: in our experiments, we set ω = 20.

Meta pattern quality assessment. Given a huge number of pattern candidates that can be messy (e.g., "of $C" and "$P and"), it is desired but challenging to assess the quality of the patterns with very few training labels. We introduce a rich set of contextual features of the patterns according to the quality criteria (see Sec. 3.2) as follows, and train a classifier to estimate the quality function Q(mp) ∈ [0, 1], where mp is a meta pattern candidate:

1. Frequency: A good pattern mp should occur with sufficient count c(mp) in a given typed text corpus.

2. Concordance: If the tokens in mp collocate with a frequency significantly higher than what is expected due to chance, the meta pattern mp has good concordance. To statistically reason about the concordance, we consider a null hypothesis: the corpus is generated from a series of independent Bernoulli trials. Suppose the number of tokens in the corpus is L, which can be assumed to be fairly large. The expected frequency of a pair of sub-patterns ⟨mp_l, mp_r⟩ under our null hypothesis of their independence is

μ0(c(⟨mp_l, mp_r⟩)) = L · p(mp_l) · p(mp_r),  (1)

where p(mp) = c(mp)/L is the empirical probability of the pattern. We use the Z score to provide a quantitative measure of a pair of sub-patterns ⟨mp_l, mp_r⟩ forming the best collocation as mp in the corpus:

Z(mp) = max_{⟨mp_l, mp_r⟩ = mp} [ c(mp) − μ0(c(⟨mp_l, mp_r⟩)) ] / σ_{⟨mp_l, mp_r⟩},  (2)

where σ_{⟨mp_l, mp_r⟩} is the standard deviation of the frequency.
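As a worked example, the concordance score of Eqs. (1)-(2) can be computed as below; the counts and corpus length are toy values, and the standard deviation is taken to be the binomial one implied by the Bernoulli null model (an assumption on our part):

```python
import math

def concordance_z(pattern, counts, L):
    """Max Z score over all binary splits <mp_l, mp_r> of `pattern`
    (a tuple of tokens), under the independent-Bernoulli null model."""
    c_mp = counts[pattern]
    best = -math.inf
    for i in range(1, len(pattern)):
        left, right = pattern[:i], pattern[i:]
        p0 = (counts[left] / L) * (counts[right] / L)   # null collocation prob.
        mu0 = L * p0                                    # expected frequency, Eq. (1)
        sigma = math.sqrt(L * p0 * (1 - p0))            # binomial std deviation
        best = max(best, (c_mp - mu0) / sigma)
    return best

counts = {("$C", "president", "$P"): 900,
          ("$C",): 5000, ("president", "$P"): 1200,
          ("$C", "president"): 1000, ("$P",): 8000}
z = concordance_z(("$C", "president", "$P"), counts, L=1_000_000)
```

With these toy counts, the observed frequency of the trigram far exceeds its expectation under either split, so Z is large and the candidate behaves as one integral unit.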
A high Z score indicates that the pattern is acting as an integral semantic unit in the context: its composed sub-patterns are highly associated.

3. Informativeness: A good pattern mp should have informative context. We examine the counts of different kinds of tokens (e.g., types, words, phrases, non-stop words, marks). For example, the pattern "$P's wife $P" is informative for the non-stop word "wife"; "$P was born in $C" is good for the phrase "born in"; and "$P, $D," is also informative for the two different types and two commas.

4. Completeness: We use the ratio between the frequencies of the pattern candidate (e.g., "$C president $P") and its sub-patterns (e.g., "$C president"). If the ratio is high, the candidate is likely to be complete. We also use the ratio between the frequencies of the pattern candidate and its super-patterns. If the ratio is high, the candidate is likely to be incomplete. Moreover, we expect the meta pattern to NOT be bounded by stop words. For example, neither "and $C president" nor "president $P and" is properly bounded.

5. Coverage: A good typed pattern can extract multiple instances. For example, the type $P in the pattern "$P's healthcare law" refers to only one entity, "Barack Obama", and thus has too low coverage in the corpus.

We train a classifier based on random forests [7] for learning the meta-pattern quality function Q(mp) with the above rich set of contextual features. Our experiments (not reported here for the sake of space) show that using only 100 pattern labels can achieve similar precision and recall as using 300 labels. Note that the learning results can be transferred to other domains: the features of low-quality patterns "$P and $C" and "$B and $A" are similar; the features of high-quality patterns "$P is president of $C" and "$B is resistant to $A" are similar.

Figure 3: Three modules in our MetaPAD framework.

Context-aware segmentation using Q(.) with feedback. With the pattern quality function Q(.)
learnt from the rich set of contextual features, we develop a bottom-up segmentation algorithm to construct the best partition of segments of high quality scores. As shown in Figure 4, we use Q(.) to determine the boundaries of the segments: we take "$C president $P" for its high quality score; we do not take the candidate "and prime minister $P of $C" because of its low quality score. Since Q(mp) was learnt with features including the raw frequency c(mp), the quality score may be overestimated or underestimated: the principle is that every token's occurrence should be assigned to only one pattern, but the raw frequency may count the tokens multiple times. Fortunately, after the segmentation, we can rectify the frequency as c_r(mp); for example, in Figure 4, the segmentation avoids counting "$P and prime minister $P" of overestimated frequency/quality (see Table 1). Once the frequency feature is rectified, we re-learn the quality function Q(.) using c_r(mp) as feedback and re-segment the corpus with it. This can be an iterative process, but we found that in only one iteration the result converges. Algorithm 1 shows the details.

[Algorithm 1 fragment: Segment the sentence S into Se = […, mp, …] by maximizing Σ_{mp∈Se} Q(mp) with a bottom-up scheme (see Figure 4), where mp ∈ MP_cand is a segment of high quality score; for mp ∈ Se do …]

4.2 Grouping synonymous meta patterns

Grouping truly synonymous meta patterns enables a large collection of extractions of the same relation aggregated from different but synonymous patterns. For example, there could be hundreds of ways of expressing the relation country:president; if we group all such meta patterns, we can aggregate all the extractions of this relation from a massive corpus. PATTY [29] has a narrow definition of their synonymous dependency path-based SOL patterns: two patterns are synonymous if they generate the same set of extractions from the corpus.
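The segmentation objective mentioned in Algorithm 1, choosing a partition of the sentence that maximizes the summed quality Q over its segments, can be sketched as a small dynamic program (our stand-in, not the paper's code; the learned random-forest score Q is replaced by a toy lookup table):

```python
def segment(tokens, Q, omega=20, default=0.01):
    """Partition `tokens` to maximize the summed segment quality Q."""
    n = len(tokens)
    best = [0.0] * (n + 1)   # best[j]: best total score for tokens[:j]
    back = [0] * (n + 1)     # back-pointers to recover the partition
    for j in range(1, n + 1):
        for i in range(max(0, j - omega), j):   # segments up to length omega
            score = best[i] + Q.get(tuple(tokens[i:j]), default)
            if score > best[j]:
                best[j], back[j] = score, i
    segs, j = [], n
    while j > 0:
        segs.append(tuple(tokens[back[j]:j]))
        j = back[j]
    return segs[::-1]

Q = {("$C", "president", "$P"): 0.95,
     ("prime", "minister", "$P"): 0.90,
     ("and", "prime", "minister", "$P", "of", "$C"): 0.02}
tokens = ["$C", "president", "$P", "and", "prime", "minister", "$P"]
segments = segment(tokens, Q)
# Picks the two high-quality patterns and leaves "and" as a single token.
```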
Here we develop a learning method to incorporate information of three aspects, (1) entity/data types in the pattern, (2) context words/phrases in the pattern, and (3) extractions from the pattern, to assign the meta patterns into groups. Our method is based on three assumptions as follows (see Figure 5):
A1: Synonymous meta patterns must have the same entity/data types: the meta patterns "$P's age is $D" and "$P's wife is $P" cannot be synonymous;
A2: If two meta patterns share (nearly) the same context words/phrases, they are more likely to be synonymous: the patterns "$C president $P" and "president $P of $C" share the word "president";
A3: If two patterns generate more common extractions, they are more likely to be synonymous: both "$P's age is $D" and "$P, $D," generate ⟨Barack Obama, 55⟩.
Since the number of groups cannot be pre-specified, we propose to first construct a pattern-pattern graph in which the two pattern nodes of every edge satisfy A1 and are predicted to be synonymous, and then use a clique detection technique [16] to find all the cliques as synonymous meta pattern groups. Each pair of the patterns (mp_i, mp_j) in the group MPG = [. . . , mp_i, . . . , mp_j, . . .] are synonymous. For the graph construction, we train Support Vector Regression Machines [11] to learn the following features of a pair of patterns based on A2 and A3: (1) the numbers of words, non-stop words, and phrases that each pattern has and that they share; (2) the maximum similarity score between pairs of non-stop words or phrases in the two patterns; (3) the number of extractions that each pattern has and that they share. The similarity between words/phrases is represented by the cosine similarity of their word2vec embeddings [25,38].

Figure 6: Adjusting entity-type levels for appropriate granularity with entity-type distributions.
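A simplified sketch of this grouping step (our illustration: the learned SVR synonymy score is replaced by a Jaccard overlap of extraction sets, and A1 is checked by comparing sorted type tokens):

```python
from itertools import combinations

def type_signature(pattern):
    # A1: the multiset of type tokens must match.
    return sorted(t for t in pattern.split() if t.startswith("$"))

def jaccard(a, b):
    return len(a & b) / len(a | b)

def build_graph(extractions, tau=0.3):
    """Edge between two patterns iff they pass A1 and share enough
    extractions (a stand-in for the learned synonymy score, A2/A3)."""
    adj = {p: set() for p in extractions}
    for p, q in combinations(extractions, 2):
        if type_signature(p) == type_signature(q) and \
           jaccard(extractions[p], extractions[q]) >= tau:
            adj[p].add(q)
            adj[q].add(p)
    return adj

def cliques(adj, R=frozenset(), P=None, X=frozenset()):
    """Bron-Kerbosch enumeration of maximal cliques."""
    P = frozenset(adj) if P is None else P
    if not P and not X:
        yield R
        return
    for v in list(P):
        yield from cliques(adj, R | {v}, P & adj[v], X & adj[v])
        P, X = P - {v}, X | {v}

extractions = {
    "$C president $P": {("US", "Obama"), ("France", "Hollande")},
    "president $P of $C": {("US", "Obama")},
    "$P 's age is $D": {("Obama", "55")},
}
groups = [g for g in cliques(build_graph(extractions)) if len(g) > 1]
# One synonymous group: the two country:president patterns.
```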
4.3 Adjusting type levels for preciseness

Given a group of synonymous meta patterns, we expect the patterns to be precise: it is desired to determine the levels of the entity types in the patterns for appropriate granularity. Thanks to the grouping process of synonymous meta patterns, we have rich type distributions of the entities from the large collection of extractions. As shown in Figure 6, given the ontology of entity types (e.g., $L: $C, $S, $C, . . . ; $P: $A, $A, $P, . . . ), for the group of synonymous meta patterns "president $P of $L", "$L's president $P", and "$L president $P", are the entity types, $L and $P, of appropriate granularity to make the patterns precise? If we look at the type distributions of entities in the extractions of these patterns, it is clear that most of the entities for $L are typed at a fine-grained level as $C (e.g., "United States") or $E (e.g., "Russian"), and most of the entities for $P also have the fine-grained type $P. Therefore, compared with "$L president $P", the two fine-grained meta patterns "$C president $P" and "$E president $P" are more precise; we have the same claim for the other meta patterns in the synonymous group. On the other hand, for the group of synonymous meta patterns on person:age, we can see most of the entities are typed at a coarse-grained level as $P instead of $A or $P. So the entity type in the patterns is good to be $P. From this observation, given an entity type T in the meta pattern group, we propose a metric, called graininess, that is defined as the fraction of the entities typed by T that can be fine-grained to T's sub-types:

g(T) = Σ_{T′ ∈ subtypes(T)} num_entity(T′) / Σ_{T′ ∈ subtypes(T) ∪ {T}} num_entity(T′).  (3)

If g(T) is higher than a threshold θ, we go down the type ontology for the fine-grained types. Suppose we have determined the appropriate type level in the meta pattern group using the graininess metric. However, not every type at the level should be used to construct precise meta patterns.
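The graininess test just defined can be illustrated with toy counts (the subtype names and numbers below are our own illustration, not from the paper's experiments):

```python
def graininess(subtype_counts, own_count):
    """Fraction of a pattern group's entities of type T that refine to a
    subtype of T; `own_count` is the number typed exactly T and no finer."""
    fine = sum(subtype_counts.values())
    return fine / (fine + own_count)

# Location entities in the "president" pattern group: almost all refine,
# so $L should be split to its fine-grained subtypes.
g_loc = graininess({"country": 700, "ethnicity": 250, "city": 10}, own_count=40)

# Person entities in the "age" pattern group: few refine, so keep $P.
g_per = graininess({"artist": 30, "politician": 60}, own_count=910)

theta = 0.8
refine_loc, refine_per = g_loc > theta, g_per > theta   # True, False
```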
For example, we can see from Figure 6 that for the patterns on president, very few entities of $L are typed as $C , and very few entities of $P are typed as $A . Compared with $C , $E , and $P , these fine-grained types are at the same level but have too small support of extractions. We exclude them from the meta pattern group. Based on this idea, for an entity type T, we propose another metric, called support, that is defined as the ratio of the number of entities typed by T to the maximum number of entities typed by T's sibling types:

s(T) = num_entity(T) / max_{T′ ∈ sibling-types(T) ∪ {T}} num_entity(T′).   (4)

If s(T) is higher than a threshold γ, we consider the type T in the meta pattern group; otherwise, we drop it. With these two metrics, we develop a top-down scheme that first conducts segmentation and synonymous pattern grouping on the coarse-grained typed meta patterns, and then checks if the fine-grained types are significant and if the patterns can be split to the fine-grained level; we also develop a bottom-up scheme that first works on the fine-grained typed meta patterns, and then checks if the patterns can be merged into a coarse-grained level.

Complexity analysis. We develop three new components in our MetaPAD. The time complexity of generating meta patterns with context-aware segmentation is O(ω|C|), where ω is the maximum pattern length and |C| is the corpus size (i.e., the total number of tokens in the corpus). The complexity of grouping synonymous meta patterns is O(|MP|), and the complexity of adjusting type levels is O(h|MP|), where |MP| is the number of quality meta patterns and h is the height of the type ontology. The total complexity is O(ω|C| + (h + 1)|MP|), which is linear in the corpus size. PATTY [29] is also scalable in the number of sentences, but for each sentence, the complexity of the dependency parsing it adopts is as high as O(n^3), where n is the length of the sentence.
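Returning to the support metric (4) defined above, the same toy counts used for graininess illustrate why a low-support sibling type gets dropped; the counts are hypothetical.

```python
def support(num_entity, T, siblings):
    """Ratio of entities typed by T to the max count among T and its siblings."""
    peak = max(num_entity.get(t, 0) for t in list(siblings) + [T])
    return num_entity.get(T, 0) / peak if peak else 0.0

# Hypothetical counts at the fine-grained level under $LOCATION.
num_entity = {"$COUNTRY": 700, "$ETHNICITY": 150, "$CITY": 10}
s_city = support(num_entity, "$CITY", ["$COUNTRY", "$ETHNICITY"])
print(round(s_city, 3))  # 0.014 -> below gamma = 0.1, drop $CITY from the group
```

Here $CITY sits at the same level as $COUNTRY and $ETHNICITY but has too few supporting extractions, so it is excluded from the meta pattern group, exactly as the text describes.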
If the corpus has many long sentences, PATTY is time-consuming, whereas MetaPAD's complexity is linear in the sentence length for every individual sentence. The empirical study on the scalability can be found in the next section.

EXPERIMENTS

This section reports our essential experiments that demonstrate the effectiveness of MetaPAD at (1) typed textual pattern mining: discovering synonymous groups of meta patterns, and (2) one application: extracting tuple information from three datasets of different genres. Additional results regarding efficiency are reported as well.

Datasets. Table 2 presents the statistics of three datasets from different genres:
• APR: news from The Associated Press and Reuters in 2015;
• TWT: tweets collected via the Twitter API in 2015/06-2015/09;
• CVD: paper titles and abstracts about cardiovascular diseases from the PubMed database.
The news and biomedical paper corpora often have long sentences, which is rather challenging for textual pattern mining. For example, the component of dependency parsing in PATTY [29] has cubic computational complexity in the length of individual sentences. The preprocessing techniques in our MetaPAD adopt distant supervision with external databases for entity recognition and fine-grained typing (see Sec. 3.1). For the general corpora like news and tweets, we use DBpedia [4] and Freebase [6]; for the biomedical corpus, we use public MeSH databases [1].

Experimental Settings. We conduct two tasks in the experiments. The first task is to discover typed textual patterns from massive corpora and organize the patterns into synonymous groups. We compare with the state-of-the-art SOL pattern synset mining method PATTY [29] on both the quality of patterns and the quality of synonymous pattern groups. Since there is no standard ground truth for typed textual patterns, we report extensive qualitative analysis on the three datasets. The second task is to extract ⟨entity, attribute, value⟩ (EAV) tuple information.
For every synonymous pattern set generated by the competitive methods from news and tweets, we assign it to one attribute type from the set in Table 3 if appropriate. We collect 5,621 EAV-tuples from the extractions, label them as true or false, and finally we have 3,345 true EAV-tuples: 2,400 true EAV-tuples from APR and 2,090 from TWT. Most of them are out of the existing knowledge bases: we are exploring new extractions from new text corpora. We evaluate the performance in terms of precision and recall. Precision is defined as the fraction of the predicted EAV-tuples that are true. Recall is defined as the fraction of the labelled true EAV-tuples that are predicted as true EAV-tuples. We use (1) the F1 score, which is the harmonic mean of precision and recall, and (2) the Area Under the precision-recall Curve (AUC). All the values are between 0 and 1, and a higher value means better performance. In the second task, besides PATTY, the competitive methods for tuple extraction are: Ollie [37], an open IE system that extracts relational tuples with syntactic and lexical patterns; and ReNoun [40], which learns "S-A-O" patterns such as "S A, O," and "A of S is O" with an annotated corpus. Both methods ignore the entity-typing information. We develop four alternatives of MetaPAD as follows (definitions below).

(Caption fragment: Compared with our meta patterns, the SOL pattern mining does not take the rich context into full consideration of pattern quality assessment; the definition of the SOL pattern synset is too limited to group truly synonymous patterns.)

Results on Typed Textual Pattern Discovery. Our proposed MetaPAD discovers high-quality meta patterns by context-aware segmentation of a massive text corpus with a pattern quality assessment function. It further organizes them into synonymous groups. With each group of truly synonymous meta patterns, we can easily assign an appropriate attribute type to it, and harvest a large collection of instances extracted by different patterns of the same group. All this can be done not only on the news corpus but also on the biomedical corpus.
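The precision, recall, and F1 definitions used in the evaluation above can be sketched directly on sets of EAV-tuples. The tuples below are toy examples in the style of Table 5, not actual system output.

```python
def prf1(predicted, gold):
    """Precision, recall, and F1 over sets of labelled EAV-tuples."""
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean
    return precision, recall, f1

gold = {("United States", "president", "Barack Obama"),
        ("Apple", "ceo", "Tim Cook"),
        ("Burkina Faso", "president", "Blaise Compaore")}
predicted = {("United States", "president", "Barack Obama"),
             ("Apple", "ceo", "Tim Cook"),
             ("Apple", "ceo", "Cupertino")}          # one false positive
p, r, f = prf1(predicted, gold)
print(p, r, round(f, 3))  # precision 2/3, recall 2/3
```

AUC is then obtained by sweeping a confidence threshold over the predictions and integrating the resulting precision-recall curve.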
Table 5 presents the groups of synonymous meta patterns that express the attribute types country:president and company:ceo. First, we can see that the meta patterns are generated from a typed corpus instead of the shortest path of a dependency parse tree. Thus, the patterns can keep rich, wide context information. Second, the meta patterns are of high quality in informativeness, completeness, and so on, and practitioners can easily tell why the patterns are extracted as an integral semantic unit. Third, though patterns like "$P was elected as the president of $C " are relatively long and rare, they can be grouped with their synonymous patterns so that all the extractions about one entity-attribute type can be aggregated into one set. That is why MetaPAD successfully discovers who is/was the president of a small country like Burkina Faso or the ceo of a young company like Afghan Citadel. Fourth, MetaPAD discovered a rich collection of person:date of birth information from the news corpus that does not often exist in the knowledge bases, thanks to the fact that our meta patterns use not only entity types but also data types like $M $D $Y .

Figure 7 shows the SOL pattern synsets that PATTY generates from the four sentences. First, the dependency path loses the rich context around the entities, like "president" in the first example and "ceo" in the last example. Second, the SOL pattern synset cannot group truly synonymous typed textual patterns. We can see the advantages of generating meta patterns and grouping them into synonymous clusters. In the introduction section we also show that our MetaPAD can find meta patterns of rich data types for attribute types like person:age and person:date of birth.

Results on EAV-Tuple Extraction. Besides direct comparisons on the quality of mining synonymous typed textual patterns, we apply patterns from different systems, Ollie [37], ReNoun [40], and PATTY [29], to extract tuple information from the two general corpora APR (news) and TWT (tweets).
We attempt to provide quantitative analysis on the use of the typed textual patterns by evaluating how well they can facilitate tuple extraction, which is similar to one of the most challenging NLP tasks, called slot filling, for new attributes [19]. Table 6 summarizes comparison results on the tuple information that each textual pattern-driven system extracts from the news and tweet datasets. Figure 8 presents precision-recall curves that further demonstrate the effectiveness of our MetaPAD methods. We provide our observations and analysis as follows.

1) Overall, our MetaPAD-TS and MetaPAD-BS outperform the baseline methods, achieving significant improvement on both datasets (e.g., relatively 37.3% and 41.2% on F1 and AUC in the APR data). MetaPAD achieves a 0.38-0.42 F1 score on discovering the EAV-tuples of new attributes like country:president and company:ceo. In the TAC KBP competition, the best F1 score of extracting values of traditional attributes like person:parent is only 0.3430 [19]. MetaPAD can achieve reasonable performance when working on the new attributes. MetaPAD also discovers the largest number of true tuples: on both datasets we discover more than half of the labelled EAV-tuples (1,355/2,400 from APR and 1,111/2,090 from TWT).

2) The best of MetaPAD-T and MetaPAD-B, which only segment but do not group meta patterns, can outperform PATTY relatively by 78.5% (TWT) on F1 and by 27.6% (APR) and 115.3% (TWT) on AUC. Ollie parses individual sentences for relational tuples in which the relational phrases are often verbal expressions, so Ollie can hardly find exact attribute names from the words or phrases of the relational phrases. ReNoun's S-A-O patterns like "S's A O" require human annotations, use too general symbols, and bring too much noise into the extractions. PATTY's SOL patterns use entity types but ignore the rich context around the entities and only keep the short dependency path. Our meta pattern mining has context-aware segmentation with pattern quality assessment, which generates high-quality typed textual patterns from the rich context.
3) In MetaPAD-TS and MetaPAD-BS, we develop the modules of grouping synonymous patterns and adjusting the entity types for appropriate granularity. They improve the F1 score by 14.8% and 16.8% over MetaPAD-T and MetaPAD-B, respectively. We can see the number of true positives is significantly improved by aggregating extractions from different but synonymous meta patterns.

4) On the tweet data, most of the person, location, and organization entities are NOT able to be typed at a fine-grained level, so MetaPAD-T(S) works better than MetaPAD-B(S). The news data include a large number of entities of fine-grained types, like the presidents and CEOs, so MetaPAD-B(S) works better.

Figure 9 shows the performance on different attribute types on APR. MetaPAD outperforms all the other methods on each type. When there are many ways (patterns) of expressing the attributes, such as country:president, company:ceo, and award:winner, MetaPAD gains more aggregated extractions from grouping the synonymous meta patterns. Our MetaPAD can generate more informative and complete patterns than PATTY's SOL patterns: for state:representative, state:senator, and county:sheriff, which may not have many patterns, MetaPAD does not improve the performance much, but it still works better than the baselines.

Results on Efficiency. The execution-time experiments were all conducted on a machine with 20 cores of Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz. Our framework is implemented in C++ for meta-pattern segmentation and in Python for grouping synonymous meta patterns and adjusting type levels. We set up 10 threads for MetaPAD as well as all baseline methods. Table 7 presents the efficiency performance of MetaPAD on the three datasets: both the number of meta patterns and the time cost are linear in the corpus size. Specifically, for the 31G tweet data, MetaPAD takes less than 2 hours, while PATTY, which requires the Stanford parser, takes 7.3 hours, and Ollie takes 28.4 hours.
Note that for the smaller news data, which have many long sentences, PATTY takes even more time, 10.1 hours.

CONCLUSIONS

In this work, we proposed a novel typed textual pattern structure, called meta pattern, which is extended to a frequent, complete, informative, and precise subsequence pattern in certain context, compared with the SOL pattern. We developed an efficient framework, MetaPAD, to discover the meta patterns from massive corpora with three techniques, including (1) a context-aware segmentation method to carefully determine the boundaries of the patterns with a learnt pattern quality assessment function, which avoids costly dependency parsing and generates high-quality patterns, (2) a clustering method to group synonymous meta patterns with integrated information of types, context, and instances, and (3) top-down and bottom-up schemes to adjust the levels of entity types in the meta patterns by examining the type distributions of entities in the instances. Experiments demonstrated that MetaPAD efficiently discovered a large collection of high-quality typed textual patterns to facilitate challenging NLP tasks like tuple information extraction.

Figure 9: Performance comparisons on concrete attribute types in terms of F1 score and number of true positives (methods: Ollie, ReNoun, PATTY, MetaPAD-TS, MetaPAD-BS).

(a) shows how the SOL patterns are automatically generated with the shortest paths between two typed entities on the parse trees of individual sentences. Despite the significant contributions of the work, SOL patterns have three limitations on mining typed textual patterns from a large-scale text corpus, as illustrated below.

Figure 1: Comparing the synonymous group of meta patterns in MetaPAD with that of SOL patterns in PATTY.

Figure 2: Preprocessing for a fine-grained typed corpus, given a corpus and a typing system. Example:
U.S. President Barack Obama and Prime Minister Justin Trudeau of Canada met in …
(mining) u_s president barack_obama and prime_minister justin_trudeau of canada met in …
(2: entity recognition and coarse-grained typing) $LOCATION president $PERSON and prime_minister $PERSON of $LOCATION met in …
(3: fine-grained typing) $LOCATION.COUNTRY president $PERSON.POLITICIAN and prime_minister $PERSON.POLITICIAN of $LOCATION.COUNTRY met in …

Figure 4: Generating meta patterns by context-aware segmentation with the pattern quality function Q(.): ⎡$COUNTRY president $POLITICIAN⎦ and ⎡prime_minister $POLITICIAN of $COUNTRY⎦.

(Algorithm excerpt:) Re-learn Q(.) by replacing the raw frequency feature c(mp) with the rectified frequency c_r(mp) as feedback. 9: Re-segment the corpus C with the new Q(.). 10: return the segmented corpus, a set of quality meta patterns in the segmented corpus, and their quality scores in Q(.).

1. MetaPAD-T only develops segmentation to generate patterns in which the entity types are at the top (coarse-grained) level;
2. MetaPAD-TS develops all the three components of MetaPAD, including synonymous pattern grouping, based on MetaPAD-T;
3. MetaPAD-B only develops segmentation to generate patterns in which the entity types are at the bottom (fine-grained) level;

Figure 8: Precision-recall on tuple information extraction.
(Framework overview figure, continued:)
⎡$LOCATION president $PERSON⎦ and ⎡prime_minister $PERSON of $LOCATION⎦ met in …
Generating meta patterns by context-aware segmentation (Section 4.1): $LOCATION president $PERSON | president $PERSON of $LOCATION | $LOCATION 's president $PERSON | … | prime_minister $PERSON of $LOCATION | $LOCATION prime_minister $PERSON | $LOCATION 's prime_minister $PERSON | …
Grouping synonymous meta patterns (Section 4.2) and adjusting entity-type levels for appropriate granularity (Section 4.3): $COUNTRY president $POLITICIAN | president $POLITICIAN of $COUNTRY | $COUNTRY 's president $POLITICIAN | … | prime_minister $POLITICIAN of $COUNTRY | $COUNTRY prime_minister $POLITICIAN | $COUNTRY 's prime_minister $POLITICIAN | …

Table 1: Issues of quality over-/under-estimation can be fixed when the segmentation rectifies pattern frequency. (Columns: Before segmentation | Frequency rectified after segmentation.)

Figure 5: Grouping synonymous meta patterns with information of context words and extractions. (Contents: patterns "$COUNTRY president $POLITICIAN" and "president $POLITICIAN of $COUNTRY" share extractions ⟨United States, Barack Obama⟩ and ⟨United States, Bill Clinton⟩; patterns "$PERSON , $DIGIT ,", "$PERSON 's age is $DIGIT ", and "$PERSON , a $DIGIT -year-old" share extractions ⟨Barack Obama, 55⟩ and ⟨Justin Trudeau, 43⟩ and the context words "age" and "-year-old", linked by word2vec similarity. Type-ontology excerpts: $PERSON : $ATTACKER, $ARTIST, $ATHLETE, $POLITICIAN, $VICTIM; $LOCATION : $COUNTRY, $ETHNICITY, $CITY.)

Table 2: Three datasets of different genres.
Dataset | File Size | #Document | #Entity | #Entity Mention
APR (news) | 199MB | 62,146 | 284,061 | 6,732,399
TWT (tweet) | 1.05GB | 13,200,821 | 618,459 | 21,412,381
CVD (paper) | 424MB | 463,040 | 751,158 | 27,269,242

Table 3: Entity-Attribute-Value tuples as ground truth.
Attribute | Type of Entity | Type of Value | #Tuple
country:president | $C | $P | 1,170
country:minister | $C | $P | 1,047
state:representative | $S | $P | 655
state:senator | $S | $P | 610
county:sheriff | $C | $P | 106
company:ceo | $C | $B | 1,052
university:professor | $U | $R | 707
award:winner | $A | $P | 274

Table 4: Synonymous meta patterns and their extractions that MetaPAD generates from the biomedical corpus CVD.
A group of synonymous meta patterns | $T | $D
$T was used to treat $D | zoledronic acid therapy | Paget's disease of bone
$D using the $T | bisphosphonates | osteoporosis
$T has been widely used to treat $D | calcitonin | Paget's disease of bone
$T of patients with $D | calcitonin | osteoporosis
… | … | …
A group of synonymous meta patterns | $B | $A
$B was resistant to $A | corynebacterium striatum BM4687 | gentamicin
$B are resistant to $A | corynebacterium striatum BM4687 | tobramycin
$B is the most resistant to $A | methicillin-susceptible S aureus | vancomycin
$B , particularly those resistant to $A | multidrug-resistant enterobacteriaceae | gentamicin
… | … | …

Table 5: Synonymous meta patterns and their extractions that MetaPAD generates from the news corpus APR on country:president, company:ceo, and person:date of birth.
A group of synonymous meta patterns | $C | $P
$C president $P | United States | Barack Obama
$C 's president $P | United States | Bill Clinton
president $P of $C | Russia | Vladimir Putin
$P , the president of $C , | France | François Hollande
president $P 's government of $C | Comoros | Ikililou Dhoinine
$P was elected as the president of $C | Burkina Faso | Blaise Compaoré
A group of synonymous meta patterns | $C | $B
$C ceo $B | Apple | Tim Cook
$C chief executive $B | Facebook | Mark Zuckerburg
$B , the $C ceo, | Hewlett-Packard | Carly Fiorina
$C former ceo $B | Yahoo! | Marissa Mayer
$B was appointed as ceo of $C | Infor | Charles Phillips
$B , former interim ceo, leaves $C | Afghan Citadel | Roya Mahboob
A group of synonymous meta patterns | $P | $D $M $Y
$P was born $M $D , $Y | Willie Howard Mays | 6 May 1931
$P was born on $D $M $Y | Robert David Simon | 29 May 1941
$P (born on $M $D , $Y ) | Phillip Joel Hughes | 30 Nov 1988
$P (born on $D $M $Y ) | … | …
$P , was born on $M $D , $Y | Carl Sessions Stepp | 8 Sept 1956
… | Richard von Weizsaecker | 15 April 1920

Table 4 shows that our MetaPAD can also discover synonymous meta pattern groups and extractions from the biomedical domain. Without heavy annotation of specific domain knowledge, we can find all the patterns about what $T can treat what $D and what $B are resistant to what $A .

Table 6: Reporting F1, AUC, and number of true positives (TP) on tuple extraction from the news and tweets data.
Method | APR (news, 199MB): F1 | AUC | TP | TWT (tweets, 1.05GB): F1 | AUC | TP
Ollie [37] | 0.0353 | 0.0133 | 288 | 0.0094 | 0.0012 | 115
ReNoun [40] | 0.1309 | 0.0900 | 562 | 0.0821 | 0.0347 | 698
PATTY [29] | 0.3085 | 0.2497 | 860 | 0.2029 | 0.1256 | 860
MetaPAD-T | 0.3614 | 0.2843 | 799 | 0.3621 | 0.2641 | 880
MetaPAD-TS | 0.4156 | 0.3269 | 1,355 | 0.4153 | 0.3554 | 1,111
MetaPAD-B | 0.3684 | 0.3186 | 787 | 0.3228 | 0.2704 | 650
MetaPAD-BS | 0.4236 | 0.3525 | 1,040 | 0.3827 | 0.3408 | 975

Table 7: Efficiency: time complexity is linear in corpus size.
 | APR | CVD | TWT
File Size | 199MB | 424MB | 1.05GB
#Meta Pattern | 19,034 | 41,539 | 156,338
Time Cost | 29min | 72min | 117min

Data and code can be found here: https://github.com/mjiang89/MetaPAD.
4. MetaPAD-BS develops all the three components of MetaPAD, including synonymous pattern grouping, based on MetaPAD-B.

For the parameters in MetaPAD, we set the maximum pattern length as ω = 20, the threshold of the graininess score as θ = 0.8, and the threshold of the support score as γ = 0.1.

(Figure 7, SOL pattern synsets generated by PATTY: "$POLITICIAN government $COUNTRY", "$POLITICIAN elected president $COUNTRY", "$BUSINESSPERSON appointed ceo $COMPANY", "$BUSINESSPERSON leaves $COMPANY".)

REFERENCES
[2] Rakesh Agrawal and Ramakrishnan Srikant. 1995. Mining sequential patterns. In ICDE. 3-14.
[3] Gabor Angeli, Melvin Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
[4] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: a nucleus for a web of open data. In The Semantic Web. 722-735.
[5] Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI, Vol. 7. 2670-2676.
[6] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD. 1247-1250.
[7] Leo Breiman. 2001. Random forests. Machine Learning 45, 1 (2001), 5-32.
[8] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for never-ending language learning. In AAAI, Vol. 5. 3.
[9] Zhiyuan Chen, Arjun Mukherjee, and Bing Liu. 2014. Aspect extraction with automated prior knowledge learning. In ACL.
[10] Marie-Catherine de Marneffe, Bill MacCartney, Christopher D. Manning, and others. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, Vol. 6. Genoa, 449-454.
[11] Harris Drucker, Christopher J. C. Burges, Linda Kaufman, Alex Smola, Vladimir Vapnik, and others. 1997. Support vector regression machines. NIPS 9 (1997), 155-161.
[12] Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In EMNLP. 1535-1545.
[13] Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew Fano. 2006. Text mining for product attribute extraction. SIGKDD Explorations 8, 1 (2006), 41-48.
[14] Rahul Gupta, Alon Halevy, Xuezhi Wang, Steven Euijong Whang, and Fei Wu. 2014. Biperpedia: an ontology for search applications. PVLDB 7, 7 (2014), 505-516.
[15] Alon Halevy, Natalya Noy, Sunita Sarawagi, Steven Euijong Whang, and Xiao Yu. 2016. Discovering structure in the universe of attribute names. In WWW. 939-949.
[16] Frank Harary and Ian C. Ross. 1957. A procedure for clique detection using the group matrix. Sociometry 20, 3 (1957), 205-215.
[17] Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING. 539-545.
[18] Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD.
[19] Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the TAC 2010 knowledge base population track. In Third Text Analysis Conference (TAC), Vol. 3.
[20] Anitha Kannan, Inmar E. Givoni, Rakesh Agrawal, and Ariel Fuxman. 2011. Matching unstructured product offers to structured product specifications. In KDD. 404-412.
[21] Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In AAAI.
[22] Jialu Liu, Jingbo Shang, Chi Wang, Xiang Ren, and Jiawei Han. 2015. Mining quality phrases from massive text corpora. In SIGMOD. 1729-1744.
[23] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL. 55-60.
[24] Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In ACL. 91-98.
[25] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. 3111-3119.
[26] Thahir Mohamed, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2011. Discovering relations between noun categories. In EMNLP.
[27] David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes 30, 1 (2007), 3-26.
[28] Ndapandula Nakashole, Tomasz Tylenda, and Gerhard Weikum. 2013. Fine-grained semantic typing of emerging entities. In ACL. 1488-1497.
[29] Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. PATTY: a taxonomy of relational patterns with semantic types. In EMNLP. 1135-1145.
[30] Vivi Nastase, Michael Strube, Benjamin Börschinger, Cäcilia Zirn, and Anas Elghafari. 2010. WikiNet: a very large scale multi-lingual concept network. In LREC.
[31] Marius Pasca and Benjamin Van Durme. 2008. Weakly-supervised acquisition of open-domain classes and class attributes from web documents and query logs. In ACL. 19-27.
[32] Jian Pei, Jiawei Han, Behzad Mortazavi-Asl, Jianyong Wang, Helen Pinto, Qiming Chen, Umeshwar Dayal, and Mei-Chun Hsu. 2004. Mining sequential patterns by pattern-growth: the PrefixSpan approach. TKDE 16, 11 (2004), 1424-1440.
[33] Katharina Probst, Rayid Ghani, Marko Krema, Andrew Fano, and Yan Liu. 2007. Semi-supervised learning of attribute value pairs from product descriptions. In AAAI.
[34] Sujith Ravi and Marius Paşca. 2008. Using structured text for large-scale attribute extraction. In CIKM. 1183-1192.
[35] Xiang Ren, Ahmed El-Kishky, Chi Wang, Fangbo Tao, Clare R. Voss, and Jiawei Han. 2015. ClusType: effective entity recognition and typing by relation phrase-based clustering. In KDD. 995-1004.
[36] Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. In KDD.
[37] Michael Schmitz, Robert Bart, Stephen Soderland, Oren Etzioni, and others. 2012. Open language learning for information extraction. In EMNLP. 523-534.
[38] Kristina Toutanova, Danqi Chen, Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In ACL.
[39] Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia. In ACL. 118-127.
[40] Mohamed Yahya, Steven Whang, Rahul Gupta, and Alon Y. Halevy. 2014. ReNoun: fact extraction for nominal attributes. In EMNLP. 325-335.
[41] Dian Yu and Heng Ji. 2016. Unsupervised person slot filling based on graph mining. In ACL.
[42] Ning Zhong, Yuefeng Li, and Sheng-Tang Wu. 2012. Effective pattern discovery for text mining. IEEE TKDE 24, 1 (2012), 30-44.
[ "Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under Noisy Separability", "Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under Noisy Separability", "Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under Noisy Separability", "Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under Noisy Separability" ]
[ "Tomohiko Mizutani" ]
[]
[]
Hottopixx, proposed by Bittorf et al. at NIPS 2012, is an algorithm for solving nonnegative matrix factorization (NMF) problems under the separability assumption. Separable NMFs have important applications, such as topic extraction from documents and unmixing of hyperspectral images. In such applications, the robustness of the algorithm to noise is the key to the success. Hottopixx has been shown to be robust to noise, and its robustness can be further enhanced through postprocessing. However, there is a drawback. Hottopixx and its postprocessing require us to estimate the noise level involved in the matrix we want to factorize before running, since they use it as part of the input data. The noise-level estimation is not an easy task. In this paper, we overcome this drawback. We present a refinement of Hottopixx and its postprocessing that runs without prior knowledge of the noise level. We show that the refinement has almost the same robustness to noise as the original algorithm.
10.1137/21m144220
[ "https://export.arxiv.org/pdf/2109.02863v2.pdf" ]
245,836,186
2109.02863
106856e68784c8393a54692f7eba574bd98ea620
Refinement of Hottopixx Method for Nonnegative Matrix Factorization Under Noisy Separability

Tomohiko Mizutani
9 Jan 2022 (revised January 11, 2022)

Keywords: nonnegative matrix factorization, separability, robustness to noise, linear programming

Hottopixx, proposed by Bittorf et al. at NIPS 2012, is an algorithm for solving nonnegative matrix factorization (NMF) problems under the separability assumption. Separable NMFs have important applications, such as topic extraction from documents and unmixing of hyperspectral images. In such applications, the robustness of the algorithm to noise is the key to the success. Hottopixx has been shown to be robust to noise, and its robustness can be further enhanced through postprocessing. However, there is a drawback. Hottopixx and its postprocessing require us to estimate the noise level involved in the matrix we want to factorize before running, since they use it as part of the input data. The noise-level estimation is not an easy task. In this paper, we overcome this drawback. We present a refinement of Hottopixx and its postprocessing that runs without prior knowledge of the noise level. We show that the refinement has almost the same robustness to noise as the original algorithm.

Introduction

Let R^{d×n}_+ denote the set of all nonnegative matrices of size d × n. We are given V ∈ R^{d×n}_+ and the factorization rank r. The nonnegative matrix factorization (NMF) problem asks us to find the factors W ∈ R^{d×r}_+ and H ∈ R^{r×n}_+ of V minimizing the gap between V and the product WH. NMFs have many applications in diverse fields and thus have drawn the attention of researchers and practitioners. The problem is that the computation is intractable; it was shown to be NP-hard by Vavasis [19]. Arora et al. [3] further investigated the complexity of the NMF problem. They proposed to use an assumption, called separability, to remedy the issue.
The notion of separability was originally introduced by Donoho and Stodden in [6] as a way of discussing the uniqueness of NMFs. Arora et al. showed that, if we place the separability assumption on the input matrix V, then the NMF problem turns out to be tractable; we can find the factors W and H without much effort such that V = WH. Let us say that a matrix is separable if it satisfies the separability assumption. The application range of separable NMFs is restricted in comparison to NMFs, but they still have important applications, such as topic extraction from documents [4,2] and unmixing of hyperspectral images [15,16]. Other applications can be found in [7,12]. So far, several algorithms have been developed for solving separable NMF problems. Separable matrices arising from applications should be perturbed by noise. Hence, it is desirable that an algorithm is robust against noise; even if noise is added to a separable matrix, the algorithm should be able to find factors whose product well approximates the noisy separable matrix. Bittorf et al. [5] proposed an algorithm, referred to as Hottopixx, for separable NMF problems. Their development is based on the observation that a certain feature of a separable matrix can be captured using linear programming (LP), and an optimal solution of the LP serves as a guide for solving the separable NMF problem. They showed that Hottopixx is robust to noise. Their result needs a somewhat strong assumption: roughly speaking, they assume that the columns of the separable matrix do not overlap. The assumption is not reasonable when dealing with applications such as topic extraction from documents and unmixing of hyperspectral images. Gillis [9] pointed out this issue and suggested a resolution. He developed postprocessing for Hottopixx and showed that, with it, Hottopixx is robust to noise without the assumption that Bittorf et al. [5] imposed. There is a drawback with Hottopixx and its postprocessing.
They require three pieces of input data: a noisy separable matrix, the factorization rank, and the noise level. In the applications mentioned above, we often encounter the situation in which the factorization rank can be estimated in advance. Meanwhile, it is unlikely that the noise level can be estimated in advance; we thus need to estimate it, and this estimation is not an easy task. For that reason, most of the algorithms for solving separable NMF problems, such as VCA [18], SPA [15], SNPA [10] and ER [17], are designed to receive two pieces of input data: a noisy separable matrix and the factorization rank. Several drawbacks of Hottopixx are listed by Gillis and Luce in [13], and the drawback we mentioned above is one of them. The main contribution of this paper is to overcome this drawback. We present a refinement of Hottopixx and its postprocessing that takes a noisy separable matrix and the factorization rank as input, but does not need prior knowledge of the noise level. We show that the refinement has almost the same robustness to noise as the original algorithm. The results are summarized in Theorems 1 and 2 of Section 3. In addition, we demonstrate in experiments the effectiveness of our refinement. This paper is organized as follows. In Section 2, we formulate the separable NMF problem and explain the assumption and parameters used in our analysis. Section 3 presents the main results and compares them with the results of previous studies. Sections 4 and 5 describe the proposed algorithms and examine their robustness to noise; the refinement of Hottopixx is in Section 4 and the refinement of postprocessing is in Section 5. Section 6 describes experiments.

Notation and Symbols

We write 0 for a vector of all zeros, 1 for a vector of all ones, e_i for the ith unit vector, and I for the identity matrix. The symbol 0 is also used for a matrix of all zeros; in particular, 0_{m×n} for an m × n matrix of all zeros. The notation a(i) denotes the ith element of a ∈ R^n. Let A ∈ R^{m×n}.
The rows, columns and elements are denoted as follows: A(i, :) for the ith row, A(:, j) or a_j for the jth column, and A(i, j) for the (i, j)th element. Let I ⊂ {1, . . . , m} and J ⊂ {1, . . . , n}. The notation A(I, :) denotes the submatrix obtained by eliminating rows A(i, :) for all indices i in the complement of I, and A(:, J) that obtained by eliminating columns A(:, j) for all indices j in the complement of J. The notation ||·||_p denotes the L_p norm of a vector or a matrix, ||·||_F the Frobenius norm of a matrix, tr(·) the trace of a square matrix, and diag(·) a vector composed of the diagonal elements of a square matrix, i.e., diag(B) = [B(1, 1), . . . , B(n, n)]^⊤ for B ∈ R^{n×n}. For positive integers r and n, the symbol R denotes the set of consecutive integers from 1 to r, and N that from 1 to n. For S ⊂ N, we denote by S^c the complement of S. For a, b ∈ R with a < b, the notation (a, b) denotes the open interval {x ∈ R : a < x < b}, and [a, b] the closed interval {x ∈ R : a ≤ x ≤ b}.

Problem and Preliminaries

Let V ∈ R^{d×n}_+ have an exact NMF V = WH for W ∈ R^{d×r}_+ and H ∈ R^{r×n}_+. Separability assumes that it can be further written as

    V = WH  for  W ∈ R^{d×r}_+  and  H = [I, H̃]Π ∈ R^{r×n}_+,    (1)

where I is an r × r identity matrix, H̃ is an r × (n − r) nonnegative matrix, and Π is an n × n permutation matrix. When a nonnegative matrix is written in the form shown in (1), we say that it is r-separable or simply separable. Separability means that all columns of W appear in those of V; that is, there is a map φ : R → N such that w_j = v_{φ(j)} for each j = 1, . . . , r. We call the matrix [v_{φ(1)}, . . . , v_{φ(r)}], which is equivalent to W, the basis of V; in particular, v_{φ(j)} is a basis column and φ(j) a basis index. We call r the factorization rank of V. We formulate the separable NMF problem as follows:

Problem 1. Given a separable matrix V and factorization rank r, find the basis of V.

Separable matrices arising from applications would contain noise.
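As a concrete illustration of the separable form (1), the following sketch builds a small r-separable matrix in NumPy and checks that every column of W appears among the columns of V. The matrices, the permutation, and the recovered map φ below are made-up toy values, not data from the paper.

```python
import numpy as np

r, n = 2, 5
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])            # basis; columns have unit L1 norm
Htilde = np.array([[0.5, 0.2, 1.0],
                   [0.5, 0.8, 0.0]])  # r x (n - r); columns sum to 1
Pi = np.eye(n)[:, [2, 0, 3, 1, 4]]    # an arbitrary n x n permutation matrix

H = np.hstack([np.eye(r), Htilde]) @ Pi
V = W @ H                             # r-separable by construction

# every column of W must appear among the columns of V (w_j = v_{phi(j)})
for j in range(r):
    dists = np.abs(V - W[:, [j]]).sum(axis=0)  # L1 distance to each column
    phi_j = int(np.argmin(dists))
    assert dists[phi_j] < 1e-12
```

Since every column of W, H̃ and hence H sums to 1, the columns of V automatically have unit L1 norm, matching part (a) of Assumption 1 below.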
Noisy separability assumes that N ∈ R^{d×n} is added to a separable matrix V ∈ R^{d×n}_+ such that A = V + N. We call N the noise added to the separable matrix V. If a matrix is in the form above, we say that it is noisy separable. When dealing with applications, it is desirable that, even if a separable matrix contains noise, an algorithm for solving separable NMF problems can still find a near-basis. Given a noisy separable matrix A = V + N, we say that the algorithm is robust to noise if it can find a column index set J such that A(:, J) is close to the basis of V. Our analysis puts the following assumption on a matrix A.

Assumption 1. A = V + N ∈ R^{d×n}, where V ∈ R^{d×n}_+ is r-separable of the form V = WH = W[I, H̃]Π shown in (1) and N ∈ R^{d×n} is noise. Moreover, (a) every column of V, W and H has unit L1 norm, and (b) the noise N satisfies ||N||_1 ≤ ǫ for some real number ǫ satisfying 0 ≤ ǫ < 1.

We call ǫ the noise level involved in A. As described in [3,8], we can assume without loss of generality that part (a) holds. Our analysis uses parameters κ, ω and β, which were introduced by Gillis [9] for the analysis of Hottopixx. Let A = V + N ∈ R^{d×n}, where V ∈ R^{d×n}_+ is r-separable of the form V = WH = W[I, H̃]Π shown in (1) and N ∈ R^{d×n} is noise. The parameters κ and ω are defined in terms of W by

    κ = min_{1≤j≤r} min_{z≥0} ||w_j − W(:, R \ {j}) z||_1,
    ω = min_{1≤j1≠j2≤r} ||w_{j1} − w_{j2}||_1.

They satisfy the relation

    κ ≤ ω.    (2)

It is easy to verify that this holds. Let j1, j2 ∈ R with j1 ≠ j2 satisfy ω = ||w_{j1} − w_{j2}||_1. Then, there exists an integer ℓ ∈ R such that W(:, R \ {j1}) e_ℓ = w_{j2}. Hence, κ ≤ ||w_{j1} − W(:, R \ {j1}) e_ℓ||_1 = ||w_{j1} − w_{j2}||_1 = ω. Let Assumption 1(a) hold. Then, we can bound κ and ω as

    0 ≤ κ ≤ 1,    (3)
    0 ≤ ω ≤ 2.    (4)

The lower bounds come from the definitions of κ and ω. For the upper bounds, we find that κ ≤ ||w_j||_1 = 1 for any j ∈ R, and ω ≤ ||w_{j1} − w_{j2}||_1 ≤ ||w_{j1}||_1 + ||w_{j2}||_1 = 2 for any different j1, j2 ∈ R.
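For small instances, these parameters can be computed directly. The sketch below (W is a made-up toy basis, not data from the paper) evaluates ω with NumPy, evaluates κ by solving each inner minimization min_{z≥0} ||w_j − W(:, R \ {j}) z||_1 as a small linear program with SciPy, and then checks the relations (2)-(4).

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

W = np.array([[0.7, 0.1, 0.0],
              [0.2, 0.8, 0.1],
              [0.1, 0.1, 0.9]])   # toy basis; columns have unit L1 norm
d, r = W.shape

# omega: minimum pairwise L1 distance between basis columns
omega = min(np.abs(W[:, j1] - W[:, j2]).sum()
            for j1, j2 in combinations(range(r), 2))

def kappa_j(j):
    # min_{z >= 0} ||w_j - M z||_1 with M = W(:, R \ {j}),
    # written as an LP: minimize sum(t) s.t. -t <= w_j - M z <= t, z >= 0
    M = np.delete(W, j, axis=1)
    k = M.shape[1]
    c = np.concatenate([np.zeros(k), np.ones(d)])   # variables (z, t)
    A_ub = np.block([[ M, -np.eye(d)],              #  M z - t <=  w_j
                     [-M, -np.eye(d)]])             # -M z - t <= -w_j
    b_ub = np.concatenate([W[:, j], -W[:, j]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun

kappa = min(kappa_j(j) for j in range(r))
assert kappa <= omega + 1e-9          # relation (2)
assert 0 <= kappa <= 1 and 0 <= omega <= 2  # bounds (3) and (4)
```

Taking z = 0 in the LP shows κ ≤ ||w_j||_1 = 1, which is exactly the argument used for the upper bound in (3).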
The parameter β is defined in terms of the submatrix H̃ of H by

    β = max_{1≤i≤r, 1≤j≤n−r} H̃(i, j).

Let Assumption 1(a) hold. Then, β satisfies 0 ≤ β ≤ 1. In particular, if β = 1, there are columns of H̃ such that one element is 1 and the others are 0. This means that there are duplicate basis columns.

Main Results

Here, we present the main results in the form of Theorems 1 and 2. We refine Hottopixx of Bittorf et al. [5]. Our refinement uses the optimization model P, which is shown in Section 4.1. Algorithm 1 of that section describes the details of the refinement. Our first result, which states the robustness of Algorithm 1 to noise, is as follows:

Theorem 1. Let A satisfy Assumption 1. Assume κ > 0. Run the refinement of Hottopixx, i.e., Algorithm 1, on the input (A, r). If

    ǫ ≤ κ(1 − β) / (9(r + 1)),

then, after suitably rearranging the columns of W, the output W_out satisfies ||W − W_out||_1 ≤ ǫ.

If the noise level ǫ is positive and the basis columns overlap, i.e., β = 1, the theorem is invalid and does not say anything about the robustness of Algorithm 1 to noise. To cope with this issue, we develop postprocessing that ensures the algorithm's robustness to noise even in such a case. Postprocessing for that purpose was proposed by Gillis [9], and here, we refine it. A detailed description of the refinement is given in Algorithm 2 of Section 5.1. Our second result, which states the robustness of Algorithm 2 to noise, is as follows:

Theorem 2. Let A satisfy Assumption 1. Run the refinement of Hottopixx with postprocessing, i.e., Algorithm 2, on the input (A, r). If

    ǫ < κω / (578(r + 1)),

then, after suitably rearranging the columns of W, the output W_out satisfies

    ||W − W_out||_1 ≤ 136(r + 1)ǫ / κ.

In particular, if

    ǫ < κ² / (289(r + 1)²),

then, after suitably rearranging the columns of W, the output W_out satisfies ||W − W_out||_1 ≤ 8√ǫ.

Theorem 2 tells us that there is a range of noise intensity that Algorithm 2 is robust to even if there are duplicate basis columns.
From the relation κ ≤ ω shown in (2) and the fact that r + 1 ≥ 2, we can see that κ² / (289(r + 1)²) ≤ κω / (578(r + 1)). The theorem tells us that, if ǫ satisfies ǫ ≤ κω / (578(r + 1)), the error of the output W_out relative to the basis W can be bounded by using ǫ, r and κ; in particular, if ǫ is small and satisfies ǫ ≤ κ² / (289(r + 1)²), the error bound depends only on ǫ. Note that it remains an open question how tight the bounds shown in Theorems 1 and 2 are. This is a topic for further research. Now, let us review the previous work on Hottopixx and compare our results with the previous ones. Arora et al. [3] proposed the first algorithm with provable guarantees for solving separable NMF problems. Motivated by that work, Bittorf et al. [5] developed Hottopixx. Let A = V + N, where V ∈ R^{d×n}_+ is r-separable of the form V = WH shown in (1) and N ∈ R^{d×n} is noise satisfying ||N||_1 ≤ ǫ for some nonnegative real number ǫ. Hottopixx is based on the optimization model Q and requires (A, r, ǫ) as its input. The details of the algorithm and Q are given in Section 4.1. Bittorf et al. showed that Hottopixx is robust to noise. However, it was unclear whether one can ensure its robustness in the case that there are duplicate basis columns. Gillis [9] and Gillis and Luce [13] pursued a line of research that examined the robustness of Hottopixx. Tables 1 and 2 summarize their results as well as ours. The first column lists the input data of the algorithms; the second one lists the optimization model whose details are given in Section 4.1; the third one lists the assumptions imposed on the analysis; and the fourth and fifth ones list the robustness results obtained by the analysis, i.e., the bound on the noise level and the error of the output relative to the basis. Gillis [9] investigated the robustness of Hottopixx. He started by analyzing the case where there are no duplicate basis columns. The analysis suggested that the use of postprocessing makes it possible to enhance its robustness.
He then developed postprocessing and showed that Hottopixx with the postprocessing is robust to noise even when there are duplicate basis columns. The results on Hottopixx (Theorem 2.3 of [9]) are summarized in the second row of Table 1, and those on Hottopixx with the postprocessing (Theorem 3.5 of [9]) are in the second row of Table 2. Gillis and Luce [13] developed a refinement of Hottopixx. Their refinement is based on the optimization model R, and it requires (A, ǫ) as input. The details of the algorithm and R are given in Section 4.1. (Table 1 compares the robustness results of our algorithm with those of Gillis (Theorem 2.3 of [9]) and Gillis and Luce (Theorem 2 of [13]) for algorithms without postprocessing; its columns list Input, Model, Assumption, Noise level and Error. The algorithm of Gillis and Luce uses a parameter ρ that is set to a positive real number.) They showed that the refinement is robust to noise. The results (Theorem 2 of [13]) are summarized in the third row of Table 1. Here, ρ is a parameter that is set to a positive real number. The advantage of the refinement over Hottopixx is that it does not require prior knowledge of the factorization rank r of the matrix A we want to factorize, and the robustness result does not depend on r. They also incorporated the postprocessing of Gillis [9] into the refinement, and showed that the same result as Theorem 3.5 of [9] holds for the refinement with the postprocessing. The results (Theorem 7 of [13]) are summarized in the third row of Table 2. Let us compare our results with those of Gillis [9] and Gillis and Luce [13]. We can see from Tables 1 and 2 that Algorithm 1 is as robust as Hottopixx, and Algorithm 2 is almost as robust as Hottopixx and the refinement of Gillis and Luce with the postprocessing of Gillis. The assumptions of our analysis are the same as theirs. There is a difference in the input data: (A, r) for our algorithms and (A, r, ǫ) or (A, ǫ) for the existing algorithms.
We often encounter a situation in which the factorization rank r is available in advance in applications such as topic extraction from documents and unmixing of hyperspectral images. Hence, it is reasonable to assume that a noisy separable matrix A and the factorization rank r will be given as input. As mentioned in Section 1, most of the algorithms for solving separable NMF problems are designed to take (A, r) as input. The advantage of our algorithms over the existing ones is that they run on (A, r), which does not include prior knowledge of the noise level ǫ, and yet they have almost the same robustness to noise as the existing ones.

Table 1: Robustness results for algorithms without postprocessing.

                    | Input   | Model | Assumption          | Noise level                | Error
    Our result      | A, r    | P     | Assumption 1, κ > 0 | κ(1−β) / (9(r+1))          | ǫ
    Gillis          | A, r, ǫ | Q     | Assumption 1, κ > 0 | κ(1−β) / (9(r+1))          | ǫ
    Gillis and Luce | A, ǫ    | R     | Assumption 1, κ > 0 | κ(1−β) min{1,ρ} / (5(ρ+2)) | ǫ

Refinement of Hottopixx

Algorithm

Our refinement of Hottopixx is described in Algorithm 1.

Algorithm 1 Refinement of Hottopixx
Input: A ∈ R^{d×n} and a positive integer r.
Output: W_out ∈ R^{d×r}.
1. If there are duplicate columns in A, keep one of them and remove all the rest.
2. Compute the optimal solution X_opt of the problem P(A, r). Set p = diag(X_opt).
3. Let W_out = A(:, J) for the index set J corresponding to the r largest elements of p, and return W_out.

For the input A and r, step 2 constructs and solves the optimization problem with variable X ∈ R^{n×n},

    P(A, r):  Minimize  ||A − AX||_1
              subject to tr(X) = r,
                         X(i, i) ≤ 1 for all i ∈ N,
                         X(i, j) ≤ X(i, i) for all i, j ∈ N,
                         X(i, j) ≥ 0 for all i, j ∈ N.

Throughout this paper, we use X_opt to denote the optimal solution and θ to denote the optimal value ||A − AX_opt||_1. By introducing new variables Y ∈ R^{d×n} and z ∈ R, the problem above can be reduced to an LP problem, since the minimization of ||A − AX||_1 is equivalent to the minimization of z under the constraints −Y ≤ A − AX ≤ Y and Σ_{i=1}^{d} Y(i, j) ≤ z for all j ∈ N. We use P′ to denote the LP problem.
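For a small instance, problem P can be assembled and solved concretely. The sketch below is an illustrative toy setup, not code from the paper: it builds the LP form P′ over the variables (vec(X), vec(Y), z), using the identity vec(AX) = (I ⊗ A) vec(X) for column-major vectorization, solves it with SciPy's linprog, and then applies step 3 of Algorithm 1 by taking the r largest diagonal entries of X_opt.

```python
import numpy as np
from scipy.optimize import linprog

def solve_P(A, r):
    d, n = A.shape
    nx, ny = n * n, d * n                 # sizes of vec(X) and vec(Y)
    c = np.zeros(nx + ny + 1)
    c[-1] = 1.0                           # objective: minimize z
    KA = np.kron(np.eye(n), A)            # vec(AX) = KA @ vec(X)
    rows, rhs = [], []
    # A - AX <= Y  and  AX - A <= Y
    rows.append(np.hstack([-KA, -np.eye(ny), np.zeros((ny, 1))]))
    rhs.append(-A.flatten(order="F"))
    rows.append(np.hstack([KA, -np.eye(ny), np.zeros((ny, 1))]))
    rhs.append(A.flatten(order="F"))
    # each column sum of Y is bounded by z
    colsum = np.kron(np.eye(n), np.ones((1, d)))
    rows.append(np.hstack([np.zeros((n, nx)), colsum, -np.ones((n, 1))]))
    rhs.append(np.zeros(n))
    # X(i, j) <= X(i, i) for i != j (column-major: X(i, j) sits at j*n + i)
    off = []
    for i in range(n):
        for j in range(n):
            if i != j:
                row = np.zeros(nx + ny + 1)
                row[j * n + i] = 1.0
                row[i * n + i] = -1.0
                off.append(row)
    rows.append(np.array(off))
    rhs.append(np.zeros(len(off)))
    # tr(X) = r
    A_eq = np.zeros((1, nx + ny + 1))
    for i in range(n):
        A_eq[0, i * n + i] = 1.0
    bounds = [(0, 1)] * nx + [(0, None)] * (ny + 1)
    res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                  A_eq=A_eq, b_eq=[r], bounds=bounds)
    X_opt = res.x[:nx].reshape((n, n), order="F")
    return X_opt, res.fun

# exactly separable toy instance: basis columns are columns 0 and 1
A = np.array([[1.0, 0.0, 0.5, 0.2],
              [0.0, 1.0, 0.5, 0.8]])
X_opt, theta = solve_P(A, r=2)
p = np.diag(X_opt)
J = np.argsort(-p)[:2]                    # step 3: r largest entries of p
assert theta < 1e-7 and set(J) == {0, 1}
```

On this noiseless instance the optimal value θ is 0 and, consistent with Lemma 3 below (here ǫ = 0, κ = 1, β = 0.8), the diagonal mass of X_opt concentrates exactly on the basis indices.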
It should be noted that P′ has n² + dn + 1 variables and 2n² + 2dn + n + 1 constraints. Hence, the size of P′ may be rather large. Step 1 performs preprocessing on the input matrix. Although Hottopixx does not contain this step, Algorithm 1 must have it; see Remark 1 at the end of this section for the reason. Here, let us recall Hottopixx of Bittorf et al. [5] and the refinement of Gillis and Luce [13]. Bittorf et al. looked at a certain feature of separable matrices and developed Hottopixx on the basis of that observation. Let A satisfy Assumption 1. Then, it can be written as A = V + N, where V ∈ R^{d×n}_+ is r-separable of the form V = WH = W[I, H̃]Π shown in (1) and N ∈ R^{d×n} is noise. Using the n × n permutation matrix Π and the r × (n − r) nonnegative matrix H̃, we construct the matrix

    X_0 = Π^{−1} [ I               H̃
                   0_{(n−r)×r}     0_{(n−r)×(n−r)} ] Π ∈ R^{n×n},    (5)

where I is an identity matrix of size r. We make the following observations:

• The basis of V can be identified by using X_0, since the diagonal entries of X_0 are 0 or 1 and the positions with 1 correspond to the basis indices of V.
• X_0 satisfies

    ||A − AX_0||_1 ≤ 2ǫ.    (6)

The second observation comes from the fact that we have V X_0 = W[I, H̃]Π Π^{−1} [I, H̃; 0, 0] Π = W[I, H̃]Π = V, which gives ||A − AX_0||_1 = ||V + N − (V + N)X_0||_1 = ||N − N X_0||_1 ≤ ||N||_1 + ||N||_1 ||X_0||_1 ≤ 2ǫ (by Assumption 1). To compute X_0 approximately, Bittorf et al. proposed to solve an optimization problem with variable X ∈ R^{n×n},

    Q(A, r, ǫ):  Minimize  f^⊤ diag(X)
                 subject to ||A − AX||_1 ≤ 2ǫ,
                            tr(X) = r,
                            X(i, i) ≤ 1 for all i ∈ N,
                            X(i, j) ≤ X(i, i) for all i, j ∈ N,
                            X(i, j) ≥ 0 for all i, j ∈ N.

Here, f is a parameter set by the user: it can be chosen to be any n-dimensional vector with distinct elements. The problem Q can be reduced to an LP. Hottopixx is the same as performing steps 2 and 3 of Algorithm 1 with a replacement of P(A, r) in step 2 by Q(A, r, ǫ). It thus requires (A, r, ǫ) as input. Gillis and Luce [13] refined Hottopixx.
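Observation (6) can be checked numerically: build X_0 from Π and H̃ as in (5), add noise of level ǫ, and confirm that the induced L1 norm (the maximum absolute column sum, which is the matrix norm used throughout) of A − AX_0 stays below 2ǫ. The instance below is a randomly generated toy example, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
r, n, d = 2, 5, 3

W = rng.random((d, r))
W /= W.sum(axis=0)                       # unit L1 columns of W
Ht = rng.random((r, n - r))
Ht /= Ht.sum(axis=0)                     # unit L1 columns of H
Pi = np.eye(n)[:, rng.permutation(n)]    # permutation matrix

V = W @ np.hstack([np.eye(r), Ht]) @ Pi  # r-separable, unit L1 columns
eps = 0.05
N = rng.standard_normal((d, n))
N *= eps / np.abs(N).sum(axis=0).max()   # scale noise so ||N||_1 = eps
A = V + N

# X_0 = Pi^{-1} [I Ht; 0 0] Pi, as in (5); Pi^{-1} = Pi^T for a permutation
top = np.hstack([np.eye(r), Ht])
X0 = Pi.T @ np.vstack([top, np.zeros((n - r, n))]) @ Pi

norm1 = lambda M: np.abs(M).sum(axis=0).max()  # induced L1 norm
assert np.allclose(V @ X0, V)                  # V X_0 = V
assert norm1(A - A @ X0) <= 2 * eps + 1e-12    # observation (6)
```

The bound holds because ||N − N X_0||_1 ≤ ||N||_1 (1 + ||X_0||_1) and every column of X_0 sums to 1, exactly as in the derivation above.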
They proposed to solve an optimization problem with variable X ∈ R^{n×n},

    R(A, ǫ):  Minimize  g^⊤ diag(X)
              subject to ||A − AX||_1 ≤ ρǫ,
                         X(i, i) ≤ 1 for all i ∈ N,
                         X(i, j) ≤ X(i, i) for all i, j ∈ N,
                         X(i, j) ≥ 0 for all i, j ∈ N.

Here, g and ρ are parameters set by the user: g can be chosen to be any n-dimensional vector with distinct positive elements and ρ a positive value. As in the case of Q, the problem R can be reduced to an LP. Their algorithm computes the optimal solution of R and constructs an index set corresponding to diagonal entries larger than 1 − min{1, ρ}/2. Hence, it takes as input (A, ǫ) and does not require r as input.

Remark 1. If Algorithm 1 does not contain step 1, it may fail to find a basis from separable matrices with duplicate basis columns. For instance, consider

    V = [ 1 0 1 1 0
          0 1 0 0 1 ].

This is 2-separable with β = 1, since it can be written as V = WH by letting W = I and H = V. Suppose that the algorithm receives (A, r), with A = V and r = 2, as input. Consider the two matrices

    X_1 = [ 1 0 1 1 0        X_2 = [ 1/3  0   1/3 1/3  0
            0 1 0 0 1                 0  1/2   0   0  1/2
            0 0 0 0 0                1/3  0   1/3 1/3  0
            0 0 0 0 0                1/3  0   1/3 1/3  0
            0 0 0 0 0 ],              0  1/2   0   0  1/2 ].

Both X_1 and X_2 are optimal solutions of problem P(A, r), since they satisfy all the constraints and ||A − AX_1||_1 = ||A − AX_2||_1 = 0. If Algorithm 1 skips step 1 and finds X_2 in step 2, then it constructs J = {2, 5} in step 3. We have W_out ≠ W, since W_out = A(:, J) = V(:, J) and W = I.

Analysis

The optimal value θ of problem P is related to the noise level ǫ involved in separable matrices. Actually, from the observation Bittorf et al. made in [5], we can easily see that θ ≤ 2ǫ holds.

Lemma 1. Let A satisfy Assumption 1. Then, the optimal value θ of problem P(A, r) satisfies θ ≤ 2ǫ.

Proof.
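The two candidate solutions in Remark 1 can be verified mechanically: both are feasible for P(V, 2) with objective value 0, yet only X_1 puts its diagonal mass on a valid basis. A small NumPy check (indices below are 0-based, whereas the remark uses 1-based indexing):

```python
import numpy as np

V = np.array([[1., 0., 1., 1., 0.],
              [0., 1., 0., 0., 1.]])
X1 = np.array([[1., 0., 1., 1., 0.],
               [0., 1., 0., 0., 1.],
               [0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0.]])
X2 = np.array([[1/3, 0.,  1/3, 1/3, 0. ],
               [0.,  1/2, 0.,  0.,  1/2],
               [1/3, 0.,  1/3, 1/3, 0. ],
               [1/3, 0.,  1/3, 1/3, 0. ],
               [0.,  1/2, 0.,  0.,  1/2]])

def feasible_for_P(X, r=2):
    # tr(X) = r, diag <= 1, X(i, j) <= X(i, i), X >= 0
    return (np.isclose(np.trace(X), r)
            and np.all(np.diag(X) <= 1 + 1e-12)
            and np.all(X <= np.diag(X)[:, None] + 1e-12)
            and np.all(X >= -1e-12))

for X in (X1, X2):
    assert feasible_for_P(X)
    assert np.abs(V - V @ X).sum(axis=0).max() < 1e-12  # objective is 0

# top-2 diagonal entries: X1 points at columns {0, 1}, a valid basis;
# X2 points at columns {1, 4}, two duplicates of the same basis column
assert set(np.argsort(-np.diag(X1))[:2]) == {0, 1}
assert set(np.argsort(-np.diag(X2))[:2]) == {1, 4}
```

This makes the failure mode concrete: without step 1, the diagonal weight of an optimal solution can be split across duplicate columns, so the r largest entries need not index a basis.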
Since A satisfies Assumption 1, it is given by A = V + N, where V ∈ R^{d×n}_+ is r-separable of the form V = WH = W[I, H̃]Π shown in (1) and N ∈ R^{d×n} is noise. Using the permutation matrix Π and the nonnegative matrix H̃, we construct the matrix X_0 shown in (5), i.e.,

    X_0 = Π^{−1} [ I  H̃
                   0  0 ] Π ∈ R^{n×n}.

Since Assumption 1(a) holds, we can check that X_0 is a feasible solution of P(A, r). Hence, the objective function value at X_0 satisfies θ ≤ ||A − AX_0||_1. In addition, as shown in (6), we have ||A − AX_0||_1 ≤ 2ǫ. Consequently, θ ≤ 2ǫ holds.

Let A satisfy Assumption 1. Let I be a set of basis indices of V. Gillis showed in Lemma 2.1 of [9] that a feasible solution X of problem Q has the following properties: the L1 norm of each column of X is less than about 1, and V X serves as a good approximation to V. Using these results, Gillis showed in Lemma 2.2 of [9] that the diagonal elements of X indexed by I take higher values than the others. Hence, we can construct I by checking the values of the diagonal elements of X. Lemma 1 implies that the optimal solution of problem P is feasible for problem Q. Hence, the same results as in Lemmas 2.1 and 2.2 of [9] hold for the optimal solution of problem P. Here, we formally describe these results as Lemmas 2 and 3.

Lemma 2. Let A satisfy Assumption 1. Then, the optimal solution X_opt ∈ R^{n×n} of problem P(A, r) satisfies

    ||X_opt(:, i)||_1 ≤ 1 + 4ǫ/(1 − ǫ)  and  ||v_i − V X_opt(:, i)||_1 ≤ 4ǫ/(1 − ǫ)

for i ∈ N.

The proof is almost the same as that of Lemma 2.1 in [9]. We have included it in Appendix A to make the discussion self-contained.

Lemma 3. Let A satisfy Assumption 1. Assume κ > 0 and β < 1. Let I be a set of basis indices of V. Let p = diag(X_opt) for the optimal solution X_opt of problem P(A, r). Then, the elements of p indexed by I satisfy

    p(i) ≥ 1 − 8ǫ/(κ(1 − β)(1 − ǫ))  for every i ∈ I.

We have included the proof in Appendix B. Our proof follows that of Lemma 2.2 in [9], although additional considerations are made; see Remark 2. The key idea of the proof is as follows.
Since A satisfies Assumption 1, it can be written as A = V + N, where V is r-separable of the form V = WH = W[I, H̃]Π shown in (1), and there is a map φ : R → N such that w_j = v_{φ(j)} for each j ∈ R. For j ∈ R and i = φ(j) ∈ N, let η = H(j, :) X_opt(:, i). We can see that η can be rewritten by using X_opt(i, i), which is equivalent to p(i), due to H(j, i) = 1, and we can evaluate the lower and upper bounds on η. The result of the lemma follows from these bounds. Now, we can prove Theorem 1. It follows from Lemma 3.

(Proof of Theorem 1). Let us consider the case of β = 1. Here, we only have to show that, if A is separable with an overlap of basis columns, then the algorithm finds a set of basis indices. Separability means that duplicate basis columns appear in the columns of A. Hence, after conducting step 1, the resulting matrix is separable with no overlapping basis columns. This reduces to the case of β < 1. Let us move on to the case of β < 1. Step 2 solves problem P(A, r) and sets p = diag(X_opt) for the optimal solution X_opt. Let I be a set of basis indices of V. Lemma 3 tells us that

    p(i) ≥ 1 − 8ǫ/(κ(1 − β)(1 − ǫ))    (A)

holds for every i ∈ I, where (A) denotes the subtracted term 8ǫ/(κ(1 − β)(1 − ǫ)). Since

    ǫ ≤ κ(1 − β)/(9(r + 1)) ≤ 1/18,

we have 1 − ǫ ≥ 17/18 > 8/9. In light of this, the term (A) is bounded as follows:

    (A) < 9ǫ/(κ(1 − β)) ≤ 1/(r + 1).

We thus obtain

    p(i) > r/(r + 1)  for i ∈ I.    (7)

The first constraint of problem P(A, r) requires X_opt to satisfy tr(X_opt) = r, i.e., Σ_{i∈N} p(i) = r. Hence,

    r = Σ_{i∈N} p(i) = Σ_{i∈I} p(i) + Σ_{i∈N\I} p(i).

Combining this with inequality (7) gives

    p(i) ≤ r/(r + 1)  for i ∈ N \ I.    (8)

Since I has r elements, inequalities (7) and (8) ensure that the index set corresponding to the r largest elements of p coincides with I. Hence, the index set J constructed in step 3 coincides with I, which is the set of basis indices of V. Consequently, after suitably rearranging the columns of W, the output W_out = A(:, J) satisfies ||W − W_out||_1 ≤ ǫ.
Refinement of Hottopixx with Postprocessing

Algorithm

We explore the case where there are duplicate basis columns in the input matrix of Algorithm 1. As shown in Section 4.2, the algorithm's guarantee of robustness to noise is founded upon Lemma 3. However, the lemma no longer holds, because β = 1 in this case. To address this issue, we develop and incorporate postprocessing in the algorithm. Let us outline our postprocessing first and give the details at the end of this section. In what follows, we will assume that we are given A satisfying Assumption 1. We use the term cluster to refer to a set of column indices of A. Although Lemma 3 does not hold in the case where there are duplicate basis columns, the optimal solution of problem P still provides us with clues for finding clusters from which we can obtain near-basis columns. For a cluster S ⊂ N and p ∈ R^n_+, define the score of cluster S by

    score(S, p) = Σ_{u∈S} p(u),

and we call p a point list. Let µ > 0 be a parameter and define

    T_j = {u ∈ N : ||a_u − w_j||_1 ≤ 2µ}    (9)

for each j ∈ R. Here, a_u is the uth column of A and w_j is the jth column of W. We call T_1, . . . , T_r anchors with parameter µ. Let p = diag(X_opt) for the optimal solution X_opt of problem P, and choose the parameter µ of T_j depending on the noise level ǫ involved in A. We show in Corollary 1 that the anchors T_1, . . . , T_r have high scores, i.e., score(T_j, p) > r/(r + 1) for every j ∈ R. Lemma 3.3 of [9] by Gillis implies that the same result holds for a feasible solution of problem Q. If we find all the anchors, then near-basis columns can be obtained by choosing one element from each anchor. However, even if we use the point list obtained from the optimal solution of P, it is not an easy task to find anchors exactly. We thus construct a collection F of clusters that contains all the anchors, and observe the structure of F. Our postprocessing algorithm is designed on the basis of this observation.
To describe F, we introduce Ω, a collection of clusters that will serve as the foundation of F. Sort the columns a_1, . . . , a_n of A by their L1 distance to a_i in ascending order, so that

    ||a_i − a_{u_1}||_1 ≤ ||a_i − a_{u_2}||_1 ≤ · · · ≤ ||a_i − a_{u_n}||_1.

For a point list p ∈ R^n_+ and the parameter µ used for constructing the anchors T_1, . . . , T_r, let

    F_i(p) = {S ∈ Ω_i : diam(S) ≤ 3µ, score(S, p) > r/(r + 1)}    (10)

and F(p) = ∪_{i∈N} F_i(p). In particular, if p is set as p = diag(X_opt) for the optimal solution X_opt of problem P, we use the abbreviations F_i for F_i(p) and F for F(p). That is, for p = diag(X_opt), F_i = F_i(p) and F = F(p). As mentioned above, we have to choose µ depending on the noise level ǫ to ensure that the anchors can have high scores. Hence, it is impossible to construct F(p). But it is possible to compute some of the clusters in F(p). Consider a collection G_i(p) of clusters obtained by removing the condition diam(S) ≤ 3µ in F_i(p):

    G_i(p) = {S ∈ Ω_i : score(S, p) > r/(r + 1)},    (11)

and let G(p) = ∪_{i∈N} G_i(p). Unlike F(p), we can construct G(p). Let Ŝ = argmin_{S∈G(p)} diam(S). Since F(p) ⊂ G(p), we have

    diam(Ŝ) = min_{S∈G(p)} diam(S) ≤ min_{S∈F(p)} diam(S) ≤ 3µ.

Hence, Ŝ belongs to F(p). We can get it through G(p). Let us look at F, which is an abbreviation of F(p) with the point list p obtained from the optimal solution of problem P. We show in Lemma 7 that any cluster in F always has a common element with some anchor. This means that clusters in F are localized around each anchor, and anchors are the cores of F. Hence, using the components F̃_1, . . . , F̃_r of F, given as

    F̃_j = {S ∈ F : max_{u∈S} ||a_u − w_j||_1 ≤ 8µ},    (12)

we can write F as F = F̃_1 ∪ · · · ∪ F̃_r. If the anchors are far from each other, in other words, if ω is large, then the components F̃_1, . . . , F̃_r are disjoint from each other. The left of Figure 1 illustrates F. According to the observations made so far, it turns out that we can find a cluster belonging to one of F̃_1, . . . , F̃_r.
A cluster of F is obtained by using G(p), and it belongs to one of F̃_1, . . . , F̃_r because F can be written as F = F̃_1 ∪ · · · ∪ F̃_r. Let us denote the obtained cluster by S_1, and assume that S_1 belongs to F̃_1 in order to simplify the subsequent description. By updating the point list p, we can find a cluster belonging to one of the remaining components F̃_2, . . . , F̃_r. Let q be a point list made by updating p as

    q(u) = 0 if u ∈ S_1,  and  q(u) = p(u) otherwise.

We show in Lemma 10 that F(q) can be written as F(q) = F̃_2 ∪ · · · ∪ F̃_r. The right of Figure 1 illustrates F(q). A cluster, denoted by S_2, of F(q) is obtained by using G(q), and it belongs to one of F̃_2, . . . , F̃_r. By repeating this procedure, we can find r clusters S_1, . . . , S_r such that S_j ∈ F̃_j for each j ∈ R, after rearranging the indices of F̃_1, . . . , F̃_r. The obtained clusters provide near-basis columns. We choose one element from each cluster and construct the set J. Rearranging the columns of W, we find that it satisfies ||W − A(:, J)||_1 ≤ 8µ. This leads to Theorem 2.

Algorithm 2 Refinement of Hottopixx with postprocessing
Input: A ∈ R^{d×n} and a positive integer r.
Output: W_out ∈ R^{d×r}.
1. Compute the optimal solution X_opt ∈ R^{n×n} of problem P(A, r).
2. Set p_1 = diag(X_opt), J = ∅ and ℓ = 1. Perform the following procedure.
   2-1. Find S_ℓ such that S_ℓ = argmin_{S∈G(p_ℓ)} diam(S).
   2-2. Choose one element from S_ℓ and add it to J. Increase ℓ by 1.
   2-3. If ℓ = r + 1, then return W_out = A(:, J) and terminate; otherwise, construct p_ℓ ∈ R^n_+ as
        p_ℓ(u) = 0 if u ∈ S_1 ∪ · · · ∪ S_{ℓ−1}, and p_ℓ(u) = p_1(u) otherwise,
        and go to step 2-1.

Algorithm 2 is a formal description of our algorithm. It takes as input (A, r). Step 2 is the postprocessing. The cost of step 2 is dominated by step 2-1, and the cost of step 2-1 is in turn dominated by the computation of the L1 distance between any two columns of A ∈ R^{d×n}, which takes O(n²d) flops.
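A runnable sketch of the postprocessing loop (step 2) is given below. Since the extracted text does not fully specify Ω_i, the sketch makes the simplifying assumption that the candidate clusters in G_i(p) are prefixes of the columns sorted by L1 distance to a_i, with diam(S) taken as max_{u∈S} ||a_i − a_u||_1; the data is a made-up toy instance with duplicate basis columns (β = 1).

```python
import numpy as np

def min_diam_cluster(A, p, r):
    """Step 2-1 (sketch): around each a_i, grow the prefix cluster until its
    score exceeds r/(r+1); return the candidate with the smallest diameter."""
    n = A.shape[1]
    best = None
    for i in range(n):
        dists = np.abs(A - A[:, [i]]).sum(axis=0)
        order = np.argsort(dists)
        score = 0.0
        for k, u in enumerate(order):
            score += p[u]
            if score > r / (r + 1):
                S, diam = set(order[:k + 1].tolist()), dists[order[k]]
                if best is None or diam < best[1]:
                    best = (S, diam)
                break
    return best[0]

def postprocess(A, p, r):
    """Step 2 (sketch): pick r clusters, zeroing the point list on each."""
    p = p.copy()
    J = []
    for _ in range(r):
        S = min_diam_cluster(A, p, r)
        J.append(min(S))       # step 2-2: choose one element from the cluster
        for u in S:
            p[u] = 0.0         # step 2-3: zero out the point list on S
    return J

# toy instance: columns 0/2 duplicate one basis column, 1/3 the other,
# and the point list spreads its mass over the duplicates
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0, 1.0, 0.5]])
p = np.array([0.5, 0.5, 0.5, 0.5, 0.0])
J = postprocess(A, p, r=2)
cols = {tuple(A[:, j]) for j in J}
assert cols == {(1.0, 0.0), (0.0, 1.0)}  # one column per distinct basis vector
```

Even though no single diagonal entry exceeds r/(r + 1) here, each duplicate pair forms a tight cluster whose total score does, so the loop recovers one representative per distinct basis column.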
We summarize below the definition and role of T_j, F(p), G(p) and F̂_j, which are used for analyzing Algorithm 2 in Section 5.2.
• T_j is a cluster, called an anchor, which is defined as in (9).
• F(p) is a collection of clusters constructed by using a point list p. It is formed as F(p) = ∪_{i∈N} F_i(p) where F_i(p) is defined as in (10). If p is a point list obtained from the optimal solution of problem P, we abbreviate F(p) and F_i(p) as F and F_i, respectively.
• G(p) is a collection of clusters constructed by using a point list p. It is formed as G(p) = ∪_{i∈N} G_i(p) where G_i(p) is defined as in (11), which is obtained by discarding one of the conditions imposed on F_i(p).
• F̂_j is a component of F, defined as in (12). We show in Lemma 8 that F is written as F = ∪_{j∈R} F̂_j.

Analysis

Scores of Anchors

We show that the anchors have high scores under the point list obtained by solving problem P.

Lemma 4. Let A satisfy Assumption 1. Assume κ > 0. Let µ satisfy µ ≠ 0 and ǫ ≤ µ. Set p = diag(X_opt) for the optimal solution X_opt of problem P(A, r). Then, the anchors T_1, . . . , T_r with parameter µ satisfy score(T_j, p) ≥ 1 − 16ǫ / (κµ(1 − ǫ)) for every j ∈ R.

We can prove this in a similar way as Lemma 3; the proof is in Appendix B. From Lemma 4, we immediately obtain Corollary 1.

Corollary 1. Let A satisfy Assumption 1. Set p = diag(X_opt) for the optimal solution X_opt of problem P(A, r). Consider two cases as follows:
• Let ǫ satisfy ǫ < κω / (578(r + 1)). The value of µ is set as µ = 17(r + 1)ǫ/κ + ξ by choosing an arbitrary real number ξ from the open interval (0, κ/35).
• Let ǫ satisfy ǫ < κ^2 / (289(r + 1)^2). The value of µ is set as µ = √ǫ + ξ by choosing an arbitrary real number ξ from the open interval (0, κ/35).
The following hold in both cases. (a) 0 ≤ ǫ < 1. (b) 0 < µ < ω/17. (c) ǫ ≤ µ. (d) score(T_j, p) > r/(r + 1) for every j ∈ R.

We can easily check that the corollary holds; the proof is given in Appendix C. Part (a) just tells us that the bounds imposed on ǫ in the two cases do not violate Assumption 1(b). The role of ξ is to prevent the value of µ from being zero; hence, we are allowed to choose an arbitrary real number from the open interval (0, κ/35).

Structure of F

We prove the observations about F that we made in Section 5.1.
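The claim that anchors have high scores can be checked numerically on a small synthetic instance. The sketch below makes two assumptions that are ours, not the paper's: the anchor is taken as T_j = {u ∈ N : ‖a_u − w_j‖_1 ≤ 2µ}, which is consistent with the inequalities used in the proofs around Lemmas 6 and 12, and diag(X_opt) is idealized as the indicator of the basis columns.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 8, 3, 40
W = rng.random((d, r))
W /= np.abs(W).sum(axis=0)                      # unit L1 columns (Assumption 1(a))
Hs = rng.dirichlet(np.ones(r), size=n - r).T    # mixing weights for non-basis columns
H = np.hstack([np.eye(r), Hs])                  # basis columns come first
eps, mu = 0.01, 0.05                            # noise level and parameter, eps <= mu
N = rng.standard_normal((d, n))
N *= eps / np.abs(N).sum(axis=0).max()          # every column satisfies ||n_u||_1 <= eps
A = W @ H + N
p = np.zeros(n)
p[:r] = 1.0                                     # idealized diag(X_opt): 1 on basis columns
for j in range(r):
    # assumed anchor definition: columns within L1 distance 2*mu of w_j
    Tj = np.where(np.abs(A - W[:, [j]]).sum(axis=0) <= 2 * mu)[0]
    assert p[Tj].sum() > r / (r + 1)            # score(T_j, p) exceeds r/(r+1)
```

Since each basis column satisfies ‖a_j − w_j‖_1 = ‖n_j‖_1 ≤ ǫ ≤ 2µ, the anchor T_j contains its own basis index, so the score is at least 1 > r/(r+1) under the idealized point list.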
Lemma 5. Let F(q) ≠ ∅ for some q ∈ R^n_+. Then, G(q) ≠ ∅. Moreover, F(q) contains Ŝ = arg min_{S∈G(q)} diam(S).

Proof. Since F_i(q) ⊂ G_i(q) ⊂ G(q) and F(q) = ∪_{i∈N} F_i(q), any element of F(q) belongs to G(q). Hence, F(q) ⊂ G(q) holds. Consequently, F(q) ≠ ∅ implies G(q) ≠ ∅. Since Ŝ belongs to G(q) = ∪_{i∈N} G_i(q), we have Ŝ ∈ Ω_{i*} for some i* ∈ N and score(Ŝ, q) > r/(r+1). From the relation F(q) ⊂ G(q), we have diam(Ŝ) = min_{S∈G(q)} diam(S) ≤ min_{S∈F(q)} diam(S) ≤ 3µ. Consequently, Ŝ ∈ F_{i*}(q), which implies Ŝ ∈ F(q).

Lemma 6. Frame the hypotheses of Corollary 1. The following hold: (a) Anchor T_j is not empty. (b) All anchors T_1, . . . , T_r belong to F. (c) Anchor T_j belongs to the component F̂_j of F.

Proof. Separability means that there is a map φ : R → N such that w_j = v_{φ(j)} for each j ∈ R. We use the map φ in the proof of parts (a) and (b).

(a) From Corollary 1(c), we have ‖a_{φ(j)} − w_j‖_1 = ‖v_{φ(j)} + n_{φ(j)} − w_j‖_1 = ‖n_{φ(j)}‖_1 ≤ ǫ ≤ µ. Hence, T_j contains φ(j), which means that T_j is not empty.

(b) We show that T_j belongs to F_{φ(j)} for each j ∈ R. Since φ(j) ∈ T_j, as shown in part (a), we have T_j ∈ Ω_{φ(j)}. By Corollary 1(d), the score of T_j by p satisfies score(T_j, p) > r/(r+1). The diameter of T_j in Ω_{φ(j)} satisfies diam(T_j) ≤ 3µ, since any u ∈ T_j satisfies ‖a_u − a_{φ(j)}‖_1 = ‖a_u − v_{φ(j)} − n_{φ(j)}‖_1 = ‖a_u − w_j − n_{φ(j)}‖_1 ≤ ‖a_u − w_j‖_1 + ‖n_{φ(j)}‖_1 ≤ 2µ + ǫ ≤ 3µ. The last inequality uses Corollary 1(c). Hence, T_j ∈ F_{φ(j)} for each j ∈ R. In addition, the definition of F implies F_{φ(j)} ⊂ F. Consequently, T_1, . . . , T_r belong to F.

(c) We have already shown T_j ∈ F for each j ∈ R in part (b). The definition of T_j implies that, for any u ∈ T_j, we have ‖a_u − w_j‖_1 ≤ 2µ ≤ 8µ. Hence, T_j belongs to F̂_j.

Parts (a) and (c) tell us that the components F̂_1, . . . , F̂_r of F are not empty. We will use this observation in the proof of Theorem 2.

Lemma 7. Frame the hypotheses of Corollary 1. For any S ∈ F, there is some j ∈ R such that S ∩ T_j ≠ ∅.

Proof. We start by showing that any two different anchors do not have a common element. Let x, y ∈ R and x ≠ y.
No u ∈ T_x belongs to T_y, since ‖a_u − w_y‖_1 = ‖(w_x − w_y) + (a_u − w_x)‖_1 ≥ ‖w_x − w_y‖_1 − ‖a_u − w_x‖_1 ≥ ω − 2µ (by the definition of ω) > 15µ (by Corollary 1(b)). Hence, T_x ∩ T_y = ∅ holds for any different x and y in R.

We will prove the lemma by contradiction. Assume that there is some S ∈ F such that S ∩ T_j = ∅ for every j ∈ R. Since T_x ∩ T_y = ∅ for x, y ∈ R with x ≠ y, any two different clusters among S, T_1, . . . , T_r do not have a common element. Hence, we have score(S, p) + Σ_{j∈R} score(T_j, p) = score(S ∪ T_1 ∪ · · · ∪ T_r, p) ≤ score(N, p) = r. The last equality follows from the fact that score(N, p) = tr(X_opt) = r holds, since the first constraint of problem P requires X_opt to satisfy tr(X_opt) = r. By Corollary 1(d), the score of T_j by p satisfies score(T_j, p) > r/(r+1) for each j ∈ R. Therefore, we get score(S, p) ≤ r/(r+1) and reach a contradiction to S ∈ F, which requires score(S, p) > r/(r+1). Hence, the assumption is false; that is, for any S ∈ F, there is some j ∈ R such that S ∩ T_j ≠ ∅.

Lemma 8. Frame the hypotheses of Corollary 1. The following hold:
(a) F, i.e., the abbreviation of F(p), is represented as F = ∪_{j∈R} F̂_j by using the components F̂_j of F.
(b) Let x, y ∈ R and x ≠ y. We have S_x ∩ S_y = ∅ for any (S_x, S_y) ∈ F̂_x × F̂_y.

Proof. (a) First, we prove the inclusion "⊃". Let S ∈ ∪_{j∈R} F̂_j. Then, there is a j* ∈ R such that S ∈ F̂_{j*}. The definition of F̂_{j*} implies S ∈ F. Hence, the inclusion "⊃" holds. Next, we prove the inclusion "⊂". Let S ∈ F. Recall that F is defined by F = ∪_{i∈N} F_i. Hence, there is an i* ∈ N such that S ∈ F_{i*}. Lemma 7 ensures that there is a j* ∈ R such that S ∩ T_{j*} ≠ ∅. Let v ∈ S ∩ T_{j*}. Then, for any u ∈ S, ‖a_u − w_{j*}‖_1 = ‖(a_u − a_v) + (a_v − w_{j*})‖_1 ≤ ‖a_u − a_v‖_1 + ‖a_v − w_{j*}‖_1 ≤ ‖a_u − a_v‖_1 + 2µ (by v ∈ T_{j*}) = ‖(a_u − a_{i*}) + (a_{i*} − a_v)‖_1 + 2µ ≤ ‖a_u − a_{i*}‖_1 + ‖a_{i*} − a_v‖_1 + 2µ ≤ 8µ (by u, v ∈ S and S ∈ F_{i*}). Accordingly, we have S ∈ F̂_{j*} for j* ∈ R, which implies S ∈ ∪_{j∈R} F̂_j. Hence, the inclusion "⊂" holds. Consequently, F = ∪_{j∈R} F̂_j as claimed.

(b) Let x, y ∈ R and x ≠ y. Let S_x ∈ F̂_x and S_y ∈ F̂_y.
We have, for any u ∈ S_x, ‖a_u − w_y‖_1 = ‖(w_x − w_y) + (a_u − w_x)‖_1 ≥ ‖w_x − w_y‖_1 − ‖a_u − w_x‖_1 ≥ ω − 8µ (by the definition of ω and S_x ∈ F̂_x) > 9µ (by Corollary 1(b)). Hence, u ∉ S_y. This means S_x ∩ S_y = ∅.

Here, we prove Lemma 9 for establishing Lemma 10. In Lemmas 9 and 10, we use the following notation: ℓ_1, . . . , ℓ_r denote the r integers in R; k is any positive integer satisfying k < r; and K is the set of consecutive integers from 1 to k.

Lemma 9. Frame the hypotheses of Corollary 1. We have the relation ∪_{j∈K} F̂_{ℓ_j} = F \ ∪_{j∈R\K} F̂_{ℓ_j}.

Proof. Lemma 8(b) implies F̂_x ∩ F̂_y = ∅ for x, y ∈ R with x ≠ y. We use this relation in the proof. To simplify the description, we denote A = ∪_{j∈K} F̂_{ℓ_j} and B = ∪_{j∈R\K} F̂_{ℓ_j}. First, we prove the inclusion "⊂". Let S ∈ A. Then, S ∈ F̂_{ℓ_{j*}} for some j* ∈ K. This implies S ∈ F by the definition of F̂_{ℓ_{j*}}. In addition, as shown above, we have F̂_{ℓ_{j*}} ∩ F̂_{ℓ_j} = ∅ for every j ∈ R \ K. Hence, S ∉ ∪_{j∈R\K} F̂_{ℓ_j}. Consequently, the inclusion "⊂" holds. Next, we prove the inclusion "⊃". It holds if F \ B = ∅. In what follows, we thus assume F \ B ≠ ∅, so that we can choose some S ∈ F \ B. Assume, for contradiction, that S ∉ A. Then, S ∈ F ∩ A^c ∩ B^c. Meanwhile, F ∩ A^c ∩ B^c = F ∩ (A ∪ B)^c = F ∩ (∪_{j∈R} F̂_{ℓ_j})^c = F ∩ F^c = ∅ holds by De Morgan's laws and Lemma 8(a). This contradicts the existence of S. Hence, the inclusion "⊃" holds. Consequently, ∪_{j∈K} F̂_{ℓ_j} = F \ ∪_{j∈R\K} F̂_{ℓ_j} as claimed.

Lemma 10. Frame the hypotheses of Corollary 1. Let S_1, . . . , S_k be clusters such that S_j ∈ F̂_{ℓ_j} for each j ∈ K. Suppose that we are given S_1, . . . , S_k and the point list p. Construct a point list q ∈ R^n_+ by q(u) = 0 if u ∈ S_1 ∪ · · · ∪ S_k, and q(u) = p(u) otherwise. Then, the following hold:
(a) Let S ∈ F. Then, S ∈ ∪_{j∈R\K} F̂_{ℓ_j} ⇔ score(S, q) > r/(r + 1).
(b) F(q) is represented as F(q) = ∪_{j∈R\K} F̂_{ℓ_j}.

Proof. (a) First, we prove the direction "⇒". Let S ∈ ∪_{j∈R\K} F̂_{ℓ_j}.
Then, S belongs to F̂_{ℓ_{j*}} for some j* ∈ R \ K. Meanwhile, S_j belongs to F̂_{ℓ_j} for j ∈ K. Lemma 8(b) then tells us that S ∩ S_j = ∅ for every j ∈ K. Hence, from the construction of q, we have q(u) = p(u) for every u ∈ S. In addition, since S ∈ F̂_{ℓ_{j*}} implies S ∈ F by the definition of F̂_{ℓ_{j*}}, the score satisfies score(S, p) > r/(r+1). Consequently, we obtain score(S, q) = score(S, p) > r/(r+1).

Next, we prove the direction "⇐" by showing that the contrapositive is true. In light of Lemma 9, the contrapositive statement is

S ∈ ∪_{j∈K} F̂_{ℓ_j} ⇒ score(S, q) ≤ r/(r + 1). (13)

Let S ∈ ∪_{j∈K} F̂_{ℓ_j}. Then, S belongs to F̂_{ℓ_{j*}} for some j* ∈ K. From the construction of q, we have q(u) = 0 for every u ∈ S_{j*}. Hence,

score(S, q) = Σ_{u∈S} q(u) = Σ_{u∈S̄} q(u) (14)

for S̄ = S \ S_{j*}. Let S_{k+1}, . . . , S_r be clusters such that S_j ∈ F̂_{ℓ_j} for each j ∈ R \ K. Since S_j ∈ F̂_{ℓ_j} for j ∈ R and S̄ = S \ S_{j*} where S, S_{j*} ∈ F̂_{ℓ_{j*}}, Lemma 8(b) tells us that the following statements hold:

S̄ ∩ S_j = ∅ for every j ∈ R. (15)
S_x ∩ S_y = ∅ for every different x, y ∈ R. (16)

Statement (15) implies that no element of S̄ belongs to S_1 ∪ · · · ∪ S_k. Hence,

Σ_{u∈S̄} q(u) = Σ_{u∈S̄} p(u) = score(S̄, p). (17)

It follows from equalities (14) and (17) that the relation score(S, q) = score(S̄, p) holds. From statements (15) and (16), we have score(S̄, p) + Σ_{j∈R} score(S_j, p) = score(S̄ ∪ S_1 ∪ · · · ∪ S_r, p) ≤ score(N, p) = r. Here, score(S_j, p) > r/(r+1) since S_j ∈ F̂_{ℓ_j} implies S_j ∈ F. Accordingly, the inequality above yields score(S̄, p) ≤ r/(r+1). Combining it with the relation score(S, q) = score(S̄, p), we obtain score(S, q) ≤ r/(r+1). Consequently, statement (13) holds.

(b) First, we prove the inclusion "⊃". Let S ∈ ∪_{j∈R\K} F̂_{ℓ_j}. Then, S belongs to F̂_{ℓ_{j*}} for some j* ∈ R \ K. It thus follows from part (a) that score(S, q) > r/(r+1). In addition, S ∈ F̂_{ℓ_{j*}} implies S ∈ F by the definition of F̂_{ℓ_{j*}}. From F = ∪_{i∈N} F_i, we have S ∈ F_{i*} for some i* ∈ N. Thus, S ∈ Ω_{i*} and diam(S) ≤ 3µ by the definition of F_{i*}.
Consequently, we obtain S ∈ F i * (q), which implies S ∈ F(q) since F(q) = ∪ i∈N F i (q). Next, we prove the inclusion "⊂". Let S ∈ F(q). Then, S belongs to F i * (q) for some i * ∈ N , since F(q) = ∪ i∈N F i (q). It follows from the definition of F i * (q) that S ∈ Ω i * , diam(S) ≤ 3µ, and score(S, q) > r r+1 . Since S satisfies score(S, q) > r r+1 , part (a) ensures that the inclusion "⊂" holds if S ∈ F. Thus, the remainder of the proof is to show S ∈ F. The construction of a point list q tells us that p(i) ≥ q(i) for every i ∈ N . Hence, score(S, p) ≥ score(S, q) > r r + 1 . holds. Consequently, we obtain S ∈ F i * (p), which implies S ∈ F since F = F(p) = ∪ i∈N F i (p). Robustness to Noise We are now ready to prove Theorem 2. (Proof of Theorem 2). Let S 1 , . . . , S r be clusters generated by Algorithm 2. We claim that there is a permutation π : R → R such that S ℓ ∈F π(ℓ) for each ℓ ∈ R. We use induction on ℓ. Set a parameter µ as µ = λ + ξ by choosing an arbitrary real number ξ from the open interval (0, κ 35 ). The value of λ is set according to the noise level described in the theorem: • λ = 17(r+1)ǫ κ in the former case where ǫ < κω 578(r+1) . • λ = √ ǫ in the latter case where ǫ < κ 2 289(r+1) 2 . Base case: Step 1 of the algorithm computes the optimal solution X opt of problem P(A, r) and step 2 sets p 1 = diag(X opt ). Thus, Lemmas 6 and 8 hold. Lemma 8(a) tells us that F(p 1 ) is represented as F(p 1 ) = j∈RF j . It follows from Lemmas 6(a) and 6(c) that the components F 1 . . . ,F r are not empty. This means that F(p 1 ) is not empty. We can thus use Lemma 5, which tells us that F(p 1 ) contains S 1 = arg min S∈G(p 1 ) diam(S). Accordingly, there is a j ∈ R such that S 1 ∈F j . Induction step: Let ℓ 1 , . . . , ℓ r denote the r integers in R. Let k be any positive integer satisfying k < r, and K be the set of consecutive integers from 1 to k. Suppose that S j ∈F ℓ j holds for each j ∈ K. 
Lemma 10 holds; part (b) of the lemma tells us that F(p k+1 ) is represented as F(p k+1 ) = j∈R\KF ℓ j . As mentioned above,F k+1 . . . ,F r are not empty. Hence, F(p k+1 ) is not empty. We can thus use Lemma 5, which tells us that F(p k+1 ) contains S k+1 = arg min S∈G(p k+1 ) diam(S). Accordingly, there is a j ∈ R \ K such that S k+1 ∈F ℓ j . Consequently, there is a permutation π : R → R such that S ℓ ∈F π(ℓ) for each ℓ ∈ R. In light of the definition ofF j , this result implies that the output W out = A(:, J) of the algorithm satisfies W − W out 1 ≤ 8µ = 8(λ + ξ) by rearranging the columns of W . Since the inequality holds for any small positive number ξ, it turns out that W − W out 1 ≤ 8λ holds. This gives the desired results. Experiments We conducted experiments to see the practical performance of our algorithms. Gillis and Luce [13] observed in their experiments that the postprocessing of Gillis [9] does not always enhance the robustness of their refinement of Hottopixx. For that reason, they proposed to incorporate a hybrid postprocessing into their refinement. A detailed description was given in Algorithm 6 of [13]. They implemented it and showed its superiority to other algorithms. We incorporated the algorithmic framework of hybrid postprocessing into Algorithm 2, as described in Algorithm 3, and implemented it on MATLAB. The purpose of our experiments was to demonstrate its performance. We compared four algorithms as follows: RHHP (Algorithm 3), LP-rho1 (Algorithm 6 of [13]), Hottopixx (Algorithm 1 of [13]) and SPA (Algorithm 1 with f (x) = x 2 2 of [15]). SPA was originally proposed in [1] in the context of chemometrics, and is now considered a popular algorithm for solving separable NMF problems. For the implementation of LP-rho1, Hottopixx and SPA, we used the MATLAB functions LPsepNMF cplex, hottopixx cplex and FastSepNMF whose code is available at the website of the first author of [13]. 
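For reference, the SPA baseline (Algorithm 1 with f(x) = ‖x‖₂² of [15]) follows a simple greedy projection scheme. The following is a minimal Python sketch of that scheme, not the MATLAB function FastSepNMF used in the experiments:

```python
import numpy as np

def spa(A, r):
    """Successive projection algorithm sketch: repeatedly pick the column
    with the largest squared L2 norm in the residual, then project the
    residual onto the orthogonal complement of the chosen column."""
    R = A.astype(float).copy()
    J = []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))   # column maximizing ||.||_2^2
        u = R[:, j] / np.linalg.norm(R[:, j])     # assumes a nonzero residual column
        R -= np.outer(u, u @ R)                   # orthogonal projection step
        J.append(j)
    return J
```

On a noiseless separable instance, the extreme columns are picked first, since every mixed column is a convex combination and hence has a smaller norm.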
For solving LP problems, the functions hottopixx cplex and LPsepNMF cplex employed CPLEX. Following them, we employed it in the implementation of RHHP. We tested the algorithms on four synthetic datasets whose construction is the same as in [13,10]. Each dataset contained noisy separable matrices A = W H + N ∈ R 30×200 where the factorization rank is 10 and the set of basis indices is {1, . . . , 10}. The components W ∈ R 30×10 + , H ∈ R 10×200 + and N ∈ R 30×200 were generated as follows. • W : Using the following procedures (A) and (B), two types of matrices were generated. (A) Normal: First, generate W ∈ R 30×10 whose elements are drawn from a uniform distribution on the interval [0, 1]. Then, normalize the columns to have unit L 1 norm. (B) Ill-conditioned: First, generate W ∈ R 30×10 as in the first step of procedure above. Second, compute the reduced SVD W = F ΣG ⊤ where Σ is a diagonal matrix of size 10, F ∈ R 30×10 , and G ∈ R 10×10 . Third, choose a positive integer c and replace W by F SG ⊤ using a diagonal matrix S of size 10 whose ith diagonal element is α (i−1) for α ∈ R satisfying α 9 = 10 −c . Finally, replace all negative elements by 0 and then normalize the columns to have unit L 1 norm. • H: It is formed as H = [I,H] where the submatrix composed of 10 columns from the first one is an identity matrix of size 10, and the columns of the remaining submatrix of size 10 × 190 are from a Dirichlet distribution whose 10 parameters are uniformly from the interval [0, 1]. Hence, H is nonnegative and every column has unit L 1 norm. Moreover, if one constructs V = W H, our parameter choice of a Dirichlet distribution encourages the columns of V to lie around the boundary of the convex hull of the columns of W . • N : First, choose a positive real number δ serving as a noise intensity, and generate N ∈ R 30×200 whose elements are from a standard normal distribution. Then, normalize it such that the L 1 norm is equal to δ. 
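The dataset construction above can be sketched as follows. This is our NumPy sketch, not the MATLAB code used in the experiments; the function name is ours, and interpreting "the L1 norm of N" as the induced L1 norm (maximum absolute column sum) is an assumption consistent with the paper's use of ‖N‖₁ ≤ ǫ.

```python
import numpy as np

def make_noisy_separable(d=30, r=10, n=200, delta=0.1, c=None, seed=0):
    """Sketch of the synthetic data construction: c=None gives the 'Normal'
    W of procedure (A); an integer c gives the ill-conditioned variant (B)."""
    rng = np.random.default_rng(seed)
    W = rng.random((d, r))
    if c is not None:                                  # ill-conditioned variant
        F, s, Gt = np.linalg.svd(W, full_matrices=False)
        alpha = 10.0 ** (-c / (r - 1))                 # alpha^(r-1) = 10^{-c}
        S = alpha ** np.arange(r)                      # i-th diagonal element alpha^(i-1)
        W = np.maximum(F @ np.diag(S) @ Gt, 0.0)       # replace negative elements by 0
    W /= W.sum(axis=0)                                 # normalize columns to unit L1 norm
    Ht = rng.dirichlet(rng.random(r), size=n - r).T    # Dirichlet with uniform parameters
    H = np.hstack([np.eye(r), Ht])                     # H = [I, H~]
    N = rng.standard_normal((d, n))
    N *= delta / np.abs(N).sum(axis=0).max()           # induced L1 norm of N equals delta
    return W @ H + N, W, H
```

For instance, `make_noisy_separable(delta=0.05, c=3)` would produce one matrix of dataset 2 under these assumptions.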
Hence, the resulting matrix N satisfies ‖N‖_1 = δ. To generate noisy separable matrices in dataset 1, we chose 20 equally spaced points δ in log space between 10^{−2} and 1, and then constructed N satisfying ‖N‖_1 = δ for each δ. We used the matrices W generated by procedure (A). For those in datasets 2-4, we chose 20 equally spaced points δ in log space between 10^{−2} and 0.5, and then constructed N satisfying ‖N‖_1 = δ. We used ill-conditioned matrices W generated by procedure (B) with the choice of c as follows: c = 3 for dataset 2, c = 4 for dataset 3, and c = 5 for dataset 4. In the construction of datasets 1-4, we generated 50 separable matrices V = W H, and then formed noisy separable matrices A = V + N by adding 20 matrices N to each V; hence, each dataset contained 1,000 matrices in total. Table 3 displays the average values of κ, ω, σ_max/σ_min and β over the 50 matrices W and H in datasets 1-4. Here, σ_max/σ_min is the ratio of the largest singular value of W to the smallest one. Recall that κ and ω are defined in terms of W, and β in terms of the submatrix H̃ of H. The performance of each algorithm was evaluated by using the index recovery rate, defined by |J ∩ {1, . . . , 10}|/10 for an index set J output by it. LP-rho1 and Hottopixx required us to designate a noise level ǫ as input. For a matrix A = W H + N in the datasets, we set ǫ = ‖N‖_1, which is equal to δ, and then ran the algorithms. The experiments were conducted on an Intel Xeon CPU E5-1620 with 64 GB memory running MATLAB. Figure 2 and Table 4 summarize the experimental results: the figure displays the average of the index recovery rates attained by the four algorithms, and the table lists the maximum values of δ for 100% and 80% recovery of basis indices by them. Regarding the index recovery rates of the algorithms, we can see the following: • RHHP, LP-rho1, and SPA are better than Hottopixx for every dataset.
• For dataset 1, SPA is slightly better than RHHP and LP-rho1, since the maximum value of δ for 80% recovery determined by SPA exceeds those determined by RHHP and LP-rho1. RHHP is almost the same as LP-rho1. The experimental results imply that, without taking a noise level as input, RHHP is as robust to noise as LP-rho1. Concluding Remarks We refined Hottopixx of Bittorf et al. [5] and the postprocessing of Gillis [9] and showed that our refinement has almost the same robustness to noise as the original one. To enable Hottopixx to run without prior knowledge of the noise level, we replaced the problem Q with P. This is a simple idea, and it is easy to see that Lemma 1 holds. From the lemma, we can immediately see that the refinement is similar in robustness to Hottopixx. However, it is not obvious how the postprocessing of Gillis can be refined so that the algorithm runs without prior knowledge of the noise level. We constructed a collection F of clusters containing anchors T 1 , . . . , T r and examined the structure of F. On the basis of this examination, we developed a refinement of the postprocessing and analyzed its robustness to noise. We close this paper with remarks on directions for future research. There is a computational issue in Algorithms 1 and 2. The bottleneck is in solving problem P. As shown in Section 4.1, this can be transformed into an equivalent LP problem P ′ with O(n 2 ) variables and O(n 2 ) constraints where n is the number of columns of the input matrix and we assume that it is greater than the number d of rows. Since the size of P ′ grows quadratically with n, solving P ′ is computationally challenging when n is large. We thus need to develop efficient algorithms. Bittorf et al. [5] and Gillis and Luce [14] used first-order methods and developed algorithms for solving their optimization models Q and R. The use of first-order methods would be promising for solving P ′ efficiently. 
Regarding the bounds given in Theorems 1 and 2, it remains to investigate the tightness of them. Recently, Gillis [11] studied an ideal algorithm for solving separable NMF problems. Since the computational cost grows exponentially with the problem size, it is not realistic to apply the algorithm to large problems. They showed that it achieves the best possible bound on the error relative to the basis. There is a gap between our error bound shown for Algorithm 2 in Theorem 2 and the optimal one. It would be interesting to see whether we can reduce the gap. since 1 ⊤ V = 1 by Assumption 1(a) and V , X opt ≥ 0. By using Assumptions 1(a) and 1(b), we bound the terms (B) and (C) as follows: (B) ≤ N 1 X opt (:, i) 1 ≤ ǫ X opt (:, i) 1 , (C) = v i + n i 1 ≤ v i 1 + n i 1 ≤ 1 + ǫ. Hence, we obtain 1 + 3ǫ ≥ (1 − ǫ) X opt (:, i) 1 , which gives the first inequality of this lemma, since 0 ≤ ǫ < 1 by Assumption 1(b). Next, we prove the second inequality. By Lemma 1, 2ǫ ≥ θ = A − AX opt 1 ≥ a i − AX opt (:, i) 1 = v i + n i − V X opt (:, i) − N X opt (:, i) 1 = v i − V X opt (:, i) + n i − N X opt (:, i) 1 ≥ v i − V X opt (:, i) 1 − n i − N X opt (:, i) 1 ≥ v i − V X opt (:, i) 1 − ( n i 1 + N X opt (:, i) 1 ) (A) . By using Assumption 1(b) and the first inequality of this lemma, we bound the term (A) as follows: (A) ≤ n i 1 + N 1 X opt (:, i) 1 ≤ 2ǫ(1 + ǫ) 1 − ǫ . We then obtain the second inequality of this lemma. Note that ǫw j 1 =ǫ w j 1 holds sinceǫ = 4ǫ 1−ǫ ≥ 0 by Assumption 1(b). We thus obtain (1 − η +ǫ)w j − W (:, R \ {j})z 1 ≤ 2ǫ for η and z defined above. Moreover, considering that all elements of H are less than or equal to 1 since Assumption 1(a) holds and H ≥ 0, we have η = H(j, :)X opt (:, i) ≤ 1 ⊤ X opt (:, i) = X opt (:, i) 1 (by X opt ≥ 0) ≤ 1 +ǫ (by Lemma 2). This gives 1 − η +ǫ ≥ 0. We are now able to prove Lemma 3. (Proof of Lemma 3). Since we put Assumption 1 on A, it can be written as A = V + N ∈ R d×n for V ∈ R d×n + and N ∈ R d×n . 
Since V is r-separable of the form V = W H = W [I,H]Π shown in (1), there is a map φ : R → N such that w j = v φ(j) for each j ∈ R. Hence, the basis index I of V is given as I = {φ(1), . . . , φ(j)}. Let i = φ(j) for j ∈ R. Lemma 11 tells us that Note that z ′ ≥ 0 since z = H(R \ {j}, :)X opt (:, i) ≥ 0 and 1 − η +ǫ > 0. Accordingly, we obtain a lower bound on η, η ≥ 1 + (κ − 2)ǫ κ .(18) We can upper bound η using p(i). Since w j = v φ(j) and i = φ(j), we have H(:, i) = e j , and thus H(j, i) = 1. In light of this, we rewrite η as = β( X opt (:, i) 1 − X opt (i, i)) (by X opt ≥ 0) ≤ β(1 +ǫ − p(i)) (by Lemma 2). We thus obtain an upper bound on η, η ≤ (1 − β)p(i) + β(1 +ǫ). The bounds (18) and (19) yield 1 + (κ − 2)ǫ κ ≤ (1 − β)p(i) + β(1 +ǫ) ⇔ p(i) ≥ 1 +ǫ − 2ǫ κ(1 − β) . Assumption 1(b) impliesǫ = 4ǫ/(1−ǫ) ≥ 0. Recall that i = φ(j) for j ∈ R and I = {φ(1), . . . , φ(r)}. Hence, from the inequality above, we obtain p(i) ≥ 1 − 8ǫ κ(1−β)(1−ǫ) for every i ∈ I. Next, consider the case where 1 − η +ǫ = 0. By inequality (19), we have 1 +ǫ = η ≤ (1 − β)p(i) + β(1 +ǫ), which gives p(i) ≥ 1 +ǫ. Here,ǫ ≥ 0 and 8ǫ κ(1−β)(1−ǫ) ≥ 0 by Assumption 1(b), κ > 0 and β < 1. We thus obtain p(i) ≥ 1 − 8ǫ κ(1−β)(1−ǫ) for every i ∈ I. Remark 2. In the proof above, to find a lower bound on η, we have used the observation that 1 − η +ǫ is positive or zero, which is not taken into account in the proof of Lemma 2.2 of [9]. Let us move on to prove Lemma 4. To do so, we prove the following lemma. Note that (1 − H(j, u))w j 1 = (1 − H(j, u)) w j 1 holds since 1 − H(j, u) ≥ 0 by Assumption 1(a) and H ≥ 0. It follows from the inequality above that max u∈T c j H(j, u) < 1 − µ 2 holds. In light of this, we can prove Lemma 4 in a similar way as Lemma 3. The proof is almost the same, except the evaluation of the upper bound on η. (Proof of Lemma 4). We use Lemma 11. Let φ : R → N be a map such that w j = v φ(j) for each j ∈ R. 
The lemma tells us that, for j ∈ R and i = φ(j) ∈ N , we have (1 − η +ǫ)w j − W (:, R \ {j})z 1 ≤ 2ǫ and 1 − η +ǫ ≥ 0 where η, z andǫ are as shown in the lemma. First, consider the case where 1 − η +ǫ > 0. As in the proof of Lemma 3, we have 2ǫ ≥ (1 − η +ǫ)w j − W (:, R \ {j})z 1 ≥ (1 − η +ǫ)κ, which gives a lower bound on η, as shown in (18). We can upper bound η using score(T j , p). Write η as η = H(j, :)X opt (:, i) = H(j, T j )X opt (T j , i) (A) + H(j, T c j )X opt (T c j , i) (B) . The term (A) is bounded as follows: (A) ≤ 1 ⊤ X opt (T j , i) (by Assumption 1(a) and H ≥ 0) = X opt (T j , i) 1 (by X opt ≥ 0). The term (B) is bounded as follows: (B) < 1 − µ 2 1 ⊤ X opt (T c j , i) (by Lemma 12) = 1 − µ 2 X opt (T c j , i) 1 (by X opt ≥ 0) = 1 − µ 2 ( X opt (:, i) 1 − X opt (T j , i) 1 ) ≤ 1 − µ 2 (1 +ǫ − X opt (T j , i) 1 ) (by Lemma 2). −1 1 where 1{i, u 1 , . . . , u n−1 } = N . Then, construct Ω i = {{i}, {i, u 1 }, {i, u 1 , u 2 }, . . . , {i, u 1 , u 2 , . . . , u For a cluster S ∈ Ω i , define the diameter of S in Ω i by diam(S) = max u∈S a i − a u 1 . FFigure 1 : 1= Illustration of F = F(p) and F(q) where p is a point list obtained from the optimal solution of problem P, and q is a point list obtained by updating p such that q(u) = 0 if u belongs to the red-colored cluster S; otherwise, q(u) = p(u): clusters (set of points surrounded by an oval), anchors (set of points surrounded by an oval filled with gray color), the componentsF i of F (collection of clusters surrounded by a dotted oval), and basis columns (star). κω 578(r+1) . The value of µ is set as µ = 17(r+1)ǫ κ + ξ by choosing an arbitrary real number ξ from the open interval (0, κ 35 ). • Let ǫ satisfy ǫ < κ 2 289(r+1) 2 . The value of µ is set as µ = √ ǫ + ξ by choosing an arbitrary real number ξ from the open interval (0, κ 35 ). Lemma 6 . 6Frame the hypotheses of Corollary 1. The following hold:(a) Anchor T j is not empty.(b) All anchors T 1 , . . . , T r belong to F. 
( c ) cAnchor T j belongs to the componentF j of F. Lemma 8 . 8Frame the hypotheses of Corollary 1. The following hold: (a) F, i.e., the abbreviation of F(p), is represented as F = j∈RF j by using the componentsF j of F. Algorithm 3 3Refinement of Hottopixx with hybrid postprocessingInput: A ∈ R d×n and a positive integer r. Output: Set J of r elements from N .1. Perform step 1 of Algorithm 2. Let J 1 be the index set corresponding to the r largest elements of diag(X opt ). 2 = J for the index set J obtained at the termination of step Figure 2 : 2Average of index recovery rates by four algorithms for datasets1-4. Since W H = w 1 H(1, :) + · · · + w r H(r, :) = w j H(j, :) + W (:, R \ {j})H(R \ {j}, :) the term (A) is rewritten as(A) = η · w j + W (:, R \ {j})z.by letting η = H(j, :)X opt (:, i) ∈ R and z = H(R \ {j}, :)X opt (:, i) ∈ R r−1 .Accordingly,ǫ ≥ (1 − η)w j − W (:, R \ {j})z 1 = (1 − η +ǫ)w j − W (:, R \ {j})z −ǫw j 1 ≥ (1 − η +ǫ)w j − W (:, R \ {j})z 1 −ǫ w j 1 = (1 − η +ǫ)w j − W (:, R \ {j})z 1 −ǫ (by Assumption1(a)). ( 1 1− η +ǫ)w j − W (:, R \ {j})z 1 ≤ 2ǫ and 1 − η +ǫ ≥ 0 hold for η = H(j, :)X opt (:, i), z = H(R \ {j}, :)X opt (:, i) andǫ = 4ǫ 1 − ǫ .First, consider the case where 1 − η +ǫ > 0. We find that2ǫ ≥ (1 − η +ǫ)w j − W (:, R \ {j})z 1 = (1 − η +ǫ) w j − W (:, R \ {j})z ′ 1 (by letting z ′ = z/(1 − η +ǫ)) ≥ (1 − η +ǫ)κ(by the definition of κ). η = H(j, :)X opt (:, i) = X opt (i, i) + H(j, N \ {i})X opt (N \ {i}, i) = p(i) + H(j, N \ {i})X opt (N \ {i}, i) (A)and bound the term (A) as follows:(A) ≤ β · 1 ⊤ X opt (N \ {i}, i)(by the definition of β) Lemma 12 . 12Let A satisfy Assumption 1. Let T j be an anchor with parameter µ satisfying ǫ ≤ µ.Then, for j ∈ R, w j − a u 1 = w j − W H(:, u) − n u 1 ≤ w j − W H(:, u) 1 + n u 1 ≤ w j − W H(:, u) 1 + ǫ ≤ w j − W H(:, u) 1 + µ.Hence, w j − W H(:, u) 1 > µ holds. 
Furthermore,µ < w j − W H(:, u) 1 = w j − w j H(j, u) − W (:, R \ {j})H(R \ {j}, u) 1 ≤ (1 − H(j, u)) w j 1 + W (:, R \ {j} 1 H(R \ {j}, u) 1 = 1 − H(j, u) + H(R \ {j}, u) 1 (by Assumption 1(a)) = 1 − 2H(j, u) + H(:, u) 1 (by H ≥ 0) = 2 − 2H(j, u) (by Assumption1(a)). Table 1 : 1Comparison of our result (Theorem 1) with those of Gillis (Theorem 2.3 of Table 2 : 2Comparison of our result (Theorem 2) with those of Gillis (Theorem 3.5 of [9]) and Gillis and Luce (Theorem 7 of [13]) for algorithms with postprocessing. Input Model Assumption Noise level Error Our result A, r P Assumption 1 κω 578(r+1) 136(r+1) κ ǫ Gillis A, r, ǫ Q Assumption 1 κω 99(r+1) 49(r+1) κ ǫ+2ǫ Gillis and Luce A, r, ǫ R Assumption 1 κω 99(r+1) 49(r+1) κ ǫ+2ǫ Table 3 : 3Average values of κ, ω, σ max /σ min and β over 50 matrices W and H in datasets 1-4.Dataset 1 Dataset 2 Dataset 3 Dataset 4 Type of W Normal Ill-conditioned Ill-conditioned Ill-conditioned with c = 3 with c = 4 with c = 5 κ 3.27 × 10 −1 3.67 × 10 −2 1.16 × 10 −2 3.43 × 10 −3 ω 4.70 × 10 −1 1.47 × 10 −1 8.62 × 10 −2 5.15 × 10 −2 σ max /σ min 1.09 × 10 1 3.07 × 10 2 3.38 × 10 3 5.36 × 10 4 β 8.03 × 10 −1 8.03 × 10 −1 8.03 × 10 −1 8.03 × 10 −1 Table 4 : 4Maximum values of δ for 100% and 80% recovery of basis indices. The symbol "-" in 100% recovery (resp. 80% recovery) means that the average of the index recovery rates at δ = 0.01 is less than 1 (resp. 0.8). The bold-faced values indicate the maximum value in each column.• For datasets 2-4, RHHP and LP-rho1 are better than SPA. 
RHHP is slightly better than LP-rho1, since the maximum values of δ for 80% recovery determined by RHHP exceed those determined by LP-rho1.Dataset 1 Dataset 2 Dataset 3 Dataset 4 100% 80% 100% 80% 100% 80% 100% 80% RHHP 0.089 0.298 0.015 0.118 - 0.052 - 0.019 LP-rho1 0.089 0.298 0.010 0.096 - 0.042 - 0.015 Hottopixx - 0.043 - 0.010 - - - - SPA 0.089 0.379 - 0.052 - 0.015 - - AcknowledgmentsThe author would like thank Nicolas Gillis of University of Mons who provided feedback on this manuscript, and thank the anonymous referees for careful reading and helpful comments that enhanced the quality of this paper significantly. This research was supported by the Japan Society for the Promotion of Science (JSPS KAKENHI Grant Number 20K11951).Appendix AProof of Lemma 2 (Proof ofLemma 2). We prove the first inequality. By Lemma 1, .The term (A) can be rewritten as (A) = 1 ⊤ V X opt (:, i) = 1 ⊤ X opt (:, i) = X opt (:, i) 1Appendix B Proof of Lemmas 3 and 4We use the following lemma to prove Lemmas 3 and 4.Lemma 11. Let A satisfy Assumption 1. Let φ : R → N be a map such that w j = v φ(j) for each j ∈ R. Then, for j ∈ R and i = φ(j) ∈ N , we havewhere X opt is the optimal solution of problem P(A, r).Proof. Let j ∈ R and i = φ(j) ∈ N . Lemma 2 tells us thatWe then find thatHere,since X opt (u, i) ≤ X opt (u, u) and X opt ≥ 0 by the third and fourth constraints of problem P. Accordingly, we obtainThe bounds(18)and(20)yieldHere,ǫ ≥ 0. We thus obtain score(T j , p) ≥ 1 − 16ǫ κµ(1−ǫ) for every j ∈ R. Next, consider the case where 1 − η +ǫ = 0. By inequality (20), we havewhich gives score(T j , p) > 1 +ǫ. Here,ǫ ≥ 0 and 16ǫ κµ(1−ǫ) ≥ 0. We thus obtain score(T j , p) ≥ 1 − 16ǫ κµ(1−ǫ) for every j ∈ R.Appendix C Proof of Corollary 1 (Proof of Corollary 1). Since Assumption 1(a) holds, we have the bounds 0 ≤ κ ≤ 1 and 0 ≤ ω ≤ 2 shown in(3)and(4). Also, since Assumption 1(b) holds, we have ǫ ≥ 0. Hence, the bounds imposed on ǫ in the two cases imply κ > 0. 
Accordingly, κ and ω satisfy 0 < κ ≤ 1 and 0 ≤ ω ≤ 2.In addition, they satisfy the relation κ ≤ ω shown in(2). Former case (a) We only have to prove ǫ < 1 since ǫ ≥ 0 by Assumption 1(b). The bounds κ ≤ 1 and ω ≤ 2 imply ǫ < ωκ 578(r + 1) ≤ 1 578.Hence, ǫ satisfies ǫ < 1. (b) Since r, κ, ξ > 0 and ǫ ≥ 0, we have µ = 17(r + 1)ǫ κ + ξ > 0.By using the bound on ǫ, we can put a bound on µ: µ = 17(r + 1)ǫ κ + ξ < ω 34 + ξ.Since ξ < κ/35 and κ ≤ ω, we haveHence, 0 < µ < ω/17 holds. (c) Since ξ > 0 and κ ≤ 1, we haveThus, ǫ ≤ µ holds. (d) The corollary satisfies the hypotheses of Lemma 4. This is because Assumption 1(b) is not violated by the bound on ǫ that we put in part (a); κ > 0 holds, as explained at the beginning of the proof; and µ = 0 and ǫ ≤ µ hold, as shown in parts (b) and (c). Accordingly,holds for every j ∈ R. If ǫ = 0, then, score(T j , p) ≥ 1 > r/(r + 1). We thus assume ǫ > 0. The bound on ǫ in (21) implies ǫ < 1/17 and we haveWrite the value of µ as µ = λ + ξ by letting λ = 17(r + 1)ǫ/κ. We find thatwhere the third inequality uses µ = λ + ξ and λ, ξ > 0, and the equality uses λ = 17(r + 1)ǫ/κ. Latter case (a) As mentioned in the former case, we only have to prove ǫ < 1. From the bound κ ≤ 1 and 289 = 17 2 , we obtain a bound on ǫ, ǫ < κ 2 17 2 (r + 1) 2 ≤ 1 34 2 .(Hence, ǫ satisfies ǫ < 1. (b) Since ξ > 0 and ǫ ≥ 0, we have µ = √ ǫ + ξ > 0. Here, ξ satisfies ξ < κ/35 < κ/34, and we have κ ≤ ω. Hence, using the bound on ǫ, we obtain a bound on µ,Hence, 0 < µ < ω/17 holds. (c) Two functions f 1 (x) = x and f 2 (x) = √ x satisfy f 1 (x) ≤ f 2 (x) for 0 ≤ x ≤ 1. Since ξ satisfies ξ > 0 and ǫ satisfies 0 ≤ ǫ < 1 as shown in part (a), we have ǫ ≤ µ = √ ǫ + ξ. (d) The bound on ǫ in (22) implies ǫ < 1/17. We can thus prove this part in the same way as part (d) of the former case. The successive projections algorithm for variable selection in spectroscopic multicomponent analysis. 
Neutrino Oscillations. Theory and Experiment

Beshtoev M. Kh.

Joint Institute for Nuclear Research, Joliot Curie 6, 141980 Dubna, Moscow Region, Russia
The theoretical schemes of neutrino oscillations are considered. The experimental data on neutrino oscillations from Super-Kamiokande (Japan) and SNO (Canada) are given. These data are compared with the theoretical schemes. The conclusion is drawn that the experimental data confirm only the scheme with transitions (oscillations) between the aromatic ν_e, ν_µ, ν_τ neutrinos with maximal mixing angles.

PACS: 12.15 Ff Quark and lepton masses and mixings. PACS: 12.15 Ji Applications of electroweak models to specific processes.
arXiv: hep-ph/0204324
28 Apr 2002

1. Introduction

The suggestion that, by analogy with $K^0, \bar K^0$ oscillations, there could be neutrino oscillations (i.e., neutrino-antineutrino oscillations $\nu \to \bar\nu$) was put forward by Pontecorvo [1] in 1957. It was subsequently suggested by Maki et al. [2] and Pontecorvo [3] that there could be mixings (and oscillations) of neutrinos of different aromas (i.e., $\nu_e \to \nu_\mu$ transitions). The problem of solar neutrinos arose after the first experiment measuring the flux of neutrinos from the Sun by the $^{37}$Cl-$^{37}$Ar method [4]. The flux was found to be several times smaller than expected from calculations made in accordance with the standard solar model (SSM) [5]. It was suggested in [6] that the solar neutrino deficit could be explained by neutrino oscillations. Subsequently, when the result of the experiment at Kamiokande [7] confirmed the existence of the deficit relative to the SSM calculations, one of the attractive approaches to the explanation of the solar neutrino deficit became resonant enhancement of neutrino oscillations in matter [8]. Resonant enhancement of neutrino oscillations in matter was obtained from Wolfenstein's equation for neutrinos in matter [9]. It was noted in Ref.
[10] that Wolfenstein's equation for neutrinos in matter is an equation for neutrinos interacting with matter not through the weak interaction but through a hypothetical weak interaction that is left-right symmetric. Since only the left components of neutrinos participate in the standard weak interactions, the results obtained from Wolfenstein's equation have no direct relation to real neutrinos. Later, experimentalists obtained the first results of the Gran Sasso $^{71}$Ga-$^{71}$Ge experiment [11], which within a 3σ limit did not disagree with the SSM calculations. The new data from the SAGE experiment [12] are fairly close to the Gran Sasso results. In Ref. [13] the author of this article proposed a new mechanism of enhancement of neutrino oscillations in matter, realized through the weak interaction of oscillating neutrinos with matter if the thickness of this matter is sufficiently great. Later, in works [14], it was shown that, since the standard weak interactions cannot generate masses, resonance enhancement of neutrino oscillations in matter cannot be realized without violation of the energy-momentum conservation law. Besides the experimental devices mentioned above, the Super-Kamiokande [15]-[17] and SNO [18] detectors are operating at present. The experimental results obtained with the SNO detector are of great interest since they can be used for a model analysis of neutrino oscillations. After the discovery of neutrino oscillations at Super-Kamiokande [19] (by an indirect method) and at SNO [20] (by a direct method), it is necessary to analyze the situation arising in the problem of neutrino oscillations. In this work theoretical schemes of neutrino oscillations and their analysis are considered. The experimental data obtained at Super-Kamiokande (Japan) and SNO (Canada) are also given. A comparison of these data with the consequences of the theoretical schemes has been carried out.
2. Theory

2.1. Distinguishing Features of the Weak Interactions

The strong and electromagnetic interaction theories are left-right symmetric theories (i.e., all components of the spinors participate in these interactions symmetrically). In contrast, only the left components of fermions participate in the weak interaction. We will consider some consequences of this specific feature of the weak interaction. The local conserved current $j_{\mu i}$ of the weak interaction has the following form:

$$j_{\mu i} = \bar\Psi_L \tau_i \gamma_\mu \Psi_L, \qquad (1)$$

where $\bar\Psi_L, \Psi_L$ are the lepton or quark doublets

$$\begin{pmatrix} e \\ \nu_e \end{pmatrix}_{iL}, \qquad \begin{pmatrix} q_1 \\ q_2 \end{pmatrix}_{iL}, \qquad i = 1, 2, 3, \qquad (2)$$

where i is the aromatic number of the quarks or leptons. The currents $S^\mu_i$ obtained from the global abelian transformation by using the Noether theorem [21] are

$$S^\mu_i = i\left(\bar\Psi_i \partial^\mu \Psi_i\right) \qquad (3)$$

(where i characterizes the type of the gauge transformation), and the corresponding conserved quantity (the integral of the fourth component of $S^\mu_i$) is

$$I_i = \int S^0_i\, d^3x = \int \epsilon\, \bar\Psi_i \Psi_i\, d^3x, \qquad (4)$$

where ε is the energy of the fermion $\Psi_i$. Since we cannot switch off the weak interactions, all the effects connected with these interactions will be realized while the particle moves in vacuum. If we now take into account that the right components of the fermions $\Psi_{iR}, \bar\Psi_{iR}$ do not participate in the weak interaction, then from (4) for the abelian currents we get

$$I_i = \int \epsilon\, \bar\Psi_{iL} \Psi_{iL}\, d^3x \equiv 0, \qquad (5)$$

i.e. (in contrast to the strong and electromagnetic interactions), no conserved additive numbers appear in the weak interaction. However, we can see from experiments that a hierarchical violation of these additive numbers takes place (see [22] and references therein).

2.2. About the Neutrino Mass

a) Hypothesis: a massless free particle cannot have a charge. An example of this case is the photon (the carrier of the electromagnetic interactions), which has no charge. To gluons, which are in the confined state, this hypothesis cannot be applied.
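The vanishing of the purely left-handed bilinear in Eq. (5) above follows from the projector algebra $P_R P_L = 0$. A minimal numerical sketch (Python with numpy; the Dirac representation of the gamma matrices is our choice, not fixed by the text):

```python
import numpy as np

# Gamma matrices in the Dirac representation (a standard choice).
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g5 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)   # gamma^5

PL = (np.eye(4) - g5) / 2   # left-handed projector
PR = (np.eye(4) + g5) / 2   # right-handed projector
assert np.allclose(PR @ PL, 0)   # PR PL = 0 since g5^2 = 1

# bar(psi_L) psi_L = psi^dag PL g0 PL psi = psi^dag g0 PR PL psi = 0,
# because g5 anticommutes with g0.  Check for a random spinor:
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psiL = PL @ psi
bilinear = np.conj(psiL) @ g0 @ psiL
assert abs(bilinear) < 1e-12
```

So the scalar density built from left-handed components alone vanishes identically, which is what makes the integral in (5) zero.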
In application to the neutrino, which has a weak charge, this hypothesis leads to the conclusion that a neutrino participating in the weak interactions cannot be massless. In work [23] this hypothesis was proved under sufficiently general suppositions.

b) The discovery of neutrino oscillations is an additional confirmation of the conclusion that the neutrino is a massive particle.

2.3. Theory of Neutrino Oscillations

In the old theory of neutrino oscillations [24, 6], constructed in the framework of quantum theory in analogy with the theory of $K^0, \bar K^0$ oscillations, it is supposed that the mass eigenstates are the $\nu_1, \nu_2, \nu_3$ neutrino states and not the physical neutrino states $\nu_e, \nu_\mu, \nu_\tau$, and that the neutrinos $\nu_e, \nu_\mu, \nu_\tau$ are created as superpositions of the $\nu_1, \nu_2, \nu_3$ states. This means that the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos have no definite mass, i.e., their masses may vary depending on the admixture of $\nu_1, \nu_2, \nu_3$ in the $\nu_e, \nu_\mu, \nu_\tau$ states. Naturally, in this case the law of conservation of the energy and momentum of the neutrinos is not fulfilled. Besides, every particle must be created on its mass shell, and it stays on its mass shell while passing through vacuum. It is clear that this picture is incorrect. In the modern theory of neutrino oscillations [25]-[26], constructed in the framework of particle physics, it is supposed that:

1) The physical states of the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos are eigenstates of the weak interaction, and, naturally, the mass matrix of the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos is diagonal. All the available experimental results indicate that the lepton numbers $l_e, l_\mu, l_\tau$ are well conserved, i.e., the standard weak interactions do not violate the lepton numbers.

2) Then, to violate the lepton numbers, it is necessary to introduce an interaction violating these numbers. This is equivalent to introducing nondiagonal mass terms in the mass matrix of $\nu_e, \nu_\mu, \nu_\tau$. Diagonalizing this matrix, we go to the $\nu_1, \nu_2, \nu_3$ neutrino states.
Exactly as in the case of $K^0$ mesons created in the strong interactions, where mainly $K^0, \bar K^0$ mesons are produced, in the case considered the $\nu_e, \nu_\mu, \nu_\tau$, and not the $\nu_1, \nu_2, \nu_3$, neutrino states are mainly created in the weak interactions (this is so because the contribution of the lepton-number-violating interactions to this process is too small). And in this case no oscillations take place.

3) Then, when the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos pass through vacuum, they are converted into superpositions of the $\nu_1, \nu_2, \nu_3$ owing to the presence of the interactions violating the lepton number of neutrinos, while remaining on their mass shells. And then oscillations of the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos take place according to the standard scheme [24]-[26]. Whether these oscillations are real or virtual is determined by the masses of the physical neutrinos $\nu_e, \nu_\mu, \nu_\tau$:

i) If the masses of the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos are equal, then real oscillation of the neutrinos takes place.

ii) If the masses of the $\nu_e, \nu_\mu, \nu_\tau$ are not equal, then virtual oscillation of the neutrinos takes place. To make these oscillations real, the neutrinos must participate in quasielastic interactions in order to undergo a transition to the mass shell of the other, appropriate neutrinos, by analogy with the $\gamma - \rho^0$ transition in the vector meson dominance model. In case ii), enhancement of neutrino oscillations takes place at small mixing angle when the neutrinos pass through a bulk of matter [13, 27].

So, the mixings (oscillations) appear because at creation the eigenstates of the weak interaction (i.e., the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos) are realized, and not the eigenstates of the interaction violating the lepton numbers (i.e., the $\nu_1, \nu_2, \nu_3$ neutrinos); then, when passing through vacuum, they are converted into superpositions of the $\nu_1, \nu_2, \nu_3$ neutrinos.
If the $\nu_1, \nu_2, \nu_3$ neutrinos were originally created, then the mixings (oscillations) would not take place, since the weak interaction conserves the lepton numbers. Now we come to a more detailed consideration of the oscillations. For simplicity we consider the oscillation of two types of neutrinos, $\nu_e$ and $\nu_\mu$, having the numbers $l_{\nu_e}, l_{\nu_\mu}$, which can transform into each other. We can use the mass matrix of the $\nu_e, \nu_\mu$ neutrinos to consider transitions between these particles in the framework of quantum theory (or particle physics), since the mass matrix is an eigenstate of the type of interaction which creates these particles (see below). The mass matrix of the $\nu_e$ and $\nu_\mu$ neutrinos has the form

$$\begin{pmatrix} m_{\nu_e} & 0 \\ 0 & m_{\nu_\mu} \end{pmatrix}. \qquad (6)$$

Due to the presence of the interaction violating the lepton numbers, a nondiagonal term appears in this matrix, and this mass matrix is transformed into the following nondiagonal matrix (CP is conserved):

$$\begin{pmatrix} m_{\nu_e} & m_{\nu_e\nu_\mu} \\ m_{\nu_\mu\nu_e} & m_{\nu_\mu} \end{pmatrix}; \qquad (7)$$

then the mass lagrangian of the neutrinos takes the following form ($\nu \equiv \nu_L$):

$$\mathcal{L}_M = -\frac{1}{2}\left[ m_{\nu_e}\bar\nu_e\nu_e + m_{\nu_\mu}\bar\nu_\mu\nu_\mu + m_{\nu_e\nu_\mu}(\bar\nu_e\nu_\mu + \bar\nu_\mu\nu_e)\right] \equiv -\frac{1}{2}(\bar\nu_e, \bar\nu_\mu) \begin{pmatrix} m_{\nu_e} & m_{\nu_e\nu_\mu} \\ m_{\nu_\mu\nu_e} & m_{\nu_\mu} \end{pmatrix} \begin{pmatrix} \nu_e \\ \nu_\mu \end{pmatrix}, \qquad (8)$$

which is diagonalized by turning through the angle θ (see ref. in [24]); then this lagrangian (8) transforms into the following one:

$$\mathcal{L}_M = -\frac{1}{2}\left[ m_1\bar\nu_1\nu_1 + m_2\bar\nu_2\nu_2 \right], \qquad (9)$$

where

$$m_{1,2} = \frac{1}{2}\left[ (m_{\nu_e} + m_{\nu_\mu}) \pm \left( (m_{\nu_e} - m_{\nu_\mu})^2 + 4m^2_{\nu_\mu\nu_e} \right)^{1/2} \right],$$

and the angle θ is determined by the following expression:

$$\tan 2\theta = \frac{2m_{\nu_e\nu_\mu}}{m_{\nu_\mu} - m_{\nu_e}}, \qquad (10)$$

$$\nu_e = \cos\theta\,\nu_1 + \sin\theta\,\nu_2, \qquad \nu_\mu = -\sin\theta\,\nu_1 + \cos\theta\,\nu_2. \qquad (11)$$

From Eq. (10) one can see that if $m_{\nu_e} = m_{\nu_\mu}$, then the mixing angle equals π/4 independently of the value of $m_{\nu_e\nu_\mu}$:

$$\sin^2 2\theta = \frac{(2m_{\nu_e\nu_\mu})^2}{(m_{\nu_e} - m_{\nu_\mu})^2 + (2m_{\nu_e\nu_\mu})^2}, \qquad (12)$$

and the mass matrix becomes diagonal:

$$\begin{pmatrix} m_{\nu_1} & 0 \\ 0 & m_{\nu_2} \end{pmatrix}.$$
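A minimal numerical sketch (Python with numpy; the mass values are made up for illustration) confirming that the rotation through the angle θ of Eq. (10) diagonalizes the matrix (7) with the eigenvalues $m_{1,2}$ of Eq. (9), and that degenerate diagonal masses give maximal mixing:

```python
import numpy as np

def mixing_angle(m_e, m_mu, m_emu):
    """Mixing angle from Eq. (10): tan(2*theta) = 2*m_emu / (m_mu - m_e)."""
    return 0.5 * np.arctan2(2.0 * m_emu, m_mu - m_e)

# Illustrative (made-up) mass parameters, all in the same units.
m_e, m_mu, m_emu = 1.0, 3.0, 0.5
M = np.array([[m_e, m_emu],
              [m_emu, m_mu]])

theta = mixing_angle(m_e, m_mu, m_emu)

# With nu_e = cos(theta) nu_1 + sin(theta) nu_2 and
# nu_mu = -sin(theta) nu_1 + cos(theta) nu_2, the matrix O below
# satisfies O^T M O = diag(m_1, m_2):
O = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
D = O.T @ M @ O
assert abs(D[0, 1]) < 1e-12          # off-diagonal term removed

# Eigenvalues reproduce m_{1,2} of Eq. (9):
root = np.hypot(m_e - m_mu, 2 * m_emu)
m1, m2 = 0.5 * (m_e + m_mu - root), 0.5 * (m_e + m_mu + root)
assert np.allclose(sorted(np.diag(D)), [m1, m2])

# Eq. (12): sin^2(2*theta) in terms of the mass parameters:
s2 = (2 * m_emu) ** 2 / ((m_e - m_mu) ** 2 + (2 * m_emu) ** 2)
assert np.isclose(np.sin(2 * theta) ** 2, s2)

# Equal diagonal masses give maximal mixing, theta = pi/4, for any m_emu:
assert np.isclose(mixing_angle(2.0, 2.0, 1e-9), np.pi / 4)
```

The last assertion is the statement after Eq. (12): for $m_{\nu_e} = m_{\nu_\mu}$ the mixing is maximal regardless of how small the lepton-number-violating entry is.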
It is interesting to remark that expression (12) can be obtained from the Breit-Wigner distribution [28]

$$P \sim \frac{(\Gamma/2)^2}{(E - E_0)^2 + (\Gamma/2)^2} \qquad (13)$$

by using the substitutions $E = m_{\nu_e}$, $E_0 = m_{\nu_\mu}$, $\Gamma/2 = 2m_{\nu_e\nu_\mu}$, where $\Gamma/2 \equiv W(\ldots)$ is the width of the $\nu_e \to \nu_\mu$ transition; then we can use a standard method [26, 29] for computing this value. The expression for the time evolution of the $\nu_1, \nu_2$ neutrinos (see (9), (11)) with masses $m_1$ and $m_2$ is

$$\nu_1(t) = e^{-iE_1 t}\nu_1(0), \qquad \nu_2(t) = e^{-iE_2 t}\nu_2(0), \qquad (14)$$

where $E_k^2 = p^2 + m_k^2$, k = 1, 2. If the neutrinos propagate without interactions, then

$$\nu_e(t) = \cos\theta\, e^{-iE_1 t}\nu_1(0) + \sin\theta\, e^{-iE_2 t}\nu_2(0), \qquad \nu_\mu(t) = -\sin\theta\, e^{-iE_1 t}\nu_1(0) + \cos\theta\, e^{-iE_2 t}\nu_2(0). \qquad (15)$$

Using the expressions for $\nu_1$ and $\nu_2$ from (11) in (15), one gets

$$\nu_e(t) = \left(e^{-iE_1 t}\cos^2\theta + e^{-iE_2 t}\sin^2\theta\right)\nu_e(0) + \left(e^{-iE_1 t} - e^{-iE_2 t}\right)\sin\theta\cos\theta\,\nu_\mu(0),$$

$$\nu_\mu(t) = \left(e^{-iE_1 t}\sin^2\theta + e^{-iE_2 t}\cos^2\theta\right)\nu_\mu(0) + \left(e^{-iE_1 t} - e^{-iE_2 t}\right)\sin\theta\cos\theta\,\nu_e(0). \qquad (16)$$

The probability that a neutrino $\nu_e$ created at the time t = 0 is transformed into $\nu_\mu$ at the time t is the absolute value squared of the amplitude of $\nu_\mu(0)$ in (16), i.e.,

$$P(\nu_e \to \nu_\mu) = |(\nu_\mu(0) \cdot \nu_e(t))|^2 = \frac{1}{2}\sin^2 2\theta \left[1 - \cos\left(\frac{m_2^2 - m_1^2}{2p}\, t\right)\right], \qquad (17)$$

where it is supposed that $p \gg m_1, m_2$, so that $E_k \simeq p + m_k^2/2p$. Expression (17) presents the probability of neutrino aroma oscillations. The angle θ (the mixing angle) characterizes the amount of mixing. The probability $P(\nu_e \to \nu_\mu)$ is a periodic function of the distance, with the period determined by the following expression:

$$L_0 = \frac{2\pi \cdot 2p}{|m_2^2 - m_1^2|}. \qquad (18)$$

The probability $P(\nu_e \to \nu_e)$ that the neutrino $\nu_e$ created at the time t = 0 is preserved as a $\nu_e$ neutrino at the time t is given by the absolute value squared of the amplitude of $\nu_e(0)$ in (16).
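Expressed in the units usual in experiments, the probability (17) and the oscillation length (18) can be evaluated as in the Python sketch below. The numerical factor 1.27 collects the ħ and c conversions, and the parameter values are illustrative assumptions (maximal mixing, Δm² = 2.5 × 10⁻³ eV²):

```python
import numpy as np

def prob_e_to_mu(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavour transition probability, Eq. (17), in the equivalent form
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]),
    using (1 - cos x)/2 = sin^2(x/2)."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

sin2_2theta, dm2, E = 1.0, 2.5e-3, 1.0   # assumed parameters; E in GeV

# Oscillation length of Eq. (18), L0 = 4*pi*E/dm2, here in km:
L0 = np.pi * E / (1.27 * dm2)

assert prob_e_to_mu(sin2_2theta, dm2, 0.0, E) == 0.0          # at the source
assert np.isclose(prob_e_to_mu(sin2_2theta, dm2, L0 / 2, E), 1.0)  # full conversion
assert np.isclose(prob_e_to_mu(sin2_2theta, dm2, L0, E), 0.0, atol=1e-12)  # one period
```

At half the oscillation length the conversion is complete (for maximal mixing), and after a full period the beam returns to pure $\nu_e$, which is the periodicity stated below Eq. (17).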
Since the states in (16) are normalized states,

$$P(\nu_e \to \nu_e) + P(\nu_e \to \nu_\mu) = 1. \qquad (19)$$

So we see that the aromatic oscillations caused by the nondiagonality of the neutrino mass matrix violate the conservation of the lepton numbers $\ell_e$ and $\ell_\mu$. However, in this case, as one can see from expression (19), the full lepton number $\ell = \ell_e + \ell_\mu$ is conserved. We can also see that there are two cases of $\nu_e, \nu_\mu$ transitions (oscillations) [26], [29].

1. If we consider the transition of a $\nu_e$ into a $\nu_\mu$ particle, then

$$\sin^2 2\beta \cong \frac{4m^2_{\nu_e\nu_\mu}}{(m_{\nu_e} - m_{\nu_\mu})^2 + 4m^2_{\nu_e\nu_\mu}}; \qquad (20)$$

if the probability of the transition of $\nu_e$ particles into $\nu_\mu$ particles through the interaction (i.e., $m_{\nu_e\nu_\mu}$) is very small, then

$$\sin^2 2\beta \cong \frac{4m^2_{\nu_e\nu_\mu}}{(m_{\nu_e} - m_{\nu_\mu})^2} \cong 0. \qquad (21)$$

How can we understand this $\nu_e \to \nu_\mu$ transition? If $2m_{\nu_e\nu_\mu} = \Gamma/2$ is not zero, then the mean mass of the $\nu_e$ particle is $m_{\nu_e}$, this mass is distributed by $\sin^2 2\beta$ (or by the Breit-Wigner formula), and the probability of the $\nu_e \to \nu_\mu$ transition differs from zero; it is defined by the masses of the $\nu_e$ and $\nu_\mu$ particles and by $m_{\nu_e\nu_\mu}$, which is computed in the framework of the standard method, as pointed out above. So this is a solution of the problem of the origin of the mixing angle in the theory of vacuum oscillations. In this case the probability of the $\nu_e \to \nu_\mu$ transition (oscillation) is described by the following expression:

$$P(\nu_e \to \nu_\mu, t) = \sin^2 2\beta\, \sin^2\left(\pi t\, \frac{|m^2_{\nu_1} - m^2_{\nu_2}|}{2p_{\nu_e}}\right), \qquad (22)$$

where $p_{\nu_e}$ is the momentum of the $\nu_e$ particle. Originally it was supposed [6, 24] that these oscillations are real oscillations. However, we see that these oscillations are virtual, because if $\nu_e$ really transits into $\nu_\mu$, then the latter can decay into an electron neutrino plus something, i.e., we gain from vacuum the energy equal to the mass difference $\Delta m = m_{\nu_\mu} - m_{\nu_e}$ (the momenta of $\nu_e$ and $\nu_\mu$ are equal at oscillations).
Then it is clear that at a real $\nu_e \to \nu_\mu$ transition the law of energy conservation is violated. This law can be fulfilled only at virtual $\nu_e \to \nu_\mu$ transitions.

2. If we consider the virtual transition of $\nu_e$ into $\nu_\mu$ at $m_{\nu_e} = m_{\nu_\mu}$ (i.e., without changing the mass shell), then $\tan 2\beta = \infty$, $\beta = \pi/4$, and $\sin^2 2\beta = 1$. In this case the probability of the $\nu_e \to \nu_\mu$ transition (oscillation) is described by the following expression:

$$P(\nu_e \to \nu_\mu, t) = \sin^2\left(\pi t\, \frac{4m^2_{\nu_e\nu_\mu}}{2p_a}\right). \qquad (24)$$

To make these virtual oscillations real, their participation in quasielastic interactions is necessary for the transitions to their own mass shells [29]. It is clear that the $\nu_e \to \nu_\mu$ transition is a dynamical process. Now let us consider the common case. Here the mass lagrangian has the following form:

$$\mathcal{L}_M = -\bar\nu_R M \nu_L + \text{H.c.} \equiv -\sum_{l,l'=e,\mu,\tau} \bar\nu_{l'R}\, M_{l'l}\, \nu_{lL} + \text{H.c.}, \qquad (25)$$

where M is a complex 3 × 3 matrix. It is necessary to remark that $\nu_R$ is absent in the weak-interaction lagrangian. By using the expression

$$M = V m U^{+} \qquad (26)$$

(where V, U are unitary matrices), we transform $\mathcal{L}_M$ to a diagonal form:

$$\mathcal{L}_M = -\bar\nu'_R\, m\, \nu'_L + \text{H.c.} \equiv -\sum_{k=1}^{3} m_k \bar\nu_k \nu_k + \text{H.c.}, \qquad (27)$$

where $m_{ik} = m_k \delta_{ik}$, and

$$\nu'_L = U^{+} \nu_L, \qquad \nu'_R = V^{+} \nu_R, \qquad \nu' = \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}. \qquad (28)$$

We can see that the lagrangian (25) is invariant under the global gauge transformation

$$\nu_k(x) \to e^{i\Lambda} \nu_k(x) \qquad (29)$$

or $l(x) \to e^{i\Lambda} l(x)$, $l = e, \mu, \tau$; i.e., the lepton numbers are not conserved separately (the neutrinos are mixed), but there appears a lepton number l, related to the common gauge transformation, which is conserved.

2.4. Schemes of Neutrino Oscillations

Let us consider different schemes of neutrino oscillations.

2.4.a. Neutrino-Antineutrino Oscillations

The suggestion that, by analogy with $K^0, \bar K^0$ oscillations, there could be $\nu, \bar\nu$ oscillations was considered by B. Pontecorvo in work [1].
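The decomposition $M = VmU^{+}$ in (26)-(28) is precisely a singular value decomposition, so it can be illustrated numerically for an arbitrary complex 3 × 3 mass matrix (Python sketch with numpy; the matrix entries are made up):

```python
import numpy as np

# A made-up complex 3x3 "mass matrix" M (illustrative values only).
rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# The SVD gives exactly the decomposition of Eq. (26): M = V m U^+,
# with V, U unitary and m diagonal with real non-negative entries m_k.
V, m, Uh = np.linalg.svd(M)   # Uh plays the role of U^+

assert np.allclose(V @ np.diag(m) @ Uh, M)        # reconstruction
assert np.allclose(V.conj().T @ V, np.eye(3))     # V unitary
assert np.allclose(Uh @ Uh.conj().T, np.eye(3))   # U unitary
assert np.all(m >= 0)                             # masses m_k are non-negative
```

This is why any complex mass matrix in (25) can be brought to the diagonal form (27) by two independent unitary rotations of the left- and right-handed fields, as in (28).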
In this case the mass lagrangian of the neutrinos has the following form:

$$\mathcal{L}'_M = -\frac{1}{2}\,(\bar\nu_e,\ \overline{\bar\nu_e})\begin{pmatrix} m_{\nu_e\nu_e} & m_{\bar\nu_e\nu_e} \\ m_{\nu_e\bar\nu_e} & m_{\bar\nu_e\bar\nu_e} \end{pmatrix}\begin{pmatrix} \nu_e \\ \bar\nu_e \end{pmatrix}. \qquad (30)$$

Diagonalizing this mass matrix by standard methods, one obtains the following expression:

$$\mathcal{L}'_M = -\frac{1}{2}\,(\bar\nu_1, \bar\nu_2)\begin{pmatrix} m_{\nu_1} & 0 \\ 0 & m_{\nu_2} \end{pmatrix}\begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix}, \qquad (31)$$

where $\nu_1 = \cos\theta\,\nu_e - \sin\theta\,\bar\nu_e$, $\nu_2 = \sin\theta\,\nu_e + \cos\theta\,\bar\nu_e$. These neutrino oscillations are described by expressions (14)-(19) with the substitution $\nu_\mu \to \bar\nu_e$. It is necessary to remark that if these neutrinos are Dirac ones, then the probability to observe $\bar\nu_e$ is much smaller than the probability to observe $\nu_e$ (such neutrinos can be named "sterile" neutrinos; see ref. [3]). It is clear that in this case the lepton numbers are not conserved, i.e., gauge invariance is violated, since the particle transforms into an antiparticle, in contrast to the $\nu_e \to \nu_\mu$ transitions, where only aromatic numbers are violated.

2.4.b. Oscillations of Aromatic Neutrinos

In the work [2] Maki et al. supposed that there could exist transitions between the aromatic neutrinos $\nu_e, \nu_\mu$. Afterwards $\nu_\tau$ was found, and then $\nu_e, \nu_\mu, \nu_\tau$ transitions became possible. The author of this work has developed this direction (see [30]). It is necessary to remark that only this scheme of oscillations is realistic for neutrino oscillations (see also this work). The expressions which describe neutrino oscillations in this case are given above in expressions (14)-(19).

2.4.c. Majorana Neutrino Oscillations

Before discussing neutrino oscillations in this scheme, we give definitions of Majorana neutrinos (a more general, formal consideration of this question is given in [6, 24]). A Majorana fermion in the Dirac representation has the following form [24, 31]:

$$\chi_M = \frac{1}{2}\left[\Psi(x) + \eta_C \Psi^C(x)\right], \qquad \Psi^C(x) = \eta_C C\bar\Psi^T(x), \qquad (32)$$

where $\eta_C$ is a phase, C is the charge conjugation matrix, and T denotes transposition. From Exp.
(32) we see that the Majorana fermion $\chi_M$ has two spin projections $\pm\frac{1}{2}$, and then the Majorana spinor can be rewritten in the following form:

$$\chi_M(x) = \begin{pmatrix} \chi_{+\frac{1}{2}}(x) \\ \chi_{-\frac{1}{2}}(x) \end{pmatrix}. \qquad (33)$$

The mass Lagrangian of Majorana neutrinos in the case of two neutrinos $\chi_e, \chi_\mu$ (the $-\frac{1}{2}$ components of the Majorana neutrinos; $\bar\chi_{\ldots}$ is the same Majorana fermion with the opposite spin projection) has, in the common case, the following form:

$$\mathcal{L}'_M = -\frac{1}{2}\,(\bar\chi_e, \bar\chi_\mu)\begin{pmatrix} m_{\chi_e} & m_{\chi_e\chi_\mu} \\ m_{\chi_\mu\chi_e} & m_{\chi_\mu} \end{pmatrix}\begin{pmatrix} \chi_e \\ \chi_\mu \end{pmatrix}. \qquad (34)$$

Diagonalizing this mass matrix by standard methods, one obtains the following expression:

$$\mathcal{L}'_M = -\frac{1}{2}\,(\bar\nu_1, \bar\nu_2)\begin{pmatrix} m_{\nu_1} & 0 \\ 0 & m_{\nu_2} \end{pmatrix}\begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix}, \qquad (35)$$

where $\nu_1 = \cos\theta\,\chi_e - \sin\theta\,\chi_\mu$, $\nu_2 = \sin\theta\,\chi_e + \cos\theta\,\chi_\mu$. These neutrino oscillations are described by expressions (14)-(19) with the substitution $\nu_{e,\mu} \to \chi_{M\,e,\mu}$. The standard theory of weak interactions is constructed on the basis of local gauge invariance of Dirac fermions. In this case the Dirac fermions have the following lepton numbers $l_l$, which are conserved (however, see Sect. 2.1):

$$l_l, \qquad l = e, \mu, \tau, \qquad (36)$$

and the Dirac antiparticles have lepton numbers with the opposite sign:

$$\bar l = -l_l. \qquad (37)$$

The gauge transformation of Majorana fermions can be written in the form

$$\chi'_{+\frac{1}{2}}(x) = \exp(-i\beta)\,\chi_{+\frac{1}{2}}(x), \qquad \chi'_{-\frac{1}{2}}(x) = \exp(+i\beta)\,\chi_{-\frac{1}{2}}(x). \qquad (38)$$

Then the lepton numbers of Majorana fermions are $l_M = \sum_i l_{M_i}(+\frac{1}{2}) = -\sum_i l_{M_i}(-\frac{1}{2})$, i.e., the antiparticle of a Majorana fermion is the same fermion with the opposite spin projection. Now we come to a discussion of the place of the Majorana fermion in the standard theory of weak interactions [32]. To construct the standard theory of weak interactions [33], Dirac fermions are used. The absence of contradiction of this theory with the experimental data confirms that all fermions are Dirac particles. And in this theory there are numbers which can be connected with conserved currents. As stressed above, these numbers are violated (Sect. 2.1).
Now, if we want to include the Majorana fermions into the standard theory, we must take into account that, in the common case, the gauge charges of the Dirac and Majorana fermions are different (this is especially well seen in the example of a Dirac fermion carrying an electric charge, since it cannot have a Majorana charge; it is worth recalling that in the weak currents the fermions enter in pairs). In this case we cannot simply include Majorana fermions in the standard theory of weak interactions in a gauge-invariant manner. Then in the standard theory the Majorana fermions cannot appear.

2.4.d. Neutrino Oscillations in the Case of Dirac-Majorana Mixing

We do not discuss this mechanism, for the reason mentioned above. A consideration of this mechanism can be found in [24].

2.4.e. Neutrino Oscillation Enhancement in Matter

At present there exist two mechanisms of neutrino oscillation enhancement in matter. A short consideration of these mechanisms is given below.

2.4.e.1. Resonant Mechanism of Neutrino Oscillation Enhancement in Matter

In the strong and electromagnetic interactions the left-handed and right-handed components of spinors participate in a symmetric manner. In contrast, only the left-handed components of spinors participate in the weak interactions, as mentioned above. This is a distinctive feature of the weak interactions. In the ultrarelativistic limit, the evolution equation for the neutrino wave function $\nu_{Ph}$ in matter has the form [8], [9]

$$i\,\frac{d\nu_{Ph}}{dt} = \left(p\hat I + \frac{\hat M^2}{2p} + \hat W\right)\nu_{Ph}, \qquad (39)$$

where $p$, $\hat M^2$, $\hat W$ are, respectively, the momentum, the (nondiagonal) squared mass matrix in vacuum, and the matrix taking into account neutrino interactions in matter:

$$\nu_{Ph} = \begin{pmatrix} \nu_e \\ \nu_\mu \end{pmatrix}, \qquad \hat I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \hat M^2 = \begin{pmatrix} m^2_{\nu_e\nu_e} & m^2_{\nu_e\nu_\mu} \\ m^2_{\nu_\mu\nu_e} & m^2_{\nu_\mu\nu_\mu} \end{pmatrix}.$$
If we suppose that neutrinos in matter behave analogously to the photon in matter, and the neutrino refraction indices are defined by the expression

$$n_i = 1 + \frac{2\pi N}{p^2} f_i(0) = 1 + 2\,\frac{\pi W_i}{p} \qquad (40)$$

(where i is the type of neutrino (e, μ, τ), N is the density of matter, and $f_i(0)$ is the real part of the forward scattering amplitude), then W characterizes the polarization of matter by the neutrinos (i.e., it is the energy of matter polarization). The electron neutrino ($\nu_e$) in matter interacts via the $W^\pm$ and $Z^0$ bosons, while $\nu_\mu, \nu_\tau$ interact only via the $Z^0$ boson. This difference in interactions leads to the following difference in the refraction coefficients of $\nu_e$ and $\nu_\mu, \nu_\tau$:

$$\Delta n = \frac{2\pi N}{p^2}\,\Delta f(0), \qquad \Delta f(0) = -\sqrt 2\,\frac{G_F}{2\pi}\,p, \qquad (41)$$

where $G_F$ is the Fermi constant. Therefore the velocities (or effective masses) of $\nu_e$ and $\nu_\mu, \nu_\tau$ in matter are different, and at a suitable density of matter this difference can lead to a resonance enhancement of neutrino oscillations in "matter" [8, 34]. As we can see from the form of Eq. (39), this equation holds for the left-right symmetric neutrino wave function $\Psi(x) = \Psi_L(x) + \Psi_R(x)$. This equation contains the term W, which arises from the weak interaction (the contribution of the W boson) and which contains only a left-side interaction of the neutrinos, and it is substituted into the left-right symmetric equation (39) without indication of its left-side origin. Then we see that equation (39) is an equation that includes a term W arising not from the weak interaction but from a hypothetical left-right symmetric interaction (see also works [10, 30, 35]). Therefore this equation is not an equation for neutrinos passing through real matter. The problem of neutrinos passing through real matter has been considered in [10, 30, 35, 36].
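For orientation only: in the conventional treatment based on (39)-(41), the size of the charged-current matter term is $V = \sqrt 2\, G_F N_e$, which is tiny for terrestrial densities. A sketch of the arithmetic (the density value is an assumption for Earth-like rock):

```python
import math

# Order-of-magnitude evaluation of V = sqrt(2) * G_F * N_e, the matter
# term of the conventional treatment.  Density is an assumed value.
G_F = 1.166e-5          # Fermi constant, GeV^-2
HBARC_CM = 1.973e-14    # hbar*c in GeV*cm, to convert cm^-3 to GeV^3

N_e_cm3 = 3.0 * 6.022e23          # electron number density, cm^-3 (~rock)
N_e = N_e_cm3 * HBARC_CM**3       # the same density in natural units, GeV^3

V = math.sqrt(2) * G_F * N_e      # GeV
V_eV = V * 1e9
assert 1e-13 < V_eV < 1e-12       # ~2e-13 eV for Earth-like densities
print(f"V = {V_eV:.2e} eV")
```

The smallness of this number is one reason the question of whether such a term belongs in (39) at all, which is the point at issue in the text, matters only over very long baselines or large densities.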
In three different approaches, namely by using the mass Lagrangian [35, 30], by using the Dirac equation [35, 30], and by using the operator formalism [36], the author of this work has discussed the problem of mass generation in the standard weak interactions and has come to the conclusion that the standard weak interaction cannot generate fermion masses, since the right-handed components of fermions do not participate in these interactions. It is also shown [37] that the equation for the Green function of weakly interacting fermions (neutrinos) in matter coincides with the equation for the Green function of fermions in vacuum, and that the law of conservation of the energy and momentum of a neutrino in matter is fulfilled [36] only if the energy W of polarization of matter by the neutrino, i.e. the corresponding term in the Wolfenstein equation, is zero (this means that neutrinos cannot generate a permanent polarization of matter). These results lead to the conclusion that resonance enhancement of neutrino oscillations in matter does not exist.

The simplest way to see the absence of the resonance enhancement of neutrino oscillations in matter is the following. If we put an electrically (or strongly) charged particle in matter, polarization of matter arises. Since the field around the particle is spherically symmetric, the polarization must also be spherically symmetric; the particle then remains at rest and the law of energy and momentum conservation is fulfilled. If we put a weakly interacting particle (a neutrino) in matter, then, since the field around the particle has a left-right asymmetry (weak interactions are left-handed with respect to the spin direction), the polarization of matter must be asymmetric, i.e. on the left side maximal polarization arises and on the right side the polarization is zero.
Since the polarization of matter is asymmetric, an asymmetric interaction of the particle (the neutrino) with matter arises, so the particle cannot remain at rest and will be accelerated. Then the law of energy-momentum conservation would be violated. The only way to fulfil the law of energy and momentum conservation is to demand that polarization of matter be absent in the weak interactions. The same situation takes place in vacuum. It is interesting to remark that in the gravitational interaction the polarization does not exist either [38].

2.4.e.2. Enhanced Oscillation of Neutrinos of Different Masses in Matter

The oscillation probability is estimated for neutrinos of different masses in their passing through matter of different thickness, including the Sun [13, 27]. If neutrino masses are different for different neutrino types, only virtual neutrino oscillations are possible, while real oscillations require participation of the neutrinos in interactions for their transition to the respective mass shells, by analogy with the transition of a \gamma-quantum to the \rho-meson in the vector dominance model.

We shall estimate the probability for neutrinos to change from one type \nu_l to another \nu_{l'} (m_{\nu_l} \neq m_{\nu_{l'}}) in passing through matter. Neutrino transition to the mass shell occurs via the weak neutrino-matter interaction (by analogy with the \gamma-\rho^0 transition or K^0_1, K^0_2 oscillation). We assume that the mass difference of the \nu_l, \nu_{l'} neutrinos is small enough to consider the probability of the \nu_{l'} transition to the mass shell proportional to the total elastic cross section \sigma_{el}(p) of the weak interaction (for simplicity we deal with the oscillation of two types of neutrinos). Then the length of elastic interaction of a neutrino of momentum p in matter of charge Z and atomic number A is defined as

\Lambda_0 \sim \frac{1}{\sigma_{el}(p)\,\rho\,(Z/A)}.
If the neutrino mass difference is fairly large, it can be taken into account by the methods of the vector dominance model [39]. As pointed out above, we shall assume that this difference is very small and employ the above formula. The real part of the forward scattering amplitude Re f_i(p, 0) is responsible for elastic neutrino scattering in matter (it is supposed that at low energies the coherent process takes place). It is related to the exponential phase term \exp(-p\Delta_i r) (as a factor to the momentum) in the wave function of the particle \Psi(r, ...) and has the following form:

p\Delta_i \simeq \frac{2\pi N_e f_i(p, 0)}{p}, \qquad i = \nu_e, \nu_\mu, \nu_\tau.   (42)

Keeping in mind that [33]

f_i(p, 0) \simeq \sqrt{2}\, G_F\, p \left( \frac{M_W^2}{M_i^2} \right),   (43)

where M_i^2 = M_W^2 if i = \nu_e, and M_i^2 = M_{Z^0}^2 if i = \nu_\mu, \nu_\tau, we obtain

p\Delta_i \simeq \sqrt{2}\, G_F N_e \left( \frac{M_W^2}{M_i^2} \right).

The phase of the elastic scattering amplitude changes by 2\pi over the length

\Lambda_{i0} \simeq \frac{2\pi}{\sqrt{2}\, G_F\, \rho\,(Z/A)\,(M_W^2/M_i^2)} = 2\pi L_{i0} \sim \Lambda_0.   (44)

For simplification we will further suppose that M_e^2 \simeq M_\mu^2 \simeq M_\tau^2, and then \Lambda_i = \Lambda. (Absorption, i.e. the imaginary part of the forward scattering amplitude, can be ignored for low-energy neutrinos.) Knowing that the length of elastic neutrino-matter interaction is \Lambda_0, we must estimate the oscillation probability for a neutrino passing through matter of thickness L. The probability of elastic \nu_l interaction in matter of thickness L is

P(L) = 1 - \exp(-2\pi L/\Lambda_0).   (45)

Then, using formulae (44), (45), we can find the neutrino oscillation probability \rho_{\nu_l \nu_{l'}}(L) at different thicknesses L. Averaging over R the expression for the neutrino oscillation probability [13]

P_{\nu_l \nu_{l'}}(R) = \frac{1}{2}\sin^2 2\theta_{\nu_l \nu_{l'}} \left(1 - \cos\frac{2\pi R}{L_0}\right),   (46)

where L_0 = \frac{4\pi p}{\Delta m^2}, we obtain \bar{P}_{\nu_l \nu_{l'}}(R) = \frac{1}{2}\sin^2 2\theta_{\nu_l \nu_{l'}}.
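The averaging step behind Eq. (46) can be checked numerically. This is a minimal sketch with hypothetical units (p and \Delta m^2 are set to 1, which fixes L_0 = 4\pi); none of the variable names come from the paper.

```python
import math

# P(R) = (1/2) sin^2(2θ) (1 - cos(2πR/L0)),  L0 = 4πp/Δm²  -- eq. (46).
def osc_prob(R, p, dm2, theta):
    L0 = 4.0 * math.pi * p / dm2
    return 0.5 * math.sin(2.0 * theta) ** 2 * (1.0 - math.cos(2.0 * math.pi * R / L0))

theta = math.pi / 6
# Averaging over many oscillation lengths washes out the cosine term,
# leaving the mean value (1/2) sin^2(2θ) quoted after eq. (46).
n = 100000
mean = sum(osc_prob(i * 0.01, 1.0, 1.0, theta) for i in range(n)) / n
print(round(mean, 3), round(0.5 * math.sin(2.0 * theta) ** 2, 3))  # both ≈ 0.375
```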
Then the oscillation probability \rho_{\nu_l \nu_{l'}}(L), or the mixing angle \beta, at \Lambda_0 \ge L_0 will be defined by the following expressions (for simplicity it is supposed that \Lambda_0 = \Lambda_e = \Lambda_\mu = \Lambda_\tau):

a) for L comparable with \Lambda_0,

\rho_{\nu_l \nu_{l'}}(L) = \frac{1}{2}\sin^2 2\beta \simeq \bar{P}_{\nu_l \nu_{l'}} = \frac{1}{2}\sin^2 2\theta_{\nu_l \nu_{l'}},   (47)

where \beta \simeq \theta_{\nu_l \nu_{l'}};

b) for very large L, \frac{L}{\Lambda_0} > \frac{1}{\sin^2 2\theta_{\nu_l \nu_{l'}}} \gg 1,

\rho_{\nu_l \nu_{l'}}(L) = \frac{1}{2}\sin^2 2\beta \simeq \frac{1}{2},   (48)

and \beta \simeq \pi/4;

c) for intermediate L,

\frac{1}{2}\sin^2 2\theta_{\nu_l \nu_{l'}} \le \rho_{\nu_l \nu_{l'}}(L) \le \frac{1}{2},   (49)

and \theta_{\nu_l \nu_{l'}} \le \beta \le \pi/4.

If L_0 \ge \Lambda_0, expressions like (47)-(49) are also true, but \Lambda_0 should be replaced by L_0 and the thickness of matter should be measured in units of L_0. Also, since the oscillation length L_0 increases with the neutrino momentum (see (46)), the number of oscillation lengths n = L/L_0 fitting in a given thickness L decreases with increasing neutrino momentum and, accordingly, so does the neutrino oscillation probability \rho_{\nu_l \nu_{l'}}(L).

Let us consider the neutrino oscillation probability for intermediate interaction numbers n. The probability distribution of n-fold elastic neutrino interaction over the thickness L, with mean value \bar{n} = L/\Lambda_0, is determined at not very large \bar{n} by the Poisson distribution

f(n, \bar{n}) = \frac{\bar{n}^n}{n!}\, e^{-\bar{n}}.   (50)

At large \bar{n} it changes to the Gaussian distribution:

f(n, \bar{n}) = \frac{1}{\sqrt{2\pi \bar{n}}}\, e^{-\frac{(n-\bar{n})^2}{2\bar{n}}}.   (51)

The probability of neutrino conversion from \nu_l to \nu_l and \nu_{l'} in n-fold elastic interaction is determined by recursion relations (with \theta \equiv \theta_{\nu_l \nu_{l'}}) given in works [13, 27]. Here we give the expressions for the conversion probabilities at \sin^2 2\theta \ll 1 for two types of neutrinos (\nu_e, \nu_\mu):

\rho(\nu_e \to \nu_e) = 1 - \bar{n}\,\frac{1}{2}\sin^2 2\theta,   (52)

\rho(\nu_e \to \nu_\mu) = \bar{n}\,\frac{1}{2}\sin^2 2\theta.

Then enhancement of neutrino oscillation in matter will take place, i.e.
\nu_e neutrinos will transit into \nu_\mu, \nu_\tau neutrinos; but it is necessary to take into account that the mean numbers of interaction lengths L^0_\mu, L^0_\tau of \nu_\mu, \nu_\tau will be \delta times less, and then, correspondingly, \bar{n} in (52) must be changed to \bar{n}_\mu, \bar{n}_\tau, where

\delta = \bar{n}_e/\bar{n}_\mu = \bar{n}_e/\bar{n}_\tau \simeq 2.49.   (53)

The mean numbers of elastic interactions of electron neutrinos produced in the Sun are characterized by \Lambda_{Sun} \simeq 1.7 \cdot 10^7 m, \bar{n}^{Sun}_e \simeq 40, \bar{n}^{Sun}_\mu \simeq 16, \bar{n}^{Sun}_\tau \simeq 16.

It is necessary to mention that the considered mechanisms of enhancement of neutrino oscillations in matter lead only to a change of the mixing angles, and for their realization the vacuum mixing angle of neutrino oscillations must differ from zero.

2.4.f. Neutrino Oscillations in Supersymmetric Models

Neutrino oscillations in supersymmetric models are considered in works [40-42] (and see references there). Here we do not carry out a detailed consideration of these schemes, but we remark that in these schemes, side by side with the neutrino oscillations, the superpartners of the fermions and bosons must be observed.

3 Experimental Data

3.a. Neutrino Experimental Data from SNO (Canada)

The SNO detector [18], containing 1000 tons of heavy water (D_2O), is placed in a shaft in Sudbury at a depth of 6010 m water equivalent (Sudbury Neutrino Observatory). The neutrinos are detected in the following reactions:

1. \nu_x + e^- \to \nu_x + e^-, E_{thr} \simeq 6 MeV (ES),
2. \nu_e + d \to p + p + e^-, E_{thr} \simeq 1.45 MeV (CC),
3. \nu_x + d \to p + n + \nu_x, E_{thr} \simeq 2.23 MeV (NC),

where x = e, \mu, \tau. Reaction 1 goes through charged and neutral currents if x = e, and through the neutral current if x = \mu, \tau; reaction 2 goes through the charged current, and reaction 3 through the neutral current. Using any pair of the reactions we can find the primary flux of Sun neutrinos. Sudbury reported first results in [20, 43]. These results were obtained for Sun neutrinos with threshold E_{eff} \ge 6.75 MeV.
Figure 1 shows the distribution of \cos\theta (a) and the kinetic energy spectrum with statistical errors (b), with the ^8B spectrum [44] scaled to the data. The ratio of the data to the prediction [45] is shown in (c). The bands represent the 1\sigma uncertainties derived from the most significant energy-dependent systematic errors. There is no evidence for a deviation of the spectral shape from the shape predicted under the non-oscillation hypothesis.

Normalized to the integrated rates above the energy E_{eff} = 6.75 MeV, the fluxes of neutrinos from reactions 2 and 1 are given in Eqs. (54) and (55). The difference between the \nu flux deduced from the ES rate and that deduced from the CC rate is

\Delta\phi_{SNO} = 0.64 \pm 0.40 \times 10^6\ \mathrm{cm^{-2}\,s^{-1}}.

It is the \nu_\mu, \nu_\tau flux measured through NC. The best fit to the \phi_{SNO}(\nu_{\mu\tau}) flux is:

\phi_{SNO}(\nu_{\mu\tau}) = 3.69 \pm 1.13 \times 10^6\ \mathrm{cm^{-2}\,s^{-1}}.   (56)

The ratio of the SNO CC flux to the solar model [45] prediction is

\frac{\phi^{CC}_{SNO}}{\phi_{BPB00}} = 0.347 \pm 0.029.

The total flux of active ^8B neutrinos is determined to be:

\phi_{SNO}(\nu_x) = 5.44 \pm 0.99 \times 10^6\ \mathrm{cm^{-2}\,s^{-1}}.   (57)

This result is in good agreement with the predictions of the standard solar models [45, 46]. The SNO results are the first direct indication of non-electron flavor components in the solar neutrino fluxes, and this is, practically [45, 46], the total flux of ^8B neutrinos generated by the Sun.

3.b. Neutrino Experimental Data from Super-Kamiokande (Japan)

The Super-Kamiokande detector [15, 16] is a cylindrically-shaped water Cherenkov detector with 50000 tons of ultra-pure water. It is located about 1000 m (2700 m.w.e.) underground in the Kamioka mine. Super-Kamiokande is a multipurpose experiment, and solar and atmospheric neutrino physics is one of its main topics.

i) The Sun neutrino fluxes measured in the Super-Kamiokande detector [47] through the electron scattering reaction 1, \nu_x + e^- \to \nu_x + e^-, E_{thr} \simeq 5 MeV, are given in Eq. (58). The day-night asymmetry A is

A = \frac{\Phi_n - \Phi_d}{(\Phi_n + \Phi_d)/2} = 0.033 \pm 0.022(\mathrm{stat.}) + 0.013(-0.012)(\mathrm{syst.}).
This is 1.3\sigma from zero asymmetry.

ii) Atmospheric neutrinos are produced by the interactions of primary cosmic rays with nuclei of the Earth's atmosphere. At a few GeV the atmospheric neutrinos have the ratio (\nu_\mu + \bar{\nu}_\mu)/(\nu_e + \bar{\nu}_e) \simeq 2. The events observed in Super-Kamiokande are categorized into four types:

(1) Fully Contained (FC) events, which have their vertex in the detector and all visible particles contained in the detector.
(2) Partially Contained (PC) events, which have their vertex in the detector and at least one visible particle exiting the detector.
(3) Upward through-going muons, which are produced by the \nu_\mu charged-current interaction in the rock surrounding the detector and pass through the detector.
(4) Upward stopping muons, which are produced by the \nu_\mu charged-current interaction in the rock surrounding the detector but stop in the detector.

The primary neutrino (\nu_e-like and \nu_\mu-like) energies are divided into two regions: (1) E_\nu \le 1.33 GeV (sub-GeV), (2) E_\nu > 1.33 GeV (multi-GeV). Figure 2 gives the zenith angle distribution of the Super-Kamiokande 1289-day samples [48]. Dots, solid and dashed lines correspond to the data, MC with no oscillation, and MC with the best oscillation parameters [49], respectively (\Delta m^2 = 2.5 \times 10^{-3} eV^2, \sin^2 2\theta = 1.00). These data are well explained by \nu_\mu \to \nu_\tau two-flavor oscillations and are consistent with \nu_\tau appearance at roughly the two-sigma level.

4 Conclusions from Comparison of the Experimental Data with Theoretical Scheme Predictions on Neutrino Oscillations

1. In the Super-Kamiokande experiment on atmospheric neutrinos a deficit of muon neutrinos is detected. The analysis shows that they can transit only into \nu_\tau neutrinos; the \nu_\mu \to \nu_e transition is not observed in this experiment. From this fact we can conclude (taking into account the SNO results) that the length of \nu_\mu \to \nu_\tau transitions is of the order of the Earth's diameter, and the angle \theta of \nu_\mu \to \nu_\tau transitions is near the maximal mixing angle \theta \cong \pi/4.
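The claim that the \nu_\mu \to \nu_\tau oscillation length is Earth-scale can be checked from L_0 = 4\pi p/\Delta m^2 with the Super-Kamiokande best-fit \Delta m^2. This is a sketch with our own unit conversion; the 1 GeV energy is an assumed value typical of atmospheric neutrinos, not a number from the paper.

```python
import math

# L0 = 4*pi*p / dm2, converted from natural units with hbar*c = 1.973e-10 eV*km.
HBARC_EV_KM = 1.973e-10

def osc_length_km(E_GeV, dm2_eV2):
    return 4.0 * math.pi * (E_GeV * 1e9) / dm2_eV2 * HBARC_EV_KM

L0 = osc_length_km(1.0, 2.5e-3)   # SK best-fit dm^2, assumed ~1 GeV neutrino
print(f"L0 ≈ {L0:.0f} km")        # roughly a thousand km, i.e. Earth-scale
```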
Then the length of \nu_\mu \to \nu_e transitions is much larger than the Earth's diameter. The SNO experimental data also confirm, through neutral-current registration, the \nu_\mu \to \nu_\tau transitions with the same mixing angle.

2. From the SNO experimental data on direct registration of neutrinos (by neutral and charged currents in the case of \nu_e, and by the neutral current in the case of \nu_\mu, \nu_\tau) we can come to the following conclusion: the primary \nu_e neutrinos transit in equal proportions into \nu_e, \nu_\mu, \nu_\tau neutrinos, i.e., the mixing angles \theta of \nu_e, \nu_\mu, \nu_\tau are equal to the maximal angles of mixing. The length of \nu_e \to \nu_\mu, \nu_\tau oscillations is less than the distance to the Sun.

Let us compare these results with the predictions of the above-considered theoretical schemes of neutrino oscillations.

4.a. Neutrino-Antineutrino Oscillations

In the existing experimental results the disappearance of neutrinos has not been detected (see above), i.e., this mechanism is not confirmed.

4.b. Aromatic Neutrino Oscillations

This scheme was confirmed by experiments. For aromatic neutrinos, Pontecorvo-Gribov type oscillations with maximal mixing angle can be realized only if the neutrino masses are equal, m_{\nu_e} = m_{\nu_\mu} = m_{\nu_\tau}. It is hardly probable that the neutrino masses are equal. The length of \nu_\mu \to \nu_\tau oscillations is nearly equal to the Earth's diameter, and the length of \nu_e \to \nu_\tau oscillations is much larger than the Earth's diameter. Then the more probable type of oscillations is the one suggested by the author [36], with \theta = \pi/4 (see Sect. 2 and below), where the transition between oscillating neutrinos is virtual. Here neutrino oscillations can take place in the charge-mixing scheme [50]. It is supposed that the neutrinos are mixed via weak interactions, and therefore, if we consider charge mixing of two neutrinos a, b, the mixing angle must be

\sin\theta \cong \frac{g_w(a)}{\sqrt{g_w^2(a) + g_w^2(b)}} \cong \frac{1}{\sqrt{2}},

since g_w(a) \cong g_w(b), where g_w(a), g_w(b) are the weak coupling constants of the a, b neutrinos.

4.c.
Majorana Neutrino Oscillations

From the above discussion (see Sect. 2.4.c) we come to the conclusion that the Dirac and Majorana gauge charges are different, and therefore we cannot put Majorana fermions into the Dirac theory. It is then obvious that this scheme of neutrino oscillations cannot be realized.

4.d. Neutrino Oscillations in the Scheme of Majorana-Dirac Mixing Type

We do not discuss this scheme, for the reason given above (see Sect. 2.4.c). It is clear that this scheme cannot be realized in experiment either.

4.e. Mechanisms of Neutrino Oscillation Enhancement in Matter

4.e.1. Mechanism of Resonance Enhancement of Neutrino Oscillations in Matter

The experimental data on the energy spectrum and the day-night effect obtained in Super-Kamiokande (the energy spectrum of neutrinos is not distorted, and the day-night effect is within the experimental errors) and the results obtained in SNO have not confirmed this effect. Besides, this effect could be realized only at the violation of the law of energy-momentum conservation (see Section 2.4.e.1 of this work and Ref. [26]).

4.e.2. Mechanism of Accumulation for Neutrinos of Different Masses in Matter

This mechanism works effectively only at small mixing angles. Since the mixing angles discovered in SNO and Super-Kamiokande are maximal, we can neglect the contribution of this mechanism to the neutrino oscillations.

4.f. Neutrino Oscillations in Supersymmetric Models

This type of oscillations can be confirmed only in the case of discovery of the superpartners of fermions and bosons, besides the neutrino oscillations.

Conclusion

The theoretical schemes of neutrino oscillations have been considered. The experimental data on neutrino oscillations from Super-Kamiokande (Japan) and SNO (Canada) are given, and a comparison of these data with the theoretical schemes has been carried out.
We have come to the conclusion that the experimental data confirm only the scheme with transitions (oscillations) between aromatic \nu_e, \nu_\mu, \nu_\tau neutrinos with maximal mixing angles. This scheme was suggested by Z. Maki et al. in 1962 [2], repeated by B. Pontecorvo in 1967 [3], and subsequently developed by Kh. Beshtoev (see references in this work). Besides, this mechanism of neutrino oscillations is the only one which is theoretically substantiated.

1) So, if neutrinos of different types have equal masses, real oscillations are possible for different types of neutrinos, by analogy with K^0, \bar{K}^0 oscillation.

\phi^{CC}_{SNO}(\nu_e) = 1.75 \pm 0.07(\mathrm{stat.}) + 0.12(-0.11)(\mathrm{syst.}) \pm 0.05(\mathrm{theor.}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}},   (54)

\phi^{ES}_{SNO}(\nu_x) = 2.39 \pm 0.34(\mathrm{stat.}) + 0.16(-0.14)(\mathrm{syst.}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}},   (55)

where the theoretical uncertainty is the CC cross-section uncertainty. The neutrino flux (55) measured at SNO is consistent with the same flux measured at Super-Kamiokande (58).

Figure 1: Distributions of (a) \cos\theta_{sun} and (b) the extracted kinetic energy spectrum for CC events with R \le 5.50 m and T_{eff} \ge 6.75 MeV. The Monte Carlo simulations for an undistorted ^8B spectrum are shown as histograms. The ratio of the data to the expected kinetic energy distribution with correlated systematic errors is shown in (c). The uncertainties in the ^8B spectrum have not been included.

\phi^{ES}_{SK}(\nu_e) = 2.32 \pm 0.03(\mathrm{stat.}) + 0.08(-0.07)(\mathrm{syst.}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}}.   (58)

These fluxes are in good agreement with the Sun neutrino fluxes measured at SNO.

Figure 2: Zenith angle distribution of the Super-Kamiokande 1289-day FC, PC and UPMU samples. Dots, solid and dashed lines correspond to data, MC with no oscillation, and MC with best-fit oscillation parameters, respectively.

References

[1] Pontecorvo B.M., Sov. Phys. JETP, 1957, v.33, p.549; JETP, 1958, v.34, p.247.
[2] Maki Z. et al., Prog. Theor. Phys., 1962, v.28, p.870.
[3] Pontecorvo B.M., Sov. Phys. JETP, 1967, v.53, p.1717.
[4] Davis R. et al., Phys. Rev. Lett., 1968, v.20, p.1205.
[5] Bahcall J. et al., Phys. Lett. B, 1968, v.26, p.1; Bahcall J., Bahcall N., Shaviv G., Phys. Rev. Lett., 1968, v.20, p.1209; Turck-Chieze S. et al., Astrophys. J., 1988, v.335, p.415.
[6] Gribov V., Pontecorvo B.M., Phys. Lett. B, 1969, v.28, p.493.
[7] Hirata K.S. et al., Phys. Rev. Lett., 1989, v.63, p.16.
[8] Mikheyev S.P., Smirnov A.Yu., Nuovo Cimento, 1986, v.9, p.17.
[9] Wolfenstein L., Phys. Rev. D, 1978, v.17, p.2369.
[10] Beshtoev Kh.M., JINR Commun. E2-91-183, Dubna, 1991; Proc. III Int. Symp. on Weak and Electromagnetic Interactions in Nuclei (World Scientific, Singapore, 1992), p.781; 13th European Cosmic Ray Symp., CERN, Geneva, HE-5-13.
[11] Anselmann P. et al., Phys. Lett. B, 1992, v.285, p.376; 1992, v.285, p.391; Hampel W. et al., Phys. Lett. B, 1999, v.447, p.127.
[12] Abdurashitov J.N. et al., Phys. Lett. B, 1994, v.328, p.234; Phys. Rev. Lett., 1999, v.83, p.4683.
[13] Beshtoev Kh.M., JINR Commun. E2-93-297, Dubna, 1993; JINR Commun. E2-94-46; Hadronic Journal, 1995, v.18, p.165.
[14] Beshtoev Kh.M., JINR Commun. E2-96-458, Dubna, 1996; JINR Commun. E2-97-360, Dubna, 1997; Report at Int. Conf. "Neutrino98", Japan, 1998.
[15] Totsuka Y., Proc. Int. Symp. on Underground Experiments (ed. K. Nakamura), Tokyo, 1990, p.129.
[16] Suzuki Y., Report at Int. Conf. "Neutrino98", Japan, June 1998; Phys. Rev. Lett., 1998, v.81, p.1158; Fukuda Y. et al., Phys. Rev. Lett., 1999, v.82, p.2430; 1999, v.82, p.1810.
[17] Kajita T., Report at Int. Conf. "Neutrino98", Japan, June 1998; Fukuda Y. et al., Phys. Rev. Lett., 1999, v.82, p.2644.
[18] Aardsma et al., Phys. Lett. B, 1987, v.194, p.321.
[19] Kameda J., Proc. ICRC 2001, August 2001, Hamburg, Germany, p.1057.
[20] Ahmad Q.R. et al., nucl-ex/0106015, June 2001.
[21] Bogolubov N.N., Shirkov D.V., Introduction to the Quantum Field Theory, Moscow: Nauka, 1986; Kane G., Modern Elementary Particle Physics, Addison-Wesley, 1987.
[22] Beshtoev Kh.M., JINR Commun. E2-99-81, Dubna, 1999; hep-ph/99
[23] Beshtoev Kh.M., INR AC USSR Preprint -577, Moscow, 1988.
[24] Bilenky S.M., Pontecorvo B.M., Phys. Rep., 1978, v.C41, p.225; Boehm F., Vogel P., Physics of Massive Neutrinos, Cambridge Univ. Press, 1987, p.27, p.121; Bilenky S.M., Petcov S.T., Rev. Mod. Phys., 1977, v.59, p.631.
[25] Beshtoev Kh.M., JINR Commun. E2-92-318, Dubna, 1992; JINR Rapid Communications, N3[71]-95.
[26] Beshtoev Kh.M., hep-ph/9911513; The Hadronic Journal, 2000, v.23, p.477; Proc. 27th Int. Cosmic Ray Conf., Hamburg, Germany, 7-15 August 2001, v.3, p.1186.
[27] Beshtoev Kh.M., Hadronic Journal, 1995, v.18, p.165.
[28] Blatt J.M., Weisskopf V.F., The Theory of Nuclear Reactions, INR T.R. 42.
[29] Beshtoev Kh.M., JINR Commun. E2-99-307, Dubna, 1999; JINR Commun. E2-99-306, Dubna, 1999.
[30] Beshtoev Kh.M., Phys. of Elem. Part. and Atomic Nucl. (Particles and Nuclei), 1996, v.27, p.53.
[31] Rosen S.P., Lecture Notes on Mass Matrices, LASL preprint, 1983.
[32] Beshtoev Kh.M., JINR Commun. E2-92-195, Dubna, 1992.
[33] Glashow S.L., Nucl. Phys., 1961, v.22, p.579; Weinberg S., Phys. Rev. Lett., 1967, v.19, p.1264; Salam A., Proc. of the 8th Nobel Symp., ed. N. Svartholm (Almqvist and Wiksell, Stockholm), 1968, p.367.
[34] Mikheyev S.P., Smirnov A.Yu., Yad. Fiz., 1986, v.42, p.1441; Sov. Phys. JETP, 1986, v.91, p.7; Nuovo Cimento C, 1986, v.9, p.17; Boucher J. et al., Z. Phys. C, 1986, v.32, p.499.
[35] Beshtoev Kh.M., JINR Communication E2-93-167, Dubna, 1993; JINR Communication P2-93-44, Dubna, 1993; hep-ph/9912532, 1999; Hadronic Journal, 1999, v.22, p.235.
[36] Beshtoev Kh.M., JINR Communication E2-2000-30, Dubna, 2000; hep-ph/0003274.
[37] Beshtoev Kh.M., JINR Communication P2-2001-65, Dubna, 2001.
[39] Sakurai J.J., Currents and Mesons, The Univ. of Chicago Press, 1967.
[40] Haug et al., Nucl. Phys., 2000, v.B565, p.3848.
[41] Bednyakov V. et al., Nucl. Phys., 1998, v.B442, p.203.
[42] Dib C. et al., hep-ph/0011213.
[43] Waltham C., Proc. ICRC 2001, August 2001, Hamburg, Germany, v.4, p.3167.
[44] Ortiz C.E. et al., Phys. Rev. Lett., 2000, v.85, p.2909.
[45] Bahcall J.N. et al., astro-ph/0010346.
[46] Turck-Chieze S. et al., Ap. J. Lett., v.555, July 1, 2001.
[47] Fukuda S. et al., Phys. Rev. Lett., 2001, v.86, p.5651.
[48] Toshito T., hep-ex/0105023; Kameda J., Proc. 27th ICRC, August 2001, Hamburg, Germany, v.2, p.1057.
[49] Honda M. et al., Phys. Rev. D, 1995, v.52, p.4985; Phys. Rev. D, 1996, v.53, p.1313.
[50] Beshtoev Kh.M., JINR Communication E2-2000-229, Dubna, 2000.
Department of Mathematics, Obafemi Awolowo University, Ile Ife, Nigeria
DOI: 10.5281/zenodo.32337
arXiv: 0707.1423 (https://arxiv.org/pdf/0707.1423v3.pdf)
5 Jun 2008

Smarandache Isotopy Theory Of Smarandache: Quasigroups And Loops

Department of Mathematics, Obafemi Awolowo University, Ile Ife, Nigeria

2000 Mathematics Subject Classification: Primary 20N05; Secondary 08A05.
Keywords and Phrases: Smarandache: groupoids, quasigroups, loops, f, g principal isotopes; Smarandache isotopy and isomorphy classes; Smarandache f, g principal isotopes and G-Smarandache loops.

Abstract. The concept of Smarandache isotopy is introduced and its study is explored for Smarandache: groupoids, quasigroups and loops, just as the study of isotopy theory was carried out for groupoids, quasigroups and loops. The exploration includes: Smarandache isotopy and isomorphy classes, Smarandache f, g principal isotopes, and G-Smarandache loops.

Corollary 5.3 An S-loop is a GS-loop if and only if it is S-isomorphic to all its Smarandache f, g principal isotopes.

Proof This follows by the definition of a GS-loop and Corollary 5.2.

1 Introduction

In 2002, W. B. Vasantha Kandasamy initiated the study of Smarandache loops in her book [12], where she introduced over 75 Smarandache concepts in loops. In her paper [13], she defined a Smarandache loop (S-loop) as a loop with at least a subloop which forms a subgroup under the binary operation of the loop. For more on loops and their properties, readers should check [11], [1], [3], [4], [5] and [12]. In [[12], Page 102], the author introduced Smarandache isotopes of loops, particularly Smarandache principal isotopes. She has also introduced the Smarandache concept in some other algebraic structures, as [14, 15, 16, 17, 18, 19] account. The present author has contributed to the study of S-quasigroups and S-loops in [6], [7] and [8], while Muktibodh [10] did a study on the first. In this study, the concept of Smarandache isotopy will be introduced and its study will be explored in Smarandache: groupoids, quasigroups and loops, just as the study of isotopy theory was carried out for groupoids, quasigroups and loops, as summarized in Bruck [1], Dénes and Keedwell [4], and Pflugfelder [11].
2 Definitions and Notations

Definition 2.1 Let L be a non-empty set, and define a binary operation (·) on L. If x · y ∈ L ∀ x, y ∈ L, then (L, ·) is called a groupoid. If the system of equations a · x = b and y · a = b has unique solutions for x and y respectively, then (L, ·) is called a quasigroup. Furthermore, if there exists a unique element e ∈ L, called the identity element, such that ∀ x ∈ L, x · e = e · x = x, then (L, ·) is called a loop.

If there exists at least one non-empty and non-trivial subset M of a groupoid (quasigroup or semigroup or loop) L such that (M, ·) is a non-trivial subsemigroup (subgroup or subgroup or subgroup) of (L, ·), then L is called a Smarandache: groupoid (S-groupoid), quasigroup (S-quasigroup), semigroup (S-semigroup) or loop (S-loop) with Smarandache: subsemigroup (S-subsemigroup), subgroup (S-subgroup), subgroup (S-subgroup) or subgroup (S-subgroup) M.

Let (G, ·) be a quasigroup (loop). The bijection L_x : G → G defined as yL_x = x · y ∀ x, y ∈ G is called a left translation (multiplication) of G, while the bijection R_x : G → G defined as yR_x = y · x ∀ x, y ∈ G is called a right translation (multiplication) of G. The set SYM(L, ·) = SYM(L) of all bijections in a groupoid (L, ·) forms a group called the permutation (symmetric) group of the groupoid (L, ·).

If (L, ·) and (G, •) are two distinct groupoids, then a triple (U, V, W) of bijections U, V, W : (L, ·) → (G, •) is called an isotopism of (L, ·) onto (G, •) if

xU • yV = (x · y)W ∀ x, y ∈ L.

So we call L and G groupoid isotopes. If L = G and W = I (the identity mapping), then (U, V, I) is called a principal isotopism, and we call G a principal isotope of L. But if in addition G is a quasigroup such that for some f, g ∈ G, U = R_g and V = L_f, then (R_g, L_f, I) : (G, ·) → (G, •) is called an f, g-principal isotopism, while (G, ·) and (G, •) are called quasigroup isotopes.

In [[12]], it will be observed that the author did not allow the component bijections U, V and W in (U, V, W) to act on the whole S-loop L but only on the S-subloop (S-subgroup) L′. We feel it is necessary to adjust this here so that the set L − L′ is not left out of the study.
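The translations and the f, g-principal isotope of Definition 2.1 can be illustrated concretely. The following is a hypothetical sketch (not in the paper) on the quasigroup (Z_5, +); the final check uses the known fact that an f, g-principal isotope of a quasigroup is a loop with identity f · g.

```python
# Hypothetical illustration: translations R_g, L_f of (Z_5, +) and the
# f,g-principal isotope defined by (R_g, L_f, I): (G,*) -> (G,o).
n = 5
mul = lambda x, y: (x + y) % n                   # x * y in (Z_5, +)

R = lambda g: {y: mul(y, g) for y in range(n)}   # right translation y -> y*g
L = lambda f: {y: mul(f, y) for y in range(n)}   # left translation  y -> f*y

f, g = 2, 3
Rg_inv = {v: k for k, v in R(g).items()}
Lf_inv = {v: k for k, v in L(f).items()}

# (R_g, L_f, I) is an isotopism iff (xR_g) o (yL_f) = x*y, i.e.
# x o y = (x R_g^{-1}) * (y L_f^{-1}).
op = lambda x, y: mul(Rg_inv[x], Lf_inv[y])
assert all(op(R(g)[x], L(f)[y]) == mul(x, y) for x in range(n) for y in range(n))

# The principal isotope is a loop whose identity is f*g:
e = mul(f, g)
assert all(op(e, x) == x and op(x, e) == x for x in range(n))
print("identity of the f,g-principal isotope:", e)   # here f*g = 0 (mod 5)
```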
Apart from this, our adjustment here will allow the study of Smarandache isotopy to be explorable. Therefore, the S-isotopism and S-isomorphism here are clearly special types of relations (isotopism and isomorphism) on the whole domain into the whole co-domain, while those of Vasantha Kandasamy [12] only take care of the structure of the elements in the S-subloop and not the S-loop. Nevertheless, we do not fault her study, for we think she defined them so as to apply them to some real-life problems as an applied algebraist. If U = V = W, the S-isotopism is called a Smarandache isomorphism (S-isomorphism), and the S-groupoids are said to be S-isomorphic.

3 Smarandache Isotopy and Isomorphy Classes

Theorem 3.1 Let G = {(G_ω, •_ω)}_{ω∈Ω} be a set of distinct S-groupoids with a corresponding set of S-subsemigroups H = {(H_ω, •_ω)}_{ω∈Ω}. Define a relation ∼ on G such that for all (G_{ω_i}, •_{ω_i}), (G_{ω_j}, •_{ω_j}) ∈ G, where ω_i, ω_j ∈ Ω,

(G_{ω_i}, •_{ω_i}) ∼ (G_{ω_j}, •_{ω_j}) ⟺ (G_{ω_i}, •_{ω_i}) and (G_{ω_j}, •_{ω_j}) are S-isotopic.

Then ∼ is an equivalence relation on G.

Proof Let (G_{ω_i}, •_{ω_i}), (G_{ω_j}, •_{ω_j}), (G_{ω_k}, •_{ω_k}) ∈ G, where ω_i, ω_j, ω_k ∈ Ω.

Reflexivity. If I : G_{ω_i} → G_{ω_i} is the identity mapping, then xI •_{ω_i} yI = (x •_{ω_i} y)I ∀ x, y ∈ G_{ω_i}, so the triple (I, I, I) : (G_{ω_i}, •_{ω_i}) → (G_{ω_i}, •_{ω_i}) is an S-isotopism, since H_{ω_i}I = H_{ω_i} ∀ ω_i ∈ Ω. In fact, it can be simply deduced that every S-groupoid is S-isomorphic to itself.

Symmetry. Let (G_{ω_i}, •_{ω_i}) ∼ (G_{ω_j}, •_{ω_j}). Then there exist bijections U, V, W : (G_{ω_i}, •_{ω_i}) → (G_{ω_j}, •_{ω_j}) with H_{ω_i}A = H_{ω_j} ∀ A ∈ {U, V, W}, so that the triple α = (U, V, W) : (G_{ω_i}, •_{ω_i}) → (G_{ω_j}, •_{ω_j}) is an isotopism. Since each of U, V, W is bijective, the inverses U^{-1}, V^{-1}, W^{-1} : (G_{ω_j}, •_{ω_j}) → (G_{ω_i}, •_{ω_i}) are bijective. In fact, H_{ω_j}A^{-1} = H_{ω_i} ∀ A ∈ {U, V, W}, since each A is bijective, so that the triple α^{-1} = (U^{-1}, V^{-1}, W^{-1}) : (G_{ω_j}, •_{ω_j}) → (G_{ω_i}, •_{ω_i}) is an isotopism. Thus, (G_{ω_j}, •_{ω_j}) ∼ (G_{ω_i}, •_{ω_i}).

Transitivity. Let (G_{ω_i}, •_{ω_i}) ∼ (G_{ω_j}, •_{ω_j}) and (G_{ω_j}, •_{ω_j}) ∼ (G_{ω_k}, •_{ω_k}).
Then there exist bijections U 1 , V 1 , W 1 : G ω i , • ω i −→ G ω j , • ω j and U 2 , V 2 , W 2 : G ω j , • ω j −→ G ω k , • ω k such that H ω i A = H ω j ∀ A ∈ {U 1 , V 1 , W 1 } and H ω j B = H ω k ∀ B ∈ {U 2 , V 2 , W 2 } so that the triples α 1 = (U 1 , V 1 , W 1 ) : G ω i , • ω i −→ G ω j , • ω j and α 2 = (U 2 , V 2 , W 2 ) : G ω j , • ω j −→ G ω k , • ω k are isotopisms. Since each of U i , V i , W i , i = 1, 2, is bijective, then U 3 = U 1 U 2 , V 3 = V 1 V 2 , W 3 = W 1 W 2 : G ω i , • ω i −→ G ω k , • ω k are bijections such that H ω i A 3 = H ω i A 1 A 2 = H ω j A 2 = H ω k so that the triple α 3 = α 1 α 2 = (U 3 , V 3 , W 3 ) : G ω i , • ω i −→ G ω k , • ω k is an isotopism. Thus, G ω i , • ω i ∼ G ω k , • ω k . Remark 3.1 As a follow up to Theorem 3.1, the elements of the set G/ ∼ will be referred to as Smarandache isotopy classes(S-isotopy classes). Similarly, if ∼ meant "S-isomorphism" in Theorem 3.1, then the elements of G/ ∼ will be referred to as Smarandache isomorphy classes(S-isomorphy classes). Just like isotopy has an advantage over isomorphy in the classification of loops, so also S-isotopy will have advantage over S-isomorphy in the classification of S-loops. Corollary 3.1 Let L n , SL n and N SL n be the sets of; all finite loops of order n; all finite S-loops of order n and all finite non S-loops of order n respectively. 1. If A n i and B n i represent the isomorphy class of L n and the S-isomorphy class of SL n respectively, then (a) |SL n | + |N SL n | = |L n |; (i) |SL 5 | + |N SL 5 | = 56, (ii) |SL 6 | + |N SL 6 | = 9, 408 and (iii) |SL 7 | + |N SL 7 | = 16, 942, 080. (b) |N SL n | = i=1 |A n i | − i=1 |B n i |; (i) |N SL 5 | = 6 i=1 |A 5 i | − i=1 |B 5 i |, (ii) |N SL 6 | = 109 i=1 |A 6 i | − i=1 |B 6 i | and (iii) |N SL 7 | = 23,746 i=1 |A 7 i | − i=1 |B 7 i |. 
If A n i and B n i represent the isotopy class of L n and the S-isotopy class of SL n respectively, then |N SL n | = i=1 |A n i | − i=1 |B n i |; (i) |N SL 5 | = 2 i=1 |A 5 i | − i=1 |B 5 i |, (ii) |N SL 6 | = 22 i=1 |A 6 i | − i=1 |B 6 i | and (iii) |N SL 7 | = 564 i=1 |A 7 i | − i=1 |B 7 i |. Proof An S-loop is an S-groupoid. Thus by Theorem 3.1, we have S-isomorphy classes and Sisotopy classes. Recall that |L n | = |SL n | + |N SL n | − |SL n N SL n | but SL n N SL n = ∅ so |L n | = |SL n | + |N SL n |. As stated and shown in [11], [5], [2] and [9], the facts in Table 1 are true where n is the order of a finite loop. Hence the claims follow. 2. An S-isotopism is an isotopism. So, SISOT (G, ·) ⊂ ISOT (G, ·). Thus, we need to just verify the axioms of a group to show that SISOT (G, ·) ≤ ISOT (G, ·). These can be done using the proofs of reflexivity, symmetry and transitivity in Theorem 3.1 as guides. For all triples α ∈ SISOT (G, ·) such that α = (U, V, W ) : (G, ·) −→ (G, •), where (G, ·) and (G, •) are S-groupoids with S-subgroups (H, ·) and (K, •) respectively, we can set U ′ := U| H , V ′ := V | H and W ′ := W | H since A(H) = K ∀ A ∈ {U, V, W }, so that SISOT (H, ·) = {(U ′ , V ′ , W ′ )}. This is possible because of the following arguments. Let ·). If ISOT (G, ·) is the group of all isotopisms of (G, ·) and S n is the symmetric group of degree n, then ISOT (G, ·) S n × S n × S n . X = f ′ := f | H f : G −→ G, f : H −→ K is Proof As concluded in [Corollary 1, [4]], ISOT (G, ·) ∼ = S n × S n × S n . Let PISOT (G, ·) be the set of all principal isotopisms on (G, ·). PISOT (G, ·) is an S-subgroup in ISOT (G, ·) while S n × S n × {I} is an S-subgroup in S n × S n × S n . 
Thus, we need (G, •) so that the diagram below commutes: α : (G, ·) → (H, ∗) is the given isotopism, β : (G, ·) → (G, •) is a principal isotopism, and γ : (G, •) → (H, ∗) is an isomorphism. Following the proof of transitivity in Theorem 3.1, α = βγ, which implies (U, V, W) = (XZ, Y Z, Z); so we can make the choices Z = W, Y = V W^{−1} and X = UW^{−1}, and consequently, x · y = xUW^{−1} • yV W^{−1} ⇐⇒ x • y = xW U^{−1} · yW V^{−1} for all x, y ∈ G. Hence, (G, •) is a groupoid principal isotope of (G, ·) and (H, ∗) is an isomorph of (G, •). It remains to show that these two relationships are Smarandache.
Proof An S-quasigroup and an S-loop are S-groupoids. So by Theorem 4.1, (H, ∗) is S-isomorphic to a Smarandache principal isotope (G, •) of (G, ·). Since (H, ∗) is an S-loop and (G, •) is S-isomorphic to (H, ∗), which implies (G, •) ≅ (H, ∗), (G, •) is necessarily an S-loop; consequently, (G, •) has a two-sided identity element, say e, and an S-subgroup (G_2, •). Let α = (U, V, I) be the Smarandache principal isotopism of (G, ·) onto (G, •). Then, xU • yV = x · y for all x, y ∈ G ⇐⇒ x • y = xU^{−1} · yV^{−1} for all x, y ∈ G. So, y = e • y = eU^{−1} · yV^{−1} = yV^{−1}L_{eU^{−1}} for all y ∈ G, and x = x • e = xU^{−1} · eV^{−1} = xU^{−1}R_{eV^{−1}} for all x ∈ G. Assign f = eU^{−1}, g = eV^{−1} ∈ G_2. These assignments are well defined, and hence V = L_f and U = R_g. So α = (R_g, L_f, I) is a Smarandache f, g principal isotopism of (G, •) onto (G, ·). This completes the proof.
Proof An S-loop is an S-quasigroup. So the claim follows from Theorem 4.2. G-Smarandache Loops Definition 2.2 If (L, ·) and (G, •) are two distinct groupoids, then the triple (U, V, W) : (L, ·) → (G, •) such that U, V, W : L → G are bijections is called an isotopism if and only if xU • yV = (x · y)W for all x, y ∈ L. If U = V = W, then U is called an isomorphism, hence we write (L, ·) ≅ (G, •). A loop (L, ·) is called a G-loop if and only if (L, ·) ≅ (G, •) for all loop isotopes (G, •) of (L, ·).
Now, if (L, ·) and (G, •) are S-groupoids with S-subsemigroups L ′ and G ′ respectively such that (G ′ )A = L ′ , where A ∈ {U, V, W }, then the isotopism (U, V, W ) : (L, ·) → (G, •) is called a Smarandache isotopism(S-isotopism). Consequently, if W = I the triple (U, V, I) is called a Smarandache principal isotopism. But if in addition G is a S-quasigroup with S-subgroup H ′ such that for some f, g ∈ H, U = R g and V = L f , and (R g , L f , I) : (G, ·) → (G, •) is an isotopism, then the triple is called a Smarandache f, g-principal isotopism while f and g are called Smarandache elements(S-elements). Thus, if U = V = W , then U is called a Smarandache isomorphism, hence we write (L, ·) (G, •). An S-loop (L, ·) is called a G-Smarandache loop(GS-loop) if and only if (L, ·) (G, •) for all loop isotopes(or particularly all S-loop isotopes) (G, •) of (L, ·). Example 2. 1 1The systems (L, ·) and (L, * ), L = {0, 1, 2, 3, 4} with the multiplication tables below are S-quasigroups with S-subgroups (L ′ , ·) and (L ′′ , * ) respectively, L ′ = {0, 1} and L ′′ = {1, 2}. (L, ·) is taken from Example 2.2 of [10]. The triple (U, V, W ) such that on L, is an S-isotopism of (L, ·) onto (L, * ). Notice that A(L ′ ) = L ′′ for all A ∈ {U, V, W } and U, V, W : L ′ → L ′′ are all bijcetions. According to Example 4.2.2 of [15], the system (Z 6 , × 6 ) i.e the set L = Z 6 under multiplication modulo 6 is an S-semigroup with S-subgroups (L ′ , × 6 ) and (L ′′ , × 6 ), L ′ = {2, 4} and L ′′ = {1, 5}. This can be deduced from its multiplication table, below. The triple (U, V, W ) such that permutations on L, is an S-isotopism of (Z 6 , × 6 ) unto an S-semigroup (Z 6 , * ) with S-subgroups (L ′′′ , * ) and (L ′′′′ , * ), L ′′′ = {2, 5} and L ′′′′ = {0, 3} as shown in the second table below. Notice that A(L ′ ) = L ′′′ and A(L ′′ ) = L ′′′′ for all A ∈ {U, V, W } and U, V, W : L ′ → L ′′′ and U, V, W : L ′′ → L ′′′′ are all bijcetions. Theorem 3. 2 Furthermore: 1 . 
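The (Z_6, ×_6) example above can be checked mechanically. The following Python sketch (our illustration, not part of the paper) verifies that the subsets {2, 4} and {1, 5} are each closed under multiplication mod 6 and form groups under it (with identities 4 and 1 respectively), which is what makes (Z_6, ×_6) an S-semigroup:

```python
# Check that (Z6, x mod 6) contains non-trivial subsets that are groups
# under the same operation (the S-semigroup structure of Example 2.2).

def is_group(subset, op):
    """Check closure, identity and inverses for `subset` under `op`.
    Associativity is inherited from multiplication mod 6."""
    # closure
    if any(op(a, b) not in subset for a in subset for b in subset):
        return False
    # two-sided identity element
    identity = next((e for e in subset
                     if all(op(e, a) == a and op(a, e) == a for a in subset)), None)
    if identity is None:
        return False
    # every element has an inverse
    return all(any(op(a, b) == identity for b in subset) for a in subset)

mul6 = lambda a, b: (a * b) % 6

print(is_group({2, 4}, mul6))  # True: identity is 4, each element is self-inverse
print(is_group({1, 5}, mul6))  # True: identity is 1, 5 * 5 = 1 (mod 6)
```

By contrast, a closed subset such as {0, 2, 4} fails the inverse test (0 has no inverse), so closure alone does not give an S-subgroup.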
21Let (G, ·) be a finite S-groupoid of order n with a finite S-subsemigroup (H, ·) of order m. Also, let ISOT (G, ·), SISOT (G, ·) and N SISOT (G, ·) be the sets of all isotopisms, S-isotopisms and non S-isotopisms of (G, ·). Then, ISOT (G, ·) is a group and SISOT (G, ·) ≤ ISOT (G, ·). |ISOT (G, ·)| = (n!) 3 ; 2. |SISOT (G, ·)| = (m!) 3 ; 3. |N SISOT (G, ·)| = (n!) 3 − (m!) 3 .Proof 1. This has been shown to be true in [Theorem 4.1.1,[4]]. bijective and f (H) = K . Let SY M(H, K) = {bijections from H unto K}. By definition, it is easy to see that X ⊆ SY M(H, K). Now, for all U ∈ SY M(H, K), define U : H c −→ K c so that U : G −→ G is a bijection since |H| = |K| implies |H c | = |K c |. Thus, SY M(H, K) ⊆ X so that SY M(H, K) = X. Given that |H| = m, then it follows from (1) that |ISOT (H, ·)| = (m!) 3 so that |SISOT (G, ·)| = (m!) 3 since SY M(H, K) = X. (G, ·) = SISOT (G, ·) c .So, the identity isotopism (I, I, I) ∈ N SISOT (G, ·), hence N SISOT (G, ·) ISOT (G, ·). Furthermore, |N SISOT (G, ·)| = (n!) 3 − (m!) 3 . Corollary 3. 2 2Let (G, ·) be a finite S-groupoid of order n with an S-subsemigroup (H, Υ: ISOT (G, ·) −→ S n × S n × S n is defined as Υ (A, B, I) =< A, B, I > ∀ (A, B, I) ∈ ISOT (G, ·), then Υ PISOT (G, ·) = S n × S n × {I}. ∴ ISOT (G, ·) S n × S n × S n . 4 Smarandache f, g-Isotopes of Smarandache Loops Theorem 4.1 Let (G, ·) and (H, * ) be S-groupoids. If (G, ·) and (H, * ) are S-isotopic, then (H, * ) is S-isomorphic to some Smarandache principal isotope (G, •) of (G, ·). Proof Since (G, ·) and (H, * ) are S-isotopic S-groupoids with S-subsemigroups (G 1 , ·) and (H 1 , * ), then there exist bijections U, V, W : (G, ·) → (H, * ) such that the triple α = (U, V, W ) : (G, ·) → (H, * ) is an isotopism and G 1 A = H 1 ∀ A ∈ {U, V, W }. 
To prove the claim of this theorem, it suffices to produce a closed binary operation ' * ' on G, bijections X, Y : G → G, and bijection Z : G → H so that • the triple β = (X, Y, I) : (G, ·) → (G, •) is a Smarandache principal isotopism and • Z : (G, •) → (H, * ) is an S-isomorphism or the triple γ = (Z, Z, Z) : (G, •) → (H, * ) is an S-isotopism. Note that (H 1 )Z −1 , • = (G 1 , •) is a non-trivial subsemigroup in (G, •). Thus, (G, •) is an S-groupoid. So (G, •) (H, * ). (G, ·) and (G, •) are Smarandache principal isotopes because (G 1 )UW −1 = (H 1 )W −1 = (H 1 )Z −1 = G 1 and (G 1 )V W −1 = (H 1 )W −1 = (H 1 )Z −1 = G 1 . Let (G, ·) be an S-groupoid with an arbitrary groupoid isotope (H, * ). Any such groupoid (H, * ) is an S-groupoid if and only if all the principal isotopes of (G, ·) are S-groupoids.ProofBy classical result in principal isotopy [[11], III.1.4 Theorem], if (G, ·) and (H, * ) are isotopic groupoids, then (H, * ) is isomorphic to some principal isotope (G, •) of (G, ·). Assuming (H, * ) is an S-groupoid then since (H, * ) ∼ = (G, •), (G, •) is an S-groupoid. Conversely, let us assume all the principal isotopes of (G, ·) are S-groupoids. Since (H, * ) ∼ = (G, •), then (H, * ) is an S-groupoid. Theorem 4. 2 2Let (G, ·) be an S-quasigroup. If (H, * ) is an S-loop which is S-isotopic to (G, ·), then there exist S-elements f and g so that (H, * ) is S-isomorphic to a Smarandache f, g principal isotope (G, •) of (G, ·). Corollary 4. 2 2Let (G, ·) be an S-quasigroup(S-loop) with an arbitrary groupoid isotope (H, * ). Any such groupoid (H, * ) is an S-quasigroup(S-loop) if and only if all the principal isotopes of (G, ·) are S-quasigroups(S-loops).Proof This follows immediately from Corollary 4.1 since an S-quasigroup and an S-loop are Sgroupoids. Corollary 4. 3 3If (G, ·) and (H, * ) are S-loops which are S-isotopic, then there exist Selements f and g so that (H, * ) is S-isomorphic to a Smarandache f, g principal isotope (G, •) of (G, ·). 
Table 1: Enumeration of isomorphy and isotopy classes of finite loops of small order.
Question 3.1 How many S-loops are in the family L_n? That is, what is |SL_n| or |NSL_n|?
Lemma 5.1 Let (G, ·) and (H, ∗) be S-isotopic S-loops. If (G, ·) is a group, then (G, ·) and (H, ∗) are S-isomorphic groups.
Proof By Corollary 4.3, there exist S-elements f and g in (G, ·) so that (H, ∗) is S-isomorphic to (G, •), where (G, •) is a Smarandache f, g principal isotope of (G, ·). Let us set the mapping ψ := R_{f·g} = R_{fg} : (G, ·) → (G, •). This mapping is bijective. Since (G, ·) is associative and x • y = xR_g^{−1} · yL_f^{−1} for all x, y ∈ G, the following arguments are true: x · y · fg = (x · y)R_{fg} = (x · y)ψ for all x, y ∈ G. So, (G, ·) ≅ (G, •). Thus, (G, •) is a group. If (G_1, ·) and (G_1, •) are the S-subgroups in (G, ·) and (G, •), then (G_1, ·)R_{fg} = (G_1, •). Hence, (G, ·) is S-isomorphic to (G, •). ∴ (G, ·) is S-isomorphic to (H, ∗), and (H, ∗) is a group.
Corollary 5.2 An S-loop is S-isomorphic to all its S-loop S-isotopes if and only if it is S-isomorphic to all its Smarandache f, g principal isotopes.
Proof Let (G, ·) be an S-loop with arbitrary S-isotope (H, ∗).
Let us assume that (G, ·) is S-isomorphic to (H, ∗). From Corollary 4.3, for any arbitrary S-isotope (H, ∗) of (G, ·), there exists a Smarandache f, g principal isotope (G, •) of (G, ·) such that (H, ∗) is S-isomorphic to (G, •). So, (G, ·) is S-isomorphic to (G, •). Conversely, let (G, ·) be S-isomorphic to (G, •). Using the fact in Corollary 4.3 again, for any arbitrary S-isotope (H, ∗) of (G, ·), there exists a Smarandache f, g principal isotope (G, •) of (G, ·) such that (G, •) is S-isomorphic to (H, ∗). Therefore, (G, ·) is S-isomorphic to (H, ∗).

References

[1] R. H. Bruck (1966), A survey of binary systems, Springer-Verlag, Berlin-Göttingen-Heidelberg, 185pp.
[2] R. E. Cawagas (2000), Generation of NAFIL loops of small order, Quasigroups and Related Systems, 7, 1-5.
[3] O. Chein, H. O. Pflugfelder and J. D. H. Smith (1990), Quasigroups and loops: Theory and applications, Heldermann Verlag, 568pp.
[4] J. Déne and A. D. Keedwell (1974), Latin squares and their applications, The English University Press Ltd, 549pp.
[5] E. G. Goodaire, E. Jespers and C. P. Milies (1996), Alternative loop rings, NHMS(184), Elsevier, 387pp.
[6] T. G. Jaíyéọlá (2006), An holomorphic study of the Smarandache concept in loops, Scientia Magna Journal, 2, 1, 1-8.
[7] T. G. Jaíyéọlá (2006), Parastrophic invariance of Smarandache quasigroups, Scientia Magna Journal, 2, 3, 48-53.
[8] T. G. Jaíyéọlá (2006), On the universality of some Smarandache loops of Bol-Moufang type, Scientia Magna Journal, 2, 4, 45-48.
[9] B. D. McKay, A. Meynert and W. Myrvold (2007), Small latin squares, quasigroups and loops, Journal of Combinatorial Designs, 15, 2, 98-119.
[10] A. S. Muktibodh (2006), Smarandache quasigroups, Scientia Magna Journal, 2, 1, 13-19.
[11] H. O. Pflugfelder (1990), Quasigroups and loops: Introduction, Sigma Series in Pure Math. 7, Heldermann Verlag, Berlin, 147pp.
[12] W. B. Vasantha Kandasamy (2002), Smarandache loops, Department of Mathematics, Indian Institute of Technology, Madras, India, 128pp.
[13] W. B. Vasantha Kandasamy (2002), Smarandache Loops, Smarandache Notions Journal, 13, 252-258.
[14] W. B. Vasantha Kandasamy (2002), Groupoids and Smarandache Groupoids, American Research Press, Rehoboth, 114pp.
[15] W. B. Vasantha Kandasamy (2002), Smarandache Semigroups, American Research Press, Rehoboth, 94pp.
[16] W. B. Vasantha Kandasamy (2002), Smarandache Semirings, Semifields, and Semivector Spaces, American Research Press, Rehoboth, 121pp.
[17] W. B. Vasantha Kandasamy (2003), Linear Algebra and Smarandache Linear Algebra, American Research Press, 174pp.
[18] W. B. Vasantha Kandasamy (2003), Bialgebraic Structures and Smarandache Bialgebraic Structures, American Research Press, Rehoboth, 271pp.
[19] W. B. Vasantha Kandasamy and F. Smarandache (2005), N-Algebraic Structures and Smarandache N-Algebraic Structures, Hexis, Phoenix, Arizona, 174pp.
The graphical lasso: New insights and alternatives

Rahul Mazumder
Massachusetts Institute of Technology, Cambridge, MA 02139

Trevor Hastie
Departments of Statistics and Health Research and Policy, Stanford University, Stanford, CA 94305

Electronic Journal of Statistics (2012). DOI: 10.1214/12-EJS740. arXiv: 1111.5479
Electronic Journal of Statistics, Vol. 6 (2012). DOI: 10.1214/12-EJS740. Received August 2012.

* Rahul Mazumder was supported by grant DMS-1007719 from the National Science Foundation.
† Trevor Hastie was partially supported by grant DMS-1007719 from the National Science Foundation, and grant RO1-EB001988-15 from the National Institutes of Health.

AMS 2000 subject classifications: Primary 62H99, 62-09; secondary 62-04.
Keywords and phrases: Graphical lasso, sparse inverse covariance selection, precision matrix, convex analysis/optimization, positive definite matrices, sparsity, semidefinite programming.

Abstract

The graphical lasso [5] is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ_1 regularization to control the number of zeros in the precision matrix Θ = Σ^{−1} [2, 11]. The R package glasso [5] is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of glasso can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts.
In this paper we explain this behavior, and propose new algorithms that appear to outperform glasso.By studying the "normal equations" we see that, glasso is solving the dual of the graphical lasso penalized likelihood, by block coordinate ascent; a result which can also be found in[2]. In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms p-glasso and dp-glasso, that also operate by block-coordinate descent, where Θ is the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that dp-glasso is superior from several points of view. Introduction Consider a data matrix X n×p , a sample of n realizations from a p-dimensional Gaussian distribution with zero mean and positive definite covariance matrix Σ. The task is to estimate the unknown Σ based on the n samples -a challenging problem especially when n ≪ p, when the ordinary maximum likelihood estimate does not exist. Even if it does exist (for p ≤ n), the MLE is often poorly behaved, and regularization is called for. The Graphical Lasso [5] is a regularization framework for estimating the covariance matrix Σ, under the assumption that its inverse Θ = Σ −1 is sparse [2,11,8]. Θ is called the precision matrix; if an element θ jk = 0, this implies that the corresponding variables X j and X k are conditionally independent, given the rest. Our algorithms focus either on the restricted version of Θ or its inverse W = Θ −1 . The graphical lasso problem minimizes a ℓ 1 -regularized negative log-likelihood: minimize Θ≻0 f (Θ) := − log det(Θ) + tr(SΘ) + λ Θ 1 . (1.1) Here S is the sample covariance matrix, Θ 1 denotes the sum of the absolute values of Θ, and λ is a tuning parameter controlling the amount of ℓ 1 shrinkage. This is a semidefinite programming problem (SDP) in the variable Θ [4]. 
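As a concrete illustration (ours, not from the paper), the penalized objective in (1.1) can be evaluated directly. The following numpy sketch computes f(Θ) = −log det(Θ) + tr(SΘ) + λ‖Θ‖_1 and checks one easy case:

```python
import numpy as np

def graphical_lasso_objective(Theta, S, lam):
    """Evaluate f(Theta) = -log det(Theta) + tr(S Theta) + lam * ||Theta||_1
    from (1.1); Theta must be positive definite."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()

# sanity check: with S = I, lam = 0 and Theta = I, only the trace term
# survives, so f = p
p = 4
S = np.eye(p)
print(graphical_lasso_objective(np.eye(p), S, lam=0.0))  # 4.0
```

With λ > 0 the ℓ_1 term adds λ‖Θ‖_1; at Θ = I and λ = 1 the value above rises from 4 to 8.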
In this paper we revisit the glasso algorithm proposed by Friedman, Hastie and Tibshirani [5] for solving (1.1); we analyze its properties, expose problems and issues, and propose alternative algorithms more suitable for the task. Some of the results and conclusions of this paper can be found in [2], both explicitly and implicitly. We re-derive some of the results and derive new results, insights and algorithms, using a unified and more elementary framework.

Notation

We denote the entries of a matrix A_{n×n} by a_{ij}. ‖A‖_1 denotes the sum of the absolute values of its entries, ‖A‖_∞ the maximum absolute value of its entries, ‖A‖_F its Frobenius norm, and abs(A) the matrix with elements |a_{ij}|. For a vector u ∈ ℜ^q, ‖u‖_1 denotes the ℓ_1 norm, and so on. From now on, unless otherwise specified, we will assume that λ > 0.

Review of the glasso algorithm

We use the framework of "normal equations" as in [6, 5]. Using sub-gradient notation, we can write the optimality conditions (aka "normal equations") for a solution to (1.1) as

−Θ^{−1} + S + λΓ = 0,   (2.1)

where Γ is a matrix of component-wise signs of Θ:

γ_{jk} = sign(θ_{jk}) if θ_{jk} ≠ 0;  γ_{jk} ∈ [−1, 1] if θ_{jk} = 0   (2.2)

(we use the notation γ_{jk} ∈ Sign(θ_{jk})). Since the global stationarity conditions of (2.1) require θ_{jj} to be positive, this implies that

w_{ii} = s_{ii} + λ, i = 1, …, p,   (2.3)

where W = Θ^{−1}. glasso uses a block-coordinate method for solving (2.1). Consider a partitioning of Θ and Γ:

Θ = [ Θ_{11}  θ_{12} ; θ_{21}  θ_{22} ],  Γ = [ Γ_{11}  γ_{12} ; γ_{21}  γ_{22} ],   (2.4)

where Θ_{11} is (p − 1) × (p − 1), θ_{12} is (p − 1) × 1 and θ_{22} is a scalar. W and S are partitioned the same way. Using properties of inverses of block-partitioned matrices, observe that W = Θ^{−1} can be written in two equivalent forms:

[ W_{11}  w_{12} ; w_{21}  w_{22} ] = [ (Θ_{11} − θ_{12}θ_{21}/θ_{22})^{−1}    −W_{11}θ_{12}/θ_{22} ; ·    1/θ_{22} + θ_{21}W_{11}θ_{12}/θ_{22}^2 ]   (2.5)

= [ Θ_{11}^{−1} + Θ_{11}^{−1}θ_{12}θ_{21}Θ_{11}^{−1}/(θ_{22} − θ_{21}Θ_{11}^{−1}θ_{12})    −Θ_{11}^{−1}θ_{12}/(θ_{22} − θ_{21}Θ_{11}^{−1}θ_{12}) ; ·    1/(θ_{22} − θ_{21}Θ_{11}^{−1}θ_{12}) ]
(2.6)

glasso solves for a row/column of (2.1) at a time, holding the rest fixed. Considering the pth column of (2.1), we get

−w_{12} + s_{12} + λγ_{12} = 0.   (2.7)

Reading off w_{12} from (2.5) we have

w_{12} = −W_{11}θ_{12}/θ_{22}   (2.8)

and plugging into (2.7), we have:

W_{11}θ_{12}/θ_{22} + s_{12} + λγ_{12} = 0.   (2.9)

glasso operates on the above gradient equation, as described below. As a variation, consider reading off w_{12} from (2.6):

Θ_{11}^{−1}θ_{12}/(θ_{22} − θ_{21}Θ_{11}^{−1}θ_{12}) + s_{12} + λγ_{12} = 0.   (2.10)

The above simplifies to

Θ_{11}^{−1}θ_{12} w_{22} + s_{12} + λγ_{12} = 0,   (2.11)

where w_{22} = 1/(θ_{22} − θ_{21}Θ_{11}^{−1}θ_{12}) is fixed (by the global stationarity conditions (2.3)). We will see that these two apparently similar estimating equations (2.9) and (2.11) lead to very different algorithms.

The glasso algorithm solves (2.9) for β = θ_{12}/θ_{22}, that is

W_{11}β + s_{12} + λγ_{12} = 0,   (2.12)

where γ_{12} ∈ Sign(β), since θ_{22} > 0. (2.12) is the stationarity equation for the following ℓ_1 regularized quadratic program:

minimize_{β ∈ ℜ^{p−1}}  ½ β′W_{11}β + β′s_{12} + λ‖β‖_1,   (2.13)

where W_{11} ≻ 0 is assumed to be fixed. This is analogous to a lasso regression problem of the last variable on the rest, except the cross-product matrix S_{11} is replaced by its current estimate W_{11}. This problem itself can be solved efficiently using elementwise coordinate descent, exploiting the sparsity in β. From β̂, it is easy to obtain ŵ_{12} from (2.8). Using the lower-right element of (2.5), θ̂_{22} is obtained from

1/θ̂_{22} = w_{22} + β̂′ŵ_{12}   (2.14)

(note that ŵ_{12} = −W_{11}β̂, so β̂′ŵ_{12} = −β̂′W_{11}β̂ ≤ 0). Finally, θ̂_{12} can now be recovered from β̂ and θ̂_{22}. Notice, however, that having solved for β and updated w_{12}, glasso can move onto the next block; disentangling θ_{12} and θ_{22} can be done at the end, when the algorithm over all blocks has converged. The glasso algorithm is outlined in Algorithm 1. We show in Lemma 3 in Section 8 that the successive updates in glasso keep W positive definite.
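The paper notes that (2.13) can itself be solved by elementwise coordinate descent. The numpy sketch below (our illustration, not the glasso source) applies cyclic soft-thresholding updates to (2.13) and then checks the stationarity condition (2.12):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * max(abs(x) - t, 0.0)

def lasso_subproblem(W11, s12, lam, n_sweeps=200):
    """Cyclic coordinate descent for (2.13):
    minimize_beta 0.5 * beta' W11 beta + beta' s12 + lam * ||beta||_1."""
    p = len(s12)
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # gradient of the smooth part with the j-th coordinate removed
            r = s12[j] + W11[j] @ beta - W11[j, j] * beta[j]
            beta[j] = -soft_threshold(r, lam) / W11[j, j]
    return beta

rng = np.random.default_rng(0)
p = 6
A = rng.standard_normal((p, p))
W11 = A @ A.T + p * np.eye(p)      # positive definite, well conditioned
s12 = rng.standard_normal(p)
lam = 0.5
beta = lasso_subproblem(W11, s12, lam)

# check the subgradient condition (2.12): W11 beta + s12 + lam * gamma = 0
g = W11 @ beta + s12
active = beta != 0
assert np.allclose(g[active], -lam * np.sign(beta[active]), atol=1e-8)
assert np.all(np.abs(g[~active]) <= lam + 1e-8)
```

Each coordinate update minimizes (2.13) exactly in β_j with the other coordinates held fixed, which is why the final solution satisfies the elementwise KKT conditions.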
Figure 1 (left panel, black curve) plots the objective f (Θ (k) ) for the sequence of solutions produced by glasso on an example. Surprisingly, the curve is not monotone decreasing, as confirmed by the middle plot. If glasso were solving (1.1) by block coordinate-descent, we would not anticipate this behavior. A closer look at steps (2.8) and (2.9) of the glasso algorithm leads to the following observations: Algorithm 1 glasso algorithm [5] 1. Initialize W = S + λI. 2. Cycle around the columns repeatedly, performing the following steps till convergence: (a) Rearrange the rows/columns so that the target column is last (implicitly). (b) Solve the lasso problem (2.13), using as warm starts the solution from the previous round for this column. (c) Update the row/column (off-diagonal) of the covariance usingŵ 12 (2.8). (d) Saveβ for this column in the matrix B. 3. Finally, for every row/column, compute the diagonal entriesθ jj using (2.14), and convert the B matrix to Θ. (a) We wish to solve (2.7) for θ 12 . However θ 12 is entangled in W 11 , which is (incorrectly) treated as a constant. (b) After updating θ 12 , we see from (2.6) that the entire (working) covariance matrix W changes. glasso however updates only w 12 and w 21 . These two observations explain the non-monotone behavior of glasso in minimizing f (Θ). Section 3 shows a corrected block-coordinate descent algorithm for Θ, and Section 4 shows that the glasso algorithm is actually optimizing the dual of problem (1.1), with the optimization variable being W. A corrected glasso block coordinate-descent algorithm Recall that (2.11) is a variant of (2.9), where the dependence of the covariance sub-matrix W 11 on θ 12 is explicit. With α = θ 12 w 22 (with w 22 ≥ 0 fixed), Algorithm 2 p-glasso Algorithm 1. Initialize W = diag(S) + λI, and Θ = W −1 . 2. Cycle around the columns repeatedly, performing the following steps till convergence: (a) Rearrange the rows/columns so that the target column is last (implicitly). 
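Observation (b) above, which motivates p-glasso updating the whole of W in step 2(d), is easy to confirm numerically. The sketch below (ours, for illustration) perturbs θ_12 in a random positive definite Θ and shows that the entire W = Θ^{−1} changes, not just w_12 and w_22, consistent with the explicit dependence of W_11 on θ_12 in (2.6):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
A = rng.standard_normal((p, p))
Theta = A @ A.T + p * np.eye(p)          # positive definite precision matrix
W = np.linalg.inv(Theta)

Theta_new = Theta.copy()
Theta_new[:-1, -1] += 0.1                # perturb theta_12 ...
Theta_new[-1, :-1] += 0.1                # ... keeping Theta symmetric
W_new = np.linalg.inv(Theta_new)

# not only the last row/column of W moves: the W11 block changes too,
# which is what glasso's partial update of w_12 and w_21 ignores
assert not np.allclose(W_new[:-1, :-1], W[:-1, :-1])
```

This is exactly why, after a glasso sweep, the working W can drift away from the inverse of the implied Θ.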
(b) Compute Θ −1 11 using (3.3). (c) Solve (3.1) for α, using as warm starts the solution from the previous round of row/column updates. Updateθ 12 =α/w 22 , andθ 22 using (3.2). (d) Update Θ and W using (2.6), ensuring that ΘW = Ip. 3. Output the solution Θ (precision) and its exact inverse W (covariance). Θ 11 ≻ 0, (2.11) is equivalent to the stationary condition for minimize α∈ℜ p−1 1 2 α ′ Θ −1 11 α + α ′ s 12 + λ α 1 . (3.1) Ifα is the minimizer of (3.1), thenθ 12 =α/w 22 . To complete the optimization for the entire row/column we need to update θ 22 . This follows simply from (2.6) θ 22 = 1 w 22 +θ 21 Θ −1 11θ 12 ,(3.2) with w 22 = s 22 + λ. To solve (3.1) we need Θ −1 11 for each block update. We achieve this by maintaining W = Θ −1 as the iterations proceed. Then for each block • we obtain Θ −1 11 from Θ −1 11 = W 11 − w 12 w 21 /w 22 ; (3.3) • once θ 12 is updated, the entire working covariance matrix W is updated (in particular the portions W 11 and w 12 ), via the identities in (2.6), using the known Θ −1 11 . Both these steps are simple rank-one updates with a total cost of O(p 2 ) operations. We refer to this as the primal graphical lasso or p-glasso, which we present in Algorithm 2. The p-glasso algorithm requires slightly more work than glasso, since an additional O(p 2 ) operations have to be performed before and after each block update. In return we have that after every row/column update, Θ and W are positive definite (for λ > 0) and ΘW = I p . What is glasso actually solving? Building upon the framework developed in Section 2, we now proceed to establish that glasso solves the convex dual of problem (1.1), by block coordinate ascent. We reach this conclusion via elementary arguments, closely aligned with the framework we develop in Section 2. The approach we present here is intended for an audience without much of a familiarity with convex duality theory [4]. Figure 1 illustrates that glasso is an ascent algorithm on the dual of the problem 1.1. 
The red curve in the left plot shows the dual objective rising monotonically, and the rightmost plot shows that the increments are indeed positive. There is an added twist though: in solving the block-coordinate update, glasso solves instead the dual of that subproblem.
Dual of the ℓ1 regularized log-likelihood
We present the following lemma (Lemma 1), whose conclusion also appears in [2], but we use the framework developed in Section 2.
Proof. The (sub)gradient conditions (2.1) can be rewritten as
−(S + λΓ)⁻¹ + Θ = 0, (4.2)
where Γ = sgn(Θ). We write Γ̃ = λΓ and observe that ‖Γ̃‖∞ ≤ λ. Denote by abs(Θ) the matrix of element-wise absolute values. Hence if (Θ, Γ) satisfy (4.2), the substitutions
Γ̃ = λΓ; P = abs(Θ) (4.3)
satisfy the following set of equations:
−(S + Γ̃)⁻¹ + P ∗ sgn(Γ̃) = 0
P ∗ (abs(Γ̃) − λ1p1′p) = 0
‖Γ̃‖∞ ≤ λ. (4.4)
In the above, P is a symmetric p × p matrix with non-negative entries, 1p1′p denotes a p × p matrix of ones, and the operator '∗' denotes element-wise product. We observe that (4.4) are the KKT optimality conditions for the box-constrained SDP (4.1). Similarly, the transformations Θ = P ∗ sgn(Γ̃) and Γ = Γ̃/λ show that conditions (4.4) imply condition (4.2). Based on (4.2), the optimal solutions of the two problems (1.1) and (4.1) are related by S + Γ̃ = Θ⁻¹.
Notice that for the dual, the optimization variable is Γ̃, with S + Γ̃ = Θ⁻¹ = W. In other words, the dual problem solves for W rather than Θ, a fact that is suggested by the glasso algorithm. Remark 1. The equivalence of the solutions to problems (4.1) and (1.1) as described above can also be derived via convex duality theory [4], which shows that (4.1) is a dual function of the ℓ1 regularized negative log-likelihood (1.1). Strong duality holds, hence the optimal solutions of the two problems coincide [2]. We now consider solving (4.4) for the last block γ̃12 (excluding the diagonal), holding the rest of Γ̃ fixed.
The corresponding equations are −θ 12 + p 12 * sgn(γ 12 ) = 0 p 12 * (abs(γ 12 ) − λ1 p−1 ) = 0 γ 12 ∞ ≤ λ. The only non-trivial translation is the θ 12 in the first equation. We must express this in terms of the optimization variableγ 12 . Since s 12 +γ 12 = w 12 , using the identities in (2.5), we have W −1 11 (s 12 +γ 12 ) = −θ 12 /θ 22 . Since θ 22 > 0, we can redefinep 12 = p 12 /θ 22 , to get W −1 11 (s 12 +γ 12 ) +p 12 * sgn(γ 12 ) = 0 p 12 * (abs(γ 12 ) − λ1 p−1 ) = 0 γ 12 ∞ ≤ λ. (4.6) The following lemma shows that a block update of glasso solves (4.6) (and hence (4.5)), a block of stationary conditions for the dual of the graphical lasso problem. Curiously, glasso does this not directly, but by solving the dual of the QP corresponding to this block of equations. whereγ 12 ∈ Sign(β), correspond to the solution of the ℓ 1 -regularized QP: minimize β∈ℜ p−1 1 2 β ′ W 11 β + β ′ s 12 + λ β 1 . (4.8) Solving (4.8) is equivalent to solving the following box-constrained QP: minimize γ∈ℜ p−1 1 2 (s 12 + γ) ′ W −1 11 (s 12 + γ) subject to γ ∞ ≤ λ, (4.9) with stationarity conditions given by (4.6), where theβ andγ 12 are related bŷ β = −W −1 11 (s 12 +γ 12 ). (4.10) Proof. (4.7) is the KKT optimality condition for the ℓ 1 regularized QP (4.8). We rewrite (4.7) asβ + W −1 11 (s 12 + λγ 12 ) = 0. (4.11) Observe thatβ i = sgn(β i )|β i | ∀i and γ 12 ∞ ≤ 1. Supposeβ,γ 12 satisfy (4.11), then the substitutionsγ in (4.11) satisfy the stationarity conditions (4.6). It turns out that (4.6) is equivalent to the KKT optimality conditions of the box-constrained QP (4.9). Similarly, we note that ifγ 12 ,p 12 satisfy (4.6), then the substitution γ 12 =γ 12 /λ;β =p 12 * sgn(γ 12 ) satisfies (4.11). Hence theβ andγ 12 are related by (4.10). Remark 2. The above result can also be derived via convex duality theory [4], where (4.9) is actually the Lagrange dual of the ℓ 1 regularized QP (4.8), with (4.10) denoting the primal-dual relationship. 
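As a small numerical illustration of the primal-dual relation S + Γ̂ = Θ̂⁻¹, consider λ larger than the biggest off-diagonal |s_ij|, in which case the solution of (1.1) is diagonal with θ_ii = 1/(s_ii + λ). The numpy check below (ours, not part of the paper's code) verifies the stationarity condition (4.2) and dual feasibility ‖Γ̃‖∞ ≤ λ for this case:

```python
import numpy as np

# A sample covariance and a lambda exceeding the largest off-diagonal |s_ij|.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
S = np.cov(X, rowvar=False, bias=True)
lam = 1.1 * np.abs(S - np.diag(np.diag(S))).max()

# For such lambda the solution of (1.1) is diagonal: theta_ii = 1/(s_ii + lam).
Theta = np.diag(1.0 / (np.diag(S) + lam))

# Dual variable from S + Gamma = inv(Theta); check feasibility and stationarity.
Gamma = np.linalg.inv(Theta) - S
assert np.abs(Gamma).max() <= lam + 1e-12       # ||Gamma||_inf <= lam
assert np.allclose(np.diag(Gamma), lam)         # lam * sgn(theta_ii) on the nonzeros
assert np.allclose(np.linalg.inv(S + Gamma), Theta)
```

The off-diagonal dual entries are −s_ij, which lie in [−λ, λ] exactly when λ ≥ max_{i≠j} |s_ij|; this is why λ_max in the experiments is the smallest λ producing a diagonal solution.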
[2, Section 3.3] interpret (4.9) as an ℓ 1 penalized regression problem (using convex duality theory) and explore connections with the set up of [8]. Note that the QP (4.9) is a (partial) optimization over the variable w 12 only (since s 12 is fixed); the sub-matrix W 11 remains fixed in the QP. Exactly one row/column of W changes when the block-coordinate algorithm of glasso moves to a new row/column, unlike an explicit full matrix update in W 11 , which is required if θ 12 is updated. This again emphasizes that glasso is operating on the covariance matrix instead of Θ. We thus arrive at the following conclusion: In our annotation perhaps glasso should be called dd-glasso, since it performs dual block updates for the dual of the graphical lasso problem. Banerjee, Ghaoui and d'Aspremont [2], the paper that inspired the original glasso article [5], also operates on the dual. They however solve the block-updates directly (which are box constrained QPs) using interior-point methods. A new algorithm -dp-glasso In Section 3, we described p-glasso, a primal coordinate-descent method. For every row/column we need to solve a lasso problem (3.1), which operates on a quadratic form corresponding to the square matrix Θ −1 11 . There are two problems with this approach: • the matrix Θ −1 11 needs to be constructed at every row/column update with complexity O(p 2 ); • Θ −1 11 is dense. We now show how a simple modification of the ℓ 1 -regularized QP leads to a box-constrained QP with attractive computational properties. The KKT optimality conditions for (3.1), following (2.11), can be written as: Θ −1 11 α + s 12 + λ sgn(α) = 0. (5.1) Algorithm 3 dp-glasso algorithm 1. Initialize Θ = diag(S + λI) −1 . 2. Cycle around the columns repeatedly, performing the following steps till convergence: (a) Rearrange the rows/columns so that the target column is last (implicitly). (d) Update the working covariance w 12 = s 12 +γ. 
Along the same lines of the derivations used in Lemma 2, the condition above is equivalent toq 12 * sgn(γ) + Θ 11 (s 12 +γ) = 0 q 12 * (abs(γ) − λ1 p−1 ) = 0 γ ∞ ≤ λ (5.2) for some vector (with non-negative entries)q 12 . (5.2) are the KKT optimality conditions for the following box-constrained QP: minimize γ∈ℜ p−1 1 2 (s 12 + γ) ′ Θ 11 (s 12 + γ); subject to γ ∞ ≤ λ. (5.3) The optimal solutions of (5.3) and (5.1) are related bŷ α = −Θ 11 (s 12 +γ), (5.4) a consequence of (5.1), withα =θ 12 · w 22 and w 22 = s 22 + λ. The diagonal θ 22 of the precision matrix is updated via (2.6): θ 22 = 1 − (s 12 +γ) ′θ 12 w 22 (5.5) By strong duality, the box-constrained QP (5.3) with its optimality conditions (5.2) is equivalent to the lasso problem (3.1). Now both the problems listed at the beginning of the section are removed. The problem matrix Θ 11 is sparse, and no O(p 2 ) updating is required after each block. The solutions returned at step 2(b) forθ 12 need not be exactly sparse, even though it purports to produce the solution to the primal block problem (3.1), which is sparse. One needs to use a tight convergence criterion when solving (5.3). In addition, one can threshold those elements ofθ 12 for whichγ is away from the box boundary, since those values are known to be zero. Note that dp-glasso does to the primal formulation (1.1) what glasso does to the dual. dp-glasso operates on the precision matrix, whereas glasso operates on the covariance matrix. Computational costs in solving the block QPs The ℓ 1 regularized QPs appearing in (2.13) and (3.1) are of the generic form minimize u∈ℜ q 1 2 u ′ Au + a ′ u + λ u 1 ,(6.1) for A ≻ 0. In this paper, we choose to use cyclical coordinate descent for solving (6.1), as it is used in the glasso algorithm implementation of Friedman, Hastie and Tibshirani [5]. Moreover, cyclical coordinate descent methods perform well with good warm-starts. 
These are available for both (2.13) and (3.1), since they both maintain working copies of the precision matrix, updated after every row/column update. There are other efficient ways for solving (6.1), capable of scaling to large problems -for example first-order proximal methods [3,9], but we do not pursue them in this paper. The box-constrained QPs appearing in (4.9) and (5.3) are of the generic form: minimize v∈ℜ q 1 2 (v + b) ′à (v + b) subject to v ∞ ≤ λ (6.2) for someà ≻ 0. As in the case above, we will use cyclical coordinate-descent for optimizing (6.2). In general it is more efficient to solve (6.1) than (6.2) for larger values of λ. This is because a large value of λ in (6.1) results in sparse solutionsû; the coordinate descent algorithm can easily detect when a zero stays zero, and no further work gets done for that coordinate on that pass. If the solution to (6.1) has κ non-zeros, then on average κ coordinates need to be updated. This leads to a cost of O(qκ), for one full sweep across all the q coordinates. On the other hand, a large λ for (6.2) corresponds to a weakly-regularized solution. Cyclical coordinate procedures for this task are not as effective. Every coordinate update of v results in updating the gradient, which requires adding a scalar multiple of a column ofÃ. Ifà is dense, this leads to a cost of O(q), and for one full cycle across all the coordinates this costs O(q 2 ), rather than the O(qκ) for (6.1). However, our experimental results show that dp-glasso is more efficient than glasso, so there are some other factors in play. Whenà is sparse, there are computational savings. Ifà has κq non-zeros, the cost per column reduces on average to O(κq) from O(q 2 ). For the formulation (5.3)à is Θ 11 , which is sparse for large λ. Hence for large λ, glasso and dp-glasso have similar costs. For smaller values of λ, the box-constrained QP (6.2) is particularly attractive. 
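Both generic block problems can be attacked with the same cyclical coordinate-descent template. The numpy sketch below is illustrative (notation of (6.1) and (6.2); not the paper's FORTRAN implementation): soft-thresholding handles the ℓ1 problem, and clipping onto the box handles (6.2).

```python
import numpy as np

def l1_qp(A, a, lam, n_sweeps=400):
    """Cyclical coordinate descent for (6.1):
       min_u 0.5 u'Au + a'u + lam*||u||_1, with A positive definite."""
    u = np.zeros(len(a))
    for _ in range(n_sweeps):
        for j in range(len(a)):
            # partial residual: gradient of the smooth part with u_j excluded
            r = a[j] + A[j] @ u - A[j, j] * u[j]
            u[j] = np.sign(-r) * max(abs(r) - lam, 0.0) / A[j, j]
    return u

def box_qp(A, b, lam, n_sweeps=400):
    """Cyclical coordinate descent for (6.2):
       min_v 0.5 (v+b)'A(v+b) subject to ||v||_inf <= lam."""
    v = np.zeros(len(b))
    for _ in range(n_sweeps):
        for j in range(len(b)):
            g = A[j] @ (v + b)                 # j-th partial derivative
            v[j] = np.clip(v[j] - g / A[j, j], -lam, lam)
    return v
```

With A = W11 and a = s12, l1_qp is the glasso-style lasso block; with A = Θ11 and b = s12, box_qp is the dp-glasso block (5.3).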
Most of the coordinates in the optimal solutionv will pile up at the boundary points {−λ, λ}, which means that the coordinates need not be updated frequently. For problem (5.3) this number is also κ, the number of non-zero coefficients in the corresponding column of the precision matrix. If κ of the coordinates pile up at the boundary, then one full sweep of cyclical coordinate descent across all the coordinates will require updating gradients corresponding to the remaining q − κ coordinates. Using similar calculations as before, this will cost O(q(q − κ)) operations per full cycle (since for small λ,à will be dense). For the ℓ 1 regularized problem (6.1), no such saving is achieved, and the cost is O(q 2 ) per cycle. Note that to solve problem (1.1), we need to solve a QP of a particular type (6.1) or (6.2) for a certain number of outer cycles (ie full sweeps across rows/columns). For every row/column update, the associated QP requires varying number of iterations to converge. It is hard to characterize all these factors and come up with precise estimates of convergence rates of the overall algorithm. However, we have observed that with warm-starts, on a relatively dense grid of λs, the complexities given above are pretty much accurate for dp-glasso (with warmstarts) specially when one is interested in solutions with small / moderate accuracy. Our experimental results in Section 9.1 and Appendix Section B support our observation. We will now have a more critical look at the updates of the glasso algorithm and study their properties. glasso: Positive definiteness, sparsity and exact inversion As noted earlier, glasso operates on W -it does not explicitly compute the inverse W −1 . It does however keep track of the estimates for θ 12 after every row/column update. The copy of Θ retained by glasso along the row/column updates is not the exact inverse of the optimization variable W. 
Figure 2 illustrates this by plotting the squared-norm (Θ − W −1 ) 2 F as a function of the iteration index. Only upon (asymptotic) convergence, will Θ be equal to W −1 . This can have important consequences. In many real-life problems one only needs an approximate solution to (1.1): • for computational reasons it might be impractical to obtain a solution of high accuracy; • from a statistical viewpoint it might be sufficient to obtain an approximate solution for Θ that is both sparse and positive definite It turns out that the glasso algorithm is not suited to this purpose. Since the glasso is a block coordinate procedure on the covariance matrix, it maintains a positive definite covariance matrix at every row/column update. However, since the estimated precision matrix is not the exact inverse of W, it need not be positive definite. Although it is relatively straightforward to maintain an exact inverse of W along the row/column updates (via simple rank-one updates as before), this inverse W −1 need not be sparse. Arbitrary thresholding rules may be used to set some of the entries to zero, but that might destroy the positive-definiteness of the matrix. Since a principal motivation of solving (1.1) is to obtain a sparse precision matrix (which is also positive definite), returning a dense W −1 to (1.1) is not desirable. Figures 2 illustrates the above observations on a typical example. The dp-glasso algorithm operates on the primal (1.1). Instead of optimizing the ℓ 1 regularized QP (3.1), which requires computing Θ −1 11 , dp-glasso optimizes (5.3). After every row/column update the precision matrix Θ is positive definite. The working covariance matrix maintained by dp-glasso via w 12 := s 12 +γ need not be the exact inverse of Θ. Exact covariance matrix estimates, if required, can be obtained by tracking Θ −1 via simple rank-one updates, as described earlier. 
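The bookkeeping identity (3.3) behind these rank-one updates is easy to verify numerically; a minimal check (ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
A = rng.standard_normal((p, p))
Theta = A @ A.T + p * np.eye(p)        # a positive definite "precision" matrix
W = np.linalg.inv(Theta)               # its exact inverse (the "covariance")

# Identity (3.3): inv(Theta_11) = W_11 - w_12 w_21 / w_22, an O(p^2) update.
W11, w12, w22 = W[:-1, :-1], W[:-1, -1], W[-1, -1]
Theta11_inv = W11 - np.outer(w12, w12) / w22
assert np.allclose(Theta11_inv, np.linalg.inv(Theta[:-1, :-1]))
```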
Unlike glasso, dp-glasso (and p-glasso) return a sparse and positive definite precision matrix even if the row/column iterations are terminated prematurely. Warm starts and path-seeking strategies Since we seldom know in advance a good value of λ, we often compute a sequence of solutions to (1.1) for a (typically) decreasing sequence of values λ 1 > λ 2 > · · · > λ K . Warm-start or continuation methods use the solution at λ i as an initial guess for the solution at λ i+1 , and often yield great efficiency. It turns out that for algorithms like glasso which operate on the dual problem, not all warm-starts necessarily lead to a convergent algorithm. We address this aspect in detail in this section. The following lemma states the conditions under which the row/column updates of the glasso algorithm will maintain positive definiteness of the covariance matrix W. Lemma 3. Suppose Z is used as a warm-start for the glasso algorithm. If Z ≻ 0 and Z − S ∞ ≤ λ, then every row/column update of glasso maintains positive definiteness of the working covariance matrix W. Proof. Recall that the glasso solves the dual (4.1). Assume Z is partitioned as in (2.4), and the pth row/column is being updated. Since Z ≻ 0, we have both Z 11 ≻ 0 and z 22 − z 21 (Z 11 ) −1 z 12 > 0. (8.1) Since Z 11 remains fixed, it suffices to show that after the row/column update, the expression (ŵ 22 −ŵ 21 (Z 11 ) −1ŵ 12 ) remains positive. Recall that, via standard optimality conditions we haveŵ 22 = s 22 + λ, which makesŵ 22 ≥ z 22 (since by assumption, |z 22 − s 22 | ≤ λ and z 22 > 0). Furthermore,ŵ 21 = s 21 +γ, wherê γ is the optimal solution to the corresponding box-QP (4.9). Since the starting solution z 21 satisfies the box-constraint (4.9) i.e. 
‖z21 − s21‖∞ ≤ λ, the optimal solution of the QP (4.9) improves the objective:
ŵ21(Z11)⁻¹ŵ12 ≤ z21(Z11)⁻¹z12.
Combining this with the fact that ŵ22 ≥ z22, we see that
ŵ22 − ŵ21(Z11)⁻¹ŵ12 > 0, (8.2)
which implies that the new covariance estimate W ≻ 0.
Remark 3. If the condition ‖Z − S‖∞ ≤ λ appearing in Lemma 3 is violated, then a row/column update of glasso need not maintain positive definiteness of the covariance matrix W. We have encountered many counter-examples showing this; see the discussion below.
The R package implementation of glasso allows the user to specify a warm-start as a tuple (Θ0, W0). This option is typically used in the construction of a path algorithm. If (Θ̂λ, Ŵλ) is provided as a warm-start for λ′ < λ, then the glasso algorithm is not guaranteed to converge. It is easy to find numerical examples by choosing the gap λ − λ′ to be large enough. Among the various examples we encountered, we briefly describe one here; details of the experiment/data and other examples can be found in Appendix A.1. We generated a data matrix Xn×p, with n = 2, p = 5, with iid standard Gaussian entries, and took S as its sample covariance matrix. We solved problem (1.1) using glasso for λ = 0.9 × max_{i≠j} |s_ij|. We then took the estimated covariance and precision matrices Ŵλ and Θ̂λ as a warm-start for the glasso algorithm with λ′ = λ × 0.01. The glasso algorithm failed to converge with this warm-start. We note that ‖Ŵλ − S‖∞ = 0.0402 > λ′ (hence violating the sufficient condition in Lemma 3), and after updating the first row/column via the glasso algorithm we observed that the "covariance matrix" W had negative eigenvalues, leading to a non-convergent algorithm. The above phenomenon is not surprising, and is easy to explain and generalize: if the warm-start fails to satisfy ‖Ŵλ − S‖∞ ≤ λ′, then during the course of the row/column updates the working covariance matrix may lose positive definiteness.
In such a case, the block problems (QPs) may not correspond to valid convex programs (due to the lack of positive definiteness of the quadratic forms). This seems to be the fundamental reason behind the non-convergence of the algorithm. Since Ŵλ solves the dual (4.1), it is necessarily of the form Ŵλ = S + Γ̃, with ‖Γ̃‖∞ ≤ λ. In the light of Lemma 3 and Remark 3, the warm-start needs to be dual-feasible in order to guarantee that the iterates W remain positive definite, and hence for the sub-problems to be well-defined convex programs. Clearly Ŵλ does not satisfy the box-constraint ‖Ŵλ − S‖∞ ≤ λ′ for λ′ < λ. However, in practice the glasso algorithm is usually seen to converge (numerically) when λ′ is quite close to λ. This is probably because the working covariance matrix remains positive definite and the block QPs remain valid convex programs. If the difference between λ′ and λ is large, then the algorithm may very likely get into trouble.
The following lemma establishes that any positive definite matrix can be taken as a warm-start for p-glasso or dp-glasso to ensure a convergent algorithm.
Lemma 4. Suppose Φ ≻ 0 is used as a warm-start for the p-glasso (or dp-glasso) algorithm. Then every row/column update of p-glasso (or dp-glasso) maintains positive definiteness of the working precision matrix Θ.
Proof. Consider updating the pth row/column of the precision matrix. The condition Φ ≻ 0 is equivalent to both Φ11 ≻ 0 and φ22 − φ21(Φ11)⁻¹φ12 > 0. Note that the block Φ11 remains fixed; only the pth row/column of Θ changes. φ21 gets updated to θ̂21, as does θ̂12. From (2.6) the updated diagonal entry θ̂22 satisfies
θ̂22 − θ̂21(Φ11)⁻¹θ̂12 = 1/(s22 + λ) > 0.
Thus the updated matrix Θ̂ remains positive definite. The result for the dp-glasso algorithm follows, since both versions p-glasso and dp-glasso solve the same block coordinate problem.
As exhibited in Lemma 4, both the algorithms dp-glasso and p-glasso are guaranteed to converge from any positive-definite warm start.
This is due to the unconstrained formulation of the primal problem (1.1). glasso really only requires an initialization for W, since it constructs Θ on the fly. Likewise dp-glasso only requires an initialization for Θ. Having the other half of the tuple assists in the block-updating algorithms. For example, glasso solves a series of lasso problems, where Θ play the role as parameters. By supplying Θ along with W, the block-wise lasso problems can be given starting values close to the solutions. The same applies to dp-glasso. In neither case do the pairs have to be inverses of each other to serve this purpose. If we wish to start with inverse pairs, and maintain such a relationship, we have described earlier how O(p 2 ) updates after each block optimization can achieve this. One caveat for glasso is that starting with an inverse pair costs O(p 3 ) operations, since we typically start with W = S + λI. For dp-glasso, we typically start with a diagonal matrix, which is trivial to invert. Experimental results & timing comparisons We compared the performances of algorithms glasso and dp-glasso (both with and without warm-starts) on different examples with varying (n, p) values. While most of the results are presented in this section, some are relegated to the Appendix B. Section 9.1 describes some synthetic examples and Section 9.2 presents comparisons on a real-life micro-array data-set. Synthetic experiments In this section we present examples generated from two different covariance models -as characterized by the covariance matrix Σ or equivalently the precision matrix Θ. We create a data matrix X n×p by drawing n independent samples from a p dimensional normal distribution MVN(0, Σ). The sample covariance matrix is taken as the input S to problem (1.1). The two covariance models are described below: Type-1 The population concentration matrix Θ = Σ −1 has uniform sparsity with approximately 77 % of the entries zero. We created the covariance matrix as follows. 
We generated a matrix B with iid standard Gaussian entries, symmetrized it via ½(B + B′), and set approximately 77% of its entries to zero, obtaining B̃ (say). We then added a scalar multiple of the p-dimensional identity matrix to B̃ to get the precision matrix Θ = B̃ + ηIp×p, with η chosen so that the minimum eigenvalue of Θ is one.
Type-2 This example, taken from [11], is an auto-regressive process of order two, the precision matrix being banded:
θ_ij = 1 if i = j; 0.5 if |i − j| = 1; 0.25 if |i − j| = 2; and 0 otherwise.
For each of the two set-ups, Type-1 and Type-2, we consider twelve different combinations of (n, p). For every (n, p) we solved (1.1) on a grid of twenty λ values linearly spaced on the log-scale, with λ_i = 0.8^i × {0.9 λ_max}, i = 1, . . . , 20, where λ_max = max_{i≠j} |s_ij| is the off-diagonal entry of S with largest absolute value; λ_max is the smallest value of λ for which the solution to (1.1) is a diagonal matrix. Since this article focuses on the glasso algorithm, its properties, and alternatives that stem from the main idea of block-coordinate optimization, we present the performances of the following algorithms:
Dual-Cold glasso with initialization W = S + λIp×p, as suggested in [5].
Dual-Warm The path-wise version of glasso with warm-starts, as suggested in [5]. Although this path-wise version need not converge in general, this was not a problem in our experiments, probably due to the fine grid of λ values.
Primal-Cold dp-glasso with diagonal initialization Θ = (diag(S) + λI)⁻¹.
Primal-Warm The path-wise version of dp-glasso with warm-starts.
We did not include p-glasso in the comparisons above, since p-glasso requires additional rank-one matrix updates after every row/column update, which makes it more expensive. None of the algorithms listed above requires matrix inversions (via rank-one updates). Furthermore, dp-glasso and p-glasso are quite similar, as both perform block-coordinate optimization on the primal.
Hence we only included dp-glasso in our comparisons. We used our own implementation of the glasso and dp-glasso algorithms in R. The entire program is written in R, except the inner block-update solvers, which are the real work-horses:
• For glasso we used the lasso code crossProdLasso written in FORTRAN by [5];
• For dp-glasso we wrote our own FORTRAN code to solve the box QP.
An R package dpglasso that implements dp-glasso is available on CRAN. In the figure and tables that follow, for every algorithm, at a fixed λ, we report the total time taken by all the QPs (the ℓ1-regularized QP for glasso and the box-constrained QP for dp-glasso) till convergence. All computations were done on a Linux machine with model specs: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz.
Convergence Criterion
Since dp-glasso operates on the primal formulation and glasso operates on the dual, to make the convergence criteria comparable across examples we based them on the relative change in the primal objective value f(Θ) of (1.1) across two successive iterations:
(f(Θ_k) − f(Θ_{k−1})) / |f(Θ_{k−1})| ≤ TOL, (9.1)
where one iteration refers to a full sweep across the p rows/columns of the precision matrix (for dp-glasso) and of the covariance matrix (for glasso), and TOL denotes the tolerance level, or level of accuracy, of the solution. To compute the primal objective value for the glasso algorithm, the precision matrix is computed from W via direct inversion (the time taken for inversion and objective-value computation is not included in the timing comparisons). Computing the objective function is quite expensive relative to the computational cost of the iterations. In our experience, convergence criteria based on the relative change in the precision matrix (for dp-glasso) and the covariance matrix (for glasso) seemed to be a practical choice for the examples we considered. However, for the reasons described above, we used criterion (9.1) in the experiments.
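As a concrete (purely illustrative) sketch of this experimental set-up, the snippet below generates a Type-1-style precision matrix, draws Gaussian data from it, and implements the objective f(Θ) of (1.1) together with a relative-change stopping rule in the spirit of (9.1). The function names are ours; here the ℓ1 penalty is applied to all entries of Θ, consistent with w22 = s22 + λ:

```python
import numpy as np

def type1_precision(p, zero_frac=0.77, seed=0):
    """Type-1-style model: symmetrized Gaussian matrix with ~77% of the
    off-diagonal entries zeroed, shifted so the smallest eigenvalue is one."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((p, p))
    B = (B + B.T) / 2.0
    zero = np.triu(rng.random((p, p)) < zero_frac, k=1)
    B[zero | zero.T] = 0.0                       # keep the zero pattern symmetric
    eta = 1.0 - np.linalg.eigvalsh(B).min()      # lift eigenvalues
    return B + eta * np.eye(p)

def primal_objective(Theta, S, lam):
    """f(Theta) = -log det Theta + tr(S Theta) + lam * sum_ij |theta_ij|."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()

def converged(f_new, f_old, tol=1e-4):
    """Relative-change stopping rule, magnitude version of (9.1)."""
    return abs(f_new - f_old) / abs(f_old) <= tol

p = 30
Theta_true = type1_precision(p)
Sigma = np.linalg.inv(Theta_true)                # population covariance
X = np.random.default_rng(1).multivariate_normal(np.zeros(p), Sigma, size=200)
S = X.T @ X / len(X)                             # sample covariance, input to (1.1)
```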
Observations Figure 3 presents the times taken by the algorithms to converge to an accuracy of TOL = 10 −4 on a grid of λ values. The figure shows eight different scenarios with p > n, corresponding to the two different covariance models Type-1 (left panel) and Type-2 (right panel). It is quite evident that dp-glasso with warm-starts (Primal-Warm) outperforms all the other algorithms across all the different examples. All the algorithms converge quickly for large values of λ (typically high sparsity) and become slower with decreasing λ. For large p and small λ, convergence is slow; however for p > n, the non-sparse end of the regularization path is really not that interesting from a statistical viewpoint. Warm-starts apparently do not always help in speeding up the convergence of glasso ; for example see Figure 3 with (n, p) = (500, 1000) (Type 1) and (n, p) = (500, 800) (Type 2). This probably further validates the fact that warm-starts in the case of glasso need to be carefully designed, in order for them to speed-up convergence. Note however, that glasso with the warm-starts prescribed is not even guaranteed to converge -we however did not come across any such instance among the experiments presented in this section. Based on the suggestion of a referee we annotated the plots in Figure 3 with locations in the regularization path that are of interest. For each plot, two vertical dotted lines are drawn which correspond to the λs at which the distance of the estimated precision matrix Θ λ from the population precision matrix is minimized wrt to the · 1 norm (green) and · F norm (blue). The optimal λ corresponding to the · 1 metric chooses sparser models than those chosen by · F ; the performance gains achieved by dp-glasso seem to be more prominent for the latter λ. Table 1 presents the timings for all the four algorithmic variants on the twelve different (n, p) combinations listed above for Type 1. 
For every example, we report the total time till convergence on a grid of twenty λ values for two different tolerance levels: TOL ∈ {10⁻⁴, 10⁻⁵}. Note that dp-glasso returns positive definite and sparse precision matrices even if the algorithm is terminated at a relatively small/moderate accuracy level; this is not the case for glasso. The rightmost column presents the proportion of non-zeros averaged across the entire path of solutions Θ̂λ, where Θ̂λ is obtained by solving (1.1) to a high precision (i.e. 10⁻⁶) by the algorithms glasso and dp-glasso and averaging the results. Again we see that in all the examples dp-glasso with warm-starts is the clear winner among its competitors. For a fixed p, the total time to trace out the path generally decreases with increasing n. There is no clear winner between glasso with warm-starts and glasso without warm-starts. It is often seen that dp-glasso without warm-starts converges faster than both variants of glasso (with and without warm-starts). Table 2 reports the timing comparisons for Type 2. Once again we see that in all the examples Primal-Warm turns out to be the clear winner. For n ≤ p = 1000, we observe that Primal-Warm is generally faster for Type-2 than for Type-1. This, however, is reversed for smaller values of p ∈ {800, 500}. Primal-Cold has a smaller overall computation time for Type-1 than for Type-2.
Table 1 Table showing the performances of the four algorithms glasso (Dual-Warm/Cold) and dp-glasso (Primal-Warm/Cold) for the covariance model Type-1. We present the times (in seconds) required to compute a path of solutions to (1.1) (on a grid of twenty λ values) for different (n, p) combinations and relative errors (as in (9.1)). The rightmost column gives the averaged sparsity level across the grid of λ values.
dp-glasso with warm-starts is consistently the winner across all the examples. (Table 1 reports, for each p/n combination and relative error (TOL), the total time in seconds to compute a path of solutions under Dual-Cold, Dual-Warm, Primal-Cold, and Primal-Warm, together with the average % of zeros in the path.)
In some cases (for example n ≤ p = 1000), we see that Primal-Warm converges much faster, on a relative scale, for Type-2 than for Type-1; this difference is due to the variations in the structure of the covariance matrix.
Micro-array example
We consider the data-set introduced in [1] and further studied in [10, 7]. In this experiment, tissue samples were analyzed using an Affymetrix Oligonucleotide array. The data was processed, filtered, and reduced to a subset of 2000 gene-expression values. The number of Colon Adenocarcinoma tissue samples is n = 62. For the purpose of the experiments presented in this section, we pre-screened the genes to a size of p = 725. We obtained this subset of genes using the idea of exact covariance thresholding introduced in our paper [7]: we thresholded the sample correlation matrix obtained from the 62 × 2000 microarray data-matrix into connected components with a threshold of 0.00364; the genes belonging to the largest connected component formed our pre-screened gene pool of size p = 725.
Table 3 Comparisons among algorithms for a microarray dataset with n = 62 and p = 725, for different tolerance levels (TOL). We took a grid of fifteen λ values; the average % of zeros along the whole path is 90.8.
The results presented in Table 3 show timing comparisons of the four different algorithms Primal-Warm/Cold and Dual-Warm/Cold on a grid of fifteen λ values on the log-scale. Once again we see that Primal-Warm outperforms the others in terms of speed and accuracy. Dual-Warm performs quite well in this example.
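The pre-screening step above relies only on the connected components of the thresholded correlation graph. A self-contained sketch (numpy plus a simple depth-first search; the function name is ours):

```python
import numpy as np

def largest_component(C, tau):
    """Screening by covariance/correlation thresholding (in the spirit of [7]):
    connect variables i and j when |C_ij| > tau, and return the indices of the
    largest connected component of the resulting graph."""
    p = C.shape[0]
    adj = np.abs(C) > tau
    np.fill_diagonal(adj, False)
    seen, best = np.zeros(p, bool), []
    for s in range(p):
        if seen[s]:
            continue
        comp, stack = [], [s]
        seen[s] = True
        while stack:                       # iterative depth-first search
            v = stack.pop()
            comp.append(v)
            for u in np.flatnonzero(adj[v]):
                if not seen[u]:
                    seen[u] = True
                    stack.append(u)
        if len(comp) > len(best):
            best = comp
    return sorted(best)
```

For instance, with a 5 × 5 correlation matrix whose only strong entries link variables {0, 1} and {2, 3, 4}, a threshold between those blocks returns the larger group {2, 3, 4}.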
These have been explained by leveraging the fact that the glasso algorithm is solving the dual of the graphical lasso problem (1.1), by block coordinate ascent. Each block update, itself the solution to a convex program, is solved via its own dual, which is equivalent to a lasso problem. The optimization variable is W, the covariance matrix, rather than the target precision matrix Θ. During the course of the iterations, a working version of Θ is maintained, but it may not be positive definite, and its inverse is not W. Tight convergence is therefore essential, for the solutionΘ to be a proper inverse covariance. There are issues using warm starts with glasso, when computing a path of solutions. Unless the sequence of λs are sufficiently close, since the "warm start"s are not dual feasible, the algorithm can get into trouble. We have also developed two primal algorithms p-glasso and dp-glasso. The former is more expensive, since it maintains the relationship W = Θ −1 at every step, an O(p 3 ) operation per sweep across all row/columns. dp-glasso is similar in flavor to glasso except its optimization variable is Θ. It also solves the dual problem when computing its block update, in this case a box-QP. This box-QP has attractive sparsity properties at both ends of the regularization path, as evidenced in some of our experiments. It maintains a positive definite Θ throughout its iterations, and can be started at any positive definite matrix. Our experiments show in addition that dp-glasso is faster than glasso. An R package dpglasso that implements dp-glasso is available on CRAN. With q denoting the maximum off-diagonal entry of S (in absolute value), we solved (1.1) using glasso at λ = 0.9 × q. The covariance matrix for this λ was taken as a warm-start for the glasso algorithm with λ ′ = λ × 0.01. 
The smallest eigen-value of the working covariance matrix W produced by the glasso algorithm, upon updating the first row/column, was −0.002896128, which is clearly undesirable for the convergence of the algorithm glasso. This is why the algorithm glasso breaks down.

Example 2. This example is similar to the above, with (n, p) = (10, 50), the seed of the random number generator in R being set to set.seed(2008), and X n×p the data-matrix with iid Gaussian entries. If the covariance matrix W λ which solves problem (1.1) with λ = 0.9 × max i≠j |s ij | is taken as a warm-start to the glasso algorithm with λ′ = λ × 0.1, the algorithm fails to converge. As in the previous example, after the first row/column update the working covariance matrix has negative eigen-values.

Appendix B: More examples and comparisons

This section is a continuation of Section 9, in that it provides further examples comparing the performance of the algorithms glasso and dp-glasso. The experimental data is generated as follows. For a fixed value of p, we generate a matrix A p×p with random Gaussian entries. The matrix is symmetrized by A ← (A + A′)/2. Approximately half of the off-diagonal entries of the matrix are set to zero, uniformly at random. All the eigen-values of the matrix A are lifted so that the smallest eigen-value is zero. The noiseless version of the precision matrix is given by Θ = A + τ I p×p. We generated the sample covariance matrix S by adding symmetric positive semi-definite random noise N to Θ −1; i.e. S = Θ −1 + N, where this noise is generated in the same manner as A. We considered four different values of p ∈ {300, 500, 800, 1000} and two different values of τ ∈ {1, 4}. For every (p, τ) combination we considered a path of twenty λ values on the geometric scale. For every such case four experiments were performed: Primal-Cold, Primal-Warm, Dual-Cold and Dual-Warm (as described in Section 9).
Each combination was run 5 times, and the results averaged, to avoid dependencies on machine loads. Figure 4 shows the results. Overall, dp-glasso with warm starts performs the best, especially at the extremes of the path. We gave some explanation for this in Section 6. For the largest problems (p = 1000) their performances are comparable in the central part of the path (though dp-glasso dominates), but at the extremes dp-glasso dominates by a large margin.

Fig 4. The timings in seconds for the four different algorithmic versions glasso (with and without warm-starts) and dp-glasso (with and without warm-starts) for a grid of twenty λ values on the log-scale. The horizontal axis is indexed by the proportion of zeros in the solution.

Lemma 1. Consider the primal problem (1.1) and its stationarity conditions (2.1). These are equivalent to the stationarity conditions for the box-constrained SDP

$$\underset{\tilde\Gamma:\ \|\tilde\Gamma\|_\infty \le \lambda}{\text{maximize}}\quad g(\tilde\Gamma) := \log\det(S + \tilde\Gamma) + p \qquad (4.1)$$

under the transformation $S + \tilde\Gamma = \Theta^{-1}$.

Lemma 2. Assume $W_{11} \succ 0$. The stationarity equations

$$W_{11}\hat\beta + s_{12} + \lambda\hat\gamma_{12} = 0, \qquad (4.7)$$

Theorem 4.1. glasso performs block-coordinate ascent on the box-constrained SDP (4.1), the Lagrange dual of the primal problem (1.1). Each of the block steps is itself a box-constrained QP, which glasso optimizes via its Lagrange dual.

$\hat\theta_{12} = -\Theta_{11}(s_{12} + \hat\gamma)/\hat w_{22}$; (c) solve for $\hat\theta_{22}$ using (5.5).

Fig 2. Figure illustrating some negative properties of glasso using a typical numerical example. [Left Panel] The precision matrix produced after every row/column update need not be the exact inverse of the working covariance matrix; the squared Frobenius norm of the error is plotted across iterations. [Right Panel] The estimated precision matrix Θ produced by glasso need not be positive definite along iterations; the plot shows the minimal eigen-value.
Remark 4. A simple consequence of Lemmas 3 and 4 is that the QPs arising in the process, namely the ℓ1-regularized QPs (2.13), (3.1) and the box-constrained QPs (4.9) and (5.3), are all valid convex programs, since all the respective matrices $W_{11}$, $\Theta_{11}^{-1}$ and $W_{11}^{-1}$, $\Theta_{11}$ appearing in the quadratic forms are PD.

The covariance model used here has entries equal to 0.5 if |j − i| = 1, i = 2, ..., (p − 1); 0.25 if |j − i| = 2, i = 3, ..., (p − 2); 1 if i = j, i = 1, ..., p; and 0 otherwise. (a) p = 1000, n ∈ {1500, 1000, 500}. (b) p = 800, n ∈ {1000, 800, 500}. (c) p = 500, n ∈ {800, 500, 200}. (d) p = 200, n ∈ {500, 200, 50}.

Fig 3. The timings in seconds for the four different algorithmic versions: glasso (with and without warm-starts) and dp-glasso (with and without warm-starts) for a grid of λ values on the log-scale. [Left Panel] Covariance model for Type-1; [Right Panel] Covariance model for Type-2. The horizontal axis is indexed by the proportion of zeros in the solution. The vertical dashed lines correspond to the optimal λ values for which the estimated errors $\|\hat\Theta_\lambda - \Theta\|_1$ (green) and $\|\hat\Theta_\lambda - \Theta\|_F$ (blue) are minimum.

Fig 4. The timings in seconds for the four different algorithmic versions glasso (with and without warm-starts) and dp-glasso (with and without warm-starts) for a grid of twenty λ values on the log-scale. The horizontal axis is indexed by the proportion of zeros in the solution.

... corresponding to the covariance matrix W produced by the glasso algorithm as a function of the iteration index (each column/row update). [Middle Panel] The successive differences of the primal objective values; the zero crossings indicate non-monotonicity.
[Right Panel] The successive differences in the dual objective values; there are no zero crossings, indicating that glasso produces a monotone sequence of dual objective values.

Fig 1. [Left panel] The objective values of the primal criterion (1.1) and the dual criterion (4.1).

Table 2 shows comparative timings of the four algorithmic variants of glasso and dp-glasso for the covariance model in Type-2. This table is similar to Table 1, displaying results for Type-1. dp-glasso with warm-starts consistently outperforms all its competitors.

This (subset) data-matrix of size (n, p) = (62, 725) is used for our experiments.

Table 3 (total time in seconds to compute a path of solutions):

relative error (TOL)   Dual-Cold   Dual-Warm   Primal-Cold   Primal-Warm
10^-3                  515.15      406.57      462.58        334.56
10^-4                  976.16      677.76      709.83        521.44

(Footnote: this is the largest value of the threshold for which the size of the largest connected component is smaller than 800.)

Acknowledgements

We would like to thank Robert Tibshirani and his research group at Stanford Statistics for helpful discussions. We are also thankful to the anonymous referees whose comments led to improvements in this presentation.

Appendix A: Additional numerical illustrations and examples

This section complements the examples provided in the paper with further experiments and illustrations.

A.1. Examples: Non-convergence of glasso with warm-starts

This section illustrates with examples that warm-starts for the glasso need not converge. This is a continuation of examples presented in Section 8. Example 1.
We took (n, p) = (2, 5) and, setting the seed of the random number generator in R as set.seed(2008), we generated a data-matrix X n×p with iid standard Gaussian entries. The sample covariance matrix S is given below:

References

Alon, U., Barkai, N., Notterman, D. A., Gish, K., Ybarra, S., Mack, D. and Levine, A. J. (1999). Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences of the United States of America 96 6745-6750.

Banerjee, O., Ghaoui, L. E. and d'Aspremont, A. (2008). Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. Journal of Machine Learning Research 9 485-516. MR2417243

Beck, A. and Teboulle, M. (2009). A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sciences 2 183-202. MR2486527

Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press. MR2061575

Friedman, J., Hastie, T. and Tibshirani, R. (2007). Sparse inverse covariance estimation with the graphical lasso.
Biostatistics 9 432-441.

Hastie, T., Tibshirani, R. and Friedman, J. (2009). The Elements of Statistical Learning, Second Edition: Data Mining, Inference, and Prediction (Springer Series in Statistics), 2nd ed. Springer New York. MR2722294

Mazumder, R. and Hastie, T. (2012). Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso. Journal of Machine Learning Research 13 781-794. MR2913718

Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Annals of Statistics 34 1436-1462. MR2278363

Nesterov, Y. (2007). Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain.

Rothman, A. J., Bickel, P. J., Levina, E. and Zhu, J. (2008). Sparse Permutation Invariant Covariance Estimation. Electronic Journal of Statistics 2 494-515. MR2417391

Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19-35. MR2367824
[]
[ "Effects of confinement and crowding on folding of model proteins published in: Biosystems. 2008 Dec;94(3):248-52" ]
[ "M Wojciechowski \nInstitute of Physics\nPolish Academy of Sciences\nAl. Lotników 32/4602-668WarsawPoland\n", "Marek Cieplak \nInstitute of Physics\nPolish Academy of Sciences\nAl. Lotników 32/4602-668WarsawPoland\n" ]
[ "Institute of Physics\nPolish Academy of Sciences\nAl. Lotników 32/4602-668WarsawPoland", "Institute of Physics\nPolish Academy of Sciences\nAl. Lotników 32/4602-668WarsawPoland" ]
[]
We perform molecular dynamics simulations for a simple coarse-grained model of crambin placed inside of a softly repulsive sphere of radius R. The confinement makes folding at the optimal temperature slower and affects the folding scenarios, but both effects are not dramatic. The influence of crowding on folding are studied by placing several identical proteins within the sphere, denaturing them, and then by monitoring refolding. If the interactions between the proteins are dominated by the excluded volume effects, the net folding times are essentially like for a single protein. An introduction of inter-proteinic attractive contacts hinders folding when the strength of the attraction exceeds about a half of the value of the strength of the single protein contacts. The bigger the strength of the attraction, the more likely is the occurrence of aggregation and misfolding.
10.1016/j.biosystems.2008.06.016
[ "https://arxiv.org/pdf/0811.4581v1.pdf" ]
15,951,464
0811.4581
11d266c2cf500747cee4ff1899f447e16501615c
Effects of confinement and crowding on folding of model proteins (published in: Biosystems. 2008 Dec;94(3):248-52)

27 Nov 2008

M. Wojciechowski and Marek Cieplak
Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland

We perform molecular dynamics simulations for a simple coarse-grained model of crambin placed inside of a softly repulsive sphere of radius R. The confinement makes folding at the optimal temperature slower and affects the folding scenarios, but both effects are not dramatic. The influence of crowding on folding is studied by placing several identical proteins within the sphere, denaturing them, and then monitoring refolding. If the interactions between the proteins are dominated by the excluded volume effects, the net folding times are essentially like for a single protein. An introduction of inter-proteinic attractive contacts hinders folding when the strength of the attraction exceeds about a half of the value of the strength of the single protein contacts. The bigger the strength of the attraction, the more likely is the occurrence of aggregation and misfolding.

I. INTRODUCTION

There is a growing interest in studies of biomolecules enclosed within a limited space. One reason is that almost all life processes take place in compartments such as cells, where concentrations of proteins, lipids, sugars, and nucleic acids are large (Ellis and Minton, 2006). Such conditions are also desired in artificial life systems such as the liposomes that allow for protein synthesis within their interior (Murtas et al., 2007). Chaperonin cages (Hartl and Hayer-Hartl, 2002), which assist in folding and refolding processes of proteins, offer an example of compartmentalization at a still smaller length scale.
Another reason for the interest in the confinement effects is provided by recent advances in nanotechnology and resulting novel encapsulation techniques. These involve, for instance, reverse micelles, which are mimetic systems of biological membranes composed of amphiphilic molecules. These molecules self-organize so that the polar head-groups point inward and hydrocarbon chains face the organic solvent (Luisi et al., 1988; Matzke et al., 1992; Melo et al., 2003). The amount of the entrapped water is controlled by experimental conditions and a typical radius of the corresponding sphere can be as small as ∼20 Å. The water molecules at the inner surface have a propensity to organize (Moilanen et al., 2007) and the conditions within need not be uniform (Baruah et al., 2006). When it comes to larger confined systems, there are many microfluidic ways to deposit droplets on surfaces, e.g. in the context of the protein and DNA microarrays (Duroux et al., 2007). It is thus interesting to undertake theoretical studies of proteins that are confined. A simple way to introduce confinement is through a sphere (Baumketner et al., 2003; Rathore et al., 2006), or a cage (Takagi et al., 2003), which are repulsive to proteins located on the inside. A sphere which has attractive hydrophobic and repulsive hydrophilic patches on the inside has also been discussed (Jewett et al., 2004) to elucidate the workings of chaperonins. One can also generate cavities by using many spheres, repulsive on the outside, to imitate the effects of crowding (Cheung et al., 2005). Most of the studies carried out so far have been focused on thermodynamics. The confinement has been found to lead to a greater thermodynamic stability, broader and taller specific heat and more compact unfolded conformations (Rathore et al., 2006; Takagi et al., 2003). Crowding is expected to enhance these effects even further (Cheung et al., 2005). In this paper, we consider the kinetics of folding of a protein.
This problem has already been studied by Baumketner et al. (2003) and Jewett et al. (2004). In the case of the confining repulsive sphere (Baumketner et al., 2003), the wall potential was represented by

$$V_{wall,B}(r) = \frac{4\varepsilon_{wall}\pi R_s}{5r}\left[\left(\frac{\sigma}{r - R_s}\right)^{10} - \left(\frac{\sigma}{r + R_s}\right)^{10}\right], \qquad (1)$$

where R_s is the radius of the sphere, r is the distance of a C_α atom from the center of the sphere, ε_wall is the strength of the potential, and σ is taken to be equal to 3.8 Å, i.e. to the distance between two consecutive C_α atoms in a protein. The folding time, determined at various temperatures, has been found to depend on R_s in a complicated manner. For instance, at temperatures below the optimal temperature it decreases with increasing R_s, but it increases above this temperature. In the case of the non-uniform sphere (Jewett et al., 2004) the physics involved depends on the strength of attraction to the hydrophobic patches in the model. If the attractive patches act as strongly as the hydrophobic interactions in the protein, the protein sticks to the wall and folding is arrested, i.e. it takes forever. A reduction in the strength of the attraction leads to a lowering of the folding time until a minimum is reached, and then the folding time increases to a finite value when the wall becomes purely repulsive. Here, we focus on the effects of confinement and crowding within a softly repulsive sphere. We investigate the kinetics of folding of a single protein as a function of the radius of the sphere, and then also as a function of the interactions between atoms belonging to different proteins when several proteins are confined together. In order to house the proteins, we consider a wall represented by the truncated and shifted Lennard-Jones potential

$$V_{wall}(r) = \begin{cases} 4\varepsilon_{wall}\left[\left(\frac{\sigma}{R-r}\right)^{12} - \left(\frac{\sigma}{R-r}\right)^{6}\right] + \varepsilon_{wall}, & (R - r) < r_0 \\ 0, & (R - r) \ge r_0 \end{cases} \qquad (2)$$

where R will be referred to as the radius of the sphere, and σ = r_0 · 2^{−1/6}.
We take r_0 = 4 Å (which is equal to the size of the repulsive core in the non-native contacts, as defined below). The specific form of a purely repulsive potential representing the sphere should not matter much for the kinetics of folding. However, what may matter more is the choice of a model of the protein. Baumketner et al. (2003) use a 27-bead minimalistic model proposed by Honeycutt and Thirumalai (1992) in which pair-wise interactions depend on whether the amino acids involved are hydrophobic or polar. We use another simple coarse-grained model, a Go-like model (Go and Abe, 1981), in an implementation developed in refs. (Cieplak and Hoang, 2003; Cieplak et al., 2004; Cieplak and Sulkowska, 2005; Hoang and Cieplak, 2000; Kwiecinska and Cieplak, 2005; Szymczak and Cieplak, 2006), and perform molecular dynamics studies of folding and unfolding. The Go-like models are rather imperfect tools to use in the context of folding, but are often found to be adequate to settle various qualitative issues, especially of a comparative nature. Their advantage is that they allow for a thorough statistical analysis of time-dependent processes involving large conformational changes. As an illustration of this approach, we consider crambin. This is an α−β protein comprising 46 amino acids. In its native state, the radius of gyration is about 9.7 Å and the largest distance between a pair of its C_α atoms is 30 Å. The minimum value of R that does not violate the steric constraints and still allows for a meaningful conformational transition is 18 Å, and the corresponding plot of the V_wall potential is shown in Figure 1, together with a schematic representation of the native conformation. We find that confinement under optimal folding conditions makes folding last longer, but not by more than a factor of 2.
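Both confining potentials can be evaluated numerically as a sanity check on the reconstructed Eqs. (1) and (2). The sketch below is our own (the helper names are ours); the defaults follow the text: σ = 3.8 Å in Eq. (1) and r_0 = 4 Å in Eq. (2), reproducing the steep rise of V_wall near the R = 18 Å wall shown in Figure 1.

```python
import math

def v_wall_b(r, R_s, eps_wall=1.0, sigma=3.8):
    """Confining-sphere potential of Eq. (1) (Baumketner et al., 2003);
    the even exponent keeps both terms positive inside the sphere."""
    pref = 4.0 * eps_wall * math.pi * R_s / (5.0 * r)
    return pref * ((sigma / (r - R_s)) ** 10 - (sigma / (r + R_s)) ** 10)

def v_wall(r, R, eps_wall=1.0, r0=4.0):
    """Truncated, shifted Lennard-Jones wall of Eq. (2): zero for beads
    farther than r0 from the wall, continuous at the cutoff."""
    d = R - r                           # distance of the bead from the wall
    if d >= r0:
        return 0.0
    sigma = r0 * 2.0 ** (-1.0 / 6.0)    # places the LJ minimum exactly at d = r0
    return 4.0 * eps_wall * ((sigma / d) ** 12 - (sigma / d) ** 6) + eps_wall
```

With σ = r_0 · 2^{−1/6}, the unshifted Lennard-Jones term equals −ε_wall at d = r_0, so adding ε_wall makes the potential vanish continuously at the cutoff.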
We then consider the effects of crowding by placing up to twelve identical proteins inside the sphere and studying refolding of the thermally denatured conformations. If a protein acts on another protein only through their excluded volumes, then the folding process, at optimality, is almost the same as for the single protein case. If one introduces attractive contacts between the proteins then, above a certain strength, folding is hindered more substantially, even under the optimal conditions, since the conformational collapse now competes with aggregation.

II. METHODS

The details of the approach are explained in refs. (Sulkowska and Cieplak, 2007, 2008). Briefly, each amino acid is represented by a bead located at the C_α position. The beads are tethered into a chain by the harmonic interactions. The local backbone stiffness is represented by a chirality potential (Kwiecinska and Cieplak, 2005) that favors the native sense of the chirality, i.e. the native values of the dihedral angles. The interactions between the amino acids are divided into the repulsive non-native contacts and attractive native contacts. The division is based on the absence or presence of the atomic overlaps in the experimentally determined native conformation. The attractive native contacts between amino acids i and j at distance r_ij are described by the Lennard-Jones potential $V^{LJ}_{ij} = 4\varepsilon\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]$, where the length parameter σ_ij is determined so that the minimum in the potential agrees with the distance determined experimentally. The energy parameter ε should be of order 1-1.6 kcal/mol. We set ε_wall to be equal to ε. The model incorporates implicit solvent effects through temperature-controlled random forces and strong velocity-dependent damping. Room temperature situations are considered to arise in the vicinity of k_B T/ε of order 0.3 (k_B is the Boltzmann constant and T is the temperature).
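The native-contact term above is a standard 12-6 Lennard-Jones form; a minimal sketch (the function name is ours) that also makes the stated calibration of σ_ij explicit:

```python
def v_native(r, sigma_ij, eps=1.0):
    """Lennard-Jones native-contact potential V_ij^LJ of the model;
    the minimum sits at r = 2**(1/6) * sigma_ij with depth -eps."""
    return 4.0 * eps * ((sigma_ij / r) ** 12 - (sigma_ij / r) ** 6)
```

To make the minimum agree with an experimentally determined contact distance d, one sets σ_ij = d / 2^{1/6}.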
The time scale, τ , in the molecular simulations is of order of 1 ns as it is set by time needed to cover distances of order of a typical σ ij through diffusion and not through a ballistic motion. When studying folding, we determine the median time, t f old , needed to establish all native contacts for the first time when starting from an unfolded conformation. A native contact is assumed to be established when r ij < 1.5 σ ij . t f old is determined based on at least 301 trajectories. A convenient way to represent the sequencing of the folding events is through the scenario diagrams in which one shows average times when a specific contact gets established for the first time. The contacts are labelled by their sequential distance |i − j|. Simulations of folding require a prior generation of extended denatured structures to fold from. These are obtained by unfolding the native structure by applying a high temperature. Time scales required to arrive at a state in which all native contacts are broken are too long to achieve in the simulations. Instead, we take the criterion of all contacts with |i − j| > 5 being broken (see a discussion in ref. (Sulkowska and Cieplak , 2007)). The corresponding median unfolding time is denoted by t unf . Confinement puts restrictions on the possible starting conformations since they must fit the sphere. Thus we generate the starting sets by first placing a protein (or proteins) in a native state in a sphere of radius R 0 , then setting k B T /ε at 1.0, and finally by storing conformations obtained at the end of a 10000τ -long process of the unfolding dynamics (the corresponding t unf ranged between 100 and 2000 τ depending on R 0 ). One can simulate refolding either for R = R 0 , which is the simplest situation, but one may also consider refolding of the R 0 -generated structures in a larger space when R > R 0 and, in particular, for R = ∞. 
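The bookkeeping behind these measures, the r_ij < 1.5 σ_ij contact criterion and the median folding time over trajectories (and, later, over molecules), is straightforward; the sketch below uses our own function names and illustrative inputs.

```python
import statistics

def fraction_native(distances, sigmas):
    """Fraction Q of native contacts present, a contact counting as
    established when r_ij < 1.5 * sigma_ij."""
    formed = sum(1 for r, s in zip(distances, sigmas) if r < 1.5 * s)
    return formed / len(sigmas)

def t_fold(times_per_trajectory):
    """Median first-passage folding time over all molecules in all
    trajectories (each inner list holds one trajectory's per-molecule times)."""
    return statistics.median(t for traj in times_per_trajectory for t in traj)
```

For a single protein each inner list has one entry, so the same routine covers both the n = 1 and the crowded cases.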
The resulting unfolded structures are governed by the value of R_0, as shown in Figure 2, which provides structure characterization through the average radius of gyration, ⟨R_g⟩, and the average fraction of the native contacts that are still present, Q. Notice that even for very large values of R_0 there is always a small fraction of unbroken native contacts. These are usually associated with the α helices. In the case of n > 1 molecules (n up to 12 was considered), we place them together at the center of the sphere and then move them away from one another in small steps along arbitrary directions until they stop overlapping (see Figure 3). The system is then unfolded thermally for 10000 τ and the resulting structures (see Figure 4) are used for refolding studies. t_fold for n molecules was determined by calculating the folding time of the individual molecules and then by taking the median over the molecules and over various trajectories. Each trajectory was stopped either when a time cutoff was exceeded or when each molecule was declared to arrive at the native conformation at some point during the evolution. Another possible criterion would involve requiring a simultaneous establishment of all native contacts in the system. This would yield folding times that are significantly longer, and comparing systems with different values of n would not be relevant in the context of crowding, since the behavior of a single molecule is what is of interest here. The scenarios of folding were determined as in the case of one molecule; the averaging just gets enhanced by the factor of n.

III. FOLDING OF A SINGLE PROTEIN

Figure 5 shows t_fold as a function of T for the three choices of R_0: 18, 30, and 50 Å. The left panel corresponds to refolding in an unrestricted space whereas the right panel to refolding in the sphere with the original radius of confinement. In the unrestricted space, t_fold is nearly independent of R_0. On the other hand, the presence of the sphere makes a change.
There is not much of a difference in folding between R_0 = R = 30 Å and 50 Å, but there is a noticeable increase in t_fold as one considers smaller values of R: t_fold increases from 150 to 200 τ when R = 18 Å is considered. The differences become more visible when comparing folding from starting conformations corresponding to a given R_0 and then evolved in the sphere or in the unrestricted space. For R_0 = 18 Å, the change is from 137 τ to 200 τ, i.e. by a factor of 1.5. We interpret this finding as originating from the wall exerting a restriction on the process of the collapse: the protein may bounce off the wall on the way to its globular form and may need time to find another path. Figure 6 shows that the folding scenarios show more substantial sensitivity to the value of R_0 of the starting conformations than the folding times do (both for the unrestricted refolding, shown in the left panel, and for R = R_0). The observation is that the tighter the initial confinement, the faster the establishment of the individual native contacts. The complete folding, however, requires a simultaneous establishment of all contacts and this circumstance is sensitive to confinement to a much lesser degree.

IV. FOLDING OF SEVERAL PROTEINS

We now discuss the effects of crowding and we place n crambin molecules together into a sphere of radius R = 36 Å. n varies between 1 and 12. We monitor the folding process when R = R_0. We first consider the simplest case in which the only way one molecule knows about the other is through the finite size of the hard cores (r_0 = 4 Å) associated with the beads. Fig. 7 shows that under the conditions of optimal folding, t_fold essentially does not depend on n, i.e. when folding goes well, processes in one molecule do not affect movements of the other. We now repeat these calculations after introducing attractive interactions between the molecules.
Since these molecules are all identical, the simplest way to do it is by introducing inter-protein attractive contact interactions in the following way. If there is a contact between i and j in one molecule λ, then we generate a similar contact between i_λ in molecule λ and j_κ in molecule κ. Each intermolecular contact is assigned an amplitude of ε_I in the corresponding Lennard-Jones potential. We vary ε_I/ε between 0 and 1. The value of σ_ij is kept the same as for the single protein. The corresponding potential is given by

$$V_I(r_{ij'}) = \begin{cases} 4\varepsilon\left[\left(\frac{\sigma_{ij}}{r_{ij'}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij'}}\right)^{6}\right] + \varepsilon - \varepsilon_I, & r_{ij'} < 2^{1/6}\sigma_{ij} \quad \text{(repulsive)} \\ 4\varepsilon_I\left[\left(\frac{\sigma_{ij}}{r_{ij'}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij'}}\right)^{6}\right], & r_{ij'} \ge 2^{1/6}\sigma_{ij} \quad \text{(attractive)} \end{cases} \qquad (3)$$

where the prime in j′ indicates a different molecule. The resulting folding times are shown in Figure 8. It is seen that t_fold is not affected by the inter-protein attraction as long as ε_I/ε does not exceed a threshold value, and then essentially all trajectories fold. An example of a folded conformation is shown in Figure 9. For larger values of ε_I, the molecules influence one another, which makes t_fold longer and increases the frequency of the misfolding events, as shown in the inset of Figure 8. The threshold value decreases with n. For n = 4 it is ∼0.5 and for n = 12 it is ∼0.3. The misfolding is primarily due to aggregation and entanglement. An example of a misfolded conformation is shown in Figure 10. In most trajectories, the secondary structures (like helices 7-19 and 23-30) in individual molecules are established first. At this stage, misfolding may occur since the intermolecular interactions tend to bring the secondary structures from various molecules together, generating a steric hindrance to further folding. It should be noted that the presence of a substantial ε_I also affects the folding scenario, as seen in Figure 11 for ε_I/ε = 0 and 0.7, the case for which t_fold varies by a factor of about 6.
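A useful sanity check on Eq. (3) is that the two branches meet continuously at r = 2^{1/6} σ_ij, where the well depth seen by a pair of molecules is ε_I rather than ε. The sketch below is our own illustrative code.

```python
def v_inter(r, sigma_ij, eps=1.0, eps_i=0.5):
    """Inter-protein contact potential of Eq. (3): full-strength repulsive
    core, with the attraction scaled down to eps_i beyond the LJ minimum."""
    lj = (sigma_ij / r) ** 12 - (sigma_ij / r) ** 6
    if r < 2.0 ** (1.0 / 6.0) * sigma_ij:
        return 4.0 * eps * lj + eps - eps_i   # repulsive branch
    return 4.0 * eps_i * lj                   # attractive branch
```

At the matching point lj = −1/4, so the left branch gives −ε + ε − ε_I = −ε_I and the right branch gives 4ε_I(−1/4) = −ε_I; the shift ε − ε_I is exactly what glues the branches together.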
It is seen that the inter-protein interactions delay establishment of the contacts as they compete with the internal interactions. However, if the contact establishment times are normalized by t_fold, to bring out the relative changes in the structure of the scenario diagram, then such relative times get shorter.

V. CONCLUSIONS

Our studies within the simple model employed here suggest that both confinement and crowding can affect the folding process of a protein. The confinement-related effects on the folding time are weak unless there are attractive interactions between the proteins. It is also expected that increasing the number of proteins in a given sphere (n bigger than 4) will eventually affect folding significantly. This point remains to be studied further. The crowding effects may get enhanced further when one accounts for the hydrodynamic interactions in the system. One way to include these interactions is discussed in ref. (Szymczak and Cieplak, 2007). It would be worthwhile to study other model proteins to check for any universalities in the behavior. This paper is dedicated to Prof. Zbigniew Grzywna on his 60th birthday and was motivated by a discussion with Nancy E. Levinger. The work involved has been supported by the grant N N202 0852 33 from the Ministry of Science and Higher Education in Poland.

FIG. 1: The potential of the spherical wall with R = 18 Å used in this paper. r denotes the distance of a bead from the center of the sphere.

FIG. 2: Geometrical characteristics of the unfolded structures of a single protein, as obtained for k_B T/ε = 1.0, as a function of the radius of the sphere in which they were generated.

FIG. 3: Four model molecules of crambin in their native state placed in the sphere.

FIG. 4: An example of an unfolded conformation of four proteins.

FIG. 5: The median folding times for the model crambin as described in the main text. The left panel is for folding in an infinitely large sphere.
The values of R0 characterize the nature of the starting unfolded conformations. The right panel is for spheres of three sizes as indicated. The folding scenarios in the unrestricted space (the left panel) and confined space (the right panel) for R0=18Å (the solid symbols) and 50Å (the open symbols). The temperature is set to kBT /ε = 0.3 which corresponds to the temperature of optimal folding. The symbols α1 + α2 and β1 + β2 indicate contacts between the two α-helices and two β-strands that are present in crambin. Folding time for an ensemble of n proteins in the spherical cavity of R=36Å when there are no attractive interactions between the proteins. The values of n range from 1 to 12 as indicated. Median folding time for n proteins at kBT /ε=0.3 as a function of the inter-protein interaction strength εI . The inset shows the fraction of misfolded trajectories. FIG. 9: An example of a properly folded ensemble of four proteins for εI /ε = 0.5. FIG. 10: An example of a misfolded and entangled conformation of four proteins corresponding to εI /ε The folding scenario for four proteins at kBT /ε=0.3 for εI /ε=0 (crosses) and 0.7 (open squares). Effect of confinement in chaperonin assisted protein folding: rate enhancement by decreasing the roughness of the folding energy landscape. A Baukmketner, A I Jewett, J E Shea, J. Mol. Biol. 332Baukmketner A., Jewett A. I., and Shea J. E., 2006, Effect of confinement in chaperonin assisted protein folding: rate enhancement by decreasing the roughness of the folding energy landscape. J. Mol. Biol., 332:701-13. When is water not water? Exploring water confined in large reverse micelles using a highly charged inorganic molecular probe. B Baruah, J M Roden, M Sedgwick, N M Correa, D C Crans, N E Levinger, J. Am. Chem. Soc. 128Baruah B., Roden J. M., Sedgwick M., Correa N. M., Crans D. C., and Levinger N. E., 2006, When is water not water? 
Exploring water confined in large reverse micelles using a highly charged inorganic molecular probe. J. Am. Chem. Soc., 128: 12758-65. Molecular crowding enhances native state stability and refolding rates of globular proteins. M S Cheung, D Klimov, Thirumalai D , Proc. Natl. Acad. Sci. USA. 102Cheung M. S., Klimov D., and Thirumalai D., 2005, Molecular crowding enhances native state stability and refolding rates of globular proteins. Proc. Natl. Acad. Sci. USA, 102:4753-8. Universality classes in folding times of proteins. M Cieplak, T X Hoang, Biophys. J. 84Cieplak M. and Hoang T. X., 2003, Universality classes in folding times of proteins. Biophys. J., 84:475-488. Thermal effects in stretching of go-like models of titin and secondary structures. M Cieplak, T X Hoang, M O Robbins, Proteins: Struct. Funct. Genet. 56Cieplak M., Hoang T. X., and Robbins M . O., 2004, Thermal effects in stretching of go-like models of titin and secondary structures. Proteins: Struct. Funct. Genet., 56:285-97. Thermal unfolding of proteins. M Cieplak, J I Sulkowska, J. Chem. Phys. 123194908Cieplak M. and Sulkowska J. I., 2005, Thermal unfolding of proteins. J. Chem. Phys., 123:194908. Light-induced immobilisation of biomolecules as an attractive alternative to microdroplet dispensing-based arraying technologies. M Duroux, E Skovsen, M T Neves-Petersen, L Duroux, L Gurevich, S B Petersen, Proteomics. 7Duroux M., Skovsen E., Neves-Petersen M. T., Duroux L., Gurevich L., and Petersen S. B., 2007, Light-induced immobilisation of biomolecules as an attractive alternative to microdroplet dispensing-based arraying technologies. Proteomics, 7:3491-99, 2007. Protein aggregation in crowded environments. R J Ellis, A P Minton, Biol Chem. 387Ellis R. J. and Minton A. P., 2006, Protein aggregation in crowded environments. Biol Chem., 387:485-97. Noninteracting local-structure model of folding and unfolding transition in globular proteins. N Go, H Abe, I. Formulation. Biopolymers. 20Go N. 
and Abe H., 1981, Noninteracting local-structure model of folding and unfolding transition in globular proteins. I. Formulation. Biopolymers, 20:991-1011. Molecular chaperones in the cytosol: from nascent chain to folded protein. F U Hartl, M Hayer-Hartl, Science. 295Hartl F. U. and Hayer-Hartl M., 2002, Molecular chaperones in the cytosol: from nascent chain to folded protein. Science, 295:1852-8. Molecular dynamics of folding of secondary structures in go-like models of proteins. T X Hoang, M Cieplak, J. Chem. Phys. 112Hoang T. X. and Cieplak M., 2000, Molecular dynamics of folding of secondary structures in go-like models of proteins. J. Chem. Phys., 112:6851-62. The nature of folded states of globular proteins. J D Honeycutt, D Thirumalai, Biopolymers. 32Honeycutt J. D. and Thirumalai D., 1992, The nature of folded states of globular proteins. Biopolymers, 32:695-709. Accelerated folding in the weak hydrophobic environment of a chaperonin cavity: creation of an alternate fast folding pathway. A I Jewett, A Baumketner, Shea J.-E , Proc. Natl. Acad. Sci. USA. 101Jewett A. I., Baumketner A., and Shea J.-E., 2004, Accelerated folding in the weak hydrophobic environment of a chaperonin cavity: creation of an alternate fast folding pathway. Proc. Natl. Acad. Sci. USA, 101:13192-7. Chirality and protein folding. J I Kwiecinska, M Cieplak, J. Phys. Cond. Mat. 17Kwiecinska J. I. and Cieplak M., 2005, Chirality and protein folding. J. Phys. Cond. Mat., 17:S1565-80. Reverse micelles as hosts for proteins and small molecules. P L Luisi, M Giomini, M P Pileni, Robinson B H , Biochem. Biophys. Acta. 947Luisi P. L., Giomini M., Pileni M. P., and Robinson B. H., 1988, Reverse micelles as hosts for proteins and small molecules. Biochem. Biophys. Acta, 947:209-46. Mechanisms of protein solubilization in reverse micelles. S F Matzke, A L Creagh, C A Haynes, J M Prausnitz, H W Blanch, Biotechnol. Bioeng. 40Matzke S. F., Creagh A. L., Haynes C. A., Prausnitz J. M., and Blanch H. 
W., 1992, Mechanisms of protein solubilization in reverse micelles. Biotechnol. Bioeng., 40:91-102. Cutinase-AOT interactions in reverse micelles: the effect of 1-hexanol. E P Melo, S M B Costa, J M S Cabral, P Fojan, S B Petersen, Chem. Phys. Lipids. 124Melo E. P., Costa S. M. B., Cabral J. M. S., Fojan P., and Petersen S. B., 2003, Cutinase-AOT interactions in reverse micelles: the effect of 1-hexanol. Chem. Phys. Lipids, 124:37-47. Confinement or the nature of the interface? Dynamics of nanoscopic water. D E Moilanen, N E Levinger, D B Spry, M D Fayer, J. Am. Chem. Soc. 129Moilanen D. E., Levinger N. E., Spry D. B., and Fayer M. D., 2007, Confinement or the nature of the interface? Dynamics of nanoscopic water. J. Am. Chem. Soc., 129:14311-18. Protein synthesis in liposomes with a minimal set of enzymes. G Murtas, Y Kuruma, P Bianchini, A Diaspro, P G Luisi, Biochem., Biophys. Res. Commmun. 363Murtas G., Kuruma Y., Bianchini P., Diaspro A., Luisi P. G., 2007, Protein synthesis in liposomes with a minimal set of enzymes. Biochem., Biophys. Res. Commmun., 363:12-7. Confinement effects on the thermodynamics of protein folding: Monte Carlo simulations. N Rathore, Iv T A Knotts, J J De Pablo, Biophys. J. 90Rathore N., Knotts IV T. A., and de Pablo J. J., 2006, Confinement effects on the thermodynamics of protein folding: Monte Carlo simulations. Biophys. J., 90:1767-73. Mechanical stretching of proteins -a theoretical survey of the Protein Data Bank. J I Sulkowska, Cieplak M , J. Phys.: Cond.Matter. 19283201Sulkowska J. I., and Cieplak M., 2007, Mechanical stretching of proteins -a theoretical survey of the Protein Data Bank. J. Phys.: Cond.Matter, 19:283201. Stretching to understand proteins -a survey of the Protein Data Bank. J I Sulkowska, Cieplak M , Biophys. J. 94Sulkowska J. I., and Cieplak M., 2008, Stretching to understand proteins -a survey of the Protein Data Bank. Biophys. J., 94:6-13. Stretching of proteins in a uniform flow. P Szymczak, M Cieplak, J. 
Chem. Phys. 125164903Szymczak P. and Cieplak M., 2006, Stretching of proteins in a uniform flow. J. Chem. Phys., 125:164903. Influence of hydrodynamic interactions on mechanical unfolding of proteins. P Szymczak, M Cieplak, J. Phys.: Cond. Matter. 19285224Szymczak P. and Cieplak M., 2007, Influence of hydrodynamic interactions on mechanical unfolding of proteins. J. Phys.: Cond. Matter, 19:285224. How protein thermodynamics and folding mechanisms are altered by the chaperonin cage: molecular simulations. F Takagi, N Koga, Takada S , Proc. Natl. Acad. Sci. USA. 100Takagi F., Koga N., and Takada S., 2003, How protein thermodynamics and folding mechanisms are altered by the chaperonin cage: molecular simulations. Proc. Natl. Acad. Sci. USA, 100:11367-72.
Finite sample inference for empirical Bayesian methods

Hien Duy Nguyen (School of Mathematics and Physics, University of Queensland, St. Lucia 4067; Department of Mathematics and Statistics, La Trobe University, Bundoora 3086) and Mayetri Gupta (School of Mathematics and Statistics, University of Glasgow, Glasgow G12 8QQ)

arXiv:2302.14531v1 [stat.ME] 28 Feb 2023. DOI: 10.1111/sjos.12643.

Abstract: In recent years, empirical Bayesian (EB) inference has become an attractive approach for estimation in parametric models arising in a variety of real-life problems, especially in complex and high-dimensional scientific applications. However, compared to the relative abundance of available general methods for computing point estimators in the EB framework, the construction of confidence sets and hypothesis tests with good theoretical properties remains difficult and problem specific. Motivated by the universal inference framework of Wasserman et al. [2020], we propose a general and universal method, based on holdout likelihood ratios, and utilizing the hierarchical structure of the specified Bayesian model, for constructing confidence sets and hypothesis tests that are finite sample valid. We illustrate our method through a range of numerical studies and real data applications, which demonstrate that the approach is able to generate useful and meaningful inferential statements in the relevant contexts.
1 Introduction

Let $D_n = (X_i)_{i\in[n]}$ be our data, presented as a sequence of $n \in \mathbb{N} = \{1, 2, \ldots\}$ random variables $X_i \in \mathbb{X}$ ($i \in [n] = \{1, \ldots, n\}$). For each $i \in [n]$, let $\Theta_i \in \mathbb{T}$ be a random variable with probability density function (PDF) $\pi(\theta_i; \psi)$, where $\psi \in \mathbb{P}$ is a hyperparameter.
Furthermore, suppose that $[X_i \mid \Theta_i = \theta_i]$ arises from a family of data generating processes (DGPs) with conditional PDFs $f(x_i \mid \Theta_i = \theta_i) = f(x_i \mid \theta_i)$, and that the sequence $((X_i, \Theta_i))_{i\in[n]}$ is independent. Suppose that $(\Theta_i)_{i\in[n]}$ is realized at $\vartheta^*_n = (\theta^*_i)_{i\in[n]}$, where each realization $\theta^*_i$ ($i \in [n]$) is unknown, and where $\psi$ is also unknown. Let $I \subset [n]$, and write $\vartheta^*_I = (\theta^*_i)_{i\in I}$. When $I = \{i\}$, we shall use the shorthand $I = i$, where it causes no confusion.

Under this setup, for significance level $\alpha \in (0,1)$, we wish to draw inference regarding the realized sequence $\vartheta^*_n$ by way of constructing $100(1-\alpha)\%$ confidence sets $C^\alpha_i(D_n)$ that satisfy
$$\Pr\nolimits_{\theta^*_i}\left[\theta^*_i \in C^\alpha_i(D_n)\right] \ge 1 - \alpha, \tag{1}$$
and p-values $P_I(D_n)$ for testing null hypotheses $H_0: \vartheta^*_I \in \mathbb{T}_{I,0} \subset \mathbb{T}^{|I|}$ that satisfy
$$\sup_{\vartheta^*_I \in \mathbb{T}_{I,0}} \Pr\nolimits_{\vartheta^*_I}\left[P_I(D_n) \le \alpha\right] \le \alpha, \tag{2}$$
where $\Pr_{\theta^*_i}$ and $\Pr_{\vartheta^*_I}$ denote probability measures consistent with the PDF $f(x_i \mid \theta^*_i)$, for each $i \in [n]$, and for all $i \in I$, respectively. That is, for a measurable set $A \subset \mathbb{X}^n$, and assuming absolute continuity of $\Pr_{\vartheta^*_I}$ with respect to some measure $m$ (typically the Lebesgue or counting measure), we can write
$$\Pr\nolimits_{\vartheta^*_I}(A) = \int_A \prod_{i \in I} f(x_i \mid \theta^*_i) \prod_{j \notin I} f(x_j \mid \theta_j)\, \mathrm{d}m(d_n), \tag{3}$$
where $\theta_j$ is an arbitrary element of $\mathbb{T}$, for each $j \notin I$.

The setup above falls within the framework of empirical Bayesian (EB) inference, as exposited in the volumes of Maritz and Lwin [1989], Ahmed and Reid [2001], Serdobolskii [2008], Efron [2010], and Bickel [2020]. Over the years, there has been a sustained interest in the construction and computation of EB point estimators for $\vartheta^*_n$, in various contexts, with many convenient and general computational tools now made available, for instance, via the software of Johnstone and Silverman [2005], Leng et al. [2013], Koenker and Gu [2017], and Narasimhan and Efron [2020].
Unfortunately, the probabilistic properties of $\vartheta^*_n$ tend to be difficult to characterize, making the construction of confidence sets and hypothesis tests with good theoretical properties relatively less routine than the construction of point estimators. When restricted to certain classes of models, such constructions are nevertheless possible, as exemplified by the works of Casella and Hwang [1983], Morris [1983a], Laird and Louis [1987], Datta et al. [2002], Tai and Speed [2006], Hwang et al. [2009], Hwang and Zhao [2013], and Yoshimori and Lahiri [2014], among others.

In this work, we adapt the universal inference framework of Wasserman et al. [2020] to produce valid confidence sets and p-values with properties (1) and (2), respectively, for arbitrary estimators of $\vartheta^*_n$. As with the constructions of Wasserman et al. [2020], the produced inferential methods are all valid for finite sample size $n$ and require no assumptions beyond correctness of model specification. The confidence sets and p-values arise by construction of holdout likelihood ratios that can be demonstrated to have the e-value property, as described in Vovk and Wang [2021] (see also the s-values of Grunwald et al., 2020, and the betting values of Shafer, 2021). Here, we are able to take into account the hierarchical structure of the specified Bayesian model by using the fact that parameterized e-values are closed when averaged with respect to an appropriate probability measure (cf. Vovk, 2007, and Kaufmann and Koolen, 2018). Due to the finite sample correctness of our constructions, we shall refer to our methods as finite sample EB (FSEB) techniques.

Along with our methodological developments, we also demonstrate the application of our FSEB techniques in numerical studies and real data applications.
These applications include the use of FSEB methods for constructing confidence intervals (CIs) for the classic mean estimator of Stein [1956], and testing and CI construction in Poisson-gamma models and beta-binomial models, as per Koenker and Gu [2017] and Hardcastle and Kelly [2013], respectively. Real data applications are demonstrated via the analysis of insurance data from Haastrup [2000] and differential methylation data from Cruickshanks et al. [2013]. In these real and synthetic applications, we show that FSEB methods, satisfying conditions (1) and (2), are able to generate useful and meaningful inferential statements.

We proceed as follows. In Section 2, we introduce the confidence set and p-value constructions for drawing inference regarding EB models. In Section 3, numerical studies of simulated data are used to demonstrate the applicability and effectiveness of FSEB constructions. In Section 4, FSEB methods are applied to real data to further show the practicality of the techniques. Lastly, in Section 5, we provide discussions and conclusions regarding our results.

2 Confidence sets and hypothesis tests

We retain the notation and setup from Section 1. For each subset $I \subset [n]$, let us write $D_I = (X_i)_{i\in I}$ and $\bar{D}_I = (X_i)_{i\in[n]\setminus I}$. Suppose that we have available some estimator of $\psi$ that depends only on $\bar{D}_I$ (and not $D_I$), which we shall denote by $\hat{\psi}_{I,n}$. Furthermore, for fixed $\psi$, write the integrated and unintegrated likelihoods of the data $D_I$ as
$$L_I(\psi) = \prod_{i\in I} \int_{\mathbb{T}} f(X_i \mid \theta_i)\, \pi(\theta_i; \psi)\, \mathrm{d}\mathfrak{n}(\theta_i) \tag{4}$$
and
$$l_I(\vartheta_I) = \prod_{i\in I} f(X_i \mid \theta_i), \tag{5}$$
respectively, where $\vartheta_I = (\theta_i)_{i\in I}$ (here, $\vartheta_{\{i\}} = \theta_i$). We note that in (4), we have assumed that $\pi(\cdot\,; \psi)$ is a density function with respect to some measure $\mathfrak{n}$ on $\mathbb{T}$. Define the ratio statistic
$$R_{I,n}(\vartheta_I) = L_I(\hat{\psi}_{I,n}) / l_I(\vartheta_I), \tag{6}$$
and consider sets of the form $C^\alpha_i(D_n) = \{\theta \in \mathbb{T} : R_{i,n}(\theta) \le 1/\alpha\}$. The following lemma is an adaptation of the main idea of Wasserman et al. [2020] for the context of empirical Bayes estimators, and allows us to show that $C^\alpha_i(D_n)$ satisfies property (1).

Lemma 1. For each $I \subset [n]$ and fixed sequence $\vartheta^*_n \in \mathbb{T}^n$, $\mathrm{E}_{\vartheta^*_I}[R_{I,n}(\vartheta^*_I)] = 1$.

Proof. Let $d_I$ and $\bar{d}_I$ be realizations of $D_I$ and $\bar{D}_I$, respectively. Then, using (3), write
$$\mathrm{E}_{\vartheta^*_I}[R_{I,n}(\vartheta^*_I)] = \int_{\mathbb{X}^n} R_{I,n}(\vartheta^*_I) \prod_{i\in I} f(x_i \mid \theta^*_i) \prod_{j\notin I} f(x_j \mid \theta_j)\, \mathrm{d}m(d_n)$$
$$\overset{\text{(i)}}{=} \int_{\mathbb{X}^{n-|I|}} \left[\int_{\mathbb{X}^{|I|}} \frac{L_I(\hat{\psi}_{I,n})}{l_I(\vartheta^*_I)} \prod_{i\in I} f(x_i \mid \theta^*_i)\, \mathrm{d}m(d_I)\right] \prod_{j\notin I} f(x_j \mid \theta_j)\, \mathrm{d}m(\bar{d}_I)$$
$$\overset{\text{(ii)}}{=} \int_{\mathbb{X}^{n-|I|}} \left[\int_{\mathbb{X}^{|I|}} L_I(\hat{\psi}_{I,n})\, \mathrm{d}m(d_I)\right] \prod_{j\notin I} f(x_j \mid \theta_j)\, \mathrm{d}m(\bar{d}_I)$$
$$\overset{\text{(iii)}}{=} \int_{\mathbb{X}^{n-|I|}} \prod_{j\notin I} f(x_j \mid \theta_j)\, \mathrm{d}m(\bar{d}_I) \overset{\text{(iv)}}{=} 1.$$
Here, (i) is true by definition of (6), (ii) is true by definition of (5), (iii) is true by the fact that (4) is a probability density function on $\mathbb{X}^{|I|}$ with respect to $m$ (recall that $\hat{\psi}_{I,n}$ depends only on $\bar{D}_I$), and (iv) is true by the fact that $\prod_{j\notin I} f(x_j \mid \theta_j)$ is a probability density function on $\mathbb{X}^{n-|I|}$ with respect to $m$.

Proposition 1. For each $i \in [n]$, $C^\alpha_i(D_n)$ is a $100(1-\alpha)\%$ confidence set, in the sense that $\Pr_{\theta^*_i}[\theta^*_i \in C^\alpha_i(D_n)] \ge 1 - \alpha$.

Proof. For each $i$, Markov's inequality states that $\Pr_{\theta^*_i}[R_{i,n}(\theta^*_i) \ge 1/\alpha] \le \alpha\, \mathrm{E}_{\theta^*_i}[R_{i,n}(\theta^*_i)] = \alpha$, which implies that $\Pr_{\theta^*_i}[\theta^*_i \in C^\alpha_i(D_n)] = \Pr_{\theta^*_i}[R_{i,n}(\theta^*_i) \le 1/\alpha] \ge 1 - \alpha$, by Lemma 1.

Next, we consider the testing of null hypotheses $H_0: \vartheta^*_I \in \mathbb{T}_{I,0}$ against an arbitrary alternative $H_1: \vartheta^*_I \in \mathbb{T}_{I,1} \subseteq \mathbb{T}^{|I|}$. To this end, we define the maximum unintegrated likelihood estimator of $\vartheta^*_I$, under $H_0$, as
$$\hat{\vartheta}_I \in \left\{\vartheta_I \in \mathbb{T}_{I,0} : l_I(\vartheta_I) = \sup_{\vartheta \in \mathbb{T}_{I,0}} l_I(\vartheta)\right\}. \tag{7}$$
Using (7), and again letting $\hat{\psi}_{I,n}$ be an arbitrary estimator of $\psi$ depending only on $\bar{D}_I$, we define the ratio test statistic $T_I(D_n) = L_I(\hat{\psi}_{I,n}) / l_I(\hat{\vartheta}_I)$. The following result establishes the fact that the p-value $P_I(D_n) = 1/T_I(D_n)$ has the correct size under $H_0$.

Proposition 2. For any $\alpha \in (0,1)$ and $\vartheta^*_I \in \mathbb{T}_{I,0}$, $\Pr_{\vartheta^*_I}[P_I(D_n) \le \alpha] \le \alpha$.

Proof. Assume that $\vartheta^*_I \in \mathbb{T}_{I,0}$. By Markov's inequality, we have
$$\Pr\nolimits_{\vartheta^*_I}[T_I(D_n) \ge 1/\alpha] \le \alpha\, \mathrm{E}_{\vartheta^*_I}[T_I(D_n)] = \alpha\, \mathrm{E}_{\vartheta^*_I}\!\left[\frac{L_I(\hat{\psi}_{I,n})}{l_I(\hat{\vartheta}_I)}\right] \overset{\text{(i)}}{\le} \alpha\, \mathrm{E}_{\vartheta^*_I}\!\left[\frac{L_I(\hat{\psi}_{I,n})}{l_I(\vartheta^*_I)}\right] \overset{\text{(ii)}}{=} \alpha,$$
where (i) is true due to the fact that $l_I(\hat{\vartheta}_I) \ge l_I(\vartheta^*_I)$, by the definition of (7), and (ii) is true due to Lemma 1.

We note that Propositions 1 and 2 are empirical Bayes analogues of Theorems 1 and 2 from Wasserman et al. [2020], which provide guarantees for universal inference confidence set and hypothesis test constructions, respectively. Furthermore, the use of Lemma 1 in the proofs implies that the CIs constructed via Proposition 1 are e-CIs, as defined by Xu et al. [2022], and the p-values obtained via Proposition 2 can be said to be e-value calibrated, as per the definitions of Wang and Ramdas [2022].

3 FSEB examples and some numerical results

To demonstrate the usefulness of the FSEB results from Section 2, we shall present a number of synthetic and real world applications of the confidence and testing constructions. All of the computation is conducted in the R programming environment (R Core Team, 2020) and replicable scripts are made available at https://github.com/hiendn/Universal_EB. Where unspecified, numerical optimization is conducted using the optim() or optimize() functions, in the case of multivariate and univariate optimization, respectively.

3.1 Stein's problem

We begin by studying the estimation of normal means, as originally considered in Stein [1956]. Here, we largely follow the exposition of Efron [2010, Ch. 1] and note that the estimator falls within the shrinkage paradigm exposited in Serdobolskii [2008]. We consider this setting due to its simplicity and the availability of a simple EB-based method to compare our methodology against. Let $((X_i, \Theta_i))_{i\in[n]}$ be IID and, for each $i \in [n]$, let $\Theta_i \sim \mathrm{N}(0, \psi^2)$ ($\psi^2 > 0$) and $[X_i \mid \Theta_i = \theta_i] \sim \mathrm{N}(\theta_i, 1)$, where $\mathrm{N}(\mu, \sigma^2)$ is the normal law with mean $\mu \in \mathbb{R}$ and variance $\sigma^2 > 0$.
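Before deriving the FSEB interval for this model, it is instructive to check Lemma 1 by simulation: marginally, $X_i \sim \mathrm{N}(0, 1+\psi^2)$, so the integrated likelihood of the held-out point is $\phi(X_n; 0, 1+\hat{\psi}^2_{-n})$, and the average of the ratio $R_{I,n}(\theta^*_n)$ over replications should be close to 1. The following is an illustrative sketch (assuming numpy; the parameter values are arbitrary and this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

n, psi2, theta_star, reps = 100, 0.25, 0.7, 50_000
# Marginally X_i ~ N(0, 1 + psi2), so D_{n-1} can be drawn directly;
# X_n is drawn with its parameter fixed at the realization theta_star.
d_rest = rng.normal(0.0, np.sqrt(1.0 + psi2), (reps, n - 1))
x_n = rng.normal(theta_star, 1.0, reps)
# method-of-moments estimator of psi^2 from the held-out data only
psi2_hat = np.maximum(0.0, d_rest.var(axis=1, ddof=1) - 1.0)
ratio = norm_pdf(x_n, 0.0, 1.0 + psi2_hat) / norm_pdf(x_n, theta_star, 1.0)
print(ratio.mean())  # close to 1, in line with Lemma 1
```

Because $\hat{\psi}^2_{-n}$ uses only the held-out data $D_{n-1}$, the conditional expectation of the ratio given those data is exactly 1, which is the key step (iii) in the proof of Lemma 1.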
We assume that $\psi^2$ is unknown, and that we observe data $D_n$ and wish to construct CIs for the realizations $\theta^*_n$, which characterize the DGP of the observations $X_n$.

Following Efron [2010, Sec. 1.5], when $\psi^2$ is known, the posterior distribution of $[\Theta_n \mid X_n = x_n]$ is $\mathrm{N}(g(\psi^2) x_n, g(\psi^2))$, where $g(\psi^2) = \psi^2/(1+\psi^2)$. Using the data $D_n$, we have the fact that $\sum_{i=1}^{n} X_i^2 \sim (\psi^2 + 1)\chi^2_n$, where $\chi^2_\nu$ is the chi-squared distribution with $\nu$ degrees of freedom. This implies a method-of-moments estimator for $g$ of the form
$$\bar{g}_n = 1 - (n-2)\Big/\sum_{i=1}^{n} X_i^2,$$
in the case of unknown $\psi^2$. We can simply approximate the distribution of $[\Theta_n \mid D_n]$ as $\mathrm{N}(\bar{g}_n X_n, \bar{g}_n)$, although this approximation ignores the variability of $\bar{g}_n$. As noted by Efron [2010, Sec. 1.5], via a hierarchical Bayesian interpretation using an objective Bayesian prior, we may instead deduce the more accurate approximate distribution
$$\mathrm{N}\!\left(\bar{g}_n X_n,\; \bar{g}_n + \frac{2 X_n^2 (1 - \bar{g}_n)^2}{n-2}\right). \tag{8}$$
Specifically, Efron [2010] considers the hyperparameter $\psi^2$ as being a random variable, say $\Psi^2$, and places a so-called objective (or non-informative) prior on $\Psi^2$; in particular, the improper prior assumption that $\Psi^2 + 1 \sim \mathrm{Uniform}(0, \infty)$ is made. The approximation then provides $100(1-\alpha)\%$ posterior credible intervals for $\Theta_n$ of the form
$$\bar{g}_n X_n \pm \zeta_{1-\alpha/2} \sqrt{\bar{g}_n + \frac{2 X_n^2 (1 - \bar{g}_n)^2}{n-2}}, \tag{9}$$
where $\zeta_{1-\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution. This posterior result can then be taken as an approximate $100(1-\alpha)\%$ confidence interval for $\theta^*_n$.

Now, we wish to apply the FSEB results from Section 2. Here, $I = \{n\}$, and from the setup of the problem, we have $f(x_n \mid \theta_n) = \phi(x_n; \theta_n, 1)$ and $\pi(\theta_n; \psi) = \phi(\theta_n; 0, \psi^2)$, where $\phi(x; \mu, \sigma^2)$ is the normal PDF with mean $\mu$ and variance $\sigma^2$.
Thus,
$$L_I(\psi) = \int_{\mathbb{R}} \phi(X_n; \theta, 1)\, \phi(\theta; 0, \psi^2)\, \mathrm{d}\theta = \phi(X_n; 0, 1 + \psi^2)$$
and $l_I(\theta_n) = \phi(X_n; \theta_n, 1)$, which yields a ratio statistic of the form
$$R_{I,n}(\theta_n) = \phi(X_n; 0, 1 + \hat{\psi}^2_{-n}) \big/ \phi(X_n; \theta_n, 1),$$
when combined with an appropriate estimator $\hat{\psi}^2_{-n}$ of $\psi^2$, using only $\bar{D}_{I,n} = D_{n-1}$. We can obtain the region $C^\alpha_I(D_n)$ by solving $R_{I,n}(\theta_n) \le 1/\alpha$ to obtain
$$(X_n - \theta)^2 \le 2\log(1/\alpha) + \log(1 + \hat{\psi}^2_{-n}) + \frac{X_n^2}{1 + \hat{\psi}^2_{-n}},$$
which, by Proposition 1, yields the $100(1-\alpha)\%$ CI for $\theta^*_n$:
$$X_n \pm \sqrt{2\log(1/\alpha) + \log(1 + \hat{\psi}^2_{-n}) + \frac{X_n^2}{1 + \hat{\psi}^2_{-n}}}. \tag{10}$$
We shall consider implementations of the CI of form (10) using the estimator $\hat{\psi}^2_{-n} = \max\{0, s^2_{-n} - 1\}$, where $s^2_{-n}$ is the sample variance of $\bar{D}_{I,n}$, and $s^2_{-n} - 1$ is the method-of-moments estimator of $\psi^2$. The maximum operator stops the estimator from becoming negative and causes no problems in the computation of (10).

We now compare the performances of the CIs of forms (9) and (10). To do so, we shall consider data sets of sizes $n \in \{10, 100, 1000\}$, $\psi^2 \in \{1^2, 5^2, 10^2\}$, and $\alpha \in \{0.05, 0.005, 0.0005\}$. For each triplet $(n, \psi^2, \alpha)$, we repeat the computation of (9) and (10) 1000 times and record the coverage probability and average relative widths of the intervals (computed as the width of (10) divided by that of (9)). The results of our experiment are presented in Table 1. [Table 1 not reproduced here; its footnote states that the results on three of its lines are computed from 968, 967, and 969 replicates, respectively, from top to bottom, due to negative estimates of the standard error in the computation of (9).]

From Table 1, we observe that the CIs of form (9) tended to produce intervals with the desired levels of coverage, whereas the FSEB CIs of form (10) tended to be conservative and contained the parameter of interest in almost all replications.
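Interval (10) and the plug-in estimator $\hat{\psi}^2_{-n}$ are simple to implement. The sketch below (assuming numpy; a small-scale version of the experiment, not the authors' R scripts) computes the interval and checks its coverage:

```python
import numpy as np

rng = np.random.default_rng(1)

def fseb_ci(x_n, psi2_hat, alpha=0.05):
    # half-width from inverting R_{I,n}(theta) <= 1/alpha, as in (10)
    half = np.sqrt(2.0 * np.log(1.0 / alpha)
                   + np.log(1.0 + psi2_hat)
                   + x_n ** 2 / (1.0 + psi2_hat))
    return x_n - half, x_n + half

# small coverage experiment: psi^2 = 25, n = 100, 2000 replications
n, psi2, reps, alpha = 100, 25.0, 2000, 0.05
theta = rng.normal(0.0, np.sqrt(psi2), (reps, n))
x = rng.normal(theta, 1.0)
psi2_hat = np.maximum(0.0, x[:, :-1].var(axis=1, ddof=1) - 1.0)
lo, hi = fseb_ci(x[:, -1], psi2_hat, alpha)
coverage = np.mean((lo <= theta[:, -1]) & (theta[:, -1] <= hi))
print(coverage)  # at least 1 - alpha, by Proposition 1
```

As in Table 1, the empirical coverage is conservative, typically well above the nominal level.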
The price that is paid for this conservativeness is obvious when viewing the relative widths, which imply that, for 95% CIs, the FSEB CIs of form (10) are twice as wide, on average, when compared to the CIs of form (9). However, the relative widths decrease as $\alpha$ gets smaller, implying that the intervals perform relatively similarly when a high level of confidence is required. We further observe that $n$ and $\psi^2$ had little effect on the performances of the intervals, except in the case when $n = 10$ and $\psi^2 = 1$, whereupon it was possible for the intervals of form (9) to not be computable in some cases.

From these results we can make a number of conclusions. Firstly, if one is willing to make the necessary hierarchical and objective Bayesian assumptions, as stated in Efron [2010, Sec. 1.5], then the intervals of form (9) provide very good performance. However, without those assumptions, we can still obtain reasonable CIs that have correct coverage via the FSEB methods from Section 2. Furthermore, these intervals become relatively more efficient compared to (9) when higher levels of confidence are desired. Lastly, when $n$ is small and $\psi^2$ is also small, the intervals of form (9) can become uncomputable, and thus one may consider the use of (10) as an alternative.

3.2 Poisson-gamma count model

The following example is taken from Koenker and Gu [2017] and was originally studied in Norberg [1989] and then subsequently in Haastrup [2000]. In this example, we firstly consider IID parameters $(\Theta_i)_{i\in[n]}$ generated with the gamma DGP $\Theta_i \sim \mathrm{Gamma}(a, b)$, for each $i \in [n]$, where $a > 0$ and $b > 0$ are the shape and rate hyperparameters, respectively, which we put into $\psi$. Then, for each $i$, we suppose that the data $D_n = (X_i)_{i\in[n]}$, depending on the covariate sequence $w_n = (w_i)_{i\in[n]}$, have the Poisson DGP $[X_i \mid \Theta_i = \theta_i] \sim \mathrm{Poisson}(\theta_i w_i)$, where $w_i > 0$. We again wish to use the data $D_n$ to estimate the realization $\theta^*_n$ of $\Theta_n$, which characterizes the DGP of $X_n$.
Under the specification above, for each $i$, we have the fact that $(X_i, \Theta_i)$ has the joint PDF
$$f(x_i, \theta_i; \psi) = \frac{b^a}{\Gamma(a)}\, \theta_i^{a-1} \exp(-b\theta_i)\, \frac{(\theta_i w_i)^{x_i} \exp(-\theta_i w_i)}{x_i!}, \tag{11}$$
which we can marginalize to obtain
$$f(x_i; \psi) = \binom{x_i + a - 1}{x_i} \left(\frac{b}{w_i + b}\right)^{a} \left(\frac{w_i}{w_i + b}\right)^{x_i}, \tag{12}$$
which can be seen as a Poisson-gamma mixture model. We can then construct the likelihood of $D_n$ using expression (12), from which we may compute maximum likelihood estimates $\hat{\psi}_n = (\hat{a}_n, \hat{b}_n)$ of $\psi$. Upon noting that (11) implies the conditional expectation $\mathrm{E}[\Theta_i \mid X_i = x_i] = (x_i + a)/(w_i + b)$, we obtain the estimator for $\theta^*_n$:
$$\hat{\theta}_n = \frac{X_n + \hat{a}_n}{w_n + \hat{b}_n}. \tag{13}$$

3.2.1 Confidence intervals

We again wish to apply the general result from Section 2 to construct CIs. Firstly, we have $I = \{n\}$,
$$f(x_n \mid \theta_n) = \frac{(\theta_n w_n)^{x_n} \exp(-\theta_n w_n)}{x_n!}, \quad \text{and} \quad \pi(\theta_n; \psi) = \frac{b^a}{\Gamma(a)}\, \theta_n^{a-1} \exp(-b\theta_n).$$
As per (12), we can write
$$L_I(\psi) = \binom{X_n + a - 1}{X_n} \left(\frac{b}{w_n + b}\right)^{a} \left(\frac{w_n}{w_n + b}\right)^{X_n}.$$
Then, since $l_I(\theta_n) = f(X_n \mid \theta_n)$, we have
$$R_{I,n}(\theta_n) = \frac{L_I(\hat{\psi}_{-n})}{l_I(\theta_n)} = \binom{X_n + \hat{a}_{-n} - 1}{X_n} \left(\frac{\hat{b}_{-n}}{w_n + \hat{b}_{-n}}\right)^{\hat{a}_{-n}} \left(\frac{w_n}{w_n + \hat{b}_{-n}}\right)^{X_n} \frac{X_n!}{(\theta_n w_n)^{X_n} \exp(-\theta_n w_n)},$$
when combined with an estimator $\hat{\psi}_{-n} = (\hat{a}_{-n}, \hat{b}_{-n})$ of $\psi$, using only $\bar{D}_{I,n} = D_{n-1}$. For any $\alpha \in (0,1)$, we then obtain a $100(1-\alpha)\%$ CI for $\theta^*_n$ by solving $R_{I,n}(\theta_n) \le 1/\alpha$, which can be done numerically. We shall use the MLE of $\psi$, computed with the data $\bar{D}_{I,n}$ and the marginal PDF (12), as the estimator $\hat{\psi}_{-n}$.

To demonstrate the performance of the CI construction above, we conduct the following numerical experiment. We generate data sets consisting of $n \in \{10, 100, 1000\}$ observations characterized by hyperparameters $\psi = (a, b) \in \{(2,2), (2,5), (5,2)\}$, and we compute intervals using significance levels $\alpha \in \{0.05, 0.005, 0.0005\}$. Here, we generate $w_n$ IID uniformly between 0 and 10. For each triplet $(n, \psi, \alpha)$, we repeat the construction of our CIs 1000 times and record the coverage probability and average width for each case.
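The inequality $R_{I,n}(\theta_n) \le 1/\alpha$ can be inverted numerically: on the log scale it reads $\log l_I(\theta_n) \ge \log L_I(\hat{\psi}_{-n}) - \log(1/\alpha)$, and since $l_I$ is unimodal in $\theta_n$ with mode $X_n/w_n$, the two endpoints can be found by bisection. A sketch (standard library only; the estimates $\hat{a}_{-n}$, $\hat{b}_{-n}$ are taken as given rather than computed by maximum likelihood, and $X_n \ge 1$ is assumed):

```python
import math

def log_marginal(x, w, a, b):
    # log of the negative-binomial marginal (12)
    return (math.lgamma(x + a) - math.lgamma(a) - math.lgamma(x + 1)
            + a * math.log(b / (w + b)) + x * math.log(w / (w + b)))

def log_lik(theta, x, w):
    # log of the Poisson likelihood l_I(theta) = (theta w)^x exp(-theta w) / x!
    return x * math.log(theta * w) - theta * w - math.lgamma(x + 1)

def fseb_ci_poisson(x, w, a_hat, b_hat, alpha=0.05):
    assert x >= 1, "sketch assumes at least one count"
    c = log_marginal(x, w, a_hat, b_hat) - math.log(1.0 / alpha)
    g = lambda t: log_lik(t, x, w) - c  # g(theta) >= 0 inside the CI
    mode = x / w
    if g(mode) < 0:
        return None  # no solution to R <= 1/alpha; this can occur in practice
    lo, hi = mode, mode
    while g(lo) > 0:
        lo /= 2.0
    while g(hi) > 0:
        hi *= 2.0
    def bisect(inside, outside):
        for _ in range(100):
            mid = 0.5 * (inside + outside)
            inside, outside = (mid, outside) if g(mid) > 0 else (inside, mid)
        return 0.5 * (inside + outside)
    return bisect(mode, lo), bisect(mode, hi)
```

The lower endpoint is strictly positive because $x \ge 1$ forces $l_I \to 0$ as $\theta_n \to 0$, and the `None` branch corresponds to cases where no $\theta_n$ satisfies the inequality.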
The results of the experiment are reported in Table 2. [Table 2 not reproduced here; its footnote states that the results on three of its lines are computed from 999, 999, and 998 replicates, respectively, due to there being no solutions to the inequality $R_{I,n}(\theta_n) \le 1/\alpha$, with respect to $\theta_n > 0$, in some cases.] From Table 2, we observe that the empirical coverage of the CIs is higher than the nominal value, and the CIs are thus behaving as per the conclusions of Proposition 1. As expected, we also find that increasing the nominal confidence level increases the coverage proportion, but at the cost of increasing the lengths of the CIs. From the usual asymptotic theory of maximum likelihood estimators, we anticipate that increasing $n$ will decrease the variance of the estimator $\hat{\psi}_{-n}$. However, as in Section 3.1, this does not appear to have any observable effect on either the coverage proportion or the lengths of the CIs.

3.2.2 Hypothesis tests

Next, we consider testing the null hypothesis $H_0: \theta^*_{n-1} = \theta^*_n$. To this end, we use the hypothesis testing framework from Section 2. That is, we let $I = \{n-1, n\}$ and estimate $\psi$ via the maximum likelihood estimator $\hat{\psi}_{I,n} = (\hat{a}_{I,n}, \hat{b}_{I,n})$, computed from the data $\bar{D}_{I,n} = D_{n-2}$. We can write
$$L_I(\hat{\psi}_{I,n}) = \prod_{i=n-1}^{n} \binom{X_i + \hat{a}_{I,n} - 1}{X_i} \left(\frac{\hat{b}_{I,n}}{w_i + \hat{b}_{I,n}}\right)^{\hat{a}_{I,n}} \left(\frac{w_i}{w_i + \hat{b}_{I,n}}\right)^{X_i},$$
$$l_I(\vartheta^*_I) = \prod_{i=n-1}^{n} \frac{(\theta^*_i w_i)^{X_i} \exp(-\theta^*_i w_i)}{X_i!},$$
and $\vartheta^*_I = (\theta^*_{n-1}, \theta^*_n)$. We are also required to compute the maximum likelihood estimator of $\vartheta^*_I$, under $H_0$, as per (7), which can be written as
$$\hat{\vartheta}_I \in \left\{\vartheta = (\theta, \theta) : l_I(\vartheta) = \sup_{\theta > 0} \prod_{i=n-1}^{n} \frac{(\theta w_i)^{X_i} \exp(-\theta w_i)}{X_i!}\right\}.$$
Using the components above, we define the test statistic $T_I(D_n) = L_I(\hat{\psi}_{I,n}) / l_I(\hat{\vartheta}_I)$, from which we can derive the p-value $P_I(D_n) = 1/T_I(D_n)$ for testing $H_0$. To demonstrate the application of this test, we conduct another numerical experiment.
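The test statistic is best computed on the log scale, and under $H_0$ the common-rate MLE has the closed form $\hat{\theta} = (X_{n-1} + X_n)/(w_{n-1} + w_n)$. The following is an illustrative sketch (standard library only; $\hat{a}_{I,n}$ and $\hat{b}_{I,n}$ are taken as given):

```python
import math

def log_marg(x, w, a, b):
    # log of the negative-binomial marginal (12)
    return (math.lgamma(x + a) - math.lgamma(a) - math.lgamma(x + 1)
            + a * math.log(b / (w + b)) + x * math.log(w / (w + b)))

def fseb_pvalue(x1, w1, x2, w2, a_hat, b_hat):
    """p-value P_I = 1/T_I for H0: theta_{n-1} = theta_n (Poisson-gamma)."""
    if x1 + x2 == 0:
        return 1.0  # sup of l_I under H0 is attained as theta -> 0
    theta0 = (x1 + x2) / (w1 + w2)  # common-rate MLE under H0
    log_num = log_marg(x1, w1, a_hat, b_hat) + log_marg(x2, w2, a_hat, b_hat)
    log_den = sum(x * math.log(theta0 * w) - theta0 * w - math.lgamma(x + 1)
                  for x, w in ((x1, w1), (x2, w2)))
    return min(1.0, math.exp(log_den - log_num))
```

For example, with $w_{n-1} = w_n = 10$ and hypothetical plug-in values $\hat{a}_{I,n} = \hat{b}_{I,n} = 2$, equal counts give a p-value of 1, while strongly unequal counts (10 versus 60) are rejected at any conventional level.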
As in Section 3.2.1, we generate data sets of sizes $n \in \{10, 100, 1000\}$, where the data $D_{n-1}$ are generated with parameters $(\Theta_i)_{i\in[n-1]}$ arising from gamma distributions with hyperparameters $\psi = (a, b) \in \{(2,2), (2,5), (5,2)\}$. The final observation $X_n$, making up $D_n$, is then generated with parameter $\Theta_n = \Theta_{n-1} + \Delta$, where $\Delta \in \{0, 1, 5, 10\}$. As before, we generate the covariate sequence $w_n$ IID uniformly between 0 and 10. For each triplet $(n, \psi, \Delta)$, we test $H_0: \theta^*_{n-1} = \theta^*_n$ 1000 times and record the average number of rejections at the levels of significance $\alpha \in \{0.05, 0.005, 0.0005\}$. The results are then reported in Table 3.

The results for the $\Delta = 0$ cases in Table 3 show that the tests reject true null hypotheses at below the nominal sizes $\alpha$, in accordance with Proposition 2. For each combination of $n$ and $\psi$, as $\Delta$ increases, the proportion of rejections increases, demonstrating that the tests become more powerful when detecting larger differences between $\theta^*_{n-1}$ and $\theta^*_n$, as expected. There also appears to be an increase in power due to larger sample sizes. This is an interesting outcome, since we can only be sure that the sample size affects the variability of the estimator $\hat{\psi}_{I,n}$. Overall, we can be confident that the tests are behaving as required, albeit they may be somewhat underpowered, as they are not achieving the nominal sizes.

3.3 Beta-binomial data series

Data from genome-level biological studies, using modern high-throughput sequencing technologies [Krueger et al., 2012], often take the form of a series of counts, which may be modelled through sets of non-identical (possibly correlated) binomial distributions, with beta priors, in a Bayesian framework. The question of interest may vary, for example, from assessing the range of likely values for the binomial parameter in a particular region of the data, to comparing whether two sections of one or more data series are generated from identical distributions. For the purposes of demonstrating the performance of the FSEB method in these scenarios, we will make the simplifying assumption that all data points are independently distributed, within, as well as across, any of the $G$ data series that may be observed.
For purposes of demonstrating the performance of the FSEB method in these scenarios, we will make the simplifying assumption that all data points are independently distributed, within, as well as across, any of the G data series that may be observed.

Confidence Sets

First, let us assume that we only have a single series, i.e. G = 1. Then, we can assume X_i ~ Bin(m_i, θ_i), and propose a common prior distribution for Θ_i (i = 1, ..., n): Beta(γ, β). Using the techniques described in Section 2, we can find confidence sets for θ*_i (i = 1, ..., n). For each i, we define, as previously, a subset I = {i}, so that D_I = X_i and D̃_I = (X_j)_{j∈[n]\{i}}. We then have

R_{I,n}(ϑ_I) = L_I(ψ̂_{I,n}) / l_I(ϑ_I),

where

l_I(ϑ_I) = C(m_i, x_i) θ_i^{x_i} (1 - θ_i)^{m_i - x_i}

and

L_I(ψ̂_{I,n}) = ∫ f(x_i | θ_i) π(θ_i; γ̂_{-n}, β̂_{-n}) dθ_i = C(m_i, x_i) B(x_i + γ̂_{-n}, m_i - x_i + β̂_{-n}) / B(γ̂_{-n}, β̂_{-n}),

which gives the ratio

R_{I,n}(ϑ_I) = B(x_i + γ̂_{-n}, m_i - x_i + β̂_{-n}) / [ B(γ̂_{-n}, β̂_{-n}) θ_i^{x_i} (1 - θ_i)^{m_i - x_i} ].   (14)

Here, γ̂_{-n} and β̂_{-n} are the empirical Bayes estimates of γ and β, given by

γ̂_{-n} = (φ̂_EB^{-1} - 1) μ̂_EB and β̂_{-n} = (φ̂_EB^{-1} - 1)(1 - μ̂_EB),

where

μ̂_EB = (1/(n-1)) Σ_{j∈[n]\{i}} x_j/m_j and φ̂_EB = [ m̄ V̂_x / (μ̂_EB (1 - μ̂_EB)) - 1 ] / (m̄ - 1),

with m̄ and V̂_x denoting, respectively, the average of the m_j and the empirical variance of the ratios x_j/m_j, over j ∈ [n]\{i} (cf. the analogous two-series expressions in the next subsection).

In our simulations, m_i (i ∈ [n]) were given integer values uniformly generated in the range [15, 40]. In all cases, it was seen that the CIs had perfect coverage, always containing the true value of θ*_i. An example of the n = 10 case is shown in Figure 1.

Hypothesis testing

Aiming to detect genomic regions that may have differing characteristics between two series, a pertinent question of interest may be considered by testing the hypotheses

H_0 : θ*_{i1} = θ*_{i2} vs. H_1 : θ*_{i1} ≠ θ*_{i2},

for every i ∈ [n] (with G = 2 series). Then, D_n = (X_i)_{i∈[n]}, where X_i = (X_{i1}, X_{i2}). From Section 2, the ratio test statistic takes the form T_I(D_n) = L_I(γ̂_{I,n}, β̂_{I,n}) / l_I(θ̂_I), where γ̂_{I,n} and β̂_{I,n} are EB estimators of γ and β, depending only on D̃_{I,n} = D_n \ {X_{i1}, X_{i2}}.
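A hedged sketch of the interval construction in (14) (assuming the EB estimates γ̂ and β̂ have already been computed from the remaining observations, and using a simple grid search over θ rather than any root-finding the authors may have used):

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def fseb_binomial_ci(x, m, g_hat, b_hat, alpha=0.05, grid=2000):
    """CI for a binomial proportion from the ratio (14).
    (g_hat, b_hat): empirical Bayes estimates of the beta hyperparameters,
    computed from the other observations and passed in directly."""
    # log numerator: beta-binomial marginal kernel (binomial coefficient cancels)
    log_num = log_beta(x + g_hat, m - x + b_hat) - log_beta(g_hat, b_hat)
    cutoff = math.log(1.0 / alpha)
    kept = []
    for k in range(1, grid):
        theta = k / grid
        log_den = x * math.log(theta) + (m - x) * math.log(1 - theta)
        if log_num - log_den <= cutoff:  # R_{I,n}(theta) <= 1/alpha
            kept.append(theta)
    return (min(kept), max(kept)) if kept else None
```

Since the numerator is a marginal likelihood, R evaluated at the MLE x/m never exceeds 1, so the returned interval always contains x/m.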
With θ̂_I = θ̂_i = (x_{i1} + x_{i2}) / (m_{i1} + m_{i2}), write l_I(θ̂_I) = f(x_{i1} | θ̂_i) f(x_{i2} | θ̂_i), and

L_I(γ̂_{I,n}, β̂_{I,n}) = ∫₀¹ f(x_{i1} | θ_i) f(x_{i2} | θ_i) π(θ_i; γ̂_{I,n}, β̂_{I,n}) dθ_i
= C(m_{i1}, x_{i1}) C(m_{i2}, x_{i2}) B(x_{i1} + γ̂_{I,n}, m_{i1} - x_{i1} + β̂_{I,n}) B(x_{i2} + γ̂_{I,n}, m_{i2} - x_{i2} + β̂_{I,n}) / B(γ̂_{I,n}, β̂_{I,n})²,

which gives

T_I(D_n) = B(x_{i1} + γ̂_{I,n}, m_{i1} - x_{i1} + β̂_{I,n}) B(x_{i2} + γ̂_{I,n}, m_{i2} - x_{i2} + β̂_{I,n}) / { [B(γ̂_{I,n}, β̂_{I,n})]² θ̂_i^{x_{i1}+x_{i2}} (1 - θ̂_i)^{m_{i1}+m_{i2}-x_{i1}-x_{i2}} },

where γ̂_{I,n} and β̂_{I,n} are calculated in a similar fashion to Section 3.3.1, except that data from both sequences should be used to estimate μ̂_EB and φ̂_EB, in the sense that

μ̂_EB = (1/(2n-2)) Σ_{k≠i} Σ_{g=1}^{2} x_{kg}/m_{kg} and φ̂_EB = [ m̄ V̂_{xy} / (μ̂_EB (1 - μ̂_EB)) - 1 ] / (m̄ - 1),

where m̄ = (1/(2n-2)) Σ_{k≠i} Σ_{g=1}^{2} m_{kg} and V̂_{xy} = (1/(2n-2)) Σ_{k≠i} Σ_{g=1}^{2} ( x_{kg}/m_{kg} - μ̂_EB )².

In our first simulation, we assessed the performance of the test statistic in terms of the Type I error. Assuming a window size of n = 20, realized data (x_{i1}, x_{i2}) (i ∈ [n]) were simulated from independent binomial distributions with θ*_{i1} = θ*_{i2} = θ*_i (i = 1, ..., n), with θ*_i ranging between 0.1 and 0.9, and m_{i1}, m_{i2} ∈ ℕ uniformly and independently sampled from the range [15, 40]. The resulting test statistics are displayed in the first panel of Figure 2.

Next, we assessed the power of the test statistic at three levels of significance (α ∈ {0.01, 0.02, 0.05}) and differing effect sizes. For each i (i ∈ [n]), θ*_{i1} was set to be a value between 0.05 and 0.95, and θ*_{i2} = θ*_{i1} + ∆, where 0.1 < ∆ < 0.9 (with θ*_{i2} < 1). A sample of 20 replicates was simulated under each possible set of values of (θ*_1, θ*_2). The second panel of Figure 2 shows that the power functions increased rapidly to 1 as the difference ∆ was increased.

In our next numerical experiment, we generated data sets of sizes n ∈ {10, 100, 1000}, where realized observations x_{i1} and x_{i2} are simulated from independent binomial distributions with parameters θ*_{i1} and θ*_{i2}, respectively (i ∈ [n]).
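The two-sample ratio statistic T_I above can be sketched as follows (a sketch under the assumptions that the EB hyperparameter estimates are supplied directly, and that 0 < x_{i1} + x_{i2} < m_{i1} + m_{i2}, so the pooled MLE is interior):

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def fseb_binomial_test(x1, m1, x2, m2, g_hat, b_hat):
    """log T_I for H0: theta_i1 = theta_i2 in the beta-binomial setting.
    (g_hat, b_hat): EB estimates of the beta hyperparameters, computed
    from the remaining data and passed in directly.
    Binomial coefficients cancel between numerator and denominator."""
    theta0 = (x1 + x2) / (m1 + m2)  # pooled MLE under H0
    log_null = ((x1 + x2) * math.log(theta0)
                + (m1 + m2 - x1 - x2) * math.log(1 - theta0))
    log_num = (log_beta(x1 + g_hat, m1 - x1 + b_hat)
               + log_beta(x2 + g_hat, m2 - x2 + b_hat)
               - 2 * log_beta(g_hat, b_hat))
    return log_num - log_null  # reject H0 at size alpha when >= log(1/alpha)
```

With equal counts the statistic stays small (no evidence against H_0), while strongly imbalanced counts drive log T_I upward.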
For each i, θ*_{i1} was generated from a beta distribution, in turn, with hyperparameters ψ = (γ, β) ∈ {(2, 2), (2, 5), (5, 2)}; and θ*_{i2} = θ*_{i1} + ∆, where ∆ ∈ {0, 0.2, 0.5, 0.9}. The results are reported in Table 4. Similarly to the Poisson-gamma example, it can be seen that the tests reject true null hypotheses at below the nominal sizes α, in each case. For each combination of n and ψ, as ∆ increases, the rejection rate increases, making the tests more powerful, as expected, when detecting larger differences between θ*_{i1} and θ*_{i2}, frequently reaching a power of 1 even when the difference was not maximal. There did not appear to be a clear increase in power with the sample size, within the settings considered. Overall, we may conclude, as previously, that the tests are behaving as expected, although both this example and the Poisson-gamma case show that the tests may be underpowered, as they do not achieve the nominal size for any value of α.

As an additional assessment of how FSEB performs in comparison to other tests in a similar setting, we carried out a number of additional simulation studies, in which FSEB was compared with Fisher's exact test and a score test, over various settings of n, ψ and ∆, as well as for different ranges of m_i (i ∈ [n]).

To analyze the data, we use the Poisson-gamma model and estimate the generative parameters ϑ*_n using estimates of form (13). Here, each θ*_i can be interpreted as an unobserved multiplicative occupation-specific risk factor that influences the number of claims made within occupation group i. To obtain individually-valid 95% CIs for each of the n estimates, we then apply the method from Section 3.2.1. We present both the estimated risk factors and their CIs in Figure 3.

From Figure 3, we notice that most of the estimates of ϑ*_n are between zero and two, with the exception of occupation group i = 22, which has an estimated risk factor of θ̂*_22 = 2.59.
Although the risk factors are all quite small, the associated CIs can become very large, as can be seen in the top plot. This is due to the conservative nature of the CI constructions that we have already observed in Section 3.1. We observe that wider CIs were associated with observations where X_i = 0 and w_i is small. In particular, the largest CI, occurring for i = 55, has response X_55 = 0 and the smallest covariate value in the data set: w_55 = 4.45. The next largest CI occurs for i = 5 and also corresponds to a response X_5 = 0 and the second smallest covariate value, w_5 = 11.30.

Figure 4: Estimates of risk factors ϑ*_n for the Norberg data set along with the associated simultaneous 95% confidence set. The estimated risk factor for each occupation group is depicted as a cross, and the simultaneous confidence set can be constructed via the Cartesian product of the adjusted CIs, depicted as lines. The plot is focused on the risk factor range between 0 and 10.

These inferential observations are only valid when considering each of the n CIs individually, and under the assumption that we had chosen to draw inference regarding the corresponding parameter of the CI before any data are observed. However, if we wish to draw inference regarding all n elements of ϑ*_n simultaneously, then we should instead construct a 100(1 - α)% simultaneous confidence set C̃_α(D_n), with the property that

Pr_{ϑ*_n}( ϑ*_n ∈ C̃_α(D_n) ) ≥ 1 - α.

Using Bonferroni's inequality, we can take C̃_α(D_n) to be the Cartesian product of the individual 100(1 - α/n)% (adjusted) CIs for each parameter θ*_i:

C̃_α(D_n) = ∏_{i=1}^{n} C_i^{α/n}(D_n).

Using α = 0.05, we obtain the 95% simultaneous confidence set that appears in Figure 4. We observe that the simultaneous confidence set now permits us to draw useful inference regarding multiple parameters, at the same time.
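The Bonferroni construction amounts to widening each individual e-CI from level 1 - α to 1 - α/n, i.e. raising the ratio cutoff from 1/α to n/α. A generic sketch (with hypothetical log-ratio functions standing in for the log R_{I,n} of each coordinate, and a grid search as in the earlier interval construction):

```python
import math

def ratio_ci(log_ratio, alpha, grid):
    """Interval from an e-value ratio: {theta : R(theta) <= 1/alpha}.
    log_ratio: callable returning log R at a given theta."""
    cutoff = math.log(1.0 / alpha)
    kept = [t for t in grid if log_ratio(t) <= cutoff]
    return (min(kept), max(kept)) if kept else None

def bonferroni_simultaneous(log_ratios, alpha=0.05, grid=None):
    """Simultaneous 1 - alpha set as the Cartesian product of individual
    intervals, each widened to level 1 - alpha/n (Bonferroni)."""
    n = len(log_ratios)
    grid = grid or [k / 1000 for k in range(1, 1000)]
    return [ratio_ci(lr, alpha / n, grid) for lr in log_ratios]
```

Each adjusted interval is at least as wide as its individual counterpart, which is the price paid for simultaneous validity.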
For example, inspecting the n adjusted CIs, we observe that the occupations corresponding to indices i ∈ {8, 22, 50} all have lower bounds above 0.5. Thus, interpreting these indices specifically, we can say that each of the three adjusted confidence intervals, which yield the inference that the risk factors θ*_i > 0.5 for i ∈ {8, 22, 50}, contains the parameter θ*_i with probability 0.95, under repeated sampling.

Since our individual CI and adjusted CI constructions are e-CIs, one can alternatively approach the problem of drawing simultaneously valid inference via the false coverage rate (FCR) controlling techniques of Xu et al. [2022]. Using again the parameters θ*_i corresponding to i ∈ {8, 22, 50} as an example, we can use Theorem 2 of Xu et al. [2022] to make the statement that the three adjusted CIs C_i^{3α/n}(D_n), for i ∈ {8, 22, 50}, can be interpreted at the FCR-controlled level α ∈ (0, 1), in the sense that

E_{ϑ*} [ Σ_{i ∈ I(D_n)} [[ θ*_i ∉ C_i^{|I(D_n)| α/n}(D_n) ]] / max{1, |I(D_n)|} ] ≤ α,

where I(D_n) is a data-dependent subset of parameter indices, and [[A]] = 1 if statement A is true and 0, otherwise. In particular, we observe the realization {8, 22, 50} of I(D_n), corresponding to the data-dependent rule of selecting indices with adjusted CIs C_i^{α/n}(D_n) with lower bounds greater than 0.5. Clearly, controlling the FCR at level α yields narrower CIs for each of the three assessed parameters than does the more blunt simultaneous confidence set approach.

Differential methylation detection in bisulphite sequencing data

DNA methylation is a chemical modification of DNA caused by the addition of a methyl (CH3-) group to a DNA nucleotide -- usually a C that is followed by a G -- called a CpG site, and it is an important factor in controlling gene expression over the human genome.
Detecting differences in the methylation patterns between normal and ageing cells can shed light on the complex biological processes underlying human ageing, and hence has been an important scientific problem over the last decade [Smith and Meissner, 2013]. Methylation patterns can be detected using high-throughput bisulphite sequencing experiments [Krueger et al., 2012], in which data are generated in the form of sequences of numbers of methylated cytosines, x ig , among the total counts of cytosines, m ig , for n CpG sites on a genome (i ∈ [n]), for G groups of cell types g ∈ [G]. Often, there are G = 2 groups, as in our example that follows, for which the question of interest is to detect regions of differential methylation in the DNA of normal and ageing cells. Based on the setup above, a set of bisulphite sequencing data from an experiment with G groups might be considered as G series of (possibly correlated) observations from non-identical binomial distributions. The degree of dependence between adjacent CpG sites typically depends on the genomic distance between these loci, but since these are often separated by hundreds of bases, for the moment it is assumed that this correlation is negligible and is not incorporated into our model. Figure 5: FSEB test statistics over a segment of methylation data. The panels show the demarcation of loci into differentially methylated (coded as "1") and non-differentially methylated sites (coded as "0") with an overlay of a moving average with a window size of 10 CpG sites, at significance level cutoffs of 0.0005, 0.005, and 0.05. Application to Methylation data from Human chromosome 21 We evaluated the test statistic T I (D n ) over a paired segment of methylation data from normal and ageing cells, from 100, 000 CpG sites on human chromosome 21 [Cruickshanks et al., 2013]. 
After data cleaning and filtering (to remove sites with too low or too high degrees of experimental coverage, that can introduce errors), 58, 361 sites remained for analysis. Figure 5 shows the predicted demarcation of the data into differentially and non-differentially methylated sites over the entire region, at three cutoff levels of significance, overlaid with a moving average using a window size of 10 sites. It was observed that large values of the test statistic were often found in grouped clusters, which would be biologically meaningful, as loss of methylation in ageing cells is more likely to be highly region-specific, rather than randomly scattered over the genome. The overall rejection rates for the FSEB procedure corresponding to significance levels of α = 0.0005, 0.05, 0.02 and 0.01 were found to be 0.0012, 0.0154, 0.0092, and 0.0064, respectively. As a comparison to other methods for detecting differential methylation, we also applied site-by-site Fisher tests and score tests as implemented for bisulphite sequencing data in the R Bioconductor package DMRcaller [Catoni et al., 2018]. For purposes of comparison, we used two significance level cutoffs of 0.05 and 0.0005 for our FSEB test statistic, along with the same cutoffs subject to a Benjamini-Hochberg FDR correction for the other two testing methods. Figure 6 shows the comparison between the calculated site-specific p-values of the Fisher and score tests with the calculated FSEB test statistic (all on the logarithmic scale) over the entire genomic segment, which indicates a remarkable degree of overlap in the regions of differential methylation. There are, however, significant differences as well, in both the numbers of differential methylation calls and their location. In particular, the FSEB test statistic appeared to have stronger evidence for differential methylation in two regions, one on the left side of the figure, and one towards the centre. 
The Fisher test, being the most conservative, almost missed this central region (gave a very weak signal), while the score test gave a very high proportion of differential methylation calls compared to both other methods -however, the results from the score test may not be as reliable as many cells contained small numbers of counts which may render the test assumptions invalid. Table 5 gives a summary of the overlap and differences of the results from the different methods at two levels of significance, indicating that with FDR corrections, the Fisher test appears to be the most conservative, the score test the least conservative, and the FSEB procedure in-between the two. We also calculated, for each pair of methods, the proportion of matching calls, defined as the ratio of the number of sites predicted by both methods as either differentially methylated, or non-differentially methylated, to the total number of sites. These proportions indicated a high degree of concordance, especially between FSEB and Fisher tests, with the score test showing the least degree of concordance at both levels of significance. As expected, the degree of concordance decreased with an increase in α, but only slightly so, between the FDR-corrected Fisher test and FSEB. Conclusion EB is a powerful and popular paradigm for conducting parametric inference in situations where the DGP can be assumed to possess a hierarchical structure. Over the years, general frameworks for point estimation have been developed for EB, such as via the shrinkage estimators of Serdobolskii [2008] or the various method of moments and likelihood-based methods described in Maritz and Lwin [1989, Sec. 3]. Contrastingly, the construction of interval estimators and hypothesis tests for EB parameters rely primarily on bespoke derivations and analysis of the specific models under investigation. 
In this paper, we have adapted the general universal inference framework for finite sample valid interval estimation and hypothesis testing of Wasserman et al. [2020] to construct a general framework within the EB setting, which we refer to as the FSEB technique. In Section 2, we proved the validity of these constructions.

Figure 6: Results of three testing procedures to detect sites of differential methylation over a segment of methylation data. The first two panels show the negative logarithms of the FDR-corrected p-values for the (i) Fisher test (− log p_F) and (ii) score test (− log p_S), while the third panel shows the logarithm of the FSEB test statistic (log T(D_n)). The black curve in each plot corresponds to a moving average with a window size of 10. The points are coloured by differential methylation state call: green if differentially methylated, and red if not, at test size 0.05.

This point is further elaborated in Section 4, where we also showed that our FSEB approach can be usefully applied to draw inference from real world data, in the contexts of insurance risk and the bioinformatics study of DNA methylation.

We note that although our framework is general, due to it being Markov inequality-based, it shares the same general criticism that may be laid upon other universal inference methods, which is that the confidence sets and hypothesis tests can often be conservative, in the sense that the nominal confidence level or size is not achieved. The lack of power due to the looseness of Markov's inequality was first mentioned and discussed in Wasserman et al. [2020], where it is also pointed out that, in the universal inference setting, the logarithms of the analogous ratio statistics to (6) have χ²-like tail behaviour in α. We observe this conservativeness in the comparisons of Section 3.1 and in the Supplementary Materials. We also explored subsampling-based tests within the FSEB framework, along the lines proposed by Dunn et al. [2021], which led to very minor increases in power in some cases with small sample sizes without affecting the Type I error.
With such an outcome not entirely discernible from sampling error, and with the substantial increase to computational cost, it does not seem worthwhile to employ the subsampling-based approach here. A possible reason for the lack improvement in power observed, despite subsampling, can be attributed to the fact that the sets I, and their complements, are not exchangeable; since the indices fundamentally define the hypotheses and parameters of interest. However, we note that since the methodology falls within the e-value framework, it also inherits desirable properties, such as the ability to combine test statistics by averaging [Vovk and Wang, 2021], and the ability to more-powerfully conduct false discovery rate control when tests are arbitrarily dependent [Wang and Ramdas, 2022]. Overall, we believe that FSEB techniques can be usefully incorporated into any EB-based inference setting, especially when no other interval estimators or tests are already available, and are a useful addition to the statistical tool set. Although a method that is based on the careful analysis of the particular setting is always preferable in terms of exploiting the problem specific properties in order to generate powerful tests and tight intervals, FSEB methods can always be used in cases where such careful analyses may be mathematically difficult or overly time consuming. 3 : 3Experimental results for testing the hypothesis H 0 : θ * n−1 = θ * n for Poisson-gamma count models. The Rejection Proportion columns report the average number of rejections, from 1000 tests, at levels of significance α ∈ {0.05, 0.005, 0.0005}. Figure 1 : 1Plots of 95% confidence regions for θ * i when true values of θ * i span the interval 0.1 to 0.9 (n = 10). Here, the 95% CIs are given by the points where the curves for log R I,n (ϑ I ) intersect with the horizontal line (black), representing a confidence level of 1 − α = 0.95. 
Each CI can be seen to contain the corresponding true value of θ * i , represented by a vertical line of the same colour as the interval.μ EB ) 2 . Further, B (a, b) = 1 0 t a−1 (1 − t) b−1 dt isthe Beta function, taking inputs a > 0 and b > 0.We simulated data from the binomial model under two cases: (a) setting beta hyperparameters (α, β) = (10, 10), and hierarchically simulating θ * i , i ∈ [n], and then x i from a binomial distribution; and (b) setting a range of θ * i (i ∈ [n]) values equidistantly spanning the interval (0.1, 0.9) for n = 10, 100. Figure 2 2shows the calculated test statistic values T I (D n ) for the 20 genomic indices on the logarithmic scale, over 100 independently replicated datasets, with horizontal lines displaying values log(1/α), for significance levels α ∈ {0.01, 0.02, 0.05}. No points were observed above the line corresponding to α = 0.01, indicating that the Type I error of the test statistic does not exceed the nominal level. Figure 2 : 2Panel (a): Test statistic for 100 replications of the beta-binomial example under the null hypothesis of equality of proportions. The three horizontal lines correspond to cutoffs according to significance levels of α = 0.05 (green), α = 0.02 (blue), and α = 0.01 (turquoise). Panel (b): Power function over different values of ∆ = θ * 2 − θ * 1 at three levels of significance: α ∈ {0.01, 0.02, 0.05}. generated 100 instances of data under each setting and assessed the power of the FSEB test statistic through the number of rejections at levels α ∈ {0.0005, 0.005, 0.05}. The results are shown in Figure 3 : 3, upon observation of the bottom plot, we see that although some of the CIs are too wide to be meaningful, there are still numerous meaningful CIs that provide confidence regarding the lower limits as well as upper limits of the underlying risk factors. In particular, we observe that the CIs for occupation groups i = 26 and i = 54 are remarkably narrow and precise. 
Of course, the preceding Estimates of risk factors ϑ * n for the Norberg data set along with associated 95% CIs. The estimated risk factor for each occupation group is depicted as a cross and the associate (individuallyvalid) CI is depicted as a line. The top plot displays the CIs at their entire lengths, whereas the bottom plot displays only the risk factor range between 0 and 10. simultaneous adjusted CIs obtained via Bonferroni's inequality are (0.775, 4.485), (1.375, 5.520), and (0.505, 3.565), and the 0.05 level FCR controlled adjusted CIs are (0.810, 4.300), (1.430, 5.390), and (0.555, 3.390), for the parameters θ * i corresponding to the respective parameters i ∈ {8, 22, 50}. Overall, these are positive results as we do not know of another general method for generating CIs in this EB setting, whether individually or jointly. Table 1 : 1Stein's problem simulation results reported as average performances over 1000 replications.n ψ 2 α Coverage of (9) Coverage of (10) Relative Width 10 1 2 0.05 0.948 * 1.000 * 1.979 * 0.005 0.988 * 1.000 * 1.738 * 0.0005 0.993 * 1.000 * 1.641 * 5 2 0.05 0.943 1.000 1.902 0.005 0.994 1.000 1.543 0.0005 0.999 1.000 1.388 10 2 0.05 0.947 1.000 2.058 0.005 0.994 1.000 1.633 0.0005 0.999 1.000 1.455 100 1 2 0.05 0.937 0.999 2.068 0.005 0.997 1.000 1.806 0.0005 1.000 1.000 1.697 5 2 0.05 0.949 1.000 1.912 0.005 0.995 1.000 1.540 0.0005 1.000 1.000 1.395 10 2 0.05 0.947 1.000 2.068 0.005 0.995 1.000 1.635 0.0005 0.999 1.000 1.455 1000 1 2 0.05 0.949 0.999 2.087 0.005 0.991 1.000 1.815 0.0005 1.000 1.000 1.705 5 2 0.05 0.963 1.000 1.910 0.005 0.997 1.000 1.544 0.0005 1.000 1.000 1.399 10 2 0.05 0.942 1.000 2.066 0.005 0.995 1.000 1.632 0.0005 0.999 Table 2 : 2Experimental results for CIs constructed for Poisson-gamma count models. 
The Coverage and Length columns report the coverage proportion and average lengths in each scenario, as computed from 1000 replications.n ψ α Coverage Length 10 (2, 2) 0.05 0.998 3.632 0.005 1.000 5.484 0.0005 1.000 6.919 (2, 5) 0.05 0.999 2.976 0.005 0.999 3.910 0.0005 1.000 5.481 (5, 2) 0.05 0.997 * 5.468 * 0.005 0.999 * 7.118 * 0.0005 1.000 * 8.349 * 100 (2, 2) 0.05 0.998 3.898 0.005 0.999 5.277 0.0005 1.000 6.883 (2, 5) 0.05 0.999 2.958 0.005 1.000 3.914 0.0005 1.000 5.374 (5, 2) 0.05 1.000 5.628 0.005 1.000 7.124 0.0005 1.000 8.529 1000 (2, 2) 0.05 1.000 4.070 0.005 1.000 5.424 0.0005 1.000 6.344 (2, 5) 0.05 0.999 3.049 0.005 1.000 3.960 0.0005 1.000 5.479 (5, 2) 0.05 0.998 5.297 0.005 1.000 7.205 0.0005 1.000 Table ). Comparisons were made using the p-values as well as false discovery rate (FDR) corrected p-values arising from FDR control methods [Wang and Ramdas, 2022], and are presented Table 4 : 4Experimental results for testing the hypothesis H 0 : θ * Norwegian workmen. Here, we have n = 72 observations D n , containing total number of death claims X i , along with covariates w n , where w i is the number of years of exposure, normalized by a factor of 344, for i ∈ [n]. Here each i is an individual occupation group.in the online Supplementary Materials (Tables S1-S8 and Figures S1-S8). It is evident in almost all cases (and especially in case C, which most closely resembles the real life application scenario) that (i) the power levels are very similar across methods, especially as values of n, m i (i ∈ [n]) and effect sizes increase, and (ii) in every case, there are some settings in which Fisher's test and the score test are anti-conservative (even after FDR correction), with their Type I error greatly exceeding the nominal levels of significance, while this never occurs for FSEB, even without FDR correction. 
4 Real-data applications 4.1 The Norberg data We now wish to apply the FSEB CI construction from Section 3.2.1 to produce CIs in a real data application. We shall investigate the Norberg data set from the REBayes package of Koenker and Gu [2017], obtained from Haastrup [2000]. These data pertain to group life insurance claims from Table 5 : 5Comparison of differential methylation calling results between different methods: (i) FSEB (ii) Fisher tests with FDR-adjusted p-values (FF) (iii) Fisher tests, unadjusted (F) (iv) score tests with FDR-adjusted p-values (SF) and (v) score tests, unadjusted (S). The upper table gives the proportions of sites called to be differentially expressed under the tests of sizes α ∈ {0.0005, 0.05}. The lower table gives the proportion of overlaps between differential methylation calls from each pair of methods at a fixed level α ∈ {0.0005, 0.05}. FSEB techniques generate valid confidence sets and hypothesis tests of the correct size. In Section 3, we demonstrated via numerical simulations, that the FSEB methods can be used in well-studied synthetic scenarios. There, we highlight that the methods can generate meaningful inference for realistic DGPs.Proportion of rejections at level α = 0.0005 α = 0.05 FSEB 0.0012 0.0154 FF 0.0003 0.0097 F 0.0098 0.1102 SF 0.1333 0.1528 S 0.1457 0.2926 Proportion of overlap in matching calls at level α = 0.0005 α = 0.05 Method FF F SF S Method FF F SF S FSEB 0.999 0.991 0.866 0.856 FSEB 0.992 0.905 0.860 0.723 FF 0.991 0.867 0.855 FF 0.900 0.857 0.717 F 0.858 0.864 SF 0.777 0.818 SF 0.988 S 0.860 have tail probabilities that scale, in α, like those of χ 2 statistics. The conservativeness of universal inference constructions is further discussed in the works of Dunn et al. [2021], Tse and Davison [2022], and Strieder and Drton [2022], where the topic is thoroughly explored via simulations and theoretical results regarding some classes of sufficiently regular problems. 
I M Johnstone and B W Silverman. EbayesThresh: R programs for empirical Bayes thresholding. Journal of Statistical Software, 12:1-38, 2005.
E Kaufmann and W M Koolen. Mixture martingales revisited with applications to sequential tests and confidence intervals. arXiv:1811.11419v1, 2018.
R Koenker and J Gu. REBayes: empirical Bayes mixture methods in R. Journal of Statistical Software, 82:1-26, 2017.
F Krueger, B Kreck, A Franke, and S R Andrews. DNA methylome analysis using short bisulfite sequencing data. Nature Methods, 9:145-151, 2012.
N M Laird and T A Louis. Empirical Bayes confidence intervals based on bootstrap samples. Journal of the American Statistical Association, 82:739-750, 1987.
N Leng, J A Dawson, J A Thomson, V Ruotti, A I Rissman, B M G Smits, J D Haag, M N Gould, R M Stewart, and C Kendziorski. EBSeq: an empirical Bayes hierarchical model for inference in RNA-seq experiments. Bioinformatics, 29:1035-1043, 2013.
J S Maritz and T Lwin. Empirical Bayes Methods. CRC Press, Boca Raton, 1989.
C N Morris. Parametric empirical Bayes inference: theory and applications. Journal of the American Statistical Association, 78:47-55, 1983a.
C N Morris. Parametric empirical Bayes confidence intervals. In Scientific Inference, Data Analysis, and Robustness. Elsevier, 1983b.
B Narasimhan and B Efron. deconvolveR: a G-modeling program for deconvolution and empirical Bayes estimation. Journal of Statistical Software, 94:1-20, 2020.
R Norberg. Experience rating in group life insurance. Scandinavian Actuarial Journal, 1989:194-224, 1989.
V I Serdobolskii. Multiparametric Statistics. Elsevier, Amsterdam, 2008.
G Shafer. Testing by betting: a strategy for statistical and scientific communication. Journal of the Royal Statistical Society A, 184:407-431, 2021.
Z D Smith and A Meissner. DNA methylation: roles in mammalian development. Nature Reviews Genetics, 14:204-220, 2013.
C Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Berkeley Symposium on Mathematical Statistics and Probability, 1956.
D Strieder and M Drton. On the choice of the splitting ratio for the split likelihood ratio test. arXiv:2203.06748, 2022.
Y C Tai and T P Speed. A multivariate empirical Bayes statistic for replicated microarray time course data. Annals of Statistics, 34:2387-2412, 2006.
T Tse and A C Davison. A note on universal inference. Stat, to appear, 2022.
V Vovk. Strong confidence intervals for autoregression. arXiv:0707.0660v1, 2007.
V Vovk and R Wang. E-values: calibration, combination, and applications. Annals of Statistics, 49:1736-1754, 2021.
R Wang and A Ramdas. False discovery rate control with e-values. Journal of the Royal Statistical Society B, 84:822-852, 2022.
L Wasserman, A Ramdas, and S Balakrishnan. Universal inference. Proceedings of the National Academy of Sciences, 117:16880-16890, 2020.
Z Xu, R Wang, and A Ramdas. Post-selection inference for e-value based confidence intervals. arXiv:2203.12572, 2022.
M Yoshimori and P Lahiri. A second-order efficient empirical Bayes confidence interval. Annals of Statistics, 42:1233-1261, 2014.

Table 4: Experimental results for testing the hypothesis H_0 : θ*_{i1} = θ*_{i2} for beta-binomial count series models. The Rejection proportion columns report the average number of rejections, from 100 test replicates, at levels of significance α ∈ {0.05, 0.005, 0.0005}.

Rejection proportion at level α
n ψ ∆ 0.0005 0.005 0.05
10 (2, 2) 0 0.000 0.000 0.

S E Ahmed and N Reid, editors. Empirical Bayes and Likelihood Inference. Springer, New York, 2001.
D R Bickel. Genomics Data Analysis: False Discovery Rates and Empirical Bayes Methods. CRC Press, Boca Raton, 2020.
G Casella and J T Hwang. Empirical Bayes confidence sets for the mean of a multivariate normal distribution. Journal of the American Statistical Association, 78:688-698, 1983.
M Catoni, J M Tsang, A P Greco, and N R Zabet. DMRcaller: a versatile R/Bioconductor package for detection and visualization of differentially methylated regions in CpG and non-CpG contexts. Nucleic Acids Research, 46:e114, 2018.
H A Cruickshanks, T McBryan, D M Nelson, N D Vanderkraats, P P Shah, J van Tuyn, T S Rai, C Brock, G Donahue, D S Dunican, M E Drotar, R R Meehan, J R Edwards, S L Berger, and P D Adams. Senescent cells harbour features of the cancer epigenome. Nature Cell Biology, 15:1495-1506, 2013.
G S Datta, M Ghosh, D D Smith, and P Lahiri. On an asymptotic theory of conditional and unconditional coverage probabilities of empirical Bayes confidence intervals. Scandinavian Journal of Statistics, 29:139-152, 2002.
R Dunn, A Ramdas, S Balakrishnan, and L Wasserman. Gaussian universal likelihood ratio testing. arXiv:2104.14676, 2021. Large-scale Inference. B Efron, Cambridge University PressCambridgeB Efron. Large-scale Inference. Cambridge University Press, Cambridge, 2010. Safe testing. P Grunwald, W M De Heide, Koolen, Information Theory and Applications Workshop (ITA). 2020P Grunwald, R de Heide, and W M Koolen. Safe testing. In Information Theory and Applications Workshop (ITA), 2020.
[ "https://github.com/hiendn/Universal_EB." ]
Some exact computations on the twisted butterfly state in string field theory

Yuji Okawa ([email protected])
Center for Theoretical Physics, Massachusetts Institute of Technology, Room 6-304, Cambridge, MA 02139, USA

arXiv:hep-th/0310264; DOI: 10.1088/1126-6708/2004/01/066

Abstract: The twisted butterfly state solves the equation of motion of vacuum string field theory in the singular limit. The finiteness of the energy density of the solution is an important issue, but possible conformal anomaly resulting from the twisting has prevented us from addressing this problem. We present a description of the twisted regulated butterfly state in terms of a conformal field theory with a vanishing central charge which consists of the ordinary bc ghosts and a matter system with c = 26. Various quantities relevant to vacuum string field theory are computed exactly using this description. We find that the energy density of the solution can be finite in the limit, but the finiteness depends on the subleading structure of vacuum string field theory. We further argue, contrary to our previous expectation, that contributions from subleading terms in the kinetic term to the energy density can be of the same order as the contribution from the leading term which consists of the midpoint ghost insertion.
1 Introduction

Vacuum string field theory [1,2,3] is a conjectured form of Witten's cubic open string field theory [4] expanded around the tachyon vacuum (for a recent review, see [5]). The action of vacuum string field theory is given by replacing the BRST operator in Witten's string field theory with a different operator Q which is conjectured to be made purely of ghost fields [1]. The equation of motion of vacuum string field theory,

    Q|Ψ⟩ + |Ψ * Ψ⟩ = 0,    (1.1)

then factorizes into the matter and ghost sectors, and the matter part of the equation can be solved by the matter sector of a star-algebra projector [2,6].
Using the conformal field theory (CFT) formulation of string field theory [7,8], we can construct such a star-algebra projector for any given consistent open-string boundary condition [9]. The resulting solution is conjectured to describe the D-brane corresponding to the open-string boundary condition [9,10]. Based on this description of D-branes in vacuum string field theory, it has been shown that ratios of D-brane tensions [9,11], the open-string mass spectrum on any D-brane [12], and the absolute value of the D25-brane tension [12] can be reproduced correctly. One important assumption in deriving these results is the existence of a solution to the ghost part of the equation. Moreover, in the derivation of the absolute value of the D25-brane tension in [12], the energy density of the full solution which consists of both matter and ghost sectors was related to the on-shell three-tachyon coupling constant on a D25-brane, so that the energy density must be finite in order to have a finite string coupling constant.

A solution to the equation of motion of vacuum string field theory was first found by Hata and Kawano [13] in the operator formulation [14,15,16,17,18] with a particular choice of the kinetic operator Q. It turned out later that their solution is the sliver state in the twisted ghost CFT [19,20], and their kinetic operator is a c-ghost insertion at the open-string midpoint [21]. It was further shown in [19] that any star-algebra projector in the twisted ghost CFT solves the equation of motion of vacuum string field theory with Q being the midpoint c-ghost insertion.

However, what has been shown in solving the equation of motion of vacuum string field theory is the proportionality between Q|Ψ⟩ and |Ψ * Ψ⟩. As long as the proportionality constant is finite, the equation of motion can be satisfied by a finite rescaling of the solution.
However, if it is infinite or vanishing, the normalization of the solution becomes singular so that regularization would be necessary to make it well-defined. In fact, the normalization of the Hata-Kawano solution seems singular [13,21]. Even if the solution itself is well-defined, we may encounter singularities when we compute Ψ|Q|Ψ and Ψ|Ψ * Ψ to evaluate the energy density of the solution. In fact, various solutions of string field theory based on the identity string field [22,23,24,25,26,27,28,29,30,31,32,33,34] suffer from the notorious singularity coming from the inner product of the identity string field with itself. Furthermore, even if the quantities Ψ|Q|Ψ and Ψ|Ψ * Ψ can be made well-defined, it is still a nontrivial question whether or not the equation of motion is satisfied when it is contracted with the solution itself, namely, whether or not Ψ|Q|Ψ + Ψ|Ψ * Ψ = 0 holds. We therefore recognize that there are these nontrivial steps to establish the existence of a solution with a finite energy density. One technical difficulty which has prevented us from addressing these questions in the CFT approach is possible conformal anomaly coming from the twisting. In this paper, we present a description of a class of twisted surface states in terms of the system of the ordinary bc ghosts and a matter CFT with c = 26 to overcome this difficulty. Since the total central charge vanishes, no conformal anomaly arises when we make a conformal transformation in the process of gluing string fields. This is important because the generalized gluing and resmoothing theorem [8] holds only when the total central charge vanishes as was emphasized in [35]. We in particular study the twisted regulated butterfly state [19,36,37] in detail, and compute various quantities involving this state exactly. We find that the proportionality constant between Q |Ψ and |Ψ * Ψ is in fact singular in the case of the twisted butterfly state. 
However, the energy density of the solution can be finite in the limit by appropriately scaling the normalization of the twisted regulated butterfly state and the kinetic operator Q. This is good news for the vacuum string field theory conjecture. However, there is another subtlety. The conjectured kinetic operator of vacuum string field theory consisting of the midpoint c-ghost insertion with a divergent coefficient requires regularization [19,36,38] and, as we will show in this paper, the subleading structure of the kinetic operator is necessary in order for vacuum string field theory to have a parameter corresponding to the string coupling constant. The question is then whether or not details of the regularization contribute to the physics in the limit. Apparently, the midpoint ghost insertion dominates in the kinetic operator and the twisted butterfly state provides a formal solution in the limit. However, we find that subleading terms of the kinetic term can contribute to the energy density at the same order as the leading term. This is not an immediate problem of vacuum string field theory, but the subleading structure may ruin the factorization of the matter and ghost sectors at the leading order. In this paper, we present the first quantitative approach to this issue. The organization of the paper is as follows. We provide the description of the twisted regulated butterfly state in terms of the ordinary bc ghosts and a matter CFT with c = 26 in Section 2. We then present exact computations of various quantities involving the twisted regulated butterfly state in Section 3. The issues raised in the introduction are discussed in Section 4. Section 5 is devoted to conclusion and discussion. Our conventions and terminology on the CFT formulation of string field theory are summarized in Appendix A. Some details of computations in Sections 2 and 3 are given in Appendices B and C. 
2 Twisted regulated butterfly state in terms of the ordinary ghost CFT

2.1 Regulated butterfly state

The butterfly state is a star-algebra projector originally found by the level-truncation analysis of vacuum string field theory in [19], and its properties were further studied in [36,37]. The regulated butterfly state |B_t⟩ introduced in [36,37] is a regularization of the butterfly state parameterized by t in the range 0 ≤ t < 1. It is defined by

    ⟨φ|B_t⟩ = ⟨f_t ∘ φ(0)⟩    (2.1)

for any state |φ⟩ in the Fock space, where

    f_t(ξ) = ξ / √(1 + t²ξ²).    (2.2)

All CFT correlation functions in this paper are evaluated on an upper-half plane, and we use the doubling trick. The butterfly state |B⟩ is given by the regulated butterfly state in the limit t → 1. It is a singular state like other star-algebra projectors such as the sliver state [39]. The singularity can be seen, for example, by the fact that the open-string midpoint f_t(i) reaches the boundary in the limit t → 1. However, an inner product of the regulated butterfly state with a state |φ⟩ in the Fock space is well-defined even in the limit t → 1 and is given by

    ⟨φ|B⟩ = lim_{t→1} ⟨φ|B_t⟩ = ⟨f_B ∘ φ(0)⟩,    (2.3)

where

    f_B(ξ) = ξ / √(1 + ξ²).    (2.4)

In the opposite limit t → 0, the regulated butterfly state reduces to the SL(2,R)-invariant vacuum |0⟩. We can use different conformal transformations to represent the same surface state. For example, the regulated butterfly state can also be represented as

    ⟨φ|B_t⟩ = ⟨h_t ∘ φ(0)⟩    (2.5)

for any state |φ⟩ in the Fock space, where h_t(ξ) is a conformal transformation with a parameter p:

    h_t(ξ) = ξ / (ξ + p√(1 + t²ξ²)).    (2.6)

The conformal transformation h_t(ξ) is related to f_t(ξ) by the SL(2,R) transformation z/(z + p), which maps the infinity to 1. The conformal transformations with different values of p for (2.6) are all equivalent and define the same state. This representation will be useful when there is an operator insertion at the infinity in the representation in terms of f_t(ξ).
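Both maps, and the statement about the open-string midpoint, are easy to probe numerically. A minimal sketch (the function names f_t and h_t are ours, not the paper's):

```python
import cmath

def f_t(xi, t):
    # Regulated-butterfly map f_t(xi) = xi / sqrt(1 + t^2 xi^2), eq. (2.2)
    return xi / cmath.sqrt(1 + t**2 * xi**2)

def h_t(xi, t, p):
    # Alternative representation h_t(xi) = xi / (xi + p sqrt(1 + t^2 xi^2)), eq. (2.6)
    return xi / (xi + p * cmath.sqrt(1 + t**2 * xi**2))

t, p = 0.7, 1.3
for xi in [0.5, -0.2 + 0.4j, 2.0 + 1.0j]:
    z = f_t(xi, t)
    # h_t is f_t followed by the SL(2,R) map z -> z/(z+p)
    assert abs(h_t(xi, t, p) - z / (z + p)) < 1e-12

# The open-string midpoint xi = i is mapped to i/sqrt(1-t^2),
# which runs off toward the boundary of the image as t -> 1.
for tt in [0.9, 0.99, 0.999]:
    assert abs(f_t(1j, tt) - 1j / cmath.sqrt(1 - tt**2)) < 1e-9
```

The first loop confirms that the representations (2.1) and (2.5) differ only by the SL(2,R) transformation quoted in the text; the second exhibits the midpoint image i/√(1−t²) becoming singular in the butterfly limit.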
In this representation, the inner product ⟨φ|B⟩ in the butterfly limit is given by

    ⟨φ|B⟩ = ⟨h_B ∘ φ(0)⟩,    (2.7)

where

    h_B(ξ) = ξ / (ξ + p√(1 + ξ²)).    (2.8)

The regulated butterfly state has a simple representation in terms of a single Virasoro generator [36,37]. It is given by

    |B_t⟩ = exp(−(t²/2) L_{−2}) |0⟩.    (2.9)

2.2 Twisted regulated butterfly state

The twisted ghost CFT introduced in [19] in the context of vacuum string field theory is defined by changing the energy-momentum tensor in the strip coordinates as

    T′(w) = T(w) − ∂j_g(w),   T̄′(w̄) = T̄(w̄) − ∂̄j̄_g(w̄),    (2.10)

where j_g is the ghost-number current. A twisted surface state is defined by a correlation function in the twisted ghost CFT. The twisted regulated butterfly state |B′_t⟩ is defined by

    ⟨φ|B′_t⟩ = ⟨h_t ∘ φ′(0)⟩′    (2.11)

for any state |φ⟩ in the Fock space, and the twisted butterfly state |B′⟩ is given by the singular limit of |B′_t⟩:

    ⟨φ|B′⟩ = lim_{t→1} ⟨φ|B′_t⟩ = ⟨h_B ∘ φ′(0)⟩′.    (2.12)

The prime on the correlation functions denotes that they are evaluated in the twisted CFT. Note that the state-operator correspondence is modified because the twisting changes the conformal properties of the ghost fields. For example, the state c₁|0⟩ corresponds to the operator c(0) in the ordinary bc CFT, but it corresponds to the identity operator in the twisted CFT. In other words, the state c₁|0⟩ corresponds to the SL(2,R)-invariant vacuum |0⟩′ in the twisted CFT. We denoted the operator in the twisted CFT corresponding to |φ⟩ by φ′(0). In the operator formalism, the twisted regulated butterfly state is represented as

    |B′_t⟩ = exp(−(t²/2) L′_{−2}) |0⟩′ = exp(−(t²/2) L′_{−2}) c₁|0⟩,    (2.13)

where L′_{−2} is the Virasoro generator in the twisted CFT.

2.3 Twisted regulated butterfly state in terms of the ordinary ghost CFT

Because of the twisting, the total central charge of the system consisting of the twisted ghost CFT and a matter CFT with c = 26 no longer vanishes. Therefore, we have to deal with conformal anomaly.
It would be useful if we can represent twisted surface states in terms of correlation functions of CFT with a vanishing central charge. Let us try to find such a representation of the twisted regulated butterfly state |B ′ t . The twisting corresponds to a different coupling to the world-sheet curvature in the language of bosonization. The world-sheet curvature vanishes in the strip coordinates so that we can easily find the relation between representations in the ordinary and twisted CFT's for a state in the Fock space. For example, the state c 1 |0 corresponds to the insertion of c(0) in the ordinary bc CFT and to the insertion of the identity operator in the twisted CFT. The regulated butterfly state is not a state in the Fock space, but it takes a simple form in the strip coordinates (τ, σ) where a state in the Fock space is represented as a path integral over the semi-infinite strip −∞ < τ ≤ 0, 0 ≤ σ ≤ π with a wave function in the infinite past τ = −∞. The regulated butterfly state is represented as a path integral over this region with a slit from (τ, σ) = (−∞, π/2) to (τ, σ) = (ln t, π/2). The boundary of the surface runs from (τ, σ) = (0, π) to (τ, σ) = (0, 0) as follows: (0, π) → (−∞, π), (−∞, π/2) → (ln t, π/2) → (−∞, π/2), (−∞, 0) → (0, 0). When t = 0, the slit vanishes and the state reduces to the SL(2, R)-invariant vacuum |0 . In the limit t → 1, the boundary reaches the open-string midpoint (τ, σ) = (0, π/2), and the wave function at τ = 0 factorizes into those of the left and right half-strings. The world-sheet curvature vanishes in the interior of this region, and it also vanishes on the boundary except for a point (τ, σ) = (ln t, π/2). The wave functions at τ = −∞ in the regions 0 ≤ σ ≤ π/2 and π/2 ≤ σ ≤ π should correspond to insertions of c ghosts when they are mapped to a point just as in the case of the SL(2, R)-invariant vacuum |0 . Note that the boundary changes its direction by the amount of π when it goes to τ = −∞ and comes back. 
At (τ, σ) = (ln t, π/2), the change in the direction is −π. Therefore, the b ghost should be inserted in the ordinary CFT when this point is mapped to a point where the boundary does not bend. To summarize, the twisted regulated butterfly state |B′_t⟩, which is a surface state without any operator insertions in the twisted CFT, should be described as the same surface state with two c-ghost and one b-ghost insertions in the ordinary bc CFT. If we use the conformal transformation f_t(ξ) in the definition, the insertion points of the c ghosts are −1/t and 1/t, and that of the b ghost is the infinity, which is outside of the patch. If we use the conformal transformation h_t(ξ) instead, all these three points are simultaneously finite. The twisted regulated butterfly state |B′_t⟩ is then represented as follows:

    ⟨φ|B′_t⟩ = −(t/2)(p²t² − 1)² ⟨ c(1/(1+pt)) b(1) c(1/(1−pt)) h_t ∘ φ(0) ⟩    (2.14)

for any state |φ⟩ in the Fock space. The p-dependence of the normalization factor in (2.14) is determined by the covariance under the SL(2,R) conformal transformation pz/(pz + q(1 − z)), which changes the parameter p to q. The normalization is then fixed by the condition ⟨B′_t|c₀c₁|0⟩_density = 1 for any t in 0 ≤ t < 1, where the subscript "density" denotes that the inner product has been divided by the space-time volume. This normalization evidently coincides with that in the representation (2.13). The definitions with different values of p are all equivalent, and the inner product ⟨φ|B′_t⟩ is independent of p. The twisted regulated butterfly state is now represented by a correlation function in the system of the ordinary bc ghosts and a matter CFT with c = 26, which is free from conformal anomaly. We will present an explicit proof of the equivalence between this representation and the definition (2.11) in the next subsection.

As a consistency check, let us consider the limit t → 0. The two c ghosts approach the b ghost in the limit so that these three operators can be replaced by the leading term in the operator product expansion (OPE). Since

    c(1/(1+pt)) b(1) c(1/(1−pt)) = −(2/(pt)) c(1) + O(1),    (2.15)

the inner product ⟨φ|B′_t⟩ becomes

    ⟨φ|B′_t⟩ → (1/p) ⟨ c(1) h_{t=0} ∘ φ(0) ⟩    (2.16)

in the limit t → 0. It can be easily verified that this coincides with ⟨φ|c₁|0⟩. Therefore, the representation of |B′_t⟩ correctly reduces to that of c₁|0⟩ in the limit t → 0.

There is another useful representation of the twisted regulated butterfly state |B′_t⟩. Let us make an inversion I(z) = −1/z to the conformal transformation f_t(ξ):

    I ∘ f_t(ξ) = −√(1 + t²ξ²)/ξ.    (2.17)

Using this conformal transformation, the twisted regulated butterfly state is represented as

    ⟨φ|B′_t⟩ = −(t/2) ⟨ c(−t) b(0) c(t) I ∘ f_t ∘ φ(0) ⟩.    (2.18)

The origin ξ = 0 is mapped to the infinity by I ∘ f_t(ξ), but the three ghost-insertion points are finite. The coordinate z′ = I ∘ f_t(ξ) is related to the coordinate z = h_t(ξ) by

    z′ = (z − 1)/(pz),   z = 1/(1 − pz′).    (2.19)

The twisted regulated butterfly state satisfies the Siegel gauge condition ⟨φ|b₀|B′_t⟩ = 0 for an arbitrary t. This is obvious in the representation (2.13). It can also be easily seen in the representation (2.18) in the following way. If we define z = I ∘ f_t(ξ), b₀ can be expressed as

    b₀ = ∮ (dξ/2πi) ξ b(ξ) = ∮ (dz/2πi) ((t² − z²)/z) b(z),    (2.20)

where the contour of the ξ-integral encircles the origin counterclockwise and that of the z-integral encircles −t, the origin, and t counterclockwise. The pole in the OPE between b(z) and c(±t) is canceled by the zero in (2.20), and the pole in (2.20) at z = 0 is canceled by the zero in the OPE between b(z) and b(0). Therefore, ⟨φ|b₀|B′_t⟩ = 0 for an arbitrary t in the representation (2.18).
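The coordinate relation (2.19) between z = h_t(ξ) and the inverted coordinate z′ = I ∘ f_t(ξ) follows from elementary algebra and can be spot-checked numerically. A small sketch in our own notation, assuming nothing beyond eqs. (2.2), (2.6), and (2.17):

```python
import cmath

def f_t(xi, t):
    # eq. (2.2)
    return xi / cmath.sqrt(1 + t**2 * xi**2)

def h_t(xi, t, p):
    # eq. (2.6)
    return xi / (xi + p * cmath.sqrt(1 + t**2 * xi**2))

t, p = 0.6, 0.8
for xi in [0.3, 1.7, -0.5 + 0.2j]:
    z = h_t(xi, t, p)        # coordinate used in (2.14)
    zp = -1.0 / f_t(xi, t)   # inverted coordinate z' = I o f_t(xi), eq. (2.17)
    # eq. (2.19): z' = (z - 1)/(p z) and its inverse z = 1/(1 - p z')
    assert abs(zp - (z - 1) / (p * z)) < 1e-12
    assert abs(z - 1 / (1 - p * zp)) < 1e-12
```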
Proof of the equivalence In this subsection, we will verify our representation (2.14) of the twisted regulated butterfly state by providing an explicit proof of the equivalence between |B ′ t and | B ′ t which are defined by B ′ t |φ = − t 2 (p 2 t 2 − 1) 2 c 1 1 + pt b(1) c 1 1 − pt h t • φ(0) (2.21) and B ′ t |φ = h t • φ ′ (0) ′ ,(2.22) for any state |φ in the Fock space, respectively. 2 Let us begin with the case where |φ = c −n b −m c 0 c 1 |0 , and define M nm ≡ B ′ t |c −n b −m c 0 c 1 |0 , M nm ≡ B ′ t |c −n b −m c 0 c 1 |0 ,(2.23) and M(z, w) ≡ B ′ t |c(w)b(z)c 0 c 1 |0 , M(z, w) ≡ B ′ t |c ′ (w)b ′ (z)c 0 |0 ′ . (2.24) Note that |0 ′ = c 1 |0 . Since the modes c n and b n are related to the ordinary and twisted ghosts as c n = dw 2πi w n−2 c(w) = dw 2πi w n−1 c ′ (w), (2.25) b m = dz 2πi z m+1 b(z) = dz 2πi z m b ′ (z), (2.26) respectively, we have M nm = dw 2πi 1 w n+2 dz 2πi 1 z m−1 M(z, w),(2.27) and M nm = dw 2πi 1 w n+1 dz 2πi 1 z m M (z, w). (2.28) Therefore, if M(z, w) is w/z times M (z, w) , the two inner products M nm and M nm coincide. Let us compute M (z, w) and M(z, w). In the twisted ghost CFT, the state |0 ′ corresponds to the identity operator, and c 0 |0 ′ corresponds to the operator c ′ (0). Therefore, M (z, w) is given by M(z, w) = h t • c ′ (w) h t • b ′ (z) h t • c ′ (0) ′ = dh t (z) dz c ′ (h t (w)) b ′ (h t (z)) c ′ (0) ′ = dh t (z) dz 1 h t (w) − h t (z) h t (w) h t (z) . (2.29) Note that the conformal dimensions of b ′ and c ′ are 1 and 0, respectively. In the ordinary ghost CFT, the state c 1 |0 corresponds to the operator c(0), and c 0 c 1 |0 corresponds to −c∂c(0), which is a primary field and its conformal dimension is −1. 
Therefore, M(z, w) is given by M(z, w) = t 2 (p 2 t 2 − 1) 2 c 1 1 + pt b(1) c 1 1 − pt h t • c(w) h t • b(z) h t • c∂c(0) = pt 2 (p 2 t 2 − 1) 2 dh t (z) dz 2 dh t (w) dw −1 × c 1 1 + pt b(1) c 1 1 − pt c(h t (w)) b(h t (z)) c∂c(0) = dh t (z) dz 2 dh t (w) dw −1 1 h t (w) − h t (z) × 1 1 + pt − h t (w) 1 1 − pt − h t (w) h t (w) 2 1 − h t (w) × 1 1 + pt − h t (z) 1 1 − pt − h t (z) h t (z) 2 1 − h t (z) −1 . (2.30) This expression for M(z, w) looks very different from (2.29) for M(z, w). However, since dh t (z) dz = p 2 t 2 − 1 z h t (z) h t (z) − 1 h t (z) − 1 1 + pt h t (z) − 1 1 − pt , (2.31) the expression (2.30) can be simplified to give M(z, w) = w z dh t (z) dz 1 h t (w) − h t (z) h t (w) h t (z) = w z M(z, w). (2.32) Thus we have shown that M nm = M nm . It is straightforward to generalize the proof for an arbitrary |φ . Let us define M n 1 m 1 n 2 m 2 ···n k m k ≡ B ′ t |c −n 1 b −m 1 c −n 2 b −m 2 · · · c −n k b −m k c 0 c 1 |0 , M n 1 m 1 n 2 m 2 ···n k m k ≡ B ′ t |c −n 1 b −m 1 c −n 2 b −m 2 · · · c −n k b −m k c 0 c 1 |0 ,(2.33) and M(z 1 , w 1 , z 2 , w 2 , · · · , z k , w k ) ≡ B ′ t |c(w 1 )b(z 1 )c(w 2 )b(z 2 ) · · · c(w k )b(z k )c 0 c 1 |0 , M (z 1 , w 1 , z 2 , w 2 , · · · , z k , w k ) ≡ B ′ t |c ′ (w 1 )b ′ (z 1 )c ′ (w 2 )b ′ (z 2 ) · · · c ′ (w k )b ′ (z k )c 0 |0 ′ . (2.34) Using (2.31), we can show that M(z 1 , w 1 , z 2 , w 2 , · · · , z k , w k ) = w 1 z 1 w 2 z 2 · · · w k z k M (z 1 , w 1 , z 2 , w 2 , · · · , z k , w k ). (2.35) Therefore, we have M n 1 m 1 n 2 m 2 ···n k m k = k i=1 dw i 2πi 1 w n i +2 i dz i 2πi 1 z m i −1 i M(z 1 , w 1 , z 2 , w 2 , · · · , z k , w k ) = k i=1 dw i 2πi 1 w n i +1 i dz i 2πi 1 z m i i M(z 1 , w 1 , z 2 , w 2 , · · · , z k , w k ) = M n 1 m 1 n 2 m 2 ···n k m k . (2.36) This completes the proof that B ′ t |φ = B ′ t |φ for any state |φ in the Fock space. 
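The simplification of (2.30) hinges on the derivative identity (2.31), which in our notation reads dh_t(z)/dz = [(p²t² − 1)/z] · h_t(z) (h_t(z) − 1/(1+pt)) (h_t(z) − 1/(1−pt)) / (h_t(z) − 1). Since h_t is elementary, the identity can be checked numerically; a sketch (function names are ours) comparing the closed-form derivative of h_t with the product form:

```python
import cmath

def h_t(xi, t, p):
    # eq. (2.6): h_t(xi) = xi / (xi + p sqrt(1 + t^2 xi^2))
    return xi / (xi + p * cmath.sqrt(1 + t**2 * xi**2))

t, p = 0.5, 1.1
for xi in [0.4, 1.3, 0.2 + 0.6j]:
    h = h_t(xi, t, p)
    # Closed-form derivative: dh/dxi = p / (sqrt(1+t^2 xi^2) (xi + p sqrt(1+t^2 xi^2))^2)
    S = cmath.sqrt(1 + t**2 * xi**2)
    dh = p / (S * (xi + p * S)**2)
    # Right-hand side of the identity (2.31)
    rhs = ((p**2 * t**2 - 1) / xi * h
           * (h - 1 / (1 + p * t)) * (h - 1 / (1 - p * t)) / (h - 1))
    assert abs(dh - rhs) < 1e-10
```

At t = 0 both sides reduce to p/(ξ + p)², the derivative of the SL(2,R) map ξ/(ξ + p), which is a quick consistency check of the signs.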
Generalization to other surface states Our result for the twisted regulated butterfly state can be generalized to a certain class of other surface states. The key identity of the proof in the previous subsection was (2.31). In general, if a surface state |Σ is defined by a conformal transformation f (ξ) through the relation φ|Σ = f • φ(0) (2.37) for any state |φ in the Fock space, where the correlation function is evaluated on an upper-half plane, and the conformal transformation f (ξ) satisfies df (ξ) dξ = 1 C f (ξ) − f (0) ξ i (f (ξ) − p i ) α i , (2.38) where C is a factor independent of ξ, and i α i = 1, (2.39) the twisted surface state |Σ ′ can be represented by a correlation function in the ordinary ghost CFT with c-ghost insertions at p i when α i = 1 and b-ghost insertions at p i when α i = −1. The insertion points do not have to be on the boundary. Furthermore, we can handle the case where the ghost numbers α i take values other than 1 or −1, for example, by bosonization. The condition (2.38) can be brought to a more convenient form in terms of the inverse function ξ = f −1 (z). It is given by d ln f −1 (z) dz = C z − f (0) i (z − p i ) −α i , (2.40) where C is a factor independent of z. Let us first verify the condition (2.40) for the regulated butterfly state. The inverse function h −1 t (z) is given by h −1 t (z) = pz (1 − (1 + pt)z)(1 − (1 − pt)z) , (2.41) and the derivative of ln h −1 t (z) is d ln h −1 t (z) dz = 1 p 2 t 2 − 1 z − 1 z z − 1 1+pt z − 1 1−pt , (2.42) which takes the form of (2.40). Now consider the wedge state [40,41,42]. It is labeled by a real number n with n ≥ 1 and defined by the conformal transformation f n (ξ) = n 2 tan 2 n arctan ξ . (2.43) In the large n limit, the wedge state becomes a star-algebra projector, which is called the sliver state. When n is a positive integer, the wedge state is given by a star product of n − 1 vacuum states. 
For example, the wedge state with n = 2 is the vacuum state |0⟩ itself, and it is |0⟩ * |0⟩ for n = 3, |0⟩ * |0⟩ * |0⟩ for n = 4, and so on. In this case, we expect that the twisted wedge state can be represented in terms of the ordinary ghost CFT by inserting a c ghost at the midpoint of the boundary of each vacuum state and by putting an appropriate operator with a negative ghost number at the string midpoint, which has an excess angle. Let us examine if the wedge state satisfies the condition (2.40). The inverse function f_n^{−1}(z) is given by

    f_n^{−1}(z) = tan( (n/2) arctan(2z/n) ),    (2.44)

and the derivative of ln f_n^{−1}(z) is

    d ln f_n^{−1}(z)/dz = (2n²/(4z² + n²)) [ sin( n arctan(2z/n) ) ]^{−1}.    (2.45)

It is not obvious if this can be transformed to the form (2.40), but when n is an odd positive integer, it turns out to be the case:

    d ln f_n^{−1}(z)/dz = (n²/(2z)) (−1)^{(n−1)/2} (z − ni/2)^{n/2−1} (z + ni/2)^{n/2−1} ∏_{m=1}^{n−1} ( z − (n/2) tan(mπ/n) )^{−1}.    (2.46)

A derivation of this expression is given in Appendix B. As we guessed, if we insert n − 1 c ghosts at

    (n/2) tan(mπ/n),   m = 1, 2, ..., n − 1,    (2.47)

which are the midpoints of the boundary of the n − 1 vacuum states, and operators with ghost number 1 − n/2 at ±ni/2, the twisted wedge state can be described by a correlation function of a CFT with a vanishing central charge when n is an odd positive integer. It will be straightforward to generalize the derivation to the case where n is an even positive integer by making an appropriate SL(2,R) transformation to avoid the operator insertion at the infinity. However, the generalization to the case where n is not an integer is nontrivial, and it is not clear if such a description in terms of a CFT with a vanishing central charge exists. The operator insertion at the open-string midpoint, which corresponds to ±ni/2 under the doubling trick, implies that the singularity of the twisted sliver state is not completely resolved in our description by regularizing it to the twisted wedge state.
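For n = 3 the equality between (2.45) and the product form (2.46) can be confirmed numerically. A sketch in our own notation; the half-integer power is evaluated on the principal branch, which is unambiguous for real z:

```python
import cmath

n = 3

def dlog_finv(z):
    # eq. (2.45): d ln f_n^{-1}(z)/dz = 2 n^2 / ((4 z^2 + n^2) sin(n arctan(2z/n)))
    return 2 * n**2 / ((4 * z**2 + n**2) * cmath.sin(n * cmath.atan(2 * z / n)))

def product_form(z):
    # eq. (2.46) for odd n:
    # (n^2/(2z)) (-1)^((n-1)/2) [(z - ni/2)(z + ni/2)]^(n/2 - 1)
    #   * prod_{m=1}^{n-1} (z - (n/2) tan(m pi/n))^(-1)
    pref = n**2 / (2 * z) * (-1)**((n - 1) // 2)
    mid = ((z - n * 1j / 2) * (z + n * 1j / 2))**(n / 2 - 1)
    prod = 1.0
    for m in range(1, n):
        prod *= z - (n / 2) * cmath.tan(m * cmath.pi / n)
    return pref * mid / prod

for z in [0.2, 0.7, 1.1]:
    assert abs(dlog_finv(z) - product_form(z)) < 1e-10
```

For small z both sides behave as 1/z, as they must since f_n^{−1}(z) ≈ z near the origin.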
In particular, the star multiplication of two twisted wedge states is not well-defined because of coincident operators at the open-string midpoint. The remaining singularity has to be regularized further, for example, by displacing the operators to ±i(n/2 + ǫ) with ǫ > 0 in our description. 3 Once the remaining singularity is regularized, we can in principle carry out computations involving the twisted wedge state in a CFT with a vanishing central charge. However, such computations would be much more awkward to handle compared to the case of the twisted regulated butterfly state. First, when n is an odd integer, the ghost charge of the operators at ±ni/2, which is 1 − n/2, is fractional so that we have to bosonize the ghosts. When n is an even integer, the charge is an integer but a large negative number if n is large. Therefore, it would be in practice difficult to handle without bosonization. Second, the number of operator insertions increases as n becomes large, and it diverges in the large n limit. In the case of the twisted regulated butterfly state, the number of operator insertions is three for an arbitrary t. Furthermore, the open-string midpoint is as regular as that of a state in the Fock space. These are the reasons why we study the twisted regulated butterfly state among other star-algebra projectors. Q = 1 2i (c(i) − c(−i)),(3.1) and |φ is a state in the Fock space. We will also denote a state in the Fock space by |φ throughout the rest of the paper. Let us compute these quantities in this section. B ′ t |Q|φ Since the open-string midpoint of the regulated butterfly state |B t is as regular as that of an ordinary state in the Fock space, the quantities B ′ t |Q|φ and B ′ t |Q|B ′ t are well-defined for 0 ≤ t < 1. We do not need to regularize the midpoint c-ghost insertion Q unlike the case of the wedge state. 
The operator Q is mapped to

    (i√(1 − t²)/(2p)) [ (1 − ip√(1 − t²))² c(1/(1 − ip√(1 − t²))) − (1 + ip√(1 − t²))² c(1/(1 + ip√(1 − t²))) ]    (3.2)

by the conformal transformation h_t(ξ), so that ⟨B′_t|Q|φ⟩ is given by

    ⟨B′_t|Q|φ⟩ = −(it/(4p)) (p²t² − 1)² √(1 − t²)
        × [ (1 − ip√(1 − t²))² ⟨ c(1/(1+pt)) b(1) c(1/(1−pt)) c(1/(1 − ip√(1 − t²))) h_t ∘ φ(0) ⟩
          − (1 + ip√(1 − t²))² ⟨ c(1/(1+pt)) b(1) c(1/(1−pt)) c(1/(1 + ip√(1 − t²))) h_t ∘ φ(0) ⟩ ].    (3.3)

As a check, it can be easily verified from this expression that ⟨B′_t|Q c₁|0⟩ = 1 for any t in the range 0 ≤ t < 1. In the butterfly limit t → 1, the c ghost coming from Q approaches b so that the two operators can be replaced by the leading term of the OPE:

    b(1) c(1/(1 ± ip√(1 − t²))) = ±1/(ip√(2(1 − t))) + O(1).    (3.4)

Therefore, the leading term of ⟨B′_t|Q|φ⟩ in the limit t → 1 is finite and given by

    ⟨B′_t|Q|φ⟩ = −((p² − 1)²/(2p²)) ⟨ c(1/(1+p)) c(1/(1−p)) h_B ∘ φ(0) ⟩ + O(1 − t).    (3.5)

Note that terms of O(√(1 − t)) cancel so that the next-to-leading order is O(1 − t). Note also that

    h_t(ξ) = h_B(ξ) + O(1 − t).    (3.6)

3.2 ⟨B′_t * B′_t|φ⟩

Let us denote the conformal transformation associated with the surface state |B_t * B_t⟩ by f̃_t(ξ):

    ⟨φ|B_t * B_t⟩ = ⟨f̃_t ∘ φ(0)⟩.    (3.7)

We also require that f̃_t(0) = 0 and f̃_t(1) = −f̃_t(−1) to fix the ambiguity coming from SL(2,R) transformations. The overall normalization of f̃_t(ξ) is still undetermined, but it will also be fixed shortly. When we deal with the star multiplication of the regulated butterfly state, it is convenient to use the ẑ coordinate defined by

    ẑ = arctan ξ.    (3.8)

It was shown in [37] that ẑ = arctan ξ and z = f̃_t(ξ) are related by

    dẑ/dz = (1/(1 + d²z²)) (1 − β²z²) / √((1 − α²z²)(1 − γ²z²)),    (3.9)

where α, β, γ, and d are functions of t. We use this relation to fix the overall normalization of f̃_t(ξ), namely,

    dẑ/dz |_{z=0} = 1.    (3.10)

Schnabl derived an explicit form of f̃_t(ξ) in [36].
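The midpoint images 1/(1 ∓ ip√(1 − t²)) appearing in (3.2), and the conformal factor of the weight −1 field c at the midpoint, follow from elementary properties of h_t and can be spot-checked numerically (function names are ours):

```python
import cmath

def h_t(xi, t, p):
    # eq. (2.6)
    return xi / (xi + p * cmath.sqrt(1 + t**2 * xi**2))

t, p = 0.8, 0.9
S = cmath.sqrt(1 - t**2)

# Midpoint images used in eq. (3.2): h_t(+i) = 1/(1 - ip sqrt(1-t^2)), etc.
assert abs(h_t(1j, t, p) - 1 / (1 - 1j * p * S)) < 1e-12
assert abs(h_t(-1j, t, p) - 1 / (1 + 1j * p * S)) < 1e-12

# Conformal factor of the weight -1 field c at the midpoint:
# (dh_t/dxi)^(-1) at xi = +i should equal sqrt(1-t^2) (i + p sqrt(1-t^2))^2 / p,
# checked here by a central finite difference.
eps = 1e-6
dh = (h_t(1j + eps, t, p) - h_t(1j - eps, t, p)) / (2 * eps)
assert abs(1 / dh - S * (1j + p * S)**2 / p) < 1e-5
```

Since (i + pS)² = −(1 − ipS)², this is exactly how the prefactor i√(1 − t²)/(2p) and the squared factors in (3.2) arise from Q = (1/2i)(c(i) − c(−i)).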
By appropriately rescaling the expression in [36] to satisfy (3.10), f t (ξ) is given by f t (ξ) = 3 4 9 − a 2 (1 − a 2 )(a 2 + 3) tan 2 2 3 arctan ξ 2 + t 2 1 + t 2 ξ 2 − a 2 3 ,(3.11) where a ≡ √ 3 tan 2 3 arctan t . (3.12) It can be easily verified that f t (ξ) reduces to f B (ξ) in the limit t → 1: f t (ξ) = f B (ξ) + O(1 − t). (3.13) This shows that the butterfly state is a star-algebra projector. The functions α, β, γ, and d were not determined explicitly in [37], but they can be determined from (3.11) and given by α = 2(1 + a) 3 + a 1 + a 3 − a , β = 2 9 − a 2 (1 − a 2 )(a 2 + 3), γ = 2(1 − a) 3 − a 1 − a 3 + a , d = 2 1 − a 2 9 − a 2 . (3.14) In order to compute B ′ t * B ′ t |φ , we need to know where the inserted operators for each of |B ′ t are mapped to and the conformal factors associated with the mapping. Let us prepare two B ′ t |φ 's in the coordinates z i = f t (ξ i ) where i = 1, 2 to construct B ′ t * B ′ t |φ in the coordinate z = f t (ξ). The relation between z i and z can be derived in theẑ representation. It was shown in [37] thatẑ i = arctan ξ i andẑ = arctan ξ coincide up to a possible translation by the amount of ±π in the outside region of the image of the local coordinate. Therefore, tanẑ i = tanẑ for i = 1, 2. Since tan 2ẑ i = z 2 i 1 − t 2 z 2 i , tan 2ẑ = (1 + d 2 z 2 ) 3 2 − (1 − 4β 2 d 2 z 2 ) (1 + d 2 z 2 ) 3 2 + (1 − 4β 2 d 2 z 2 ) ,(3.15) z i and z are related by z 2 i = (1 + d 2 z 2 ) 3 2 − (1 − 4β 2 d 2 z 2 ) (1 + t 2 )(1 + d 2 z 2 ) 3 2 + (1 − t 2 )(1 − 4β 2 d 2 z 2 ) (3.16) for both i = 1 and i = 2. The b ghost is inserted at the infinity in the z i coordinate so that it is convenient to make an inversion. Let us introduce the inverted coordinates z ′ i = I • f t (ξ i ) = −1/z i for i = 1, 2. They are related to the z coordinate of B ′ t * B ′ t |φ by z ′2 = (1 + t 2 )(1 + d 2 z 2 ) 3 2 + (1 − t 2 )(1 − 4β 2 d 2 z 2 ) (1 + d 2 z 2 ) 3 2 − (1 − 4β 2 d 2 z 2 ) ,(3.17) where z ′ = z ′ 1 or z ′ = z ′ 2 . 
As we have presented in (2.18), two $c$ ghosts are inserted at $t$ and $-t$, and one $b$ ghost is inserted at the origin in the $z'$ coordinate. It was shown in [37] that these points are mapped to
\[
z'=-t \to z=\frac{1}{\alpha},\qquad z'=0 \to z=\frac{1}{\beta},\qquad z'=t \to z=\frac{1}{\gamma}, \tag{3.18}
\]
or to
\[
z'=-t \to z=-\frac{1}{\gamma},\qquad z'=0 \to z=-\frac{1}{\beta},\qquad z'=t \to z=-\frac{1}{\alpha}, \tag{3.19}
\]
depending on whether $z'=z'_1$ or $z'=z'_2$. It can be verified that the relation (3.17) is satisfied when $(z'^2,z^2)=(t^2,1/\alpha^2)$, $(z'^2,z^2)=(0,1/\beta^2)$, and $(z'^2,z^2)=(t^2,1/\gamma^2)$. Let us next compute $dz'/dz$ at these points to determine the conformal factors. From (3.17) and the fact that $dz'/dz>0$ at these points, it is not too difficult to derive the following results:
\[
\left.\frac{dz'}{dz}\right|_{z=\pm1/\alpha} = \frac{a(3+a)(1+a)^2}{8t\sqrt{(1+a)(3-a)}},\qquad
\left.\frac{dz'}{dz}\right|_{z=\pm1/\beta} = \frac{\sqrt{(1-t^4)(1-a^2)(3+a^2)}}{4\sqrt{3}},\qquad
\left.\frac{dz'}{dz}\right|_{z=\pm1/\gamma} = \frac{a(3-a)(1-a)^2}{8t\sqrt{(1-a)(3+a)}}. \tag{3.20}
\]
Therefore, the inserted operators are mapped from the $z'$ coordinate to the $z$ coordinate, including their conformal factors, as follows:
\[
c(\mp t) \to \frac{a(3+a)(1+a)^2}{8t\sqrt{(1+a)(3-a)}}\,c\Bigl(\pm\frac{1}{\alpha}\Bigr),\quad
\frac{a(3-a)(1-a)^2}{8t\sqrt{(1-a)(3+a)}}\,c\Bigl(\mp\frac{1}{\gamma}\Bigr),\qquad
b(0) \to \frac{48}{(1-t^4)(1-a^2)(3+a^2)}\,b\Bigl(\pm\frac{1}{\beta}\Bigr). \tag{3.21}
\]
In order to discuss the limit $t\to1$ and to compare $\langle B'_t * B'_t|\phi\rangle$ with $\langle B'_t|\mathcal{Q}|\phi\rangle$, it is convenient to make the SL(2,R) transformation $z\to z/(z+p)$. The operators in the $z$ coordinate are mapped to
\[
c\Bigl(\pm\frac{1}{\alpha}\Bigr) \to \frac{(p\alpha\pm1)^2}{p\alpha^2}\,c\Bigl(\frac{1}{1\pm p\alpha}\Bigr),\qquad
b\Bigl(\pm\frac{1}{\beta}\Bigr) \to \frac{p^2\beta^4}{(1\pm p\beta)^4}\,b\Bigl(\frac{1}{1\pm p\beta}\Bigr),\qquad
c\Bigl(\pm\frac{1}{\gamma}\Bigr) \to \frac{(p\gamma\pm1)^2}{p\gamma^2}\,c\Bigl(\frac{1}{1\pm p\gamma}\Bigr). \tag{3.22}
\]
Collecting all the conformal factors and taking into account the normalization factor $-t/2$ in (2.18), $\langle B'_t * B'_t|\phi\rangle$ is given by
\[
\langle B'_t * B'_t|\phi\rangle = \frac{9}{64}\,\frac{a^4(1-a^2)(9-a^2)}{t^2(1-t^4)^2(3+a^2)^2}\,\frac{\beta^8(p^2\alpha^2-1)^2(1-p^2\gamma^2)^2}{\alpha^4\gamma^4(1-p^2\beta^2)^4}
\Bigl\langle c\Bigl(\frac{1}{1+p\alpha}\Bigr)\,b\Bigl(\frac{1}{1+p\beta}\Bigr)\,c\Bigl(\frac{1}{1+p\gamma}\Bigr)\,c\Bigl(\frac{1}{1-p\gamma}\Bigr)\,b\Bigl(\frac{1}{1-p\beta}\Bigr)\,c\Bigl(\frac{1}{1-p\alpha}\Bigr)\,h_t\circ\phi(0)\Bigr\rangle, \tag{3.23}
\]
where we have defined $h_t(\xi)$ by
\[
h_t(\xi) = \frac{f_t(\xi)}{f_t(\xi)+p}.
\]
(3.24)

When $|\phi\rangle$ is $c_1|0\rangle$, $\langle B'_t * B'_t|c_1|0\rangle$ is given by
\[
\langle B'_t * B'_t|c_1|0\rangle_{\rm density} = -\frac{9}{256}\,\frac{a^2(3+a^2)^2}{t^2(1-t^4)^2}\,\sqrt{\frac{(9-a^2)(3+a^2)}{1-a^2}}, \tag{3.25}
\]
where the subscript density denotes that the quantity has been divided by the volume factor of space-time. We will also use this notation in what follows. The expression (3.25) reproduces the familiar result in the limit $t\to0$, where $|B'_t\rangle$ reduces to $c_1|0\rangle$:
\[
\lim_{t\to0}\langle B'_t * B'_t|c_1|0\rangle_{\rm density} = -\left(\frac{3\sqrt3}{4}\right)^3. \tag{3.26}
\]
In the butterfly limit $t\to1$, the $t$-dependent quantities $a$, $\alpha$, $\beta$, $\gamma$, and $h_t(\xi)$ behave as follows:
\[
1-a \simeq O(1-t),\quad \alpha\to1,\quad \beta \simeq O(\sqrt{1-t}),\quad \gamma \simeq O\bigl((1-t)^{3/2}\bigr),\quad h_t(\xi) = h_B(\xi)+O(1-t). \tag{3.27}
\]
Therefore, four of the six ghost insertions approach 1 in the limit, so they can be replaced by the leading term of their OPE:
\[
b\Bigl(\frac{1}{1+p\beta}\Bigr)\,c\Bigl(\frac{1}{1+p\gamma}\Bigr)\,c\Bigl(\frac{1}{1-p\gamma}\Bigr)\,b\Bigl(\frac{1}{1-p\beta}\Bigr) = \frac{4\sqrt2}{p^2} + O(1-t). \tag{3.28}
\]
Note that terms of $O(\sqrt{1-t})$ cancel, so the next-to-leading order is $O(1-t)$. The leading term of $\langle B'_t * B'_t|\phi\rangle$ in the limit is given by
\[
\langle B'_t * B'_t|\phi\rangle = \frac{27\sqrt6}{1024}\,\frac{1}{(1-t)^3}\,\frac{(p^2-1)^2}{p^2}\Bigl\langle c\Bigl(\frac{1}{1+p}\Bigr)\,c\Bigl(\frac{1}{1-p}\Bigr)\,h_B\circ\phi(0)\Bigr\rangle + O\Bigl(\frac{1}{(1-t)^2}\Bigr). \tag{3.29}
\]
Therefore, the leading term of $\langle B'_t * B'_t|\phi\rangle$ is proportional to that of $\langle B'_t|\mathcal{Q}|\phi\rangle$,
\[
\langle B'_t * B'_t|\phi\rangle = -\frac{27\sqrt6}{512}\,\frac{1}{(1-t)^3}\,\langle B'_t|\mathcal{Q}|\phi\rangle + O\Bigl(\frac{1}{(1-t)^2}\Bigr), \tag{3.30}
\]
but the proportionality constant diverges as $1/(1-t)^3$. This shows that the twisted regulated butterfly state satisfies the equation of motion of vacuum string field theory in the singular butterfly limit when its kinetic operator $Q$ is given by the midpoint $c$-ghost insertion. However, the singularity in the proportionality constant implies that the normalization of the solution does not have a finite limit if the kinetic operator $Q$ is given by $\mathcal{Q}$ times a finite factor. We will discuss this result further in the context of vacuum string field theory in Section 4, after computing other quantities in the following subsections.
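The limit (3.26) can be checked numerically from (3.25) as parsed here; the helper names are ours, and the parsing of the square-root factor in (3.25) is our reading of the garbled source.

```python
import math

def a_of(t):
    # a = sqrt(3) * tan((2/3) * arctan t), eq. (3.12)
    return math.sqrt(3) * math.tan((2.0 / 3.0) * math.atan(t))

def density(t):
    # <B'_t * B'_t | c_1 |0>_density, eq. (3.25) as parsed here
    a2 = a_of(t) ** 2
    return (-9.0 / 256.0) * a2 * (3 + a2) ** 2 / (t ** 2 * (1 - t ** 4) ** 2) \
        * math.sqrt((9 - a2) * (3 + a2) / (1 - a2))

limit = -(3 * math.sqrt(3) / 4) ** 3   # eq. (3.26)
print(density(1e-3), limit)
```

At small $t$ the two printed numbers agree to several digits, consistent with (3.26).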
3.3 $\langle B'_t|\mathcal{Q}|B'_t\rangle$

Because $|B'_t\rangle$ satisfies the Siegel gauge condition, this quantity reduces to $\langle B'_t|c_0|B'_t\rangle$, which has already been computed by Schnabl [36] in the operator formalism by evaluating the conformal anomaly. Nevertheless, it is useful to compute it in our CFT approach, and in fact our method can also be applied to the computation of $\langle B'_t * B'_t|B'_t\rangle$ in the next subsection. Our strategy for computing $\langle B'_t|\mathcal{Q}|B'_t\rangle$ is the same as in the case of $\langle B'_t * B'_t|\phi\rangle$ in the previous subsection. Gluing two regulated butterfly states can easily be done in the $\hat z$ representation. The resulting surface can be mapped to an upper-half plane by an appropriate conformal transformation. If we denote the coordinate in the upper-half plane by $z$, the relation between $z$ and $z' = I\circ f_t(\xi)$ for each of the two regulated butterfly states can be derived through the $\hat z$ representation. We present the details of this process in Appendix C. The final relation between $z$ and $z'$ is as follows:
\[
\frac{4(z'^2-t^2)}{[z'^2-(1+t^2)]^2} = \frac{(1-z^2)^2-4q^2z^2}{4(1+q^2)z^2}, \tag{3.31}
\]
where
\[
q = \frac{2t}{1-t^2}. \tag{3.32}
\]
From this relation, we can compute how the operators are mapped from the $z'$ coordinate to the $z$ coordinate. The operators $b(0)$, $c(\pm t)$ in the $z'$ coordinate are mapped in the following way:
\[
c(-t) \to \frac{1}{2(1+t^2)}\sqrt{\frac{1-t}{1+t}}\,c\Bigl(-\frac{1+t}{1-t}\Bigr),\quad \frac{1}{2(1+t^2)}\sqrt{\frac{1+t}{1-t}}\,c\Bigl(\frac{1-t}{1+t}\Bigr),
\]
\[
b(0) \to \frac{4}{1-t^4}\,b(-1),\quad \frac{4}{1-t^4}\,b(1),
\]
\[
c(t) \to \frac{1}{2(1+t^2)}\sqrt{\frac{1+t}{1-t}}\,c\Bigl(-\frac{1-t}{1+t}\Bigr),\quad \frac{1}{2(1+t^2)}\sqrt{\frac{1-t}{1+t}}\,c\Bigl(\frac{1+t}{1-t}\Bigr). \tag{3.33}
\]
The kinetic operator $\mathcal{Q}$ is expressed in the $z$ coordinate as
\[
\mathcal{Q} \to -\frac{1-t^2}{2(1+t^2)}\bigl(c(i)+c(-i)\bigr). \tag{3.34}
\]
Collecting all the conformal factors and taking into account the normalization factor $-t/2$ in (2.18), $\langle B'_t|\mathcal{Q}|B'_t\rangle$ is given by
\[
\langle B'_t|\mathcal{Q}|B'_t\rangle = -\frac{t^2}{8(1-t^2)(1+t^2)^7}
\biggl[\Bigl\langle c\Bigl(-\frac{1+t}{1-t}\Bigr)\,b(-1)\,c\Bigl(-\frac{1-t}{1+t}\Bigr)\,c(i)\,c\Bigl(\frac{1-t}{1+t}\Bigr)\,b(1)\,c\Bigl(\frac{1+t}{1-t}\Bigr)\Bigr\rangle
+\Bigl\langle c\Bigl(-\frac{1+t}{1-t}\Bigr)\,b(-1)\,c\Bigl(-\frac{1-t}{1+t}\Bigr)\,c(-i)\,c\Bigl(\frac{1-t}{1+t}\Bigr)\,b(1)\,c\Bigl(\frac{1+t}{1-t}\Bigr)\Bigr\rangle\biggr].
\]
(3.35)

It is straightforward to evaluate the correlation functions, and the result is
\[
\langle B'_t|\mathcal{Q}|B'_t\rangle_{\rm density} = \frac{1}{(1-t^4)^3}. \tag{3.36}
\]
We have reproduced the result presented in Appendix C of [36] when the parameters $s$ and $\tilde s$ of [36] are given by $s=\tilde s=-t^2/2$.

3.4 $\langle B'_t * B'_t|B'_t\rangle$

The computation of $\langle B'_t * B'_t|B'_t\rangle$ is almost parallel to that of $\langle B'_t|\mathcal{Q}|B'_t\rangle$ in the previous subsection. Gluing three twisted regulated butterfly states can be done in the $\hat z$ representation. The resulting surface can be mapped to an upper-half plane by an appropriate conformal transformation. The derivation of the relation between the $z$ coordinate of the upper-half plane and the coordinate $z'=I\circ f_t(\xi)$ for each of the three regulated butterfly states is given in Appendix C. The final relation between $z$ and $z'$ is as follows:
\[
\frac{4(z'^2-t^2)}{[z'^2-(1+t^2)]^2} = \frac{z^2(z^2-3)^2-q^2(1-3z^2)^2}{(1+q^2)(1-3z^2)^2}, \tag{3.37}
\]
where $q$ was defined in (3.32). The operators $b(0)$, $c(\pm t)$ in the $z'$ coordinate are mapped to the following operators in the $z$ coordinate:
\[
b(0) \to \frac{64}{9}\frac{1}{1-t^4}\,b(-\sqrt3),\quad \frac{4}{9}\frac{1}{1-t^4}\,b(0),\quad \frac{64}{9}\frac{1}{1-t^4}\,b(\sqrt3),
\]
\[
c(-t) \to \frac{\sqrt3}{16}\frac{a(9-a^2)(1-a)}{t(3+a^2)(1+a)}\,c\Bigl(-\frac{3+a}{\sqrt3(1-a)}\Bigr),\quad
\frac{\sqrt3}{4}\frac{a(9-a^2)}{t(3+a^2)(1-a^2)}\,c\Bigl(-\frac{a}{\sqrt3}\Bigr),\quad
\frac{\sqrt3}{16}\frac{a(9-a^2)(1+a)}{t(3+a^2)(1-a)}\,c\Bigl(\frac{3-a}{\sqrt3(1+a)}\Bigr),
\]
\[
c(t) \to \frac{\sqrt3}{16}\frac{a(9-a^2)(1+a)}{t(3+a^2)(1-a)}\,c\Bigl(-\frac{3-a}{\sqrt3(1+a)}\Bigr),\quad
\frac{\sqrt3}{4}\frac{a(9-a^2)}{t(3+a^2)(1-a^2)}\,c\Bigl(\frac{a}{\sqrt3}\Bigr),\quad
\frac{\sqrt3}{16}\frac{a(9-a^2)(1-a)}{t(3+a^2)(1+a)}\,c\Bigl(\frac{3+a}{\sqrt3(1-a)}\Bigr). \tag{3.38}
\]
Collecting all the conformal factors and taking into account the normalization factor $-t/2$ in (2.18), $\langle B'_t * B'_t|B'_t\rangle$ is given by
\[
\langle B'_t * B'_t|B'_t\rangle = -\frac{a^6(9-a^2)^6}{2^9\,3^3\,t^3(1-t^4)^3(3+a^2)^6(1-a^2)^2}
\Bigl\langle c\Bigl(-\frac{3+a}{\sqrt3(1-a)}\Bigr)\,b(-\sqrt3)\,c\Bigl(-\frac{3-a}{\sqrt3(1+a)}\Bigr)\,c\Bigl(-\frac{a}{\sqrt3}\Bigr)\,b(0)\,c\Bigl(\frac{a}{\sqrt3}\Bigr)\,c\Bigl(\frac{3-a}{\sqrt3(1+a)}\Bigr)\,b(\sqrt3)\,c\Bigl(\frac{3+a}{\sqrt3(1-a)}\Bigr)\Bigr\rangle. \tag{3.39}
\]
After calculating the correlation function, the density of $\langle B'_t * B'_t|B'_t\rangle$ is given by
\[
\langle B'_t * B'_t|B'_t\rangle_{\rm density} = -\left(\frac{3\sqrt3}{4}\right)^3\frac{1}{(1-t^2)^3(1-t^4)^3}.
\]
(3.40)

The result reproduces the familiar value $-(3\sqrt3/4)^3$ in the limit $t\to0$, where $|B'_t\rangle$ reduces to $c_1|0\rangle$.

Relation to vacuum string field theory

We have provided the representation (2.14) of the twisted regulated butterfly state in terms of a CFT with a vanishing central charge in Section 2, and we have presented exact computations of various quantities relevant to vacuum string field theory in Section 3. We are now ready to discuss the issues we raised in the introduction.

Vacuum string field theory conjecture

The action of Witten's cubic open string field theory [4] is given by
\[
S = -\frac{1}{\alpha'^3 g_T^2}\left[\frac12\langle\Psi|Q_B|\Psi\rangle+\frac13\langle\Psi|\Psi*\Psi\rangle\right], \tag{4.1}
\]
where $g_T$ is the on-shell three-tachyon coupling constant. If we expand the action around the solution $|\Psi_0\rangle$ corresponding to the tachyon vacuum as $|\Psi\rangle = |\Psi_0\rangle + |\tilde\Psi\rangle$, it takes the same form except for the kinetic operator:
\[
S = S_0 - \frac{1}{\alpha'^3 g_T^2}\left[\frac12\langle\tilde\Psi|Q|\tilde\Psi\rangle+\frac13\langle\tilde\Psi|\tilde\Psi*\tilde\Psi\rangle\right], \tag{4.2}
\]
where $S_0$ is the value of the action for $|\Psi_0\rangle$ and $Q$ is the kinetic operator at the tachyon vacuum. It was conjectured in [1] that $Q$ can be made purely of ghost fields by a field redefinition; string field theory with this conjectured form of the kinetic operator is called vacuum string field theory. A more specific conjecture on $Q$ was put forward later in [19]. The kinetic operator $Q$ does not seem to be made purely of ghost fields when we expand the action around the approximate solution constructed by level truncation. It was conjectured that there exists a one-parameter family of field redefinitions which takes $Q$ to the following form:
\[
Q = \frac{\mathcal{Q}}{\epsilon}\,[1+o(\epsilon)], \tag{4.3}
\]
where $\mathcal{Q}$ is the midpoint $c$-ghost insertion defined in (3.1), $\epsilon$ is the parameter of the field redefinition, and $o(\epsilon)$ denotes terms which vanish in the limit $\epsilon\to0$. In the singular limit $\epsilon\to0$, the midpoint $c$-ghost insertion $\mathcal{Q}$ dominates the kinetic operator $Q$ with an infinite coefficient.
Subleading structure of vacuum string field theory

If the physics depended on the details of the subleading terms of the kinetic operator, the vacuum string field theory conjecture would not be very useful. We know very little about the subleading terms, and in fact even the form of the leading term $\mathcal{Q}/\epsilon$ is a conjecture. It is implicitly assumed in the conjecture that a kind of universality works, as in the case of renormalizable quantum field theory. If such a universality exists, can we set the subleading terms to zero and define $Q$ as the $\epsilon\to0$ limit of $\mathcal{Q}/\epsilon$? The action would then be given by
\[
S_{\rm leading} = -\frac{1}{\alpha'^3 g_T^2}\left[\frac{1}{2\epsilon}\langle\Psi|\mathcal{Q}|\Psi\rangle+\frac13\langle\Psi|\Psi*\Psi\rangle\right]. \tag{4.4}
\]
The answer to this question seems to be negative, for the following reason. Unlike the case of Witten's string field theory with the BRST operator $Q_B$, there exists a field redefinition which keeps the cubic term intact but changes the normalization of $\mathcal{Q}$. Therefore, if (4.4) were exact and there were no subleading terms, the value of the coupling constant $g_T$ could be changed by a field redefinition. Let us demonstrate this explicitly. It is known that field redefinitions generated by $K_n = L_n-(-1)^nL_{-n}$ preserve the cubic term. A simple example of a field redefinition which changes the normalization of $\mathcal{Q}$ is $|\Psi\rangle = e^{aK_2}|\tilde\Psi\rangle$, since $[K_2,\mathcal{Q}]=4\mathcal{Q}$. Therefore, if we redefine the string field as $|\Psi\rangle = e^{a(K_2-4)}|\tilde\Psi\rangle$ and write the action in terms of $|\tilde\Psi\rangle$, the action is multiplied by an overall factor $e^{-12a}$, which changes the coupling constant $g_T$ to $e^{6a}g_T$. We can construct infinitely many such examples in terms of linear combinations of the $K_n$ because $[K_{2n},\mathcal{Q}]=-4n(-1)^n\mathcal{Q}$. There is another problem if we assume that the action (4.4) is exact. We can absorb not only $g_T$ but also $\epsilon$ by a redefinition. For example, if we redefine the string field as
\[
|\Psi\rangle = \alpha' g_T^{2/3}\exp\left[-\Bigl(\frac14\ln\epsilon+\frac16\ln g_T+\frac14\ln\alpha'\Bigr)K_2\right]|\tilde\Psi\rangle \tag{4.5}
\]
and write the action in terms of $|\tilde\Psi\rangle$, the action does not contain any parameters.
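The algebraic fact driving this argument, that conjugation by $e^{aK_2}$ rescales an operator $\mathcal{Q}$ obeying $[K_2,\mathcal{Q}]=4\mathcal{Q}$ by $e^{4a}$, can be illustrated with a toy $2\times2$ matrix pair satisfying the same commutator. This is only an illustration of the Lie-algebra identity, not of the actual string-field operators.

```python
import math

# Toy 2x2 matrices with the commutator [K, Q] = 4 Q:
# K = diag(2, -2), Q strictly upper triangular.
a = 0.3
K_exp = [[math.exp(2 * a), 0.0], [0.0, math.exp(-2 * a)]]       # e^{aK}
K_exp_inv = [[math.exp(-2 * a), 0.0], [0.0, math.exp(2 * a)]]   # e^{-aK}
Q = [[0.0, 1.0], [0.0, 0.0]]

def matmul(A, B):
    # plain 2x2 matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

conj = matmul(matmul(K_exp, Q), K_exp_inv)   # e^{aK} Q e^{-aK}
print(conj[0][1], math.exp(4 * a))           # both entries equal e^{4a}
```

Since the cubic term is invariant under such a redefinition while the kinetic term is rescaled, the normalization of $\mathcal{Q}$ carries no invariant meaning on its own.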
Since actions with different values of $\epsilon$ are equivalent to each other as long as $\epsilon$ is finite, the limit $\epsilon\to0$ does not make sense. We therefore conclude that subleading terms in (4.3) must be present in order for vacuum string field theory to have a parameter corresponding to the string coupling constant.

Finiteness of the energy density

The twisted regulated butterfly state $|B'_t\rangle$ with an appropriate normalization factor $N$ solves the equation of motion of vacuum string field theory at the leading order. We are now ready to discuss the finiteness of the energy density of the solution. Let us first compute the normalization factor $N$. The equation of motion derived from the leading term of the action (4.4) consists of two terms. When $|\Psi\rangle = N|B'_t\rangle$, they are given by
\[
\frac{N}{\epsilon}\langle B'_t|\mathcal{Q}|\phi\rangle = -\frac{N}{\epsilon}\,\frac{(p^2-1)^2}{2p^2}\Bigl\langle c\Bigl(\frac{1}{1+p}\Bigr)\,c\Bigl(\frac{1}{1-p}\Bigr)\,h_B\circ\phi(0)\Bigr\rangle\,[1+O(1-t)],
\]
\[
N^2\langle B'_t*B'_t|\phi\rangle = \frac{27\sqrt6}{1024}\,\frac{N^2}{(1-t)^3}\,\frac{(p^2-1)^2}{p^2}\Bigl\langle c\Bigl(\frac{1}{1+p}\Bigr)\,c\Bigl(\frac{1}{1-p}\Bigr)\,h_B\circ\phi(0)\Bigr\rangle\,[1+O(1-t)]. \tag{4.6}
\]
Therefore, the equation of motion at the leading order is solved by
\[
N = \frac{512}{27\sqrt6}\,\frac{(1-t)^3}{\epsilon}. \tag{4.7}
\]
Note that how $1-t$ should scale as $\epsilon$ goes to zero is not determined by the analysis at this order. Let us next evaluate the energy density. The values of the two terms in the action (4.4) are given by
\[
-\frac{N^2}{2\alpha'^3 g_T^2\epsilon}\langle B'_t|\mathcal{Q}|B'_t\rangle_{\rm density} = -\frac{65536}{2187}\,\frac{(1-t)^3}{\alpha'^3 g_T^2(1+t)^3(1+t^2)^3\epsilon^3} \sim -\frac{1024}{2187}\,\frac{(1-t)^3}{\alpha'^3 g_T^2\epsilon^3},
\]
\[
-\frac{N^3}{3\alpha'^3 g_T^2}\langle B'_t*B'_t|B'_t\rangle_{\rm density} = \frac{524288\sqrt2}{2187}\,\frac{(1-t)^3}{\alpha'^3 g_T^2(1+t)^6(1+t^2)^3\epsilon^3} \sim \frac{1024\sqrt2}{2187}\,\frac{(1-t)^3}{\alpha'^3 g_T^2\epsilon^3}, \tag{4.8}
\]
where we have also presented their leading behavior as $t\to1$. Therefore, both terms can be made finite simultaneously if $1-t$ scales as $\epsilon$ in the limit $\epsilon\to0$. However, it should be noted that such a scaling cannot simply be imposed by hand.
Since we are solving a nonlinear equation, the energy density of the solution must be determined for a given value of $\epsilon$ once the subleading terms are taken into account. The fact that the scaling between $\epsilon$ and $1-t$ is not determined by the analysis at the leading order is probably related to the property of $\mathcal{Q}$ that its normalization can be changed by a field redefinition. It is therefore encouraging that our result (4.8) admits a limit which makes both terms finite simultaneously, but whether the energy density is finite or not depends on the subleading structure of vacuum string field theory.

Possible relevance of the subleading terms

Even if we assume that the two terms in (4.4) remain finite in the limit $\epsilon\to0$, it is still a nontrivial question whether or not the equation of motion is satisfied when it is contracted with the solution itself in the limit:
\[
\langle\Psi|Q|\Psi\rangle+\langle\Psi|\Psi*\Psi\rangle = 0. \tag{4.9}
\]
To make the point clearer, let us introduce the following quantity:
\[
\frac{\langle\Psi*\Psi|\phi\rangle}{\langle\Psi|Q|\phi\rangle}\,\frac{\langle\Psi|Q|\Psi\rangle}{\langle\Psi*\Psi|\Psi\rangle}. \tag{4.10}
\]
If this quantity is different from 1, the equation of motion contracted with the solution itself is not compatible with the one contracted with a state $|\phi\rangle$ in the Fock space. This quantity is independent of the normalizations of $Q$, $|\Psi\rangle$, and $|\phi\rangle$, so if $Q$ is dominated by $\mathcal{Q}/\epsilon$ and $|\Psi\rangle$ is given by $N|B'_t\rangle$ in the limit, the quantity reduces to
\[
\lim_{t\to1}\frac{\langle B'_t*B'_t|\phi\rangle}{\langle B'_t|\mathcal{Q}|\phi\rangle}\,\frac{\langle B'_t|\mathcal{Q}|B'_t\rangle}{\langle B'_t*B'_t|B'_t\rangle}. \tag{4.11}
\]
From the results in Section 3, it is given by
\[
\lim_{t\to1}\frac{\langle B'_t*B'_t|\phi\rangle}{\langle B'_t|\mathcal{Q}|\phi\rangle}\,\frac{\langle B'_t|\mathcal{Q}|B'_t\rangle}{\langle B'_t*B'_t|B'_t\rangle} = \frac{\sqrt2}{3}. \tag{4.12}
\]
It is finite and independent of $|\phi\rangle$ in the limit, but different from 1. Therefore, the equations $\langle\Psi|Q|\Psi\rangle+\langle\Psi|\Psi*\Psi\rangle=0$ and $\langle\phi|Q|\Psi\rangle+\langle\phi|\Psi*\Psi\rangle=0$ for a state $|\phi\rangle$ in the Fock space are not compatible if we assume that the kinetic operator is dominated by the midpoint ghost insertion and that the solution is dominated by the twisted regulated butterfly state. This conclusion holds whatever scaling limit we may take for $\epsilon$, $t$, and $N$.
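The value $\sqrt2/3$ in (4.12) can be reproduced numerically by assembling the leading behaviors from Section 3, namely (3.30) together with (3.36) and (3.40) as reconstructed here; the function names are ours.

```python
import math

def ratio(t):
    # Leading behaviors assembled from Section 3:
    #   <B'B'|phi>/<B'|Q|phi> ~ -(27 sqrt(6)/512) / (1-t)^3     (eq. 3.30)
    #   <B'|Q|B'>  = 1/(1-t^4)^3                                 (eq. 3.36)
    #   <B'B'|B'>  = -(3 sqrt(3)/4)^3 / ((1-t^2)^3 (1-t^4)^3)    (eq. 3.40)
    r1 = -(27 * math.sqrt(6) / 512) / (1 - t) ** 3
    num = 1.0 / (1 - t ** 4) ** 3
    den = -(3 * math.sqrt(3) / 4) ** 3 / ((1 - t ** 2) ** 3 * (1 - t ** 4) ** 3)
    return r1 * num / den

print(ratio(0.9999), math.sqrt(2) / 3)
```

As $t\to1$ the first printed number approaches $\sqrt2/3\approx0.4714$, with $O(1-t)$ corrections.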
What does this result imply? We do not think that it poses an immediate problem for the vacuum string field theory conjecture. First of all, we have chosen the butterfly state among infinitely many star-algebra projectors and regularized it in a particular way. There might be a better choice of a regulated star-algebra projector, although it is generally expected that all star-algebra projectors are formally equivalent. Another possibility is that subleading terms in the kinetic operator $Q$ contribute to $\langle\Psi|Q|\Psi\rangle$ at the same order as the leading term. Possible subleading terms in the solution $|\Psi\rangle$ may also contribute to $\langle\Psi|Q|\Psi\rangle$ or $\langle\Psi|\Psi*\Psi\rangle$ at the same order. When we say that $Q$ is dominated by $\mathcal{Q}/\epsilon$ as in (4.3), we mean that $\langle\phi_1|Q|\phi_2\rangle$ is dominated by $\langle\phi_1|\mathcal{Q}|\phi_2\rangle/\epsilon$ for any pair of states $|\phi_1\rangle$ and $|\phi_2\rangle$ in the Fock space. However, the twisted regulated butterfly state is not in the Fock space, so $\langle B'_t|\mathcal{Q}|B'_t\rangle/\epsilon$ may not dominate in $\langle B'_t|Q|B'_t\rangle$. We will demonstrate that this is in fact possible. Let us introduce the following operator:
\[
\mathcal{Q}_\eta = e^{\frac{\eta}{2}L_0}\,\mathcal{Q}\,e^{\frac{\eta}{2}L_0}. \tag{4.13}
\]
This operator cannot be regarded as a possible regularization of $\mathcal{Q}$ because it does not satisfy the requirements for a kinetic operator of string field theory; for example, $\mathcal{Q}_\eta^2$ does not vanish. We use this operator only to demonstrate the possible relevance of subleading terms in the quantity $\langle B'_t|\mathcal{Q}_\eta|B'_t\rangle$. Let us first consider the quantity $\langle B'_t|\mathcal{Q}_\eta|\phi\rangle$. It is more or less obvious that $\langle B'_t|\mathcal{Q}_\eta|\phi\rangle$ reduces to $\langle B'_t|\mathcal{Q}|\phi\rangle$ in the limit $t\to1$, $\eta\to0$ with the constraint $e^{\eta/2}t<1$.$^5$ We can also confirm this from an explicit expression of $\langle B'_t|\mathcal{Q}_\eta|\phi\rangle$:
\[
\langle B'_t|\mathcal{Q}_\eta|\phi\rangle = -\frac{it}{4p}\,\frac{(e^{2\eta}p^2t^2-1)^2}{\sqrt{1-e^{\eta}t^2}}\,e^{\frac{\eta}{2}}
\biggl[\bigl(1-ipe^{\frac{\eta}{2}}\sqrt{1-e^{\eta}t^2}\bigr)^2\Bigl\langle c\Bigl(\frac{1}{1+e^{\eta}pt}\Bigr)\,b(1)\,c\Bigl(\frac{1}{1-e^{\eta}pt}\Bigr)\,c\Bigl(\frac{1}{1-ipe^{\frac{\eta}{2}}\sqrt{1-e^{\eta}t^2}}\Bigr)\,h_{e^{\eta}t}\circ\phi(0)\Bigr\rangle
-\bigl(1+ipe^{\frac{\eta}{2}}\sqrt{1-e^{\eta}t^2}\bigr)^2\Bigl\langle c\Bigl(\frac{1}{1+e^{\eta}pt}\Bigr)\,b(1)\,c\Bigl(\frac{1}{1-e^{\eta}pt}\Bigr)\,c\Bigl(\frac{1}{1+ipe^{\frac{\eta}{2}}\sqrt{1-e^{\eta}t^2}}\Bigr)\,h_{e^{\eta}t}\circ\phi(0)\Bigr\rangle\biggr].
\]
(4.14)

We can also compute $\langle B'_t|\mathcal{Q}_\eta|B'_t\rangle$ exactly. When $e^{\frac{\eta}{2}L_0}$ acts on $|B'_t\rangle$, it just changes the parameter $t$ to $e^{\eta/2}t$ and the normalization of the state. It is not difficult to show that
\[
e^{\frac{\eta}{2}L_0}|B'_t\rangle = e^{-\frac{\eta}{2}}|B'_{e^{\eta/2}t}\rangle. \tag{4.15}
\]
$^5$ The constraint is necessary to avoid a singularity which occurs when the $c$ ghost from $\mathcal{Q}_\eta$ and the $b$ ghost from $|B'_t\rangle$ coincide.

Therefore, $\langle B'_t|\mathcal{Q}_\eta|B'_t\rangle$ is given by
\[
\langle B'_t|\mathcal{Q}_\eta|B'_t\rangle_{\rm density} = e^{-\eta}\langle B'_{e^{\eta/2}t}|\mathcal{Q}|B'_{e^{\eta/2}t}\rangle_{\rm density} = \frac{1}{e^{\eta}(1-e^{2\eta}t^4)^3}. \tag{4.16}
\]
We can see from this expression that subleading terms can contribute at the same order as the leading term. Let us look at the next-to-leading order. By comparing terms of $O(\eta)$ on both sides, we find$^6$
\[
\langle B'_t|(L_0\mathcal{Q}+\mathcal{Q}L_0)|B'_t\rangle_{\rm density} = \frac{12t^4}{(1-t^4)^4}-\frac{2}{(1-t^4)^3}. \tag{4.17}
\]
It behaves as $1/(1-t)^4$ when $t\to1$, which is more singular than the behavior $1/(1-t)^3$ of $\langle B'_t|\mathcal{Q}|B'_t\rangle$. Therefore, if $\eta$ scales as $1-t$, the next-to-leading term $\langle B'_t|(L_0\mathcal{Q}+\mathcal{Q}L_0)|B'_t\rangle$ contributes at the same order as the leading term $\langle B'_t|\mathcal{Q}|B'_t\rangle$. Higher-order terms in $\eta$ also contribute at the same order under this scaling. If we compute the quantity (4.11) with $\mathcal{Q}$ replaced by $\mathcal{Q}_\eta$, the result can be different from $\sqrt2/3$. When $\eta = 2a(1-t)$, for example, it is given by
\[
\lim_{t\to1}\frac{\langle B'_t*B'_t|\phi\rangle}{\langle B'_t|\mathcal{Q}_{2a(1-t)}|\phi\rangle}\,\frac{\langle B'_t|\mathcal{Q}_{2a(1-t)}|B'_t\rangle}{\langle B'_t*B'_t|B'_t\rangle} = \frac{\sqrt2}{3}\,\frac{1}{(1-a)^3}. \tag{4.18}
\]
As can be seen from this example, the quantity $\langle\Psi|Q|\Psi\rangle$ may not be saturated by $\langle\Psi|\mathcal{Q}|\Psi\rangle/\epsilon$, and this might be the origin of the discrepancy between the value $\sqrt2/3$ in (4.12) and the expected value 1. If this is the case, the factorization of the matter and ghost sectors at the leading order of vacuum string field theory might be ruined by the subleading terms of the kinetic operator.

Conclusion and discussion

We have presented the description of the twisted regulated butterfly state $|B'_t\rangle$ in terms of a CFT with a vanishing central charge, given by (2.14).
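The expansion step from (4.16) to (4.17) can be checked numerically: since $\mathcal{Q}_\eta = \mathcal{Q}+\frac{\eta}{2}(L_0\mathcal{Q}+\mathcal{Q}L_0)+O(\eta^2)$, the bracket in (4.17) is twice the $\eta$-derivative of (4.16) at $\eta=0$. A small finite-difference sketch, with function names of our own choosing:

```python
import math

def q_eta(t, eta):
    # <B'_t|Q_eta|B'_t>_density = 1 / (e^eta (1 - e^{2 eta} t^4)^3), eq. (4.16)
    return 1.0 / (math.exp(eta) * (1 - math.exp(2 * eta) * t ** 4) ** 3)

def lhs(t, h=1e-5):
    # twice the eta-derivative of (4.16) at eta = 0, by central difference
    return 2 * (q_eta(t, h) - q_eta(t, -h)) / (2 * h)

def rhs(t):
    # eq. (4.17)
    return 12 * t ** 4 / (1 - t ** 4) ** 4 - 2 / (1 - t ** 4) ** 3

for t in (0.2, 0.5, 0.8):
    print(lhs(t), rhs(t))
```

The two columns agree to the accuracy of the finite difference, supporting (4.17) as transcribed.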
This description enabled us to carry out exact computations involving the twisted regulated butterfly state without evaluating the conformal anomaly. As emphasized in [35], the generalized gluing and resmoothing theorem [8] holds only when the total central charge vanishes. With our description, we can now make use of this theorem for computations involving the twisted regulated butterfly state. Our method can also be applied to a class of other twisted surface states which satisfy the condition (2.40). We have derived exact expressions of the quantities $\langle B'_t|\mathcal{Q}|\phi\rangle$, $\langle B'_t*B'_t|\phi\rangle$, $\langle B'_t|\mathcal{Q}|B'_t\rangle$, and $\langle B'_t*B'_t|B'_t\rangle$ for any state $|\phi\rangle$ in the Fock space in Section 3, and we have provided some analytic formulas regarding the regulated butterfly state, such as the explicit expressions of the parameters (3.14) introduced in [37] and the exact relations between coordinates (3.17), (3.31), and (3.37).

In our description of the twisted regulated butterfly state, the way it solves the equation of motion of vacuum string field theory at the leading order can be understood through the OPE's (3.4) and (3.28). The twisted regulated butterfly state is represented as the regulated butterfly state with three operator insertions $c$, $b$, and $c$ along the boundary, in this order. In the computation of $\langle B'_t|\mathcal{Q}|\phi\rangle$, the $c$ ghost from $\mathcal{Q}$ approaches the $b$ ghost in $|B'_t\rangle$ and replaces it by the identity operator, which is the leading term of the OPE (3.4). In the computation of $\langle B'_t*B'_t|\phi\rangle$, six ghosts $c$, $b$, $c$, $c$, $b$, and $c$ are inserted along the boundary of the glued surface, in this order. Four of them, $b$, $c$, $c$, and $b$, approach the midpoint of the boundary and are replaced by the identity operator, which is the leading term of the OPE (3.28). In both cases, we end up with the same state at the leading order in the limit $t\to1$, namely the butterfly state with two $c$-ghost insertions on the boundary. Once we understand this mechanism, we can construct different solutions of the equation of motion of vacuum string field theory by appropriately inserting operators into the regulated butterfly state.
Furthermore, we can also use the same strategy to solve the equation of motion of Witten's string field theory, or even to solve the equations of motion of Witten's and vacuum string field theories simultaneously [43].

With the exact results obtained in Section 3, we have also discussed the finiteness of the energy density of the solution of vacuum string field theory. We first argued that subleading terms in the kinetic operator $Q$ are necessary in order to have a parameter corresponding to the string coupling constant. We then found that there exists a scaling limit of the parameter $\epsilon$ of vacuum string field theory and the parameter $t$ of the regulated butterfly state which gives a finite energy density, but whether or not this scaling limit is realized depends on the subleading terms of $Q$. Finally, we found that the equation of motion contracted with the solution itself is not compatible with that contracted with a state in the Fock space if we assume that the midpoint ghost insertion $\mathcal{Q}/\epsilon$ and the twisted regulated butterfly state dominate in the quantities $\langle\Psi|Q|\Psi\rangle$ and $\langle\Psi|\Psi*\Psi\rangle$. We demonstrated that it is indeed possible for subleading terms of $Q$ to contribute at the same order as the leading term in $\langle\Psi|Q|\Psi\rangle$. Unfortunately, we know very little about the subleading terms of $Q$. In fact, it is a nontrivial problem to construct a consistent next-to-leading term to be added to the leading term $\mathcal{Q}/\epsilon$.$^7$ Constructing a solution to the equation of motion of vacuum string field theory with a finite energy density by taking the singular limit of the regulated butterfly state thus seems rather subtle. The approach presented in this paper enables us to study this important issue quantitatively, and we believe that it will be useful for further investigation in the future.

In the case of a flat space-time in 26 dimensions, the matter sector of correlation functions is normalized as follows:
\[
\langle 1\rangle_{\rm matter} = \int d^{26}x.
\]
(A.2)

In this paper, we only consider correlation functions which are independent of space-time coordinates, so the space-time volume always factors out. The density of the correlation function of three $c$ ghosts in the whole system, which consists of the matter and ghost sectors, is given by
\[
\langle c(z_1)c(z_2)c(z_3)\rangle_{\rm density} = (z_1-z_2)(z_1-z_3)(z_2-z_3). \tag{A.3}
\]
The normalization of $|\phi\rangle$ is fixed by the condition that the SL(2,R)-invariant vacuum $|0\rangle$ corresponds to the identity operator. From the normalization of correlation functions (A.3) and the mode expansion (2.25), the normalization of the inner product is then fixed as follows:
\[
\langle0|c_{-1}c_0c_1|0\rangle_{\rm density} = 1, \tag{A.4}
\]
where the inner product has been divided by the space-time volume, as denoted by the subscript density.

These relations follow from transformations which map the glued surfaces in the $\hat z$ representation to an upper-half plane. Let us first consider a simpler problem. Prepare a regulated butterfly state in the $\hat z$ representation, and glue the left half of the open string with the right half. The resulting surface can be represented as the region $|\mathrm{Re}\,\hat z|\le\pi/4$ of an upper-half plane with a boundary running from $-\pi/4+i\,\mathrm{arctanh}\,t$ to $\pi/4+i\,\mathrm{arctanh}\,t$ as
\[
-\frac{\pi}{4}+i\,\mathrm{arctanh}\,t \to -\frac{\pi}{4} \to \frac{\pi}{4} \to \frac{\pi}{4}+i\,\mathrm{arctanh}\,t, \tag{C.1}
\]
and with $-\pi/4+iy$ identified with $\pi/4+iy$ for $y\ge\mathrm{arctanh}\,t$. A conformal transformation which maps this surface to an upper-half plane can easily be found because the structure of this surface is the same as that of a surface for $\langle\phi|B'_t\rangle$ with a state $|\phi\rangle$ in the Fock space. Therefore, it takes the following form:
\[
\hat z = A\arctan\frac{Bz}{\sqrt{1-q^2z^2}},\qquad \frac{d\hat z}{dz} = \frac{AB}{(1+(B^2-q^2)z^2)\sqrt{1-q^2z^2}}, \tag{C.2}
\]
where $A$, $B$, and $q$ are parameters to be determined. We choose $z=i$ to be the open-string midpoint, which corresponds to infinity in the $\hat z$ coordinate. Then it follows from the argument in [37] that $d\hat z/dz$ must have a pole at $z=\pm i$. This condition determines $B=\sqrt{1+q^2}$.
The parameter $A$ is fixed by the condition
\[
\frac{\pi}{2} = \int_{-\infty}^{\infty}dz\,\frac{d\hat z}{dz} = \int_{-\infty}^{\infty}dz\,\frac{A\sqrt{1+q^2}}{(1+z^2)\sqrt{1-q^2z^2}}. \tag{C.3}
\]
By evaluating the residue at the pole $z=i$ of $d\hat z/dz$, this condition determines $A=1/2$. The parameter $q$ is determined in terms of $t$ by the condition that the point $\hat z = \pi/4+i\,\mathrm{arctanh}\,t$ should be mapped to infinity in the $z$ coordinate. From
\[
z^2 = \frac{\tan^2 2\hat z}{1+q^2+q^2\tan^2 2\hat z}, \tag{C.4}
\]
it follows that
\[
1+q^2+q^2\tan^2\Bigl(\frac{\pi}{2}+2i\,\mathrm{arctanh}\,t\Bigr) = 0. \tag{C.5}
\]
Therefore, $q$ is given by $2t/(1-t^2)$. To summarize, the conformal transformation is determined to be
\[
\hat z = \frac12\arctan\frac{\sqrt{1+q^2}\,z}{\sqrt{1-q^2z^2}} \tag{C.6}
\]
with
\[
q = \frac{2t}{1-t^2}. \tag{C.7}
\]
The conformal transformations associated with $\langle B'_t|\mathcal{Q}|B'_t\rangle$ and $\langle B'_t*B'_t|B'_t\rangle$ are constructed from (C.6) as described below. From the relation between $\tan\hat z_i$ and the $z_i$ coordinate in (3.15), $\tan^2 2\hat z_i$ is given by
\[
\tan^2 2\hat z_i = \Bigl(\frac{2\tan\hat z_i}{1-\tan^2\hat z_i}\Bigr)^2 = \frac{4z_i^2(1-t^2z_i^2)}{[1-(1+t^2)z_i^2]^2} = \frac{4(z'^2-t^2)}{[z'^2-(1+t^2)]^2}, \tag{C.15}
\]
where we have introduced $z'_i = -1/z_i$ as in Subsection 3.2, and $z'$ stands for one of the $z'_i$. From (C.10), (C.13), (C.14), and (C.15), the relation between $z$ and $z'$ is given by
\[
\frac{4(z'^2-t^2)}{[z'^2-(1+t^2)]^2} = \frac{(1-z^2)^2-4q^2z^2}{4(1+q^2)z^2} \tag{C.16}
\]
for $\langle B'_t|\mathcal{Q}|B'_t\rangle$, and
\[
\frac{4(z'^2-t^2)}{[z'^2-(1+t^2)]^2} = \frac{z^2(z^2-3)^2-q^2(1-3z^2)^2}{(1+q^2)(1-3z^2)^2} \tag{C.17}
\]
for $\langle B'_t*B'_t|B'_t\rangle$.

The ghost number of $|B'_t\rangle$ is one, so the ghost number of $|\phi\rangle$ must be two in order to have a nonvanishing inner product. Therefore, $\langle B'_t|\phi\rangle = \langle\phi|B'_t\rangle$, and similarly for the corresponding twisted state.

Exact computations of various quantities

Now that we have the representation (2.14) of the twisted regulated butterfly state $|B'_t\rangle$ in terms of a CFT with a vanishing central charge, we can compute various quantities involving this state without evaluating the conformal anomaly.
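As a consistency check of (C.16), equivalently (3.31), against the insertion points quoted in (3.33), the $c$-ghost positions $z = \pm(1\pm t)/(1\mp t)$ should correspond to $z'=\pm t$, so both sides of the relation should vanish there. A short numerical sketch with our own function names:

```python
import math

def lhs_c16(zp, t):
    # left-hand side of eq. (C.16)/(3.31)
    return 4 * (zp ** 2 - t ** 2) / (zp ** 2 - (1 + t ** 2)) ** 2

def rhs_c16(z, t):
    # right-hand side of eq. (C.16)/(3.31), with q = 2t/(1-t^2), eq. (3.32)
    q = 2 * t / (1 - t ** 2)
    return ((1 - z ** 2) ** 2 - 4 * q ** 2 * z ** 2) / (4 * (1 + q ** 2) * z ** 2)

t = 0.4
for z in ((1 - t) / (1 + t), (1 + t) / (1 - t)):
    # at z' = +-t the left-hand side is 0; the right-hand side should match
    print(lhs_c16(t, t), rhs_c16(z, t))
```

Both sides vanish at these points, as required by the mapping (3.33).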
We are particularly interested in the quantities $\langle B'_t|\mathcal{Q}|\phi\rangle$, $\langle B'_t*B'_t|\phi\rangle$, $\langle B'_t|\mathcal{Q}|B'_t\rangle$, and $\langle B'_t*B'_t|B'_t\rangle$ in the context of vacuum string field theory.

$^3$ There is a possibility that the singularity from the coincident operators is canceled by singular conformal factors so that we obtain a finite result in the limit $\epsilon\to0$. We do not know whether the singularity resulting from the operator insertion at the open-string midpoint is really harmful or just an artifact of our description. However, regularization is necessary as long as we use our description of the twisted wedge state.

$^4$ In [37], $\alpha$, $\beta$, and $\gamma$ are denoted by $a$, $b$, and $c$, respectively. We have changed the names to avoid possible confusion with the $bc$ ghosts. In fact, the points $\pm1/b$ and $\pm1/c$ are the positions where $b$ and $c$ ghosts, respectively, will be inserted.

$\langle B'_t*B'_t|B'_t\rangle$ is given by (3.39) with the overall factor
\[
-\frac{a^6(9-a^2)^6}{2^9\,3^3\,t^3(1-t^4)^3(3+a^2)^6(1-a^2)^2}.
\]
$|B'_t\rangle$ reduces to $c_1|0\rangle$ in the limit $t\to0$. On the other hand, $\langle B'_t*B'_t|B'_t\rangle$ diverges in the butterfly limit $t\to1$ as $1/(1-t)^6$. The quantities $\langle B'_t|\mathcal{Q}|\phi\rangle$, $\langle B'_t*B'_t|\phi\rangle$, $\langle B'_t|\mathcal{Q}|B'_t\rangle$, and $\langle B'_t*B'_t|B'_t\rangle$ are computed for any state $|\phi\rangle$ in the Fock space in Section 3.

$^6$ This quantity has been computed in Appendix C of [36] in the operator formalism.

$^7$ An interesting family of kinetic operators of cubic string field theory was recently constructed [26] and studied [28, 44, 45, 32, 34].

To compute $\langle B'_t|\mathcal{Q}|B'_t\rangle$ and $\langle B'_t*B'_t|B'_t\rangle$, we have to glue two and three regulated butterfly states, respectively. Gluing them can easily be done in the $\hat z$ representation, as in the case of $\langle B'_t*B'_t|\phi\rangle$ in Subsection 3.2. We then need to derive the conformal transformations between the coordinates.

Acknowledgements

I would like to thank Takuya Okuda, Hirosi Ooguri, Martin Schnabl, and Barton Zwiebach for useful discussions.
This work was supported in part by the DOE grants DF-FC02-94ER40818 (MIT) and DE-FG03-92ER40701 (Caltech), and by a McCone Fellowship in Theoretical Physics from the California Institute of Technology.

Appendix A. Conformal field theory formulation of string field theory

In the CFT formulation of string field theory [7, 8], an open string field is represented as a wave functional obtained by a path integral over a certain region of a Riemann surface. For example, a state $|\phi\rangle$ in the Fock space can be represented as a wave functional on the arc $|\xi|=1$ in the upper-half complex $\xi$ plane by path-integrating over the interior of the upper half of the unit disk $|\xi|<1$, with the corresponding operator $\phi(0)$ inserted at the origin and with the open-string boundary condition imposed on the part of the real axis $-1\le\xi\le1$. A more general class of states, such as the regulated butterfly state, can be defined by a path integral over a different region of a Riemann surface with a boundary and with possible operator insertions. When we parameterize the open string on the arc as $\xi=e^{i\theta}$ with $0\le\theta\le\pi$, we refer to the region $\pi/2\le\theta\le\pi$ as the left half of the open string and to the region $0\le\theta\le\pi/2$ as the right half of the open string. We also refer to the point $\theta=\pi/2$ as the open-string midpoint. We use the standard definitions [4] of the inner product $\langle\phi_1|\phi_2\rangle$ and the star product $|\phi_1*\phi_2\rangle$. The state $|\phi_1*\phi_2\rangle$ is defined by gluing the right half of the open string of $|\phi_1\rangle$ to the left half of the open string of $|\phi_2\rangle$. Gluing can be carried out by conformal transformations which map the two regions to be glued into the same region. The inner product $\langle\phi_1|\phi_2\rangle$ is defined by gluing the left and right halves of the open string of $|\phi_1*\phi_2\rangle$.

We use the doubling trick throughout the paper. For example, $bc$ ghosts on an upper-half plane are extended to the lower-half plane by $c(z)=\tilde c(\bar z)$ and $b(z)=\tilde b(\bar z)$.
The normalization of correlation functions of the $bc$ ghost system is given by (A.1).

Appendix B. Derivation of (2.46)

We will derive the expression (2.46) from (2.45).

Appendix C. Derivation of (3.31) and (3.37)

We will derive (3.31) and (3.37) in this appendix. The conformal transformations associated with $\langle B'_t|\mathcal{Q}|B'_t\rangle$ and $\langle B'_t*B'_t|B'_t\rangle$ can be constructed from (C.6) by the following trick. In the case of $\langle B'_t|\mathcal{Q}|B'_t\rangle$, let us consider the conformal transformation associated with a wedge state (2.43) with $n=4$, where a different normalization has been chosen such that $\tilde z=i$ is mapped to $z=i$. If the coordinate $\tilde z$ is mapped to $\hat z$ by the map (C.6), the coordinate $z$ is mapped to the surface associated with $\langle B'_t|\mathcal{Q}|B'_t\rangle$, and the map from $z$ to $\hat z$ follows. In the case of $\langle B'_t*B'_t|B'_t\rangle$, consider the conformal transformation associated with a wedge state with $n=6$ and make an inversion, where the normalization has again been chosen such that $\tilde z=i$ is mapped to $z=i$. The purpose of the inversion is to avoid an operator insertion at $z=\infty$. If the coordinate $\tilde z$ is mapped to $\hat z$ by the map (C.6), the coordinate $z$ is mapped to the surface associated with $\langle B'_t*B'_t|B'_t\rangle$, and the map from $z$ to $\hat z$ follows. Let us next consider the relation between the $\hat z$ coordinate for either $\langle B'_t|\mathcal{Q}|B'_t\rangle$ or $\langle B'_t*B'_t|B'_t\rangle$ and the $\hat z_i$ coordinate for each of the $|B'_t\rangle$'s, namely $i=1,2$ for $\langle B'_t|\mathcal{Q}|B'_t\rangle$ and $i=1,2,3$ for $\langle B'_t*B'_t|B'_t\rangle$. They are related by $\hat z=\hat z_i+m\pi/4$ with an appropriate odd integer $m$, so that
\[
\tan^2 2\hat z_i = \cot^2 2\hat z. \tag{C.14}
\]

References

[1] L. Rastelli, A. Sen and B. Zwiebach, "String field theory around the tachyon vacuum," Adv. Theor. Math. Phys. 5, 353 (2002) [arXiv:hep-th/0012251].

[2] L. Rastelli, A. Sen and B.
Zwiebach, "Classical solutions in string field theory around the tachyon vacuum," Adv. Theor. Math. Phys. 5, 393 (2002) [arXiv:hep-th/0102112]. L Rastelli, A Sen, B Zwiebach, arXiv:hep-th/0106010Vacuum string field theory. L. Rastelli, A. Sen and B. Zwiebach, "Vacuum string field theory," arXiv:hep-th/0106010. Noncommutative Geometry And String Field Theory. E Witten, Nucl. Phys. B. 268253E. Witten, "Noncommutative Geometry And String Field Theory," Nucl. Phys. B 268, 253 (1986). W Taylor, B Zwiebach, arXiv:hep-th/0311017D-Branes, Tachyons, and String Field Theory. W. Taylor and B. Zwiebach, "D-Branes, Tachyons, and String Field Theory," arXiv:hep-th/0311017. Analytical construction of a nonperturbative vacuum for the open bosonic string. V A Kostelecky, R Potting, arXiv:hep-th/0008252Phys. Rev. D. 6346007V. A. Kostelecky and R. Potting, "Analytical construction of a nonperturbative vacuum for the open bosonic string," Phys. Rev. D 63, 046007 (2001) [arXiv:hep-th/0008252]. String Field Theory On The Conformal Plane. 1. Kinematical Principles. A Leclair, M E Peskin, C R Preitschopf, Nucl. Phys. B. 317411A. LeClair, M. E. Peskin and C. R. Preitschopf, "String Field Theory On The Conformal Plane. 1. Kinematical Principles," Nucl. Phys. B 317, 411 (1989). String Field Theory On The Conformal Plane. 2. Generalized Gluing. A Leclair, M E Peskin, C R Preitschopf, Nucl. Phys. B. 317464A. LeClair, M. E. Peskin and C. R. Preitschopf, "String Field Theory On The Conformal Plane. 2. Generalized Gluing," Nucl. Phys. B 317, 464 (1989). Boundary CFT construction of D-branes in vacuum string field theory. L Rastelli, A Sen, B Zwiebach, arXiv:hep-th/0105168JHEP. 011145L. Rastelli, A. Sen and B. Zwiebach, "Boundary CFT construction of D-branes in vacuum string field theory," JHEP 0111, 045 (2001) [arXiv:hep-th/0105168]. Oscillator representation of the BCFT construction of D-branes in vacuum string field theory. P Mukhopadhyay, arXiv:hep-th/0110136JHEP. 011225P. 
Mukhopadhyay, "Oscillator representation of the BCFT construction of D-branes in vacuum string field theory," JHEP 0112, 025 (2001) [arXiv:hep-th/0110136]. Ratio of tensions from vacuum string field theory. K Okuyama, arXiv:hep-th/0201136JHEP. 020350K. Okuyama, "Ratio of tensions from vacuum string field theory," JHEP 0203, 050 (2002) [arXiv:hep-th/0201136]. Open string states and D-brane tension from vacuum string field theory. Y Okawa, arXiv:hep-th/0204012JHEP. 02073Y. Okawa, "Open string states and D-brane tension from vacuum string field theory," JHEP 0207, 003 (2002) [arXiv:hep-th/0204012]. Open string states around a classical solution in vacuum string field theory. H Hata, T Kawano, arXiv:hep-th/0108150JHEP. 011138H. Hata and T. Kawano, "Open string states around a classical solution in vacuum string field theory," JHEP 0111, 038 (2001) [arXiv:hep-th/0108150]. Operator Formulation Of Interacting String Field Theory. D J Gross, A Jevicki, Nucl. Phys. B. 2831D. J. Gross and A. Jevicki, "Operator Formulation Of Interacting String Field Theory," Nucl. Phys. B 283, 1 (1987). Operator Formulation Of Interacting String Field Theory. 2. D J Gross, A Jevicki, Nucl. Phys. B. 287225D. J. Gross and A. Jevicki, "Operator Formulation Of Interacting String Field Theory. 2," Nucl. Phys. B 287, 225 (1987). Covariant Interacting String Field Theory In The Fock Space Representation. N Ohta, Phys. Rev. D. 343785Erratum-ibid. D 35, 2627 (1987)N. Ohta, "Covariant Interacting String Field Theory In The Fock Space Representation," Phys. Rev. D 34, 3785 (1986) [Erratum-ibid. D 35, 2627 (1987)]. The Vertex Function In Witten's Formulation Of String Field Theory. E Cremmer, A Schwimmer, C B Thorn, Phys. Lett. B. 17957E. Cremmer, A. Schwimmer and C. B. Thorn, "The Vertex Function In Witten's Formulation Of String Field Theory," Phys. Lett. B 179, 57 (1986). The Ghost Vertex In E. Witten's String Field Theory. S Samuel, Phys. Lett. B. 181255S. Samuel, "The Ghost Vertex In E. 
Witten's String Field Theory," Phys. Lett. B 181, 255 (1986). Ghost structure and closed strings in vacuum string field theory. D Gaiotto, L Rastelli, A Sen, B Zwiebach, arXiv:hep-th/0111129Adv. Theor. Math. Phys. 6403D. Gaiotto, L. Rastelli, A. Sen and B. Zwiebach, "Ghost structure and closed strings in vacuum string field theory," Adv. Theor. Math. Phys. 6, 403 (2003) [arXiv:hep-th/0111129]. The equality of solutions in vacuum string field theory. T Okuda, arXiv:hep-th/0201149Nucl. Phys. B. 641393T. Okuda, "The equality of solutions in vacuum string field theory," Nucl. Phys. B 641, 393 (2002) [arXiv:hep-th/0201149]. Ghost kinetic operator of vacuum string field theory. K Okuyama, arXiv:hep-th/0201015JHEP. 020127K. Okuyama, "Ghost kinetic operator of vacuum string field theory," JHEP 0201, 027 (2002) [arXiv:hep-th/0201015]. A Purely Cubic Action For String Field Theory. G T Horowitz, J Lykken, R Rohm, A Strominger, Phys. Rev. Lett. 57283G. T. Horowitz, J. Lykken, R. Rohm and A. Strominger, "A Purely Cubic Action For String Field Theory," Phys. Rev. Lett. 57, 283 (1986). Wilson lines and classical solutions in cubic open string field theory. T Takahashi, S Tanimoto, arXiv:hep-th/0107046Prog. Theor. Phys. 106T. Takahashi and S. Tanimoto, "Wilson lines and classical solutions in cubic open string field theory," Prog. Theor. Phys. 106, 863 (2001) [arXiv:hep-th/0107046]. CFT description of identity string field: Toward derivation of the VSFT action. I Kishimoto, K Ohmori, arXiv:hep-th/0112169JHEP. 020536I. Kishimoto and K. Ohmori, "CFT description of identity string field: Toward derivation of the VSFT action," JHEP 0205, 036 (2002) [arXiv:hep-th/0112169]. Exact solutions of open bosonic string field theory. J Kluson, arXiv:hep-th/0202045JHEP. 020443J. Kluson, "Exact solutions of open bosonic string field theory," JHEP 0204, 043 (2002) [arXiv:hep-th/0202045]. Marginal and scalar solutions in cubic open string field theory. 
T Takahashi, S Tanimoto, arXiv:hep-th/0202133JHEP. 020333T. Takahashi and S. Tanimoto, "Marginal and scalar solutions in cubic open string field theory," JHEP 0203, 033 (2002) [arXiv:hep-th/0202133]. Marginal deformations in the open bosonic string field theory for N D0-branes. J Kluson, arXiv:hep-th/0203089Class. Quant. Grav. 20J. Kluson, "Marginal deformations in the open bosonic string field theory for N D0-branes," Class. Quant. Grav. 20, 827 (2003) [arXiv:hep-th/0203089]. Open string field theory around universal solutions. I Kishimoto, T Takahashi, arXiv:hep-th/0205275Prog. Theor. Phys. 108591I. Kishimoto and T. Takahashi, "Open string field theory around universal solutions," Prog. Theor. Phys. 108, 591 (2002) [arXiv:hep-th/0205275]. J Kluson, arXiv:hep-th/0205294New solution of the open bosonic string field theory. J. Kluson, "New solution of the open bosonic string field theory," arXiv:hep-th/0205294. Time dependent solution in open bosonic string field theory. J Kluson, arXiv:hep-th/0208028J. Kluson, "Time dependent solution in open bosonic string field theory," arXiv:hep-th/0208028. Exact solutions in open bosonic string field theory and marginal deformation in CFT. J Kluson, arXiv:hep-th/0209255J. Kluson, "Exact solutions in open bosonic string field theory and marginal deformation in CFT," arXiv:hep-th/0209255. Tachyon condensation and universal solutions in string field theory. T Takahashi, arXiv:hep-th/0302182Nucl. Phys. B. 670161T. Takahashi, "Tachyon condensation and universal solutions in string field theory," Nucl. Phys. B 670, 161 (2003) [arXiv:hep-th/0302182]. Exact solutions in SFT and marginal deformation in BCFT. J Kluson, arXiv:hep-th/0303199JHEP. 031250J. Kluson, "Exact solutions in SFT and marginal deformation in BCFT," JHEP 0312, 050 (2003) [arXiv:hep-th/0303199]. Gauge fixing and scattering amplitudes in string field theory around universal solutions. T Takahashi, S Zeze, arXiv:hep-th/0304261Prog. Theor. Phys. 110159T. Takahashi and S. 
Zeze, "Gauge fixing and scattering amplitudes in string field theory around universal solutions," Prog. Theor. Phys. 110, 159 (2003) [arXiv:hep-th/0304261]. On the generalized gluing and resmoothing theorem. T Asakawa, T Kugo, T Takahashi, arXiv:hep-th/9805119Prog. Theor. Phys. 100437T. Asakawa, T. Kugo and T. Takahashi, "On the generalized gluing and resmoothing theorem," Prog. Theor. Phys. 100, 437 (1998) [arXiv:hep-th/9805119]. Anomalous reparametrizations and butterfly states in string field theory. M Schnabl, arXiv:hep-th/0202139Nucl. Phys. B. 649101M. Schnabl, "Anomalous reparametrizations and butterfly states in string field theory," Nucl. Phys. B 649, 101 (2003) [arXiv:hep-th/0202139]. Star algebra projectors. D Gaiotto, L Rastelli, A Sen, B Zwiebach, arXiv:hep-th/0202151JHEP. 020460D. Gaiotto, L. Rastelli, A. Sen and B. Zwiebach, "Star algebra projectors," JHEP 0204, 060 (2002) [arXiv:hep-th/0202151]. Split string field theory II. D J Gross, W Taylor, arXiv:hep-th/0106036JHEP. 010810D. J. Gross and W. Taylor, "Split string field theory II," JHEP 0108, 010 (2001) [arXiv:hep-th/0106036]. The singular geometry of the sliver. G Moore, W Taylor, arXiv:hep-th/0111069JHEP. 02014G. Moore and W. Taylor, "The singular geometry of the sliver," JHEP 0201, 004 (2002) [arXiv:hep-th/0111069]. Tachyon potentials, star products and universality. L Rastelli, B Zwiebach, arXiv:hep-th/0006240JHEP. 010938L. Rastelli and B. Zwiebach, "Tachyon potentials, star products and universality," JHEP 0109, 038 (2001) [arXiv:hep-th/0006240]. Some properties of string field algebra. I Kishimoto, arXiv:hep-th/0110124JHEP. 01127I. Kishimoto, "Some properties of string field algebra," JHEP 0112, 007 (2001) [arXiv:hep-th/0110124]. Wedge states in string field theory. M Schnabl, arXiv:hep-th/0201095JHEP. 03014M. Schnabl, "Wedge states in string field theory," JHEP 0301, 004 (2003) [arXiv:hep-th/0201095]. Solving Witten's string field theory using the butterfly state. 
Y Okawa, arXiv:hep-th/0311115Phys. Rev. D. 6986001Y. Okawa, "Solving Witten's string field theory using the butterfly state," Phys. Rev. D 69, 086001 (2004) [arXiv:hep-th/0311115]. Closed string amplitudes from gauge fixed string field theory. N Drukker, arXiv:hep-th/0207266Phys. Rev. D. 67126004N. Drukker, "Closed string amplitudes from gauge fixed string field theory," Phys. Rev. D 67, 126004 (2003) [arXiv:hep-th/0207266]. On different actions for the vacuum of bosonic string field theory. N Drukker, arXiv:hep-th/0301079JHEP. 030817N. Drukker, "On different actions for the vacuum of bosonic string field theory," JHEP 0308, 017 (2003) [arXiv:hep-th/0301079].
Title: Simulation of spin-polarized scanning tunneling spectroscopy on complex magnetic surfaces: Case of a Cr monolayer on Ag(111)

Authors: Krisztián Palotás, Werner A. Hofer, László Szunyogh

Affiliations:
- Department of Theoretical Physics, Budapest University of Technology and Economics, Budafoki út 8, H-1111 Budapest, Hungary
- Surface Science Research Centre, University of Liverpool, Liverpool L69 3BX, United Kingdom
- Department of Theoretical Physics and Condensed Matter Research Group of the Hungarian Academy of Sciences, Budapest University of Technology and Economics, Budafoki út 8, H-1111 Budapest, Hungary

Abstract: We propose a computationally efficient atom-superposition-based method for simulating spin-polarized scanning tunneling spectroscopy (SP-STS) on complex magnetic surfaces based on the sample and tip electronic structures obtained from first principles. We go beyond the commonly used local density of states (LDOS) approximation for the differential conductance, dI/dV. The capabilities of our approach are illustrated for a Cr monolayer on a Ag(111) surface in a noncollinear magnetic state. We find evidence that the simulated tunneling spectra and magnetic asymmetries are sensitive to the tip electronic structure, and we analyze the contributing terms. Related to SP-STS experiments, we show a way to simulate two-dimensional differential conductance maps and qualitatively correct effective spin polarization maps on a constant current contour above a magnetic surface.

DOI: 10.1103/PhysRevB.85.205427
arXiv: 1202.1257
(Dated: May 8, 2012)
PACS numbers: 72.25.Ba, 68.37.Ef, 71.15.-m, 73.22.-f
Electronic address: palotas@phy.bme.hu

I. INTRODUCTION

The scanning tunneling microscope (STM) and its spectroscopic mode (STS) have proved extremely useful for studying local physical and chemical phenomena on surfaces since the invention of the STM 30 years ago [1,2].
The progress of experimental techniques over the last two decades has been remarkable; thus, more sophisticated theoretical models and simulation tools are needed to explain all relevant details of electron tunneling transport measurements [3,4]. Recent STS theory and applications focus on extracting surface local electronic properties from experimental differential conductance (dI/dV) data [5-9]. The role of the tip electronic structure has been identified as crucial for the dI/dV tunneling spectra, see e.g. Refs. [7,10], and a theoretical method has been proposed to separate the tip and sample contributions to STS [11]. An emerging research field in surface science is the investigation of magnetism at the nanoscale and atomic scale, with the aim of achieving ultrahigh information density for data storage purposes [12,13]. Spin-polarized scanning tunneling microscopy (SP-STM) [14] is admittedly an important tool for studying magnetism on surfaces. Recent experimental advances using this technique allow the investigation of complex magnetic structures (frustrated antiferromagnets, spin spirals, skyrmion lattices, etc.) [15-18]. Spin-polarized scanning tunneling spectroscopy (SP-STS) has recently been used to find inversion of the spin polarization above magnetic adatoms [19-21], and the effect has been explained theoretically [22]. Furthermore, SP-STS is useful for studying atomic magnetism [23], many-body effects on substrate-supported adatoms [24], and magnetic interactions between adatoms [25]. The effect of differently magnetized surface regions on SP-STS has also been reported [26,27], and the role of tip effects on SP-STS [28,29] and on achieving giant magnetic contrast [30] has been highlighted. Our work is concerned with the presentation of an efficient simulation method for SP-STS based on first principles electronic structure data.
We extend our atom-superposition-based method [29,31] in the spin-polarized Tersoff-Hamann framework [32] for simulating SP-STS by including the bias dependent background and tip-derivative terms in the calculated differential conductance, following Passoni et al. [7]. The method is computationally cheap and can be applied using the results of any ab initio electronic structure code. The main advance of our tunneling model is the inclusion of the tip electronic structure, which is neglected in Refs. [32,33] and taken into account only in a model way in Ref. [7]. Our method, based on a first principles calculation of the tip electronic structure, enables the study of tip effects on the SP-STS spectra. Taking a prototype frustrated hexagonal antiferromagnetic system, a Cr monolayer on Ag(111) in a noncollinear magnetic 120° Néel state, we simulate differential conductance tunneling spectra and magnetic asymmetries to illustrate the applicability of our method, and we analyze the contributing terms. Note that a three-dimensional (3D) approach to STS has been presented recently which is applicable to nonmagnetic systems only; it takes into account the symmetry of the tip states but not the electronic structure of the tip apex [34]. Our model is also a 3D approach in the sense that we sum up contributions from individual transitions between the tip apex atom and each of the surface atoms, assuming the one-dimensional (1D) Wentzel-Kramers-Brillouin (WKB) approximation for electron tunneling in all these transitions; thus we call it a 3D WKB approach. The paper is organized as follows. The atom-superposition theoretical model of SP-STS is presented in section II. As an application, we investigate the surface of one monolayer (ML) of Cr on Ag(111) in section III. We simulate differential conductance tunneling spectra and magnetic asymmetries with two tip models, and we analyze the contributing terms to dI/dV.
Moreover, we show simulation results of bias dependent two-dimensional (2D) differential conductance and qualitatively correct effective spin polarization maps following a constant current contour above the surface, corresponding to a standard experimental setup. Our conclusions are found in section IV. Finally, in appendix A, we report the 1D WKB theory of STS and give alternative expressions for the dI/dV.

II. THEORETICAL MODEL OF ATOM-SUPERPOSITION SP-STS

The 1D WKB theory for nonmagnetic STS is a well established approach [5,35], see appendix A. Here, we extend it to spin-polarized systems, and adapt it to an atom superposition framework, which enables a computationally inexpensive calculation of tunneling properties based on first principles electronic structure data. In magnetic STM junctions, the total tunneling current can be decomposed into non-spin-polarized (TOPO) and spin-polarized (MAGN) parts [32,33,36,37],
$$I_{\rm TOTAL}=I_{\rm TOPO}+I_{\rm MAGN}.\tag{1}$$
Following the spin-polarized Tersoff-Hamann model [32] and its adaptation to the atom superposition framework [31,33], the magnetic contribution to the simple expression of the differential conductance at a given energy is proportional to the scalar product of the tip and sample magnetic density of states (DOS) vectors, $\mathbf{m}_T(E)$ and $\mathbf{m}_S(E)$, respectively,
$$\frac{dI_{\rm MAGN}}{dU}(E)\propto\mathbf{m}_T(E)\cdot\mathbf{m}_S(E).\tag{2}$$
Thus, the spin-polarized parts of dI/dV can similarly be calculated within the 1D WKB approximation as reported in appendix A, just replacing $n_T(E)n_S(E)$ by $\mathbf{m}_T(E)\cdot\mathbf{m}_S(E)$. We formulate the tunneling current, the differential conductance, and their TOPO and MAGN parts within the atom superposition framework following Ref. [31]. Here, we assume that electrons tunnel through one tip apex atom, and we sum up contributions from individual transitions between this apex atom and each of the surface atoms, assuming the 1D WKB approximation for electron tunneling processes in all these transitions.
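As an illustration, not part of the original formalism, the decomposition in Eqs. (1) and (2) can be sketched in a few lines of Python. The function name, the scalar DOS inputs, and the example magnetization vectors are assumptions of the sketch, and all prefactors are dropped:

```python
import numpy as np

def dIdU_parts(n_T, n_S, m_T, m_S):
    # Energy-resolved integrands of the decomposition in Eq. (1):
    # the TOPO part goes with the product of the charge DOS, while the
    # MAGN part, Eq. (2), goes with the scalar product of the
    # (generally noncollinear) magnetization DOS vectors.
    topo = n_T * n_S
    magn = float(np.dot(m_T, m_S))
    return topo, magn

# Tip and sample moments with antiparallel z components weaken the signal:
topo, magn = dIdU_parts(1.0, 1.0,
                        np.array([0.0, 0.0, 0.5]),
                        np.array([0.0, 0.0, -0.8]))
# magn is negative here, so topo + magn < topo
```

Note that the scalar product makes the magnetic contribution sensitive to the relative orientation of the tip and sample moments, which is the origin of the magnetic contrast discussed later.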
The tunneling current at the tip position $\mathbf{R}_{\rm TIP}(x,y,z)$ and at bias voltage V is given by
$$I_{\rm TOTAL}(x,y,z,V)=I_{\rm TOPO}(x,y,z,V)+I_{\rm MAGN}(x,y,z,V),\tag{3}$$
where the TOPO and MAGN terms are formally given as
$$I_{\rm TOPO}(x,y,z,V)=\int_0^V \frac{dI_{\rm TOPO}}{dU}(x,y,z,U,V)\,dU,\tag{4}$$
$$I_{\rm MAGN}(x,y,z,V)=\int_0^V \frac{dI_{\rm MAGN}}{dU}(x,y,z,U,V)\,dU.\tag{5}$$
The integrands are the so-called virtual differential conductances,
$$\frac{dI_{\rm TOPO}}{dU}(x,y,z,U,V)=\varepsilon^2\frac{e^2}{h}\sum_\alpha e^{-2\kappa(U,V)d_\alpha(x,y,z)}\,n_T(E_F^T+eU-eV)\,n_S^\alpha(E_F^S+eU),\tag{6}$$
$$\frac{dI_{\rm MAGN}}{dU}(x,y,z,U,V)=\varepsilon^2\frac{e^2}{h}\sum_\alpha e^{-2\kappa(U,V)d_\alpha(x,y,z)}\,\mathbf{m}_T(E_F^T+eU-eV)\cdot\mathbf{m}_S^\alpha(E_F^S+eU).\tag{7}$$
Here, e is the elementary charge, h the Planck constant, and $E_F^T$ and $E_F^S$ the Fermi energies of the tip and the sample surface, respectively. The factor $\varepsilon^2 e^2/h$ ensures that dI/dU is correctly measured in units of A/V; ε has been chosen as 1 eV, but its actual value has to be determined by comparing simulation results to experiments. The sum over α corresponds to the atomic superposition and has to be carried out, in principle, over all surface atoms. Convergence tests, however, showed that including a relatively small number of atoms in the sum provides converged dI/dU values [29]. The tip and sample electronic structures are included in this model via the DOS projected onto the atoms (PDOS): $n_T(E)$ and $n_S^\alpha(E)$ denote the charge PDOS of the tip apex and of the αth surface atom, respectively, while $\mathbf{m}_T(E)$ and $\mathbf{m}_S^\alpha(E)$ are the magnetization PDOS vectors projected onto the corresponding atomic spheres. They can be obtained from collinear or noncollinear electronic structure calculations [31]. In the present work we determine the noncollinear PDOS for the sample surface and we use a collinear PDOS for a model CrFe tip [22].
The transmission probability for electrons tunneling between states of atom α on the surface and the tip apex is of the simple form,
$$T(E_F^S+eU,V,d_\alpha(x,y,z))=e^{-2\kappa(U,V)d_\alpha(x,y,z)}.\tag{8}$$
This corresponds to a spherical exponential decay of the electron wavefunctions. Here, $d_\alpha(x,y,z)=|\mathbf{R}_{\rm TIP}(x,y,z)-\mathbf{R}_\alpha|$ is the distance between the tip apex and the surface atom labeled by α with position vector $\mathbf{R}_\alpha$. Assuming an effective rectangular potential barrier between the tip apex and each surface atom, the vacuum decay κ can be written as
$$\kappa(U,V)=\frac{1}{\hbar}\sqrt{2m\left(\frac{\phi_S+\phi_T+eV}{2}-eU\right)},\tag{9}$$
where m is the electron's mass, ħ is the reduced Planck constant, and $\phi_S$ and $\phi_T$ are the average electron workfunction of the sample surface and the local electron workfunction of the tip apex, respectively. The method of determining the electron workfunctions is reported in Ref. [31]. κ is treated within the independent orbital approximation [33,38,39], which means that the same spherical decay is used for all types of orbitals. Due to this approximation, the interpretation of our simulation results with quantitative reliability compared to experiments has to be taken with care. However, an extension of our model to an orbital dependent vacuum decay following Chen's work [40] is planned in the future, which is relevant for a more advanced description of tunneling between directional orbitals. Moreover, in our model, the electron charge and magnetization local density of states above the sample surface in the vacuum, $n_{\rm LDOS}$ and $\mathbf{m}_{\rm LDOS}$, respectively, are approximated by the following expressions:
$$n_{\rm LDOS}(x,y,z,E_F^S+eU)=\sum_\alpha e^{-2\kappa(U)d_\alpha(x,y,z)}\,n_S^\alpha(E_F^S+eU),\tag{10}$$
$$\mathbf{m}_{\rm LDOS}(x,y,z,E_F^S+eU)=\sum_\alpha e^{-2\kappa(U)d_\alpha(x,y,z)}\,\mathbf{m}_S^\alpha(E_F^S+eU),\tag{11}$$
with
$$\kappa(U)=\frac{1}{\hbar}\sqrt{2m(\phi_S-eU)}.\tag{12}$$
Note that the exact LDOS can be obtained by explicitly calculating the decay of the electron states into the vacuum, taking their orbital symmetry into account as well, not via such a simple 3D WKB model. Our approach, however, has computational advantages as discussed in Ref. [31]. Similarly to the tunneling current, the physical differential conductance can be decomposed into non-spin-polarized (TOPO) and spin-polarized (MAGN) parts, and it can be written at the tip position $\mathbf{R}_{\rm TIP}(x,y,z)$ and at bias voltage V as
$$\frac{dI_{\rm TOTAL}}{dV}(x,y,z,V)=\frac{dI_{\rm TOPO}}{dV}(x,y,z,V)+\frac{dI_{\rm MAGN}}{dV}(x,y,z,V),\tag{13}$$
where the contributions are given as [see Eq. (A10) in appendix A]
$$\frac{dI_{\rm TOPO}}{dV}(x,y,z,V)=\frac{dI_{\rm TOPO}}{dU}(x,y,z,V,V)+B_{\rm TOPO}(x,y,z,V)+D_T^{\rm TOPO}(x,y,z,V),\tag{14}$$
$$\frac{dI_{\rm MAGN}}{dV}(x,y,z,V)=\frac{dI_{\rm MAGN}}{dU}(x,y,z,V,V)+B_{\rm MAGN}(x,y,z,V)+D_T^{\rm MAGN}(x,y,z,V).\tag{15}$$
Here, B and $D_T$ are the background and tip-derivative terms, respectively; see appendix A. The background term, which contains the bias-derivative of the transmission function, is usually taken into account in recent STS theories [7,34,35], while the tip-derivative term containing the energy derivative of the tip DOS is rarely considered in the recent literature. Obviously, the total differential conductance can also be written in the same structure,
$$\frac{dI_{\rm TOTAL}}{dV}(x,y,z,V)=\frac{dI}{dU}(x,y,z,V,V)+B(x,y,z,V)+D_T(x,y,z,V),\tag{16}$$
with
$$\frac{dI}{dU}(x,y,z,V,V)=\frac{dI_{\rm TOPO}}{dU}(x,y,z,V,V)+\frac{dI_{\rm MAGN}}{dU}(x,y,z,V,V),\tag{17}$$
$$B(x,y,z,V)=B_{\rm TOPO}(x,y,z,V)+B_{\rm MAGN}(x,y,z,V),\tag{18}$$
$$D_T(x,y,z,V)=D_T^{\rm TOPO}(x,y,z,V)+D_T^{\rm MAGN}(x,y,z,V).\tag{19}$$
In order to calculate the background term, we need the bias-derivative of the transmission function.
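The superposition formulas in Eqs. (6), (8), and (9) admit a compact numerical sketch, given here for orientation only: the unit convention (energies in eV, lengths in ångström, with ħ²/2m ≈ 3.81 eV·Å²), the function names, and the test inputs are assumptions of the sketch, and the overall prefactor $\varepsilon^2 e^2/h$ is dropped.

```python
import numpy as np

HBAR2_OVER_2M = 3.81  # eV * angstrom^2, numerical value of hbar^2 / (2 m_e)

def kappa(U, V, phi_s, phi_t):
    # Vacuum decay of Eq. (9); U and V in volts, workfunctions in eV,
    # result in 1/angstrom.
    barrier = (phi_s + phi_t + V) / 2.0 - U
    return np.sqrt(max(barrier, 0.0) / HBAR2_OVER_2M)

def dIdU_topo(tip_pos, atoms, n_T, n_S, U, V, phi_s, phi_t):
    # Virtual differential conductance of Eq. (6) up to the prefactor:
    # an atom superposition of spherically decaying contributions, Eq. (8).
    d = np.linalg.norm(atoms - tip_pos, axis=1)       # distances d_alpha
    T = np.exp(-2.0 * kappa(U, V, phi_s, phi_t) * d)  # transmission
    return n_T * np.sum(T * n_S)
```

For a single atom directly below the tip the sum reduces to one exponential term, which makes the routine easy to check against Eq. (8) by hand.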
Using Eq. (8) and the given form of the vacuum decay in Eq. (9), we obtain
$$\frac{\partial T}{\partial V}(E_F^S+eU,V,d_\alpha(x,y,z))=-\frac{me}{\hbar^2}\,d_\alpha(x,y,z)\,\frac{T(E_F^S+eU,V,d_\alpha(x,y,z))}{\kappa(U,V)}.\tag{20}$$
Considering this, and the corresponding dI/dV components in the 1D WKB model, Eqs. (A12) and (A13) in appendix A, the background and the tip-derivative contributions can be written as
$$B_{\rm TOPO}(x,y,z,V)=-\varepsilon^2\frac{me^3}{2\pi\hbar^3}\sum_\alpha d_\alpha(x,y,z)\int_0^V \frac{e^{-2\kappa(U,V)d_\alpha(x,y,z)}}{\kappa(U,V)}\,n_T(E_F^T+eU-eV)\,n_S^\alpha(E_F^S+eU)\,dU,\tag{21}$$
$$B_{\rm MAGN}(x,y,z,V)=-\varepsilon^2\frac{me^3}{2\pi\hbar^3}\sum_\alpha d_\alpha(x,y,z)\int_0^V \frac{e^{-2\kappa(U,V)d_\alpha(x,y,z)}}{\kappa(U,V)}\,\mathbf{m}_T(E_F^T+eU-eV)\cdot\mathbf{m}_S^\alpha(E_F^S+eU)\,dU,\tag{22}$$
$$D_T^{\rm TOPO}(x,y,z,V)=-\varepsilon^2\frac{e^2}{h}\sum_\alpha \int_0^V e^{-2\kappa(U,V)d_\alpha(x,y,z)}\,\frac{\partial n_T}{\partial U}(E_F^T+eU-eV)\,n_S^\alpha(E_F^S+eU)\,dU,\tag{23}$$
$$D_T^{\rm MAGN}(x,y,z,V)=-\varepsilon^2\frac{e^2}{h}\sum_\alpha \int_0^V e^{-2\kappa(U,V)d_\alpha(x,y,z)}\,\frac{\partial \mathbf{m}_T}{\partial U}(E_F^T+eU-eV)\cdot\mathbf{m}_S^\alpha(E_F^S+eU)\,dU.\tag{24}$$
Thus, we have formulated all components of the differential conductance in spin-polarized tunnel junctions within the atom superposition framework using the first principles electronic structure of the sample and the tip. Note that all dI/dV expressions in Eq. (A22) in appendix A can similarly be calculated within our 3D WKB approach. I(x,y,z,V) and dI/dV(x,y,z,V) can be calculated at the (x,y,z) grid points of a fine three-dimensional (3D) grid in a finite box above the surface. The recipe for simulating SP-STM images based on the 3D current map is given in Ref. [31]. Here, we focus on the simulation of dI/dV spectra. From the 3D differential conductance map, data can be extracted that are directly comparable to experiments. For example, a single point spectrum corresponds to a fixed (x₀, y₀, z₀) tip position, and two-dimensional (2D) spectra can also be obtained, where the image resolution is determined by the density of (x, y) grid points. There are usually two ways to define a 2D differential conductance map [15].
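Before turning to the 2D maps, the U-integral in the background term of Eq. (21) can be sketched numerically. This is not part of the paper's implementation: the unit convention (eV, ångström, ħ²/2m ≈ 3.81 eV·Å²), the equal-workfunction simplification, the flat test DOS, and all names are assumptions of the sketch, and the overall prefactor $-\varepsilon^2 me^3/(2\pi\hbar^3)$ is left out.

```python
import numpy as np

HBAR2_OVER_2M = 3.81  # eV * angstrom^2, numerical value of hbar^2 / (2 m_e)

def background_topo(d_alpha, V, phi, n_T, n_S, nU=400):
    # U-integral of the topographic background term, Eq. (21), for one
    # tip-atom distance d_alpha (angstrom) and bias V > 0 (volt), with
    # phi_S = phi_T = phi (eV); n_T and n_S are callables giving the tip
    # and sample PDOS at the shifted energy arguments.
    U = np.linspace(0.0, V, nU)
    kap = np.sqrt((phi + V / 2.0 - U) / HBAR2_OVER_2M)  # Eq. (9)
    f = np.exp(-2.0 * kap * d_alpha) / kap * n_T(U - V) * n_S(U)
    dU = U[1] - U[0]
    return d_alpha * (f.sum() - 0.5 * (f[0] + f[-1])) * dU  # trapezoid rule
```

The factor $d_\alpha e^{-2\kappa d_\alpha}$ makes the term decay quickly with tip-sample distance, so the background contribution is largest close to the surface.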
The first method fixes the tip height at $z=Z_{\rm stab}={\rm const}$ and scans the surface, recording $dI/dV(x,y,Z_{\rm stab},V)$. The second option measures dI/dV on a constant current contour, $I_{\rm TOTAL}=I_{\rm stab}={\rm const}$, which is the widely used method in experiments. This can be simulated in two steps: first, we calculate the 3D current map at the given bias voltage $V_{\rm stab}$; second, we determine the height profile of the constant current contour, $z(x,y,V_{\rm stab},I_{\rm stab})$, using logarithmic interpolation [31]. $V_{\rm stab}$ and $I_{\rm stab}$ are the tunneling parameters; they stabilize the tip position at the height $z(x,y,V_{\rm stab},I_{\rm TOTAL}=I_{\rm stab})$ above the (x, y) sample surface point. The 2D differential conductance map on the constant current contour is then given by $dI/dV(x,y,z(x,y,V_{\rm stab},I_{\rm TOTAL}=I_{\rm stab}),V)$, where the V-dependence is obtained by sweeping the bias voltage range using a lock-in technique in experiments [15]. Recently, experimental efforts have been made to extract the TOPO component of the tunneling current [41], and to measure spectroscopic data on such constant current contours, i.e., at $I_{\rm TOPO}={\rm const}$ [42]. According to Ref. [15], a constant tunneling transmission enables an easier interpretation of measured 2D spectroscopic data. We believe that a constant TOPO current contour is closer to this constant tunneling transmission criterion than a constant TOTAL current contour, due to the appearance of spin dependent effects in the latter. On the other hand, the calculation of any current contour is simple within our 3D WKB approach [31]. Since the $I_{\rm TOPO}={\rm const}$ experimental method is not routinely available at the moment, we restrict ourselves to the $I_{\rm TOTAL}={\rm const}$ contours when calculating the 2D differential conductance maps, and we will show examples in the next section.
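The logarithmic interpolation step used to extract the contour height can be illustrated with a short routine. This is a sketch, not the paper's code: the function name, the model current, and the grid are invented for the example, which only assumes that the current decreases monotonically with z.

```python
import numpy as np

def contour_height(z, I, I_stab):
    # Height of the constant current contour I(z) = I_stab, interpolating
    # linearly in log(I) between the two bracketing samples of a
    # current-versus-height curve (I must decrease with increasing z).
    logI, t = np.log(I), np.log(I_stab)
    for i in range(len(z) - 1):
        if logI[i] >= t >= logI[i + 1]:
            frac = (logI[i] - t) / (logI[i] - logI[i + 1])
            return z[i] + frac * (z[i + 1] - z[i])
    raise ValueError("I_stab is outside the sampled current range")

# The current decays nearly exponentially with z, so log(I) is nearly
# linear and even a coarse z grid recovers the contour height accurately:
z = np.linspace(3.0, 8.0, 6)
I = np.exp(-2.0 * z)          # model current with decay 2*kappa = 2 / angstrom
height = contour_height(z, I, np.exp(-2.0 * 4.3))
# height is 4.3 up to rounding, since log(I) is exactly linear here
```

Interpolating in log(I) rather than in I itself is what makes a coarse vertical grid sufficient, since the exponential decay of the current becomes a straight line on the logarithmic scale.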
By simulating differential conductance spectra above a magnetic surface with parallel (P) and antiparallel (AP) tip magnetization directions with respect to a pre-defined direction (usually the magnetization direction of a chosen surface atom), the so-called magnetic asymmetry can be defined [21]. In our case this quantity can be calculated at all considered positions of the tip apex atom, i.e., at all (x, y, z) grid points within our finite box above the surface:
$$A(x,y,z,V)=\frac{dI_P/dV(x,y,z,V)-dI_{AP}/dV(x,y,z,V)}{dI_P/dV(x,y,z,V)+dI_{AP}/dV(x,y,z,V)}.\tag{25}$$
From this, the magnetic asymmetry can similarly be calculated on appropriate constant current contours as described in the previous paragraph. Using Eq. (13), and the fact that the magnetic contribution for the AP tip magnetization direction, $dI_{\rm MAGN}^{AP}/dV$, equals $-dI_{\rm MAGN}^P/dV$ (since the tip magnetization PDOS vector $\mathbf{m}_T(E)$ changes sign at all energies compared to the P tip magnetization direction), the differential conductances take the following form:
$$dI_P/dV(x,y,z,V)=dI_{\rm TOPO}/dV(x,y,z,V)+dI_{\rm MAGN}^P/dV(x,y,z,V),$$
$$dI_{AP}/dV(x,y,z,V)=dI_{\rm TOPO}/dV(x,y,z,V)-dI_{\rm MAGN}^P/dV(x,y,z,V).\tag{26}$$
Thus, the magnetic asymmetry can be expressed as the ratio of the MAGN and TOPO differential conductances from Eqs. (14) and (15):
$$A_{dI/dV}(x,y,z,V)=\frac{dI_{\rm MAGN}^P/dV(x,y,z,V)}{dI_{\rm TOPO}/dV(x,y,z,V)}=\frac{dI_{\rm MAGN}^P/dU(x,y,z,V,V)+B_{\rm MAGN}^P(x,y,z,V)+D_T^{{\rm MAGN},P}(x,y,z,V)}{dI_{\rm TOPO}/dU(x,y,z,V,V)+B_{\rm TOPO}(x,y,z,V)+D_T^{\rm TOPO}(x,y,z,V)}.\tag{27}$$
This is the correct magnetic asymmetry expression based on the physical differential conductances that can be obtained from experiments. However, a magnetic asymmetry can similarly be defined taking the virtual differential conductances from Eqs.
(6) and (7):
$$A_{dI/dU}(x,y,z,V)=\frac{dI_{\rm MAGN}^P/dU(x,y,z,V,V)}{dI_{\rm TOPO}/dU(x,y,z,V,V)}.\tag{28}$$
This is an important quantity, since it is related to the vacuum spin polarization of the sample in a simple way [21]:
$$A_{dI/dU}(x,y,z,V)=\mathbf{P}_T(E_F^T)\cdot\mathbf{P}_S(x,y,z,E_F^S+eV)={\rm ESP}(x,y,z,V),\tag{29}$$
i.e., $A_{dI/dU}(x,y,z,V)$ is the effective spin polarization (ESP): the scalar product of the tip spin polarization vector at its Fermi level, $\mathbf{P}_T(E_F^T)$, and the vacuum spin polarization vector of the sample at $\mathbf{R}_{\rm TIP}(x,y,z)$, at energy eV above the sample Fermi level, $\mathbf{P}_S(x,y,z,E_F^S+eV)$. From the above it is clear that the determination of the sample spin polarization from experimentally measured spectra is not straightforward, since the experimentally accessible magnetic asymmetry, according to the equivalent expressions Eq. (25) and Eq. (27), contains the background and tip-derivative terms as well. On the other hand, we can easily calculate ESP(x,y,z,V) within our method. There are even more possibilities to define magnetic asymmetries, by adding the background terms in Eqs. (21) and (22), or the tip-derivative terms in Eqs. (23) and (24), to the corresponding virtual differential conductances and then performing the division:
$$A_{dI/dU+B}(x,y,z,V)=\frac{dI_{\rm MAGN}^P/dU(x,y,z,V,V)+B_{\rm MAGN}^P(x,y,z,V)}{dI_{\rm TOPO}/dU(x,y,z,V,V)+B_{\rm TOPO}(x,y,z,V)},\tag{30}$$
$$A_{dI/dU+D_T}(x,y,z,V)=\frac{dI_{\rm MAGN}^P/dU(x,y,z,V,V)+D_T^{{\rm MAGN},P}(x,y,z,V)}{dI_{\rm TOPO}/dU(x,y,z,V,V)+D_T^{\rm TOPO}(x,y,z,V)}.\tag{31}$$
As dI/dU(V,V) is one component of dI/dV(V) according to Eq. (16), we will compare them, and also the magnetic asymmetry expressions in Eqs. (27)-(31), in order to estimate the error one makes when neglecting the background and tip-related components of dI/dV(V) for a given combination of a complex magnetic surface and a magnetic tip.
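As an aside, the algebra connecting Eqs. (25), (26), and (27) can be checked in a few lines. The function name and the numeric values are illustrative assumptions; no prefactors or units are implied.

```python
def magnetic_asymmetry(dIdV_P, dIdV_AP):
    # Magnetic asymmetry of Eq. (25) from the differential conductances
    # recorded with parallel (P) and antiparallel (AP) tip magnetization.
    return (dIdV_P - dIdV_AP) / (dIdV_P + dIdV_AP)

# With the decomposition of Eq. (26), dI_P/dV = topo + magn and
# dI_AP/dV = topo - magn, so the asymmetry reduces to magn/topo, Eq. (27):
topo, magn = 2.0, 0.5
A = magnetic_asymmetry(topo + magn, topo - magn)
# A equals magn / topo = 0.25 here
```

The same cancellation is what allows the TOPO contribution to drop out of the numerator and the MAGN contribution to drop out of the denominator in Eq. (27).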
On the other hand, we will calculate qualitatively correct bias dependent 2D effective spin polarization maps following a constant current contour. It has to be noted that the presented method can also be applied to study nonmagnetic systems, where all magnetic contributions equal zero and the corresponding topographic STS spectra can be simulated. Of course, in this case, the magnetic asymmetry is zero.

III. RESULTS AND DISCUSSION

In order to demonstrate the reliability and capabilities of our model for simulating SP-STS on complex magnetic surfaces, we consider a sample surface with noncollinear magnetic order. One ML Cr on Ag(111) is a prototype of frustrated hexagonal antiferromagnets [33]. Due to the geometrical frustration of the antiferromagnetic exchange interactions between Cr spin moments, its magnetic ground state is a noncollinear 120° Néel state [32]. In the presence of spin-orbit coupling, two types of Néel states with opposite chiralities can form, and one of them is energetically favored [31]. We performed geometry relaxation and electronic structure calculations based on Density Functional Theory (DFT) within the Generalized Gradient Approximation (GGA) implemented in the Vienna Ab-initio Simulation Package (VASP) [43-45]. A plane wave basis set for electronic wavefunction expansion together with the projector augmented wave (PAW) method [46] has been applied, while the exchange-correlation functional is parametrized according to Perdew and Wang (PW91) [47]. For calculating the fully noncollinear electronic structure we used the VASP code as well [48, 49], with spin-orbit coupling considered. We model the Cr/Ag(111) system by a slab (details are given below). In terms of the spin-resolved PDOS, n(E) = n↑(E) + n↓(E) and m(E) = n↑(E) − n↓(E). It is seen in Figure 1 that the majority spin PDOS dominates over the minority spin PDOS below E_F^S + 0.54 eV, while m_S(E) < 0 above E_F^S + 0.54 eV. This implies a spin polarization reversal at this particular energy [31].
In our model, the vacuum local density of states (LDOS) is obtained by the superposition of spherically decaying electron states according to the independent orbital approximation. Above a complex magnetic surface, the spin up and spin down notations are meaningless since there is no global spin quantization axis. Instead, we can consider the charge and magnetization (vector) character of the LDOS obtained from the PDOS, as defined in Eqs. (10) and (11) for n_LDOS and m_LDOS, respectively. Above a surface Cr atom with lateral position (x0, y0), both vacuum LDOS behave the same way as the corresponding PDOS, thus the spin polarization vector in vacuum, P_LDOS(x0, y0, z, E) = m_LDOS(x0, y0, z, E)/n_LDOS(x0, y0, z, E), equals the one obtained from the PDOS, i.e. P_S(E) = m_S(E)/n_S(E). Moving out of the high symmetry lateral position above a surface atom (x0, y0), m_LDOS will vary due to the different atomic spin quantization axes of the three Cr atoms in the magnetic surface unit cell and the considered vacuum decays; n_LDOS will, however, remain qualitatively unchanged. The lateral variation of m_LDOS results in a position dependent vacuum spin polarization vector of the sample surface. This quantity multiplied by the tip spin polarization vector gives the effective spin polarization, defined in Eq. (29), which will be simulated later. Dependence of the tunneling spectra on the tip electronic structure can be studied by considering different tip models. In this work we compare spectra and magnetic asymmetries measured by a magnetic CrFe tip and an electronically flat magnetic tip. The electronic structure data of the CrFe tip apex was taken from Ref. [22], where the tip was modeled as a single Cr apex atom on the Fe(001) surface. For the vacuum decay we use Eq. (9), where κ(U, V) has an explicit V-dependence, and we assume that φ_T = φ_S. Alternatively, a simpler model for κ(U) can be considered without V-dependence, as in Eq. (12).
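The atom superposition idea for the vacuum LDOS can be sketched as follows. This is a hedged toy illustration, not the paper's implementation: the decay constant, atomic positions, and PDOS values are made up, and a single orbital-independent spherical decay is assumed:

```python
import numpy as np

KAPPA = 1.1  # 1/Angstrom, common spherical vacuum decay (assumed value)

def vacuum_ldos(r, atoms):
    """atoms: list of (R_a, n_a, m_a, e_a) with position R_a (3-vector),
    charge PDOS n_a, magnetization PDOS m_a, spin axis e_a (unit 3-vector).
    Returns (n_LDOS, m_LDOS vector) at the tip apex position r by
    superposing spherically decaying atomic contributions."""
    n_tot = 0.0
    m_tot = np.zeros(3)
    for R_a, n_a, m_a, e_a in atoms:
        w = np.exp(-2.0 * KAPPA * np.linalg.norm(np.asarray(r) - np.asarray(R_a)))
        n_tot += w * n_a          # charge LDOS: scalar sum
        m_tot += w * m_a * np.asarray(e_a)  # magnetization LDOS: vector sum
    return n_tot, m_tot

# three Cr-like atoms with 120-degree Neel-like in-plane spin axes
axes = [np.array([1.0, 0.0, 0.0]),
        np.array([-0.5, np.sqrt(3) / 2, 0.0]),
        np.array([-0.5, -np.sqrt(3) / 2, 0.0])]
atoms = [((0.0, 0.0, 0.0), 2.0, 1.0, axes[0]),
         ((2.5, 0.0, 0.0), 2.0, 1.0, axes[1]),
         ((1.25, 2.165, 0.0), 2.0, 1.0, axes[2])]

# near the centroid of the three atoms the vector contributions cancel
n_ldos, m_ldos = vacuum_ldos((1.25, 0.722, 3.5), atoms)
p_vec = m_ldos / n_ldos  # local vacuum spin polarization vector P_S
```

Above a high-symmetry point between the three atoms the magnetization contributions nearly cancel, so the local spin polarization vector is small there, while n_LDOS stays finite; this is the mechanism behind the laterally varying P_S described in the text.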
In this case the background term of the differential conductance, B(V), is zero, since the tunneling transmission does not depend on the bias voltage, and the physical differential conductance equals the virtual differential conductance, i.e. dI/dV(V) = dI/dU(V, V). We find that the absolute value of the current is higher in the negative bias range compared to the positive range. This is due to the surface and tip electronic structures: the sample occupied PDOS combined with the tip unoccupied PDOS is greater than the sample unoccupied PDOS combined with the tip occupied PDOS, see Figure 1. Performing a numerical differentiation of I(V) with respect to V, we obtain the differential conductance at this particular tip position. As can be seen, it is extremely noisy, and a smoothing procedure should be applied before further analysis. Alternatively, the differential conductance can be calculated using Eq. (16), implemented within the atom superposition approach. Figure 2 shows that dI/dV obtained this way (black curve) is a smooth function that fits precisely to the noisy numerical derivative of the current. More discussion about avoiding the numerical differentiation of the tunneling current in determining the dI/dV can be found, e.g., in Ref. [11]. We obtain more information about the dI/dV by analyzing its components: the virtual differential conductance dI/dU(V, V), the background term B(V), and the tip-derivative term D_T(V). We find that dI/dU(V, V) differs by less than 10% from dI/dV in the bias range [-0.01 V, +0.01 V], i.e. practically at the common Fermi level of tip and sample. This means that the virtual differential conductance approximation for the dI/dV (also known as the LDOS approximation) is not sufficient except at zero bias, where they are identical, dI/dV(0) = dI/dU(0, 0).
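The noise amplification from direct numerical differentiation, and the benefit of smoothing first, can be demonstrated on synthetic data. This toy sketch uses a made-up I(V) curve and noise level, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(-2.0, 2.0, 401)
i_clean = 10.0 * np.tanh(v)                     # made-up I(V) curve
i_noisy = i_clean + 0.01 * rng.standard_normal(v.size)

didv_raw = np.gradient(i_noisy, v)              # noisy numerical derivative

kernel = np.ones(15) / 15.0                     # simple moving-average filter
i_smooth = np.convolve(i_noisy, kernel, mode="same")
didv_smooth = np.gradient(i_smooth, v)

didv_exact = 10.0 / np.cosh(v) ** 2             # analytic reference derivative
sl = slice(50, -50)                             # ignore filter edge effects
rms_raw = np.sqrt(np.mean((didv_raw[sl] - didv_exact[sl]) ** 2))
rms_smooth = np.sqrt(np.mean((didv_smooth[sl] - didv_exact[sl]) ** 2))
```

Even a crude moving average reduces the derivative error by roughly an order of magnitude here, which is why an analytic dI/dV expression like Eq. (16), requiring no differentiation of I(V) at all, is preferable.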
Moreover, one can recognize that most of the dI/dV peak structure is already included in the dI/dU(V, V) term, which is qualitatively similar to the charge PDOS of the surface Cr atom of the sample, n_S(E), see Figure 1. The proportionality function B(V)/I(V), plotted in the inset of Figure 2, has a sign in agreement with Ref. [7] and a non-trivial bias dependence. This is essentially due to the extra 1/κ(U, V) factor in the energy integration of the background term, Eqs. (21) and (22), compared to the tunneling current expression. The B(V)/I(V) function could, in principle, be calculated at different tip-sample distances (z), and could be compared to analytical expressions denoted by f(z, V) reported in [7]. The comparison is, however, not straightforward, for two reasons. First, the analytical expressions were reported based on the 1D WKB approximation, whereas our model is a 3D atom superposition approach based on WKB, which results in an effective transmission coefficient different from the 1D WKB transmission. Note that a 3D approach to STS with another effective transmission coefficient has recently been reported by Donati et al. [34]. Second, in Figure 2 we reported the sum of the TOPO and MAGN contributions, while the related STS literature is concerned with nonmagnetic systems only, which corresponds to the analysis of the topographic part of the spin-polarized results. Consideration of the spin-polarized tunneling complicates the analytical calculations, which are unavailable at the moment. The analysis of B(z, V)/I(z, V) along these lines could be a future research direction that is beyond the scope of the present study. In the following we focus on the comparison of SP-STS spectra upon reversing the tip magnetization direction, also using the flat magnetic tip model. Figure 3 shows simulated single point differential conductance tunneling spectra following Eq. (16), probed with the flat magnetic tip and the model CrFe tip, z = 3.5 Å above a surface Cr atom.
Parallel (P) and antiparallel (AP) tip magnetization directions are set relative to the underneath surface Cr atom. It can clearly be seen that measuring the spectra with oppositely magnetized tips of the same type results in different differential conductance curves, in agreement with SP-STS experiments performed on oppositely magnetized sample areas with a fixed tip magnetization direction [19, 21]. For the flat magnetic tip, two different vacuum decays, κ(U) and κ(U, V), are assumed, using Eqs. (12) and (9), respectively. For the bias-independent vacuum decay (dotted curves) we find that dI_P/dV > dI_AP/dV below V = +0.54 V, while dI_P/dV < dI_AP/dV above V = +0.54 V. In our previous work [29] we identified the effective spin polarization [P_T(E)P_S(E) = m_T(E)m_S(E)/(n_T(E)n_S(E))] as responsible for this effect. This is the decisive factor for determining the sign of the magnetic contribution to dI/dV at energy E in the improved SP-STS model presented in section II as well. The magnetic part of the physical differential conductance is given in Eq. (15). The inclusion of a realistic tip electronic structure into our model complicates the spectra even more. This is demonstrated in Figure 3 for the CrFe tip model (solid lines). In this case all three terms contribute to the differential conductance, dI/dV(V) = dI/dU(V, V) + B(V) + D_T(V). Thus, the relative heights of the differential conductance tunneling spectra dI_P/dV and dI_AP/dV are determined by the superposition of the magnetic dI_MAGN/dU(V, V), B_MAGN(V), and D_T^MAGN(V) terms. For the P tip magnetization, dI_P/dV is the same as the black solid curve in Figure 2, and its contributions are also shown there. In Figure 3, we observe more changes of the relative height of the dI_P/dV and dI_AP/dV spectra measured with the CrFe tip than with the flat tip. These include the sign changes of the magnetic part of the spectra, similarly as before.

We find that dI_P/dV > dI_AP/dV in the bias interval [-1.04 V, +0.49 V], and a reversed relation is obtained in the complementary bias regime. Comparing the spectra to the ones measured with the flat magnetic tip, we see that they are qualitatively closer to the κ(U, V) model used for the flat tip, due to the presence of the background terms. Moreover, the individual features coming from the sample and the tip electronic structures can be assigned. In our case we identify the peak at -1.2 V, indicated by a vertical dotted line in Figure 3, as coming from the CrFe tip electronic structure, since it is missing from the spectra calculated with the flat tip. All other features are related to the sample electronic structure, as they appear in the spectra measured with the flat tip. The relative heights of the differential conductance tunneling spectra dI_P/dV and dI_AP/dV can also be determined from the magnetic asymmetry, Eq. (25). Let us compare the magnetic asymmetries calculated from the spectra in Figure 3 using the two magnetic tips. Moreover, for the CrFe tip we compare the asymmetry expressions defined in Eqs. (27)-(31), in order to estimate the error one makes when neglecting the background and tip-related components of dI/dV(V). Figure 4 shows the calculated asymmetry functions at z = 3.5 Å above a surface Cr atom. It can be seen that A_Flat,κ(U)(V) and A_Flat,κ(U,V)(V) (dashed curves) behave qualitatively similarly. In addition, A_Flat,κ(U)(V) is greater than A_Flat,κ(U,V)(V) in almost the full studied bias range. Anticipating the detailed comparison below, A_CrFe,dI/dU+D_T(V), defined in Eq. (31), is also quantitatively close to the physical magnetic asymmetry: its sign changes occur at -1.01 V and +0.5 V, and it is within 10% relative error compared to A_CrFe,dI/dV(V) in an increased bias interval [-0.90 V, +0.45 V]. Summarizing this paragraph, the contribution of all three terms to the dI/dV(V) according to Eq. (16) is needed to define the physical magnetic asymmetry that should be comparable to experiments.

Our method presented in section II also enables one to simulate two-dimensional (2D) dI/dV and magnetic asymmetry maps in high spatial resolution above the surface, which can be compared to results of SP-STS experiments. Such experiments are routinely performed while the tip follows a constant TOTAL current contour, see e.g. Ref. [51]. The constant current contour used here is plotted in the bottom left part of Figure 5. The apparent height of the Cr atom with magnetic moment parallel to the tip is lower than those of the other two Cr atoms in the magnetic surface unit cell. This has been explained in a previous work [31]. The surface scan area and the magnetic unit cell are shown in the top left part of Figure 5, indicated by a black-bordered rectangle and a yellow (light gray) rhombus, respectively. For calculating the differential conductance-related 2D maps, we vary the vertical position z of the tip apex atom following the constant current contour shown in the bottom left part of Figure 5. Thus, spin-resolved dI/dV and magnetic asymmetry maps can be simulated at different bias voltages V, corresponding to experiments. As an example, dI/dV(x, y) and the effective spin polarization ESP(x, y), see Eq. (29), are shown in the middle and right columns of Figure 5, respectively, calculated at bias voltages V = +0.5 V (top) and V = +0.6 V (bottom). We chose these voltages close to the spin polarization reversal of the sample surface at 0.54 eV above its Fermi level, see Figure 1 and Ref. [31]. Indeed, the reversal of the 2D dI/dV map at V = +0.6 V compared to V = +0.5 V can clearly be seen. While the SP-STM image at +1 V and the dI/dV map at +0.6 V show the same type of contrast, the dI/dV signal is inverted for +0.5 V. Since P_T = +0.8 is constant in the full energy range, this effect is due to the surface electronic structure.
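The constant-current contour used for such 2D maps can be extracted from a simulated 3D current box. This is a hedged sketch of one reasonable way to do it, with a synthetic exponentially decaying current (all values are made up): since I decays roughly exponentially with z, interpolating log(I) linearly between grid planes recovers z(x, y) accurately.

```python
import numpy as np

def constant_current_contour(current, z_grid, i_target):
    """current: array of shape (nx, ny, nz), positive and decreasing in z.
    Returns the height map z(x, y) where the log-linearly interpolated
    current equals i_target."""
    log_i = np.log(current)
    log_t = np.log(i_target)
    nx, ny, nz = current.shape
    z_map = np.empty((nx, ny))
    for ix in range(nx):
        for iy in range(ny):
            col = log_i[ix, iy, :]
            # first z-plane where the current has dropped below the target
            k = int(np.clip(np.searchsorted(-col, -log_t), 1, nz - 1))
            frac = (log_t - col[k - 1]) / (col[k] - col[k - 1])
            z_map[ix, iy] = z_grid[k - 1] + frac * (z_grid[k] - z_grid[k - 1])
    return z_map

# synthetic current with a small lateral corrugation (illustrative only)
z = np.linspace(3.0, 6.0, 31)
x = y = np.linspace(0.0, 1.0, 8)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
I = 100.0 * np.exp(-2.0 * (Z - 3.0 - 0.05 * np.sin(2 * np.pi * X)))
contour = constant_current_contour(I, z, 54.0)
```

For this purely exponential test current the extracted contour reproduces the analytic height 3.0 + 0.5 ln(100/54) + 0.05 sin(2πx) to machine precision; on realistic data the log-linear interpolation is only approximate but usually adequate on a fine z-grid.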
At +0.6 V bias, all surface Cr spin polarization vectors point opposite to their local magnetic moment directions [31], and since P_T = +0.8 is set with respect to the (1/2, √3/2) direction, the sign of the magnetic signal above each Cr atom follows from the relative orientation of its magnetic moment. We suggest that by applying our method to magnetic surfaces, two-dimensional dI_P/dV(x, y), dI_AP/dV(x, y), and magnetic asymmetry A(x, y) maps can be constructed on appropriate current contours at arbitrary bias V, corresponding to SP-STS experiments. Similarly, an ESP(x, y) map can be simulated. We stress again that the ESP cannot simply be obtained from the experimental magnetic asymmetry due to the presence of the background and tip-derivative terms. Explicitly considering the tip electronic structure in our SP-STS model, based on experimental information, would help in a more reasonable interpretation of experimentally measured tunneling spectra, magnetic asymmetries, and effective spin polarization.

IV. CONCLUSIONS

We presented an efficient simulation method for spin-polarized scanning tunneling spectroscopy based on first principles electronic structure data within our atom superposition framework [31], by including the bias dependent background and tip-derivative terms into the differential conductance formula following Passoni et al. [7]. We showed that our simulated data can be related to standard experimental setups. Taking the tip electronic structure into account, the effect of a richer variety of electronic structure properties can be investigated on the tunneling transport within the indicated approximations (atom superposition, orbital-independent spherical vacuum decay). The method is computationally cheap and it can be applied based on the results of any ab initio electronic structure code.
Taking a prototype frustrated hexagonal antiferromagnetic system, a Cr monolayer on Ag(111) in a noncollinear magnetic 120° Néel state, we simulated differential conductance tunneling spectra and magnetic asymmetries to illustrate the applicability of our method, and we analyzed the contributing terms. We found that the features of the tunneling spectra come from the virtual differential conductance and tip-derivative terms, while the background term is proportional to the tunneling current. We showed evidence that the tunneling spectra and the related magnetic asymmetries are sensitive to the tip electronic structure and to the vacuum decay. We also demonstrated a simulation method for 2D dI/dV, magnetic asymmetry, and qualitatively correct effective spin polarization maps above a complex magnetic surface following a constant current contour. Finally, we pointed out that the magnetic asymmetry obtained from experiments cannot simply be related to the sample spin polarization due to the presence of the background and tip-derivative terms.

Appendix

We report the formulation of the tunneling current and the differential conductance in the framework of the one-dimensional (1D) WKB approximation, which has been used in our atom superposition approach in section II. Assuming elastic tunneling, the non-spin-polarized part of the tunneling current at zero temperature is given by [5, 35]

I(V, d) = C ∫_{E_F^S}^{E_F^S + eV} T(E, V, d) n_T(E) n_S(E) dE,   (A1)

where V is the bias voltage, d the tip-sample distance, C an appropriate constant, E_F^S the Fermi energy of the sample surface, e the elementary charge, T the tunneling transmission coefficient, while n_T(E) and n_S(E) are the tip and sample densities of states, respectively.
Performing a change of variable from E to U using E = E_F^S + eU, the tunneling current reads

I(V, d) = Ce ∫_0^V T(E_F^S + eU, V, d) n_T(E_F^S + eU) n_S(E_F^S + eU) dU.   (A2)

The applied bias voltage V in the tunnel junction defines the difference between tip and sample Fermi levels, E_F^T = E_F^S + eV. Using this, the energy dependence of n_T(E) can be rewritten relative to the tip Fermi level E_F^T, and the tunneling current can be reformulated as

I(V, d) = Ce ∫_0^V T(E_F^S + eU, V, d) n_T(E_F^T + eU − eV) n_S(E_F^S + eU) dU.   (A3)

We denote the integrand by the formal quantity

dI/dU(U, V, d) = Ce T(E_F^S + eU, V, d) n_T(E_F^T + eU − eV) n_S(E_F^S + eU),   (A4)

called the virtual differential conductance. The tunneling current can then be expressed as

I(V, d) = ∫_0^V dI/dU(U, V, d) dU.   (A5)

The physical differential conductance can be obtained as the derivative of the tunneling current with respect to the bias voltage. This can formally be written as

dI/dV(V, d) = dI/dU(V, V, d) + ∫_0^V [∂/∂V' dI/dU(U, V', d)]_{V'=V} dU,   (A6)

or, using Eq. (A4), as

dI/dV(V, d) = Ce T(E_F^S + eV, V, d) n_T(E_F^T) n_S(E_F^S + eV)   (A7)
+ Ce ∫_0^V [∂/∂V' T(E_F^S + eU, V', d) n_T(E_F^T + eU − eV')]_{V'=V} n_S(E_F^S + eU) dU.

This is a known formula in the literature [5, 35]. If the tip electronic structure is assumed to be energetically flat, i.e. n_T(E) = n_T, which is still a widely used approximation in the recent literature, then the V-dependence of n_T(E_F^T + eU − eV) disappears, i.e. n_T(E_F^T + eU − eV) = n_T, and the differential conductance becomes

dI/dV(V, d) = Ce T(E_F^S + eV, V, d) n_T n_S(E_F^S + eV)   (A8)
+ Ce n_T ∫_0^V ∂T/∂V(E_F^S + eU, V, d) n_S(E_F^S + eU) dU.

Here, the second term is the so-called background term, which is a monotonous function of the bias voltage [35].
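Equations (A2)-(A5) can be checked with a minimal numerical sketch (not the paper's code; C·e is set to 1 and energies are measured from the respective Fermi levels):

```python
import numpy as np

def trapezoid(f, x):
    """Composite trapezoid rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def virtual_didu(u, v, n_t, n_s, transmission):
    # dI/dU(U, V) = T(U, V) * n_T(U - V) * n_S(U), cf. Eq. (A4)
    return transmission(u, v) * n_t(u - v) * n_s(u)

def tunneling_current(v, n_t, n_s, transmission, npts=401):
    # I(V) = int_0^V dI/dU(U, V) dU, cf. Eq. (A5)
    u = np.linspace(0.0, v, npts)
    return trapezoid(virtual_didu(u, v, n_t, n_s, transmission), u)

# flat-DOS sanity check: constant n_T, n_S, T gives I(V) = T * n_T * n_S * V
i_flat = tunneling_current(
    1.0,
    n_t=lambda e: 1.33 + 0.0 * e,        # flat tip DOS (made-up value)
    n_s=lambda e: 2.0 + 0.0 * e,         # flat sample DOS
    transmission=lambda u, v: 0.5 + 0.0 * u,
)
```

For the flat model the integral is exactly T·n_T·n_S·V = 0.5·1.33·2.0·1.0, which the trapezoid rule reproduces exactly since the integrand is constant.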
Going beyond the assumption of the electronically flat tip by incorporating the tip electronic structure in the differential conductance expression, the effect of the tip on the tunneling spectra can be studied. The explicit energy dependence of n_T(E) can be calculated from first principles [22, 29], or can be included in a model way [7]. Following Eq. (A7), the differential conductance can be reformulated as

dI/dV(V, d) = Ce T(E_F^S + eV, V, d) n_T(E_F^T) n_S(E_F^S + eV)   (A9)
+ Ce ∫_0^V ∂T/∂V(E_F^S + eU, V, d) n_T(E_F^T + eU − eV) n_S(E_F^S + eU) dU
+ Ce ∫_0^V T(E_F^S + eU, V, d) ∂n_T/∂V(E_F^T + eU − eV) n_S(E_F^S + eU) dU.

Using ∂n_T(E_F^T + eU − eV)/∂V = −∂n_T(E_F^T + eU − eV)/∂U, the differential conductance at bias voltage V can be written as a sum of three terms,

dI/dV(V, d) = dI/dU(V, V, d) + B(V, d) + D_T(V, d)   (A10)

with

dI/dU(V, V, d) = Ce T(E_F^S + eV, V, d) n_T(E_F^T) n_S(E_F^S + eV),   (A11)
B(V, d) = Ce ∫_0^V ∂T/∂V(E_F^S + eU, V, d) n_T(E_F^T + eU − eV) n_S(E_F^S + eU) dU,   (A12)
D_T(V, d) = −Ce ∫_0^V T(E_F^S + eU, V, d) ∂n_T/∂U(E_F^T + eU − eV) n_S(E_F^S + eU) dU.   (A13)

Here, B(V, d) is the background term usually considered in recent STS theories [7, 34, 35], and D_T(V, d) is a term containing the energy derivative of the tip density of states (DOS), which is rarely taken into account in practical STS calculations and analyses of experimental STS data. It can be shown that an alternative expression for the differential conductance can be derived using integration by parts,

dI/dV(V, d) = dI/dU(0, V, d) + B(V, d) + B_2(V, d) − D_S(V, d)   (A14)

with

dI/dU(0, V, d) = Ce T(E_F^S, V, d) n_T(E_F^T − eV) n_S(E_F^S),   (A15)
B_2(V, d) = Ce ∫_0^V ∂T/∂U(E_F^S + eU, V, d) n_T(E_F^T + eU − eV) n_S(E_F^S + eU) dU,   (A16)
D_S(V, d) = −Ce ∫_0^V T(E_F^S + eU, V, d) n_T(E_F^T + eU − eV) ∂n_S/∂U(E_F^S + eU) dU.   (A17)

This way another background term, B_2(V, d), enters the differential conductance formula, and the energy derivative of the sample DOS appears in the term D_S(V, d). The average of the two dI/dV expressions can also be formed:

dI/dV(V, d) = (1/2)[dI/dU(0, V, d) + dI/dU(V, V, d)] + B(V, d) + (1/2)B_2(V, d) + (1/2)[D_T(V, d) − D_S(V, d)],   (A18)

which gives a third alternative form for the differential conductance within the 1D WKB approximation. On the other hand, by subtracting Eq. (A14) from Eq. (A10), one gets

0 = dI/dU(V, V, d) − dI/dU(0, V, d) − [B_2(V, d) − D_T(V, d) − D_S(V, d)].   (A19)

This is trivial, since B_2(V, d) − D_T(V, d) − D_S(V, d) is related to the partial derivative of dI/dU(U, V, d) with respect to U:

B_2(V, d) − D_T(V, d) − D_S(V, d) = ∫_0^V ∂/∂U [dI/dU(U, V, d)] dU = dI/dU(V, V, d) − dI/dU(0, V, d).   (A20)

Of the three equivalent dI/dV formulas in Eqs. (A10), (A14) and (A18), the calculation of Eq. (A10) needs the fewest mathematical operations; thus, we adopted this formula in our atom superposition approach in section II in order to simulate STS spectra based on electronic structure data calculated from first principles. Finally, note that using the transmission function in Eq. (8) and the given form of the vacuum decay in Eq. (9), the derivative of the transmission probability with respect to U is obtained as

∂T/∂U(E_F^S + eU, V, d) = [2med / (ħ² κ(U, V))] T(E_F^S + eU, V, d) = −2 ∂T/∂V(E_F^S + eU, V, d).   (A21)

Here, we also considered the bias-derivative of the transmission, Eq. (20). Therefore, for this particular transmission function, B_2(V, d) = −2B(V, d), and the dI/dV can be expressed as

dI/dV(V, d) = dI/dU(V, V, d) + B(V, d) + D_T(V, d)   (A22)
= dI/dU(0, V, d) − B(V, d) − D_S(V, d)
= (1/2)[dI/dU(0, V, d) + dI/dU(V, V, d)] + (1/2)[D_T(V, d) − D_S(V, d)].
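The three-term decomposition of Eq. (A10) can be verified numerically on a made-up smooth model (this is an independent sanity check, not the paper's transmission or DOS; C·e = 1 and energies are relative to the Fermi levels):

```python
import numpy as np

def trapezoid(f, x):
    """Composite trapezoid rule."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# toy model functions with analytically known derivatives (assumptions)
T    = lambda u, v: np.exp(0.2 * u - 0.1 * v)   # transmission T(U, V)
T_v  = lambda u, v: -0.1 * T(u, v)              # dT/dV
n_t  = lambda e: 1.0 + 0.3 * np.sin(2.0 * e)    # tip DOS n_T(E)
n_tp = lambda e: 0.6 * np.cos(2.0 * e)          # dn_T/dE
n_s  = lambda e: 2.0 + 0.5 * np.cos(e)          # sample DOS n_S(E)

def current(v, npts=4001):
    # I(V) = int_0^V T(U, V) n_T(U - V) n_S(U) dU, cf. Eqs. (A3)-(A5)
    u = np.linspace(0.0, v, npts)
    return trapezoid(T(u, v) * n_t(u - v) * n_s(u), u)

v0, h = 0.7, 1e-3
didv_numeric = (current(v0 + h) - current(v0 - h)) / (2.0 * h)

u = np.linspace(0.0, v0, 4001)
didu_vv = T(v0, v0) * n_t(0.0) * n_s(v0)                    # Eq. (A11)
b_term  = trapezoid(T_v(u, v0) * n_t(u - v0) * n_s(u), u)   # Eq. (A12)
d_tip   = -trapezoid(T(u, v0) * n_tp(u - v0) * n_s(u), u)   # Eq. (A13)
didv_terms = didu_vv + b_term + d_tip                       # Eq. (A10)
```

The central-difference derivative of the bias-integrated current agrees with the sum dI/dU(V, V) + B(V) + D_T(V) to high accuracy, confirming that the decomposition is exact and that only quadrature and finite-difference errors remain.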
This formulation helps a better understanding of the structure of the differential conductance and its contributing terms, and could prove useful for extracting information about the tip and sample electronic structures from experimentally measured spectra in the future.

We model the Cr/Ag(111) system by a slab of a five-layer Ag substrate and one monolayer Cr film on each side, where the surface Cr layers and the first Ag layers underneath have been fully relaxed. A separating vacuum region of 14.6 Å width in the surface normal (z) direction has been set up between neighboring supercell slabs. The average electron workfunction above the surface is φ_S = 4.47 eV. We used an 11 × 11 × 1 Monkhorst-Pack (MP) [50] k-point grid for calculating the projected electron DOS (PDOS) onto the surface Cr atoms in our (√3 × √3) magnetic surface unit cell [31]. The energy dependent charge and magnetization PDOS, n_S(E) and m_S(E), respectively, are shown in Figure 1. We obtained these quantities from noncollinear calculations. The spin quantization axis of each surface Cr atom is chosen to be parallel to its magnetic moment direction, and m_S(E) is the projection of the magnetization PDOS vector m_S(E) onto this direction. Except for the spin quantization axes of the three different Cr atoms in the magnetic surface unit cell, their electronic structure is the same. We can interpret the results in terms of the commonly used spin up (↑, majority) and spin down (↓, minority) channels with respect to the atomic spin quantization axis, where n(E) and m(E) are the sum and the difference of the two spin channels, respectively. The CrFe tip apex was modeled in Ref. [22] as a single Cr atom on the Fe(001) surface. Ferriani et al. furthermore reported that an antiferromagnetic coupling of the Cr adatom to the Fe(001) surface is energetically preferred, and the vacuum spin polarization is fairly constant at around +0.8 in the energy range [E_F^T − 1 eV, E_F^T + 1 eV] [22].
The local electron workfunction above the tip apex is assumed to be φ_T = 4.5 eV, which has been used to obtain the energy dependent vacuum decay in Eq. (9). The charge and magnetization PDOS of the Cr apex atom, n_T(E) and m_T(E), respectively, are shown in Figure 1. We obtain qualitative correspondence to the PDOS of the sample Cr atom. However, due to the different surface orientation and the different local environment of the Cr/Ag(111) surface and Cr/Fe(001) tip apex Cr atoms, the sample and tip Cr PDOS are quantitatively different. Concerning magnetic properties, we find a spin polarization reversal at E_F^T + 0.7 eV. On the other hand, there is no energy dependent vacuum spin polarization reversal observed in Ref. [22]. Ferriani et al. analyzed this effect in detail for an Fe adatom on top of the Fe(001) surface, and they found a competition between majority sp and minority d states with different decays into the vacuum. Such an orbital dependent vacuum decay is not included in our model at the moment, but work is in progress to consider such effects. The electronically flat magnetic tip has been modeled based on the electronic structure of the Cr apex (PDOS) of the CrFe tip. The charge and absolute magnetization PDOS, n_T(E) and |m_T(E)|, respectively, have been averaged in the [E_F^T − 2 eV, E_F^T + 2 eV] range. We obtained n_T = 1.33/eV and m_T = 1.06/eV, also shown in Figure 1. Thus, the spin polarization is P_T = m_T/n_T = +0.8. In this case, the tip-derivative term of the differential conductance, D_T(V), is zero, since ∂n_T(E)/∂E = ∂m_T(E)/∂E = 0. For a bias-independent vacuum decay κ(U), Eq. (12), the background term B(V) is also zero and dI/dV(V) = dI/dU(V, V). On the other hand, by assuming a V-dependent vacuum decay κ(U, V), B(V) is not zero and it contributes to the total differential conductance, i.e. dI/dV(V) = dI/dU(V, V) + B(V).
Figure 2 shows the bias dependence of the total tunneling current I(V), calculated using Eq. (3), at the position z = 3.5 Å above a surface Cr atom, probed with the CrFe tip having parallel (P) magnetization direction compared to the underlying surface Cr atom. Positive current means tunneling from the tip to the sample surface, whereas the current is negative in the opposite direction. Apart from this, the peak structure of D_T(V), calculated via Eqs. (23)-(24), clearly shows up in the dI/dV, particularly pronounced at high bias voltages. The reason is the rapidly changing tip electronic structure in these energy regions, see Figure 1. The features from dI/dU(V, V) and D_T(V) are transferred to the physical differential conductance, since B(V), calculated via Eqs. (21)-(22), is smooth compared to the other two components in the whole bias range. Moreover, we find that B(V) is a monotonous function of the bias voltage, and it is nearly proportional to I(V), as has been reported earlier for different levels of STS theories [7, 34]. The proportionality function B(V)/I(V) is plotted in the inset of Figure 2. Since the vacuum decay does not depend on the bias voltage V for the dotted curves, and the tip is electronically flat, dI_MAGN/dV(V) = dI_MAGN/dU(V, V). Thus, the sign change of dI_MAGN/dV occurs at the sign change of dI_MAGN/dU(V, V), i.e. at the reversal of the sample spin polarization vector at 0.54 eV above the sample Fermi level [31]. For the flat magnetic tip and the assumed bias dependent vacuum decay (dashed curves) we find that dI_P/dV > dI_AP/dV below V = +0.5 V, and dI_P/dV < dI_AP/dV above V = +0.5 V, i.e. the sign change of the magnetic component is slightly shifted toward zero bias. The reason is the nonzero background term B_MAGN(V) due to κ(U, V), and dI_MAGN/dV(V) = dI_MAGN/dU(V, V) + B_MAGN(V) has to be considered. Note that D_T^MAGN(V) is still zero because of the constant tip magnetization PDOS.
Comparing the two vacuum decay models for the flat tip, it is clear that the topographic part of the background term has another effect on the heights of the spectra: they are enhanced and reduced in the negative and positive bias ranges, respectively, compared to the κ(U) model. On the other hand, the features of the spectra (peaks and dips) occur at the same bias positions for both vacuum decay models. The role of the effective spin polarization is more complicated, since, apart from the dI_MAGN/dU(V, V) term, it appears in the dI/dV expression through the bias-integrated quantities B_MAGN(V) and D_T^MAGN(V). As noted above, A_Flat,κ(U)(V) exceeds A_Flat,κ(U,V)(V) in almost the full studied bias range; the opposite relation holds only between 0 V and +0.3 V, where the relative difference between the two quantities is less than 1.4%. Moreover, these two magnetic asymmetries are within 5% relative difference in the bias range [-0.23 V, +0.31 V]. Considering the CrFe tip, the experimentally measurable magnetic asymmetry A_CrFe,dI/dV(V) (black solid curve) is qualitatively different from the two asymmetry functions calculated with the flat tip, e.g. it has a richer structure at positive bias voltages. More importantly, it has an extra sign change occurring at -1.04 V, apart from +0.49 V. These correspond to the height changes of dI_P/dV and dI_AP/dV relative to each other in Figure 3. Let us estimate the error of the magnetic asymmetry when neglecting the background and the tip-derivative terms. According to Eq. (28), A_CrFe,dI/dU(V) (curve with symbol 'o') considers the virtual differential conductances only. It is within 10% relative error compared to A_CrFe,dI/dV(V) in the bias range [-0.65 V, +0.1 V]. However, its sign does not correspond to that of A_CrFe,dI/dV(V) in the bias intervals [-2 V, -1.04 V] and [+0.49 V, +0.54 V].

Adding the background term B(V) to dI/dU(V, V) results in an improved differential conductance expression, and A_CrFe,dI/dU+B(V) (curve with symbol '+'), defined in Eq. (30), behaves qualitatively similarly to A_CrFe,dI/dU(V) above -0.65 V. However, its sign change is shifted to +0.45 V from +0.54 V. Additionally, a sign change in the negative bias range occurs at -1.62 V. Close to the sample Fermi level, A_CrFe,dI/dU+B(V) is within 10% relative error compared to A_CrFe,dI/dV(V) in a decreased bias range of [-0.34 V, +0.1 V]. Finally, by adding the tip-derivative term D_T(V) to dI/dU(V, V), A_CrFe,dI/dU+D_T(V) (curve with symbol 'x'), defined in Eq. (31), shows the shape most closely related to A_CrFe,dI/dV(V).

For the two-dimensional demonstration of our method we used the flat tip model with tip magnetization direction M_TIP parallel to the (1/2, √3/2) direction (i.e. the magnetization direction of the surface Cr atom at the bottom left corner of the scan area, see the top left part of Figure 5). Moreover, κ(U, V), Eq. (9), has been used for the vacuum decay. By choosing V_stab = +1 V, we calculate the 3D TOTAL current map in a box above the surface. From this 3D data we extract the current contour of I_TOTAL = 54 nA, which is around 3.5 Å above the sample surface and has a corrugation of 4.2 pm. This contour, z(x, y, V_stab = +1 V, I_TOTAL = 54 nA), is shown in the bottom left part of Figure 5. Since P_T = +0.8 is set with respect to the (1/2, √3/2) direction (M_TIP), the leading term of the magnetic differential conductance, dI_MAGN/dU(V, V), is negative at +0.6 V above the surface Cr atom with magnetic moment parallel to the tip. Moreover, the sign of dI_MAGN/dU(V, V) changes to positive above the other two Cr atoms in the magnetic unit cell. This results in the minimal total dI/dV(x, y) above the Cr atom at the bottom left corner of the scan area (22.9 nA/V, magnetic moment parallel to the tip), whereas above the other two Cr atoms dI/dV is maximal (23.6 nA/V, magnetic moment not in line with the tip).

This happens even though the topographic differential conductance is higher above the Cr atom which is lower-lying on the constant current contour. Similarly, the case of +0.5 V is reversed, since all surface Cr spin polarization vectors point along their local magnetic moment directions [31], and the maximal total dI/dV(x, y) is achieved above the Cr atom at the bottom left corner of the scan area (16.5 nA/V, magnetic moment parallel to the tip), whereas above the other two Cr atoms dI/dV is lower (16.0 nA/V, magnetic moment not in line with the tip). The minimal dI/dV = 15.8 nA/V is obtained above the midpoint of the lines connecting two dI/dV maxima. If we introduce the notation dI_P/dV(x, y) for the above calculated differential conductances with P parallel to the indicated M_TIP in Figure 5, then the antiparallel tip orientation is denoted by AP, and dI_AP/dV(x, y) can similarly be calculated. For the very same reason as discussed, a reversed tip magnetization direction would result in a reversed dI_AP/dV map concerning the heights above the non-equivalent magnetic Cr atoms. Thus, at +0.6 V the difference between dI_P/dV(x, y) and dI_AP/dV(x, y) is minimal and negative above the bottom left Cr atom in the scan area, and maximal and positive above the other two Cr atoms, while the opposite is true at +0.5 V. These explain qualitatively well the simulated ESP(x, y) maps, see the right column of Figure 5. The ESP(x, y) = 0 contour acts as a border between surface regions with positive and negative ESP at the given bias. Note that the sign of the tip spin polarization has a crucial effect on the ESP(x, y) map. Reversing the sign of P_T compared to the M_TIP direction would result in a reversed ESP(x, y) map.

FIG. 1: Projected charge and magnetization DOS of the surface Cr atom of the sample Cr/Ag(111), the tip apex Cr atom of the Cr/Fe(001) tip [22], and the flat tip.
2 : 2(Color online) Comparison of single point differential conductance tunneling spectra calculated from numerical differentiation of the tunneling current I(V ), and dI/dV calculated according to Eq.(16), and its contributing terms, the virtual differential conductance dI/dU (V, V ), the background term B(V ), and the tip-derivative term D T (V ). The model CrFe tip apex is 3.5Å above a surface Cr atom and its magnetization direction is parallel to that of the underlying surface Cr atom. The inset shows the ratio of B(V )/I(V ). FIG. 3 :FIG. 4 : 34(Color online) Comparison of simulated single point differential conductance tunneling spectra following Eq.(16), probed with the flat magnetic tip and the model CrFe tip, 3.5Å above a surface Cr atom. Parallel (P) and antiparallel (AP) tip magnetization directions are set relative to the underneath surface Cr atom. For the flat magnetic tip, two different vacuum decays, κ(U ) and κ(U, V ) are assumed using Eqs.(12) and(9), respectively. The vertical dotted line at -1.2 V shows the bias position of the identified STS peak coming from the electronic structure of the (Color online) Comparison of magnetic asymmetries 3.5Å above a surface Cr atom probed with the flat magnetic tip and the model CrFe tip. A F lat,κ(U ) , A F lat,κ(U,V ) , and A CrF e,dI/dV are calculated from the corresponding P and AP spectra shown in Figure 3. For the CrFe tip we compare the magnetic asymmetry expressions defined in Eqs. (27)-(31).FIG. 5: (Color online) Top left: Surface geometry of 1 ML Cr on Ag(111). The Cr and Ag atoms are denoted by spheres colored by green (medium gray) and purple (dark gray), respectively, while the magnetic moments of individual Cr atoms are indicated by (red) arrows. The ( is drawn by yellow (light gray) color. The surface Cr positions are denoted by 'x'. 
Bottom left: Constant current contour about 3.5 Å above the surface with I_TOTAL(V_stab = +1 V) = 54 nA calculated with the flat magnetic tip using κ(U, V), Eq.(9). The tip magnetization direction (M_TIP) is indicated by an arrow. Middle column: Simulated 2D differential conductance maps dI/dV(x, y, V = +0.5 V) (top middle; min. 15.8, max. 16.5 nA/V) and dI/dV(x, y, V = +0.6 V) (bottom middle; min. 22.9, max. 23.6 nA/V), while the tip is following the constant current contour at the bottom left of the figure. Minimum (MIN) and maximum (MAX) values are indicated. Right column: Simulated effective spin polarization (ESP) maps on the same current contour following Eq.(29), ESP(x, y, V = +0.5 V) (top right) and ESP(x, y, V = +0.6 V) (bottom right). Black contours correspond to zero ESP, and the regions with positive (+) and negative (-) ESP are indicated. The surface magnetic unit cell is drawn by a yellow (light gray) rhombus on each 2D map.

Acknowledgments

The authors thank Paolo Ferriani and Stefan Heinze for providing the electronic structure data of the CrFe tip. Financial support of the Magyary Foundation, EEA and Norway Grants, the Hungarian Scientific Research Fund (OTKA PD83353, K77771), and the Bolyai Research Scholarship is gratefully acknowledged.

References

[1] G. Binnig, H. Rohrer, C. Gerber, and E. Weibel, Appl. Phys. Lett. 40, 178 (1982).
[2] G. Binnig, H. Rohrer, C. Gerber, and E. Weibel, Phys. Rev. Lett. 49, 57 (1982).
[3] W. A. Hofer, A. S. Foster, and A. L. Shluger, Rev. Mod. Phys. 75, 1287 (2003).
[4] W. A. Hofer, Prog. Surf. Sci. 71, 147 (2003).
[5] V. A. Ukraintsev, Phys. Rev. B 53, 11176 (1996).
[6] B. Koslowski, C. Dietrich, A. Tschetschetkin, and P. Ziemann, Phys. Rev. B 75, 035421 (2007).
[7] M. Passoni, F. Donati, A. Li Bassi, C. S. Casari, and C. E. Bottani, Phys. Rev. B 79, 045404 (2009).
[8] M. Ziegler, N. Néel, A. Sperl, J. Kröger, and R. Berndt, Phys. Rev. B 80, 125402 (2009).
[9] B. Koslowski, H. Pfeifer, and P. Ziemann, Phys. Rev. B 80, 165419 (2009).
[10] T. Kwapiński and M. Jałochowski, Surf. Sci. 604, 1752 (2010).
[11] W. A. Hofer and A. Garcia-Lekue, Phys. Rev. B 71, 085401 (2005).
[12] E. M. L. Plumer, J. van Ek, and D. Weller, The Physics of Ultra-High Density Magnetic Recording, Springer Series in Surface Science Vol. 41 (Springer, Berlin, Germany, 2001).
[13] N. Weiss, T. Cren, M. Epple, S. Rusponi, G. Baudot, S. Rohart, A. Tejeda, V. Repain, S. Rousset, P. Ohresser, F. Scheurer, P. Bencok, and H. Brune, Phys. Rev. Lett. 95, 157204 (2005).
[14] M. Bode, Rep. Prog. Phys. 66, 523 (2003).
[15] R. Wiesendanger, Rev. Mod. Phys. 81, 1495 (2009).
[16] W. Wulfhekel and C. L. Gao, J. Phys. Condens. Matter 22, 084021 (2010).
[17] D. Serrate, P. Ferriani, Y. Yoshida, S.-W. Hla, M. Menzel, K. von Bergmann, S. Heinze, A. Kubetzka, and R. Wiesendanger, Nature Nanotechnology 5, 350 (2010).
[18] S. Heinze, K. von Bergmann, M. Menzel, J. Brede, A. Kubetzka, R. Wiesendanger, G. Bihlmayer, and S. Blügel, Nature Physics 7, 713 (2011).
[19] Y. Yayon, V. W. Brar, L. Senapati, S. C. Erwin, and M. F. Crommie, Phys. Rev. Lett. 99, 067202 (2007).
[20] B. W. Heinrich, C. Iacovita, M. V. Rastei, L. Limot, J. P. Bucher, P. A. Ignatiev, V. S. Stepanyuk, and P. Bruno, Phys. Rev. B 79, 113401 (2009).
[21] L. Zhou, F. Meier, J. Wiebe, and R. Wiesendanger, Phys. Rev. B 82, 012409 (2010).
[22] P. Ferriani, C. Lazo, and S. Heinze, Phys. Rev. B 82, 054411 (2010).
[23] J. Wiebe, L. Zhou, and R. Wiesendanger, J. Phys. D: Appl. Phys. 44, 464009 (2011).
[24] N. Néel, J. Kröger, and R. Berndt, Phys. Rev. B 82, 233401 (2010).
[25] M. Ternes, A. J. Heinrich, and W.-D. Schneider, J. Phys. Condens. Matter 21, 053001 (2009).
[26] K. Schouteden, D. A. Muzychenko, and C. Van Haesendonck, J. Nanosci. Nanotechnol. 8, 3616 (2008).
[27] B. W. Heinrich, C. Iacovita, M. V. Rastei, L. Limot, P. A. Ignatiev, V. S. Stepanyuk, and J. P. Bucher, Eur. Phys. J. B 75, 49 (2010).
[28] G. Rodary, S. Wedekind, H. Oka, D. Sander, and J. Kirschner, Appl. Phys. Lett. 95, 152513 (2009).
[29] K. Palotás, W. A. Hofer, and L. Szunyogh, Phys. Rev. B 83, 214410 (2011).
[30] W. A. Hofer, K. Palotás, S. Rusponi, T. Cren, and H. Brune, Phys. Rev. Lett. 100, 026806 (2008).
[31] K. Palotás, W. A. Hofer, and L. Szunyogh, Phys. Rev. B 84, 174428 (2011).
[32] D. Wortmann, S. Heinze, P. Kurz, G. Bihlmayer, and S. Blügel, Phys. Rev. Lett. 86, 4132 (2001).
[33] S. Heinze, Appl. Phys. A 85, 407 (2006).
[34] F. Donati, S. Piccoli, C. E. Bottani, and M. Passoni, New J. Phys. 13, 053058 (2011).
[35] M. Passoni and C. E. Bottani, Phys. Rev. B 76, 115404 (2007).
[36] H. Yang, A. R. Smith, M. Prikhodko, and W. R. L. Lambrecht, Phys. Rev. Lett. 89, 226101 (2002).
[37] A. R. Smith, R. Yang, H. Yang, W. R. L. Lambrecht, A. Dick, and J. Neugebauer, Surf. Sci. 561, 154 (2004).
[38] J. Tersoff and D. R. Hamann, Phys. Rev. Lett. 50, 1998 (1983).
[39] J. Tersoff and D. R. Hamann, Phys. Rev. B 31, 805 (1985).
[40] C. J. Chen, Phys. Rev. B 42, 8841 (1990).
[41] H. F. Ding, W. Wulfhekel, J. Henk, P. Bruno, and J. Kirschner, Phys. Rev. Lett. 90, 116603 (2003).
[42] A. Tange, C. L. Gao, B. Y. Yavorsky, I. V. Maznichenko, C. Etz, A. Ernst, W. Hergert, I. Mertig, W. Wulfhekel, and J. Kirschner, Phys. Rev. B 81, 195410 (2010).
[43] G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
[44] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[45] J. Hafner, J. Comput. Chem. 29, 2044 (2008).
[46] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
[47] J. P. Perdew and Y. Wang, Phys. Rev. B 45, 13244 (1992).
[48] D. Hobbs, G. Kresse, and J. Hafner, Phys. Rev. B 62, 11556 (2000).
[49] D. Hobbs and J. Hafner, J. Phys. Condens. Matter 12, 7025 (2000).
[50] H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976).
[51] A. Kubetzka, P. Ferriani, M. Bode, S. Heinze, G. Bihlmayer, K. von Bergmann, O. Pietzsch, S. Blügel, and R. Wiesendanger, Phys. Rev. Lett. 94, 087204 (2005).
Authors: Zhedong Zheng, Liang Zheng, Yi Yang
Affiliation: Centre for Artificial Intelligence, University of Technology Sydney

Abstract: The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN .

DOI: 10.1109/iccv.2017.405
arXiv: 1701.07717
PDF: https://arxiv.org/pdf/1701.07717v5.pdf
Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro

22 Aug 2017

Introduction

Unsupervised learning can serve as an important auxiliary task to supervised tasks [14,29,11,28].
In this work, we propose a semi-supervised pipeline that works on the original training set without an additional data collection process. First, the training set is expanded with unlabeled data using a GAN. Then our model minimizes the sum of the supervised and the unsupervised losses through a new regularization method. This method is evaluated on person re-ID, which aims to spot the target person in different cameras and has recently been viewed as an image retrieval problem [50].

Figure 1. The pipeline of the proposed method. There are two components: a generative adversarial model [27] for unsupervised learning and a convolutional neural network for semi-supervised learning. "Real Data" represents the labeled data in the given training set; "Training data" includes both the "Real Data" and the generated unlabeled data. We aim to learn more discriminative embeddings with the "Training data".

This paper addresses three challenges. First, current research in GANs typically considers the quality of the sample generation with and without semi-supervised learning in vivo [24,32,27,7,26,41]. Yet a scientific problem remains open: moving the generated samples out of the box and using them in currently available learning frameworks. To this end, this work uses unlabeled data produced by the DCGAN model [27] in conjunction with the labeled training data. As shown in Fig. 1, our pipeline feeds the newly generated samples into another learning machine (i.e., a CNN). Therefore, we use the term "in vitro" to differentiate our method from [24,32,27,7]; these methods perform semi-supervised learning in the discriminator of the GANs (in vivo). Second, the challenge of performing semi-supervised learning using labeled and unlabeled data in CNN-based methods remains. Usually, the unsupervised data is used as a pre-training step before supervised learning [28,11,14]. Our method uses all the data simultaneously.
In [25,18,24,32], the unlabeled/weak-labeled real data are assigned labels according to pre-defined training classes, but our method assumes that the GAN-generated data does not belong to any of the existing classes. The proposed LSRO method neither includes unsupervised pre-training nor label assignments for the known classes. We address semi-supervised learning from a new perspective. Since the unlabeled samples do not belong to any of the existing classes, they are assigned a uniform label distribution over the training classes. The network is trained not to predict a particular class for the generated data with high confidence. Third, in person re-ID, data annotation is expensive, because one has to draw a pedestrian bounding box and assign an ID label to it. Recent progress in this field can be attributed to two factors: 1) the availability of large-scale re-ID datasets [49,51,44,19] and 2) the learned embedding of pedestrians using a CNN [8,10]. That being said, the number of images for each identity is still limited, as shown in Fig. 2. There are 17.2 images per identity in Market-1501 [49], 9.6 images in CUHK03 [19], and 23.5 images in DukeMTMC-reID [30] on average. So using additional data to avoid model over-fitting is non-trivial. In the literature, pedestrian images used in training are usually provided by the training sets, without being expanded. So it is unknown if a larger training set with unlabeled images would bring any extra benefit. This observation inspired us to resort to the GAN samples to enlarge and enrich the training set. It also motivated us to employ the proposed regularization to implement a semi-supervised system. In an attempt to overcome the above-mentioned challenges, this paper 1) adopts GAN in unlabeled data generation, 2) proposes the label smoothing regularization for outliers (LSRO) for unlabeled data integration, and 3) reports improvements over a CNN baseline on three person re-ID datasets.
In more detail, in the first step, we train DCGAN [27] on the original re-ID training set. We generate new pedestrian images by inputting 100-dim random vectors in which each entry falls within [-1, 1]. Some generated samples are shown in Fig. 3 and Fig. 5. In the second step, these unlabeled GAN-generated data are fed into the ResNet model [13]. The LSRO method regularizes the learning process by integrating the unlabeled data and, thus, reduces the risk of over-fitting. Finally, we evaluate the proposed method on person re-ID and show that the learned embeddings demonstrate a consistent improvement over the strong ResNet baseline.

Figure 2. The image distribution per class in the datasets Market-1501 [49], CUHK03 [19] and DukeMTMC-reID [30]. We observe that all these datasets suffer from limited images per class. Note that there are only a few classes with more than 20 images.

To summarize, our contributions are:
• the introduction of a semi-supervised pipeline that integrates GAN-generated images into the CNN learning machine in vitro;
• an LSRO method for semi-supervised learning. The integration of unlabeled data regularizes the CNN learning process. We show that the LSRO method is superior to the two available strategies for dealing with unlabeled data; and
• a demonstration that the proposed semi-supervised pipeline has a consistent improvement over the ResNet baseline on three person re-ID datasets and one fine-grained recognition dataset.

Related Work

In this section, we discuss the relevant works on GANs, semi-supervised learning and person re-ID.

Generative Adversarial Networks

The generative adversarial networks (GANs) learn two sub-networks: a generator and a discriminator. The discriminator reveals whether a sample is generated or real, while the generator produces samples to cheat the discriminator. GANs were first proposed by Goodfellow et al. [12] to generate images and gain insights into neural networks.
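For background, the adversarial game introduced in [12] is usually written as the following minimax objective (this equation does not appear in the excerpt above; it is the standard formulation):

```latex
\min_G \max_D \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big],
```

where the generator G maps a random vector z (here, 100-dim with entries in [-1, 1]) to an image, and the discriminator D outputs the probability that its input is real.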
Then, DCGAN [27] provides some techniques to improve the stability of training; the discriminator of DCGAN can serve as a robust feature extractor. Salimans et al. [32] achieve a state-of-the-art result in semi-supervised classification and improve the visual quality of GANs. InfoGAN [7] learns interpretable representations by introducing latent codes. On the other hand, GANs also demonstrate potential in generating images for specific fields. Pathak et al. [26] propose an encoder-decoder method for image inpainting, where GANs are used as the image generator. Similarly, Yeh et al. [45] improve the inpainting performance by introducing two loss types. In [41], 3D object images are generated by a 3D-GAN. In this work, we do not focus on investigating more sophisticated sample generation methods. Instead, we use a basic GAN model [27] to generate unlabeled samples from the training data and show that these samples help improve discriminative learning.

Figure 3. (a) Pedestrian samples generated by DCGAN [27] trained on the Market-1501 training set [49]. (b) The bottom row shows the real samples in the training set. Although the generated images in (a) can be easily recognized as fake images by a human, they still serve as an effective regularizer in our experiment.

Semi-supervised Learning

Semi-supervised learning is a sub-class of supervised learning taking unlabeled data into consideration, especially when the volume of annotated data is small. On the one hand, some research treats unsupervised learning as an auxiliary task to supervised learning. For example, in [14], Hinton et al. learn a stack of unsupervised restricted Boltzmann machines to pre-train the model. Ranzato et al. propose to reconstruct the input at every level of a network to get a compact representation [28]. In [29], the auxiliary task of ladder networks is to denoise representations at every level of the model. On the other hand, several works assign labels to the unlabeled data. Papandreou et al.
In this work, we adopt the IDE model [50,51] as a baseline, and show that the GAN samples and LSRO effectively improve its performance. Recently, Barbosa et al. [5] propose synthesizing human images through a photorealistic body generation software. These images are used to pre-train an IDE model before dataset-specific fine-tuning. Our method is different from [5] in both data generation and the training strategy. Network Overview In this section, we describe the pipeline of the proposed method. As shown in Fig. 1, the real data in the training set is used to train the GAN model. Then, the real training data and the newly generated samples are combined into training input for the CNN. In the following section, we will illustrate the structure of the two components, i.e., the GAN and the CNN, in detail. Note that, our system does not make major changes to the network structures of the GAN or the CNN with one exception -the number of neurons in the last fully-connected layer in the CNN is modified according to the number of training classes. Generative Adversarial Network Generative adversarial networks have two components: a generator and a discriminator. For the generator, we follow the settings in [27]. We start with a 100-dim random vector and enlarge it to 4 × 4 × 16 using a linear function. To enlarge the tensor, five deconvolution functions are used with a kernel size of 5 × 5 and a stride of 2. Every deconvolution is followed by a rectified linear unit and batch normalization. Additionally, one optional deconvolutional layer with a kernel size of 5 × 5 and a stride of 1, and one tanh function are added to fine-tune the result. A sample that is 128 × 128 × 3 in size can then be generated. The input of the discriminator network includes the generated images and the real images in the training set. We use five convolutional layers to classify whether the generated image is fake. Similarly, the size of the convolutional filters is 5 × 5 and their stride is 2. 
We add a fully-connected layer to perform the binary classification (real or fake).

Convolutional Neural Network

The ResNet-50 [13] model is used in our experiment. We resize the generated images to 256 × 256 × 3 using bilinear sampling. The generated images are mixed with the original training set as the input of the CNN; that is, the labeled and unlabeled data are trained simultaneously. These training images are shuffled. Following the conventional fine-tuning strategy [50], we use a model pre-trained on ImageNet [31]. We modify the last fully-connected layer to have K neurons to predict the K classes, where K is the number of classes in the original training set (as well as the merged new training set). Unlike [24,32], we do not view the new samples as an extra class but assign a uniform label distribution over the existing classes. So the last fully-connected layer remains K-dimensional. The assigned label distribution of the generated images is discussed in the next section.

The Proposed Regularization Method

In this section, we first revisit the label smoothing regularization (LSR), which is used for fully-supervised learning. We then extend LSR to the scenario of unlabeled learning, yielding the proposed label smoothing regularization for outliers (LSRO) method.

Label Smoothing Regularization Revisit

LSR was proposed in the 1980s and recently rediscovered by Szegedy et al. [33]. In a nutshell, LSR assigns small values to the non-ground-truth classes instead of 0. This strategy discourages the network from being tuned towards the ground-truth class and thus reduces the chances of over-fitting. LSR is proposed for use with the cross-entropy loss [33]. Formally, let k ∈ {1, 2, ..., K} be the pre-defined classes of the training data, where K is the number of classes. The cross-entropy loss can be formulated as:

l = -\sum_{k=1}^{K} \log(p(k))\, q(k),    (1)

where p(k) ∈ [0, 1] is the predicted probability of the input belonging to class k, output by the CNN.
It is derived from the softmax function, which normalizes the output of the previous fully-connected layer. q(k) is the ground-truth distribution. Let y be the ground-truth class label; q(k) can be defined as:

q(k) = \begin{cases} 0, & k \neq y \\ 1, & k = y. \end{cases}    (2)

If we discard the 0 terms in Eq. 1, the cross-entropy loss is equivalent to only considering the ground-truth term:

l = -\log(p(y)).    (3)

So, minimizing the cross-entropy loss is equivalent to maximizing the predicted probability of the ground-truth class. In [33], the label smoothing regularization (LSR) is introduced to take the distribution of the non-ground-truth classes into account. The network is thus encouraged not to be too confident towards the ground truth. In [33], the label distribution q_LSR(k) is written as:

q_{LSR}(k) = \begin{cases} \frac{\varepsilon}{K}, & k \neq y \\ 1 - \varepsilon + \frac{\varepsilon}{K}, & k = y, \end{cases}    (4)

where ε ∈ [0, 1] is a hyperparameter. If ε is zero, Eq. 4 reduces to Eq. 2. If ε is too large, the model may fail to predict the ground-truth label. So in most cases, ε is set to 0.1. Szegedy et al. assume that the non-ground-truth classes take on a uniform label distribution. Considering Eq. 1 and Eq. 4, the cross-entropy loss evolves to:

l_{LSR} = -(1 - \varepsilon)\log(p(y)) - \frac{\varepsilon}{K}\sum_{k=1}^{K}\log(p(k)).    (5)

Compared with Eq. 3, Eq. 5 pays additional attention to the other classes, rather than only the ground-truth class.

Figure 4. The label distributions of a real image and a GAN-generated image in our system. We use a classical label distribution (Eq. 2) for the real image (left). For the generated image (right), we employ the proposed LSRO label distribution (Eq. 6), i.e., a uniform distribution over every training class, because the generated image is assumed to belong to none of the training classes. We employ a cross-entropy loss that combines the two types of label distributions as the optimization objective (Eq. 7).

In this paper, we do not employ LSR on the IDE baseline because it yields a slightly lower performance than using Eq. 2 (see Section 5.3).
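To make Eqs. (1), (4) and (5) concrete, here is a small self-contained sketch in plain Python (illustrative only; the function names are ours, not from the paper):

```python
import math

def lsr_target(K, y, eps=0.1):
    """LSR label distribution (Eq. 4): eps/K on every class,
    with the remaining 1 - eps added to the ground-truth class y."""
    q = [eps / K] * K
    q[y] = 1.0 - eps + eps / K
    return q

def cross_entropy(p, q, tiny=1e-12):
    """Cross-entropy loss (Eq. 1): l = -sum_k log(p(k)) * q(k).
    `tiny` guards against log(0) for numerically zero probabilities."""
    return -sum(qk * math.log(pk + tiny) for pk, qk in zip(p, q))
```

With eps = 0, lsr_target reduces to the one-hot distribution of Eq. 2 and the loss to -log p(y) (Eq. 3); with eps = 0.1 the loss also penalizes near-zero probabilities on the non-ground-truth classes, matching Eq. 5.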
We re-introduce LSR because it inspires us in designing the LSRO method.

Label Smoothing Regularization for Outliers

The label smoothing regularization for outliers (LSRO) is used to incorporate the unlabeled images in the network. This extends LSR from the supervised domain to leverage unsupervised data generated by the GAN. In LSRO, we propose a virtual label distribution for the unlabeled images. We set the virtual label distribution to be uniform over all classes, due to two inspirations: 1) we assume that the generated samples do not belong to any pre-defined classes, and 2) LSR assumes a uniform distribution over all classes to address over-fitting. During testing, we expect that the maximum class probability of a generated image will be low, i.e., the network will fail to predict a particular class with high confidence. Formally, for a generated image, its class label distribution, q_LSRO(k), is defined as:

q_{LSRO}(k) = \frac{1}{K}.    (6)

We call Eq. 6 the label smoothing regularization for outliers (LSRO). The one-hot distribution defined in Eq. 2 will still be used for the loss computation for the real images in the training set. Combining Eq. 2, Eq. 6 and Eq. 1, we can re-write the cross-entropy loss as:

l_{LSRO} = -(1 - Z)\log(p(y)) - \frac{Z}{K}\sum_{k=1}^{K}\log(p(k)).    (7)

For a real training image, Z = 0. For a generated training image, Z = 1. So our system actually has two types of losses, one for real images and one for generated images.

Advantage of LSRO. Using LSRO, we can deal with more training images (outliers) that are located near the real training images in the sample space, and introduce more color, lighting and pose variances to regularize the model. For instance, if we only have one green-clothed identity in the training set, the network may be misled into considering that the color green is a discriminative feature, and this limits the discriminative ability of the model.
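Equation 7 is easy to check numerically. The following sketch (our own illustration, not the authors' code) switches between the real-image loss (Z = 0) and the generated-image loss (Z = 1):

```python
import math

def lsro_loss(p, y, generated, tiny=1e-12):
    """Cross-entropy with LSRO (Eq. 7).

    p         -- predicted class probabilities (softmax output over K classes)
    y         -- ground-truth class index (only used for real images)
    generated -- True for a GAN image (Z = 1), False for a real image (Z = 0)
    """
    K = len(p)
    Z = 1.0 if generated else 0.0
    real_term = -(1.0 - Z) * math.log(p[y] + tiny)          # one-hot part
    uniform_term = -(Z / K) * sum(math.log(pk + tiny) for pk in p)  # LSRO part
    return real_term + uniform_term
```

For a real image the loss reduces to -log p(y) (Eq. 3); for a GAN image it is minimized by a uniform prediction, so a confident prediction of any single class for a generated sample is penalized, which is exactly the regularizing behavior described above.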
By adding generated training samples, such as an unlabeled green-clothed person, the classifier is penalized if it makes a wrong prediction towards the labeled green-clothed person. In this manner, we encourage the network to find more underlying causes and to be less prone to over-fitting. We only use the GAN trained on the original training set to produce outlier images. It would be interesting to further evaluate whether real-world unlabeled images are able to achieve a similar effect (see Table 4).

Competing methods. We compare LSRO with two alternative methods. Details of both methods are available in existing literature [24,32,18]; brief descriptions follow.

• All in one. Using [24,32], a new class label is created, i.e., K + 1, and every generated sample is assigned to this class. CNN training then follows Section 5.2.

• Pseudo label. Using [18], during network training, each incoming GAN image is passed forward through the current network and is assigned a pseudo label by taking the maximum value of the probability prediction vector (p(k) in Eq. 1). This GAN image can thus be trained in the network with this pseudo label. During training, the pseudo label is assigned dynamically, so that the same GAN image may receive different pseudo labels each time it is fed into the network. In our experiments, we begin feeding GAN images and assigning them pseudo labels after 20 epochs. We also set a global weight on the softmax loss: 0.1 for the GAN images and 1 for the real images.

Our experimental results show that both methods also work on the GAN images and that LSRO is superior to "All in one" and "Pseudo label". Explanations are provided in Section 5.3.

Experiment

We mainly evaluate the proposed method on the Market-1501 [49] dataset, because it is large-scale and has a fixed training/testing split.
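The dynamic pseudo-label assignment described above can be sketched as follows; this is our own minimal illustration of the rule from [18], not the authors' code.

```python
import numpy as np

def pseudo_label(logits):
    """Assign a pseudo label to a GAN image: the class with the maximum
    predicted probability (argmax of p(k) in Eq. 1). Because the network
    changes between epochs, the same image may receive different pseudo
    labels over the course of training."""
    return int(np.argmax(logits))  # argmax of the softmax == argmax of logits
```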
We also report results on the CUHK03 dataset [19], but due to the computational cost of 20 training/testing splits, we only use the GAN images generated from the Market-1501 dataset. In addition, we evaluate our method on a recently released pedestrian dataset, DukeMTMC-reID [30], and a fine-grained recognition dataset, CUB-200-2011 [38].

Person Re-id Datasets

Market-1501 is a large-scale person re-ID dataset collected from six cameras. It contains 19,732 images for testing and 12,936 images for training. The images are automatically detected by the deformable part model (DPM) [9], so misalignment is common, and the dataset is close to realistic settings. There are 751 identities in the training set and 750 identities in the testing set, with 17.2 images per identity in the training set. We use all 12,936 detected images from the training set to train the GAN.

CUHK03 contains 14,097 images of 1,467 identities. Each identity is captured by two cameras on the CUHK campus. This dataset contains two image sets: one is annotated by hand-drawn bounding boxes, and the other is produced by the DPM detector [9]. We use the detected set in this paper. There are 9.6 images per identity in the training set. We report the averaged result after training/testing 20 times, using the single-shot setting.

DukeMTMC-reID is a subset of the newly released multi-target, multi-camera pedestrian tracking dataset [30]. The original dataset contains eight 85-minute high-resolution videos from eight different cameras. Hand-drawn pedestrian bounding boxes are available. In this work, we use a subset of [30] for image-based re-ID, in the format of the Market-1501 dataset [49]. We crop pedestrian images from the videos every 120 frames, yielding 36,411 total bounding boxes with IDs annotated by [30]. The DukeMTMC-reID dataset for re-ID has 1,812 identities from eight cameras.
There are 1,404 identities appearing in more than two cameras and 408 identities (distractor IDs) who appear in only one camera. We randomly select 702 IDs as the training set and the remaining 702 IDs as the testing set. In the testing set, we pick one query image for each ID in each camera and put the remaining images in the gallery. As a result, we get 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images. The evaluation protocol is available on our website [2]. Some example re-ID results on DukeMTMC-reID are shown in Fig. 6.

Implementation Details

CNN re-ID baseline. We adopt the CNN re-ID baseline used in [50,51]. Specifically, the Matconvnet [37] package is used. During training, we use the ResNet-50 model [13] and modify the fully-connected layer to have 751, 702 and 1,367 neurons for Market-1501, DukeMTMC-reID and CUHK03, respectively. All the images are resized to 256 × 256 before being randomly cropped to 224 × 224 with random horizontal flipping. We insert a dropout layer before the final convolutional layer and set the dropout rate to 0.5 for CUHK03 and 0.75 for Market-1501 and DukeMTMC-reID, respectively. We use stochastic gradient descent with momentum 0.9. The learning rate of the convolution layers is set to 0.002, decays to 0.0002 after 40 epochs, and we stop training after the 50th epoch. During testing, we extract the 2,048-dim CNN embedding in the last convolutional layer for a 224 × 224 input image. The similarity between two images is calculated by the cosine distance for ranking.

GAN training and testing. We use Tensorflow [3] and the DCGAN package [1] to train the GAN model on the provided data in the original training set without preprocessing (e.g., foreground detection). All the images are resized to 128 × 128 and randomly flipped before training. We use Adam [15] with the parameters β1 = 0.5, β2 = 0.99. We stop training after 30 epochs.
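The testing stage described above (2,048-dim embeddings compared by cosine distance) can be sketched as follows; the function name is ours, and the sketch assumes non-zero embedding vectors.

```python
import numpy as np

def rank_gallery(query_emb, gallery_embs):
    """Rank gallery images by cosine similarity to the query embedding;
    returns gallery indices, best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q                 # cosine similarity per gallery image
    return np.argsort(-scores)     # descending similarity
```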
During GAN testing, we input a 100-dim random vector into the GAN, with each entry taking a value in [−1, 1]. The output image is resized to 256 × 256 and then used in CNN training (with LSRO). More GAN images are shown in Fig. 5.

Evaluation

The ResNet baseline. Using the training/testing procedure described in Section 5.2, we report the baseline performance of ResNet in Table 1, Table 5 and Table 3. The rank-1 accuracy is 73.69%, 71.5% and 60.28% on Market-1501, CUHK03 and DukeMTMC-reID, respectively. Our baseline results are on par with those reported in [50,52]. Note that the baseline alone exceeds many previous works [20,36,47].

The GAN images improve the baseline. As shown in Table 2, when we add 24,000 GAN images to the CNN training, our method significantly improves the re-ID performance on Market-1501. We observe improvements of +4.37% (from 73.69% to 78.06%) and +4.75% (from 51.48% to 56.23%) in rank-1 accuracy and mAP, respectively. On CUHK03, we observe improvements of +1.6%, +1.2%, +0.8%, and +1.6% in rank-1, rank-5 and rank-10 accuracy and mAP, respectively. The improvement on CUHK03 is relatively small compared to that on Market-1501, because the DCGAN model is trained on Market-1501 and the generated images share a more similar distribution with Market-1501 than with CUHK03. We also observe improvements of +2.46% and +2.14% in rank-1 and mAP, respectively, over the strong ResNet baseline on the DukeMTMC-reID dataset. These results indicate that the unlabeled images generated by the GAN effectively yield improvements over the baseline using the LSRO method.

The impact of using different numbers of GAN images during training. We evaluate how the number of GAN images affects the re-ID performance. Since unlabeled data is easy to obtain, we expect the model to learn more general knowledge as the number of unlabeled images increases. The results on Market-1501 are shown in Table 2. We note that the number of real training images in Market-1501 is 12,936.
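For reference, the two reported metrics can be computed per query as follows; this is a generic sketch of rank-1 accuracy and average precision (mAP is the mean of AP over all queries), not the paper's evaluation code.

```python
import numpy as np

def rank1_and_ap(scores, good):
    """scores: similarity of the query to each gallery image.
    good: boolean mask of correct gallery matches.
    Returns (rank-1 hit, average precision) for this query."""
    order = np.argsort(-scores)
    hits = good[order]
    rank1 = float(hits[0])
    precision = np.cumsum(hits) / (np.arange(len(hits)) + 1.0)
    ap = float((precision * hits).sum() / hits.sum())
    return rank1, ap
```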
Two observations are made. First, adding different numbers of GAN images consistently improves the baseline. Adding approximately 3× GAN images relative to the real training set still yields a +2.38% improvement in rank-1 accuracy. Second, the peak performance is achieved when 2× GAN images are added. When too few GAN samples are incorporated into the system, the regularization ability of LSRO is inadequate. In contrast, when too many GAN samples are present, the learning machine tends to converge towards assigning uniform prediction probabilities to all the training samples, which is not desirable. Therefore, a trade-off is recommended to avoid poor regularization and over-fitting of uniform label distributions.

Table 1 (header and first-row fragment; the remaining rows appear later in the text):
method          | Single Query rank-1 / mAP | Multi. Query rank-1 / mAP
BoW+kissme [49] | 44.42 / …

Table 2. Comparison of LSRO, "All in one", and "Pseudo label" under different numbers of GAN-generated images on Market-1501. We show that LSRO is superior to the other two methods, whose best performance is highlighted in blue and red, respectively. Rank-1 accuracy (%) and mAP (%) are shown.

Table 3. Comparison of the baseline on DukeMTMC-reID. Rank-1 accuracy (%) and mAP (%) are shown.
method          | rank-1 | mAP
BoW+kissme [49] | 25.13  | 12.17
LOMO+XQDA [20]  | 30.75  | 17.04
Basel. [50,52]  | 65.22  | 44.99
Basel. + LSRO   | 67.68  | 47.13

GAN images vs. real images in training. To further evaluate the proposed method, we replace the GAN images with real images from CUHK03, which are viewed as unlabeled during training. Since CUHK03 only contains 14,097 images, we randomly select 12,000 for a fair comparison. Experimental results are shown in Table 4. We compare the results obtained using the 12,000 CUHK03 images and the 12,000 GAN images. We find that the real data from CUHK03 also assists in the regularization and improves the performance, but the model trained with GAN-generated data is slightly better.

Table 5. Comparison of the state-of-the-art reports on the CUHK03 dataset. We list the fine-tuned ResNet baseline as well. The mAP (%) and rank-1 (%) precision are presented. * the respective paper is on ArXiv but not published.

In fact, although the images generated by DCGAN are visually imperfect (see Fig. 3), they still possess similar regularization ability to the real images.

Comparison with the two competing methods. We compare the LSRO method with the "All in one" and "Pseudo label" methods implied in [24,32] and [18], respectively. The experimental results on Market-1501 are summarized in Table 2. We first observe that both strategies yield improvements over the baseline. The "All in one" method treats all the unlabeled samples as a new class, which forces the network to make "careful" predictions for the existing K classes. The "Pseudo label" method gradually labels the new data, and thus introduces more variance to the network. Nevertheless, we find that LSRO exceeds both strategies by approximately +1% to +2%. We speculate the reason is that the "All in one" method makes a coarse label estimation, while "Pseudo label" originally assumes that all the unlabeled data belongs to the existing classes [18], which is not true in person re-ID. While these two methods still use the one-hot label distribution, the LSRO method makes a weaker assumption (label smoothing) about the labels of the GAN images. These reasons may explain why LSRO has superior performance.

Comparison with the state-of-the-art methods. We compare our method with the state-of-the-art methods on Market-1501 and CUHK03, listed in Table 1 and Table 5, respectively. On Market-1501, we achieve rank-1 accuracy = 78.06%, mAP = 56.23% in the single-query mode, which is the best result among the published papers, and the second best among all available results including ArXiv papers. On CUHK03, we arrive at rank-1 accuracy = 73.1%, mAP = 77.4%, which is also very competitive.
The previous best result is produced by combining the identification and verification losses [10,52]. We further investigate whether LSRO could work on this model. We fine-tuned the publicly available model in [52] with LSRO and achieve state-of-the-art results of rank-1 accuracy = 83.97%, mAP = 66.07% on Market-1501. On CUHK03, we also observe a state-of-the-art performance of rank-1 accuracy = 84.6%, mAP = 87.4%. We therefore show that the LSRO method is complementary to previous methods due to the regularization of the GAN data.

Fine-grained Recognition

Fine-grained recognition also faces the problem of a lack of training data and annotations. To further test the effectiveness of our method, we provide results on the CUB-200-2011 dataset [38]. This dataset contains 200 bird classes with 29.97 training images per class on average. Bounding boxes are used in both training and testing. We do not use part annotations. In our implementation, the ResNet baseline has a recognition accuracy of 82.6%, which is slightly higher than the 82.3% reported in [21]. This is the baseline we will compare our method with.

Table 6. We show the recognition accuracy (%) on CUB-200-2011. The proposed method has a 0.6% improvement over the competitive baseline. The two-model ensemble shows a competitive result.
method            | model       | annotation | top-1
Zhang et al. [48] | AlexNet     | 2×part     | 76.7
Zhang et al. [48] | VGGNet      | 2×part     | 81.6
Liu et al. [21]   | ResNet-50   | attribute  | 82.9
Wang et al. [39]  | 3×VGGNet    | ×          | 83.0
Basel. [21]       | ResNet-50   | ×          | 82.6
Basel.+LSRO       | ResNet-50   | ×          | 83.2
Basel.+LSRO       | 2×ResNet-50 | ×          | 84.4

Using the same pipeline as in Fig. 1, we train DCGAN on the 5,994 images in the training set, and then we combine the real images with the generated images (see Fig. 5) to train the CNN. During testing, we adopt the standard 10-crop testing [17], which uses 256 × 256 images as input and the averaged prediction as the classification result.
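The standard 10-crop protocol mentioned above (four corners plus center, each with its horizontal flip, predictions averaged) can be sketched as follows; this is a generic illustration, not the Matconvnet code used in the paper.

```python
import numpy as np

def ten_crop(img, size=224):
    """Return the 10 test crops of an H×W image: 4 corners + center,
    plus the horizontal flip of each."""
    h, w = img.shape[0], img.shape[1]
    tops = [0, 0, h - size, h - size, (h - size) // 2]
    lefts = [0, w - size, 0, w - size, (w - size) // 2]
    crops = [img[t:t + size, l:l + size] for t, l in zip(tops, lefts)]
    crops += [c[:, ::-1] for c in crops]   # horizontal flips
    return np.stack(crops)
```

At test time, the classifier is run on all 10 crops and the 10 prediction vectors are averaged.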
As shown in Table 6, the strong baseline outperforms some recent methods, and the proposed method further yields an improvement of +0.6% (from 82.6% to 83.2%). We also combine the two models generated by our method with different initializations to form an ensemble. This leads to an 84.4% recognition accuracy. In [21], Liu et al. report an 85.5% accuracy with a five-model ensemble using parts and a global scene. We do not include this result because extra annotations are used. We focus on the regularization ability of the GAN, not on producing a state-of-the-art result.

Conclusion

In this paper, we propose an "in vitro" usage of GANs for representation learning, i.e., person re-identification. Using a baseline DCGAN model [27], we show that the imperfect GAN images effectively demonstrate their regularization ability when trained with a ResNet baseline model. Through the proposed LSRO method, we mix the unlabeled GAN images with the labeled real training images for simultaneous semi-supervised learning. Albeit simple, the method demonstrates consistent performance improvement over the re-ID and fine-grained recognition baseline systems, which sheds light on the practical use of GAN-generated data. In the future, we will continue to investigate whether GAN images of better visual quality yield superior results when integrated into supervised learning. This paper provides some baseline evaluations using the imperfect GAN images, and the future investigation would be intriguing.

Figure 3. Examples of GAN images and real images. (a) The top two rows show the pedestrian samples generated by DCGAN.

Figure 5. The newly generated images from a DCGAN model trained on DukeMTMC-reID and CUB-200-2011. Through LSRO, they are added to the training sets of DukeMTMC-reID and CUB-200-2011 to regularize the CNN model.

Figure 6. Sample retrieval results on DukeMTMC-reID using the proposed method. The images in the first column are the query images.
The retrieved images are sorted according to the similarity scores from left to right. The correct matches are in the blue rectangles, and the false matching images are in the red rectangles. DukeMTMC-reID is challenging because it contains pedestrians with occlusions and similar appearance.

Table 1. Comparison of the state-of-the-art methods reported on the Market-1501 dataset. We also provide results of the fine-tuned ResNet baseline. Rank-1 precision (%) and mAP (%) are listed. * the respective paper is on ArXiv but not published.
method (cont.)        | SQ rank-1 | SQ mAP | MQ rank-1 | MQ mAP
BoW+kissme [49] cont. | …         | 20.76  | -         | -
MR CNN [34]           | 45.58     | 26.11  | 56.59     | 32.26
FisherNet [42]        | 48.15     | 29.94  | -         | -
SL [6]                | 51.90     | 26.35  | -         | -
S-LSTM [36]           | -         | -      | 61.6      | 35.3
DNS [47]              | 55.43     | 29.87  | 71.56     | 46.03
Gate Reid [35]        | 65.88     | 39.55  | 76.04     | 48.45
SOMAnet [5]*          | 73.87     | 47.89  | 81.29     | 56.98
Verif.-Identif. [52]* | 79.51     | 59.87  | 85.84     | 70.33
DeepTransfer [10]*    | 83.7      | 65.5   | 89.6      | 73.8
Basel. [50, 52]*      | 73.69     | 51.48  | 81.47     | 63.95
Basel. + LSRO         | 78.06     | 56.23  | 85.12     | 68.52
Verif-Identif. + LSRO | 83.97     | 66.07  | 88.42     | 76.10

Table 2 (data):
# GAN Img. | LSRO rank-1 / mAP | All in one rank-1 / mAP | Pseudo label rank-1 / mAP
0 (basel.) | 73.69 / 51.48     | 73.69 / 51.48           | 73.69 / 51.48
12,000     | 76.81 / 55.32     | 75.33 / 52.82           | 76.07 / 53.56
18,000     | 77.26 / 55.55     | 77.20 / 55.04           | 76.34 / 53.45
24,000     | 78.06 / 56.23     | 76.63 / 55.12           | 75.80 / 53.03
30,000     | 77.38 / 55.48     | 75.95 / 55.18           | 75.21 / 52.65
36,000     | 76.07 / 54.59     | 76.87 / 55.47           | 74.67 / 52.38

References

DCGAN-tensorflow package. https://github.com/carpedm20/DCGAN-tensorflow.
DukeMTMC-reID Dataset. https://github.com/layumi/DukeMTMC-reID_evaluation.
M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. Tensorflow: A system for large-scale machine learning. In OSDI, 2016.
E. Ahmed, M. Jones, and T. K. Marks.
An improved deep learning architecture for person re-identification. In CVPR, 2015.
I. B. Barbosa, M. Cristani, B. Caputo, A. Rognhaugen, and T. Theoharis. Looking beyond appearances: Synthetic training data for deep CNNs in re-identification. arXiv:1701.03153, 2017.
D. Chen, Z. Yuan, B. Chen, and N. Zheng. Similarity learning with spatial constraints for person re-identification. In CVPR, 2016.
X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng. Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In CVPR, 2016.
P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 32(9):1627-1645, 2010.
M. Geng, Y. Wang, T. Xiang, and Y. Tian. Deep transfer learning for person re-identification. arXiv:1611.05244, 2016.
I. Goodfellow, M. Mirza, A. Courville, and Y. Bengio. Multi-prediction deep Boltzmann machines. In NIPS, 2013.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
M. Köstinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof. Large scale metric learning from equivalence constraints. In CVPR, 2012.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop, 2013.
W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person re-identification. In CVPR, 2014.
S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification by local maximal occurrence representation and metric learning. In CVPR, 2015.
X. Liu, J. Wang, S. Wen, E. Ding, and Y. Lin. Localizing by describing: Attribute-guided attention localization for fine-grained recognition. arXiv:1605.06217, 2016.
B. Ma, Y. Su, and F. Jurie. Bicov: A novel image representation for person re-identification and face verification. In BMVC, 2012.
B. Ma, Y. Su, and F. Jurie. Covariance descriptor based on bio-inspired features for person re-identification and face verification. Image and Vision Computing, 32(6):379-390, 2014.
A. Odena. Semi-supervised learning with generative adversarial networks. arXiv:1606.01583, 2016.
G. Papandreou, L.-C. Chen, K. P. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. In ICCV, 2015.
D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
M. Ranzato and M. Szummer. Semi-supervised learning of compact document representations with deep networks. In ICML, 2008.
A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015.
E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In ECCV Workshop, 2016.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
E. Ustinova, Y. Ganin, and V. Lempitsky. Multiregion bilinear convolutional neural networks for person re-identification. arXiv:1512.05300, 2015.
R. R. Varior, M. Haloi, and G. Wang. Gated siamese convolutional neural network architecture for human re-identification. In ECCV, 2016.
R. R. Varior, B. Shuai, J. Lu, D. Xu, and G. Wang. A siamese long short-term memory architecture for human re-identification. In ECCV, 2016.
A. Vedaldi and K. Lenc. Matconvnet: Convolutional neural networks for Matlab. In ACMMM, 2015.
C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, 2011.
D. Wang, Z. Shen, J. Shao, W. Zhang, X. Xue, and Z. Zhang. Multiple granularity descriptors for fine-grained categorization. In ICCV, 2015.
F. Wang, W. Zuo, L. Lin, D. Zhang, and L. Zhang. Joint learning of single-image and cross-image representations for person re-identification. In CVPR, 2016.
J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS, 2016.
L. Wu, C. Shen, and A. van den Hengel. Deep linear discriminant analysis on fisher networks: A hybrid architecture for person re-identification. Pattern Recognition, 2016.
S. Wu, Y.-C. Chen, X. Li, A.-C. Wu, J.-J. You, and W.-S. Zheng. An enhanced deep feature representation for person re-identification. In WACV, 2016.
T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang. End-to-end deep learning for person search. arXiv:1604.01850, 2016.
R. Yeh, C. Chen, T. Y. Lim, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with perceptual and contextual losses. arXiv:1607.07539, 2016.
D. Yi, Z. Lei, and S. Z. Li. Deep metric learning for practical person re-identification. arXiv:1407.4979, 2014.
L. Zhang, T. Xiang, and S. Gong. Learning a discriminative null space for person re-identification. arXiv:1603.02139, 2016.
N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based R-CNNs for fine-grained category detection. In ECCV, 2014.
L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, 2015.
L. Zheng, Y. Yang, and A. G. Hauptmann. Person re-identification: Past, present and future. arXiv:1610.02984, 2016.
L. Zheng, H. Zhang, S. Sun, M. Chandraker, and Q. Tian. Person re-identification in the wild. arXiv:1604.02531, 2016.
Z. Zheng, L. Zheng, and Y. Yang. A discriminatively learned CNN embedding for person re-identification. arXiv:1611.05666, 2016.
[ "https://github.com/layumi/Person-reID_GAN", "https://github.com/carpedm20/DCGAN-tensorflow.", "https://github.com/layumi/DukeMTMC-reID_evaluation." ]
[ "Geometrical effects on the downstream conductance in quantum-Hall-superconductor hybrid systems", "Geometrical effects on the downstream conductance in quantum-Hall-superconductor hybrid systems" ]
[ "A David \nUniv. Grenoble Alpes\nCEA, Grenoble INP\nIRIG\n38000Pheliqs, GrenobleFrance\n", "J S Meyer \nUniv. Grenoble Alpes\nCEA, Grenoble INP\nIRIG\n38000Pheliqs, GrenobleFrance\n", "M Houzet \nUniv. Grenoble Alpes\nCEA, Grenoble INP\nIRIG\n38000Pheliqs, GrenobleFrance\n" ]
[ "Univ. Grenoble Alpes\nCEA, Grenoble INP\nIRIG\n38000Pheliqs, GrenobleFrance", "Univ. Grenoble Alpes\nCEA, Grenoble INP\nIRIG\n38000Pheliqs, GrenobleFrance", "Univ. Grenoble Alpes\nCEA, Grenoble INP\nIRIG\n38000Pheliqs, GrenobleFrance" ]
[]
We consider a quantum Hall (QH) region in contact with a superconductor (SC), i.e., a QH-SC junction. Due to successive Andreev reflections, the QH-SC interface hosts hybridized electron and hole edge states called chiral Andreev edge states (CAES). We theoretically study the transport properties of these CAES by using a microscopic, tight-binding model. We find that the transport properties strongly depend on the contact geometry and the value of the filling factor. We notice that it is necessary to add local barriers at the corners of the junction in order to reproduce such properties, when using effective one-dimensional models.
10.1103/physrevb.107.125416
[ "https://export.arxiv.org/pdf/2210.16867v2.pdf" ]
257,649,419
2210.16867
3700ee88b53f976f5390d2a54e9e714558e7cce8
Geometrical effects on the downstream conductance in quantum-Hall-superconductor hybrid systems

A. David, J. S. Meyer, and M. Houzet
Univ. Grenoble Alpes, CEA, Grenoble INP, IRIG, Pheliqs, 38000 Grenoble, France
(Dated: March 23, 2023)

We consider a quantum Hall (QH) region in contact with a superconductor (SC), i.e., a QH-SC junction. Due to successive Andreev reflections, the QH-SC interface hosts hybridized electron and hole edge states called chiral Andreev edge states (CAES). We theoretically study the transport properties of these CAES by using a microscopic, tight-binding model. We find that the transport properties strongly depend on the contact geometry and the value of the filling factor. We notice that it is necessary to add local barriers at the corners of the junction in order to reproduce such properties, when using effective one-dimensional models.

I. INTRODUCTION

Combining systems displaying a quantum Hall effect and superconductors is a difficult task, as the magnetic field needed to realize the quantum Hall effect tends to suppress superconductivity. If successful, it leads to interesting phenomena, as the superconductor may induce correlations in the chiral edge states of the quantum Hall system. In particular, the formation of so-called chiral Andreev edge states (CAES) has been predicted. Semiclassically, these CAES result from skipping orbits of electrons and holes involving Andreev reflections at the quantum Hall-superconductor (QH-SC) interface [1][2][3][4]. Quantum-mechanically, the edge states along that interface are described as hybridized electron and hole states [5][6][7][8][9]. Their use for topologically protected quantum computing was also considered [10][11][12].
A number of recent experiments have succeeded in creating QH-SC hybrid systems using either graphene [13][14][15][16] or an InAs two-dimensional electron gas (2DEG) [17], and in observing evidence for CAES in the so-called downstream conductance. Namely, the downstream conductance measures the conversion of electrons into holes, involving the transfer of Cooper pairs into the superconductor along the interface. The larger the conversion probability, the smaller the downstream conductance; in particular, it becomes negative when the conversion probability exceeds one half. While the experiments [13][14][15][16][17] did indeed measure negative downstream conductances, questions remain about the magnitude and the parameter dependence of the effect that do not match simple models: the observed signal is much smaller than expected. Furthermore, it shows either an irregular pattern [13][14][15] or remains roughly constant [17] when sweeping the field or the gate voltage, while simple models predict a regular oscillation. This stimulated further theoretical research. A suppression of the measured signal may be explained by the absorption of quasiparticles in the superconductor, for example, by subgap states in nearby vortices [14,[18][19][20], whereas the oscillations may be strongly affected by disorder [18,19]. Here we explore a different aspect that has not been addressed before: the role of the geometry. Namely, the downstream conductance does not probe only the properties of the QH-SC interface, but also the scattering properties at the point where this interface meets the QH-vacuum interface. We find that these scattering probabilities strongly depend on the geometry of the contact region. In particular, a pronounced dependence on the angle between the QH-vacuum interface and the QH-SC interface is observed.
Interestingly, this opens the possibility of creating asymmetric structures, where the angles are different on the two sides of the superconductor, that may display an enhanced overall electron-hole conversion probability. This may even lead to a situation where the downstream conductance becomes negative on average. Note that, to study the effect of geometry, a full two-dimensional description of the system is necessary: simple one-dimensional models commonly used in the literature are not sufficient. Some aspects may be captured by using a generalized one-dimensional model, though there is no obvious way to determine its parameters.

The paper is organized as follows. In Sec. II, we present the system and the downstream conductance formula based on edge state transport, whose parameters have to be computed. To do so, we first use a two-dimensional model in Sec. III. In particular, we start by studying a continuous model in Sec. III A that allows one to determine the properties of the edge states at an infinitely long interface. We then use a tight-binding model in Sec. III B to obtain the scattering probabilities at the points where two different interfaces, i.e., QH-vacuum and QH-SC, meet. With these two ingredients, we have all that is needed to compute the downstream conductance. In Sec. IV, we address the question whether the prior results may be obtained from an effective one-dimensional model. Further considerations on the role of additional nonchiral edge states and the effects of temperature can be found in Sec. V, before we conclude in Sec. VI. Some details are relegated to the appendices.

II. SYSTEM AND CONDUCTANCE FORMULA

The conductance along the edge of a system in the quantum Hall regime can be attributed to the properties of its chiral edge states. We are interested in the regime where one spin-degenerate Landau level is occupied in the quantum Hall region, i.e., there are two chiral edge states. Introducing particle-hole space to be able to incorporate superconductivity, we can describe one spin state as an electron state and the other spin state as a hole state. While the chiral edge states along an edge with the vacuum are either pure electron or hole states, the CAES along an edge with a superconductor are a superposition of electron and hole components. In the following, we will call them quasielectrons when their momentum at the Fermi level is negative and quasiholes when their momentum at the Fermi level is positive. As we will see below, for the system under consideration, this choice is in agreement with the pure electron and hole states obtained when Andreev processes are suppressed.

A typical process contributing to G_d is shown in Fig. 1: an incoming electron |e⟩ scatters at the first corner, propagates along the QH-SC interface as a superposition of quasielectron |qe⟩ and quasihole |qh⟩ CAES, then scatters at the second corner, and finally exits the superconductor in a superposition of electron |e⟩ and hole |h⟩. The hole probability P_h = |p_h|² of the outgoing state depends on the scattering processes at the corners as well as the interference of the CAES propagation along the QH-SC interface.

We want to study the situation where the edge of the quantum Hall system is in contact with a superconductor over a region with finite length L, as shown in Fig. 1. In that case, we can define a probability P_h that an incoming electronlike state is transformed into an outgoing holelike state. It depends on the properties of the CAES along the QH-SC interface as well as the scattering amplitudes at the two corners which begin and end that interface. Assuming ballistic propagation along the interface, the probability P_h can be written as [8]

P_h = τ_1(1 − τ_2) + τ_2(1 − τ_1) + 2√(τ_1(1 − τ_2)τ_2(1 − τ_1)) cos(2k_0 L + φ_12),   (1)

where τ_1 is the probability that the electron is converted into a quasihole at the beginning of the QH-SC interface, whereas τ_2 is the probability that a quasielectron is converted into a hole at the end of the QH-SC interface.
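To make Eq. (1) concrete, here is a minimal numerical sketch (plain Python; the parameter values are illustrative and not taken from the paper) that evaluates P_h and the corresponding zero-temperature downstream conductance G_d = G_0(1 − 2P_h) discussed below:

```python
import math

G0 = 1.0  # conductance quantum 2e^2/h, set to 1 for simplicity

def P_h(tau1, tau2, k0, L, phi12=0.0):
    """Hole conversion probability of Eq. (1)."""
    direct = tau1 * (1 - tau2) + tau2 * (1 - tau1)
    interference = 2 * math.sqrt(tau1 * (1 - tau2) * tau2 * (1 - tau1))
    return direct + interference * math.cos(2 * k0 * L + phi12)

def G_d(tau1, tau2, k0, L, phi12=0.0):
    """Zero-temperature downstream conductance, G_d = G0 (1 - 2 P_h)."""
    return G0 * (1 - 2 * P_h(tau1, tau2, k0, L, phi12))

# Symmetric corners, tau1 = tau2 = 0.5: P_h oscillates between 0 and 1,
# so G_d swings over the full range [-G0, +G0] as L is varied.
print(G_d(0.5, 0.5, k0=1.0, L=0.0))            # cos = +1 -> P_h = 1 -> G_d = -G0
print(G_d(0.5, 0.5, k0=1.0, L=math.pi / 2))    # cos = -1 -> P_h = 0 -> G_d = +G0
```

Averaging over the interference phase kills the cosine term, reproducing the mean conductance Ḡ_d = G_0(1 − 2τ_1)(1 − 2τ_2) quoted in the text.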
The second line describes the interference resulting from the fact that the particle may propagate along the QH-SC interface either as a quasielectron with momentum −k_0 or as a quasihole with momentum +k_0. The phase shift φ_12 depends on the phases of the scattering amplitudes at the two corners.

At zero temperature, the differential downstream conductance, G_d(0) = ∂I/∂V |_{V=0}, where V is the voltage applied to the upstream reservoir and I is the current flowing into the downstream reservoir (see Fig. 1), is directly related to the probability P_h at the Fermi level, namely G_d(0) = G_0 (1 − 2P_h), where G_0 = 2e²/h is the conductance quantum. A negative downstream conductance is a clear signature of the Andreev conversion taking place at the QH-SC interface. Note that the average conductance is given as

Ḡ_d = G_0 ∏_{i=1,2} (1 − 2τ_i).

For τ_1 = τ_2 it is limited to positive values, whereas τ_1 ≠ τ_2 allows one to realize Ḡ_d < 0. For completeness, let us mention that the maximal downstream conductance is

G_d^max = G_0 [1 − 2(√(τ_1(1 − τ_2)) − √(τ_2(1 − τ_1)))²],

while the minimal downstream conductance is

G_d^min = G_0 [1 − 2(√(τ_1(1 − τ_2)) + √(τ_2(1 − τ_1)))²].

In the symmetric case τ_1 = τ_2 ≡ τ, this yields G_d^max = G_0 and G_d^min = G_0 [1 − 8τ(1 − τ)].

Thus, to model the experimentally measured downstream conductance, we need to determine k_0 as well as the probabilities τ_i associated with the contact points between the QH region, the vacuum, and the superconductor. In the following, we show that k_0 can be obtained semianalytically from a microscopic model of an infinite QH-SC interface. By contrast, there is no simple model for the probabilities τ_i. We study their dependence on system parameters and, in particular, the geometry of the contact points using tight-binding simulations. To conclude, we compare with an effective 1D model.

III. TWO-DIMENSIONAL MODEL

A. Continuum model of an infinite QH-SC interface

We will consider an interface along the y axis such that the region x < 0 is in the quantum Hall regime whereas the region x > 0 is a superconductor. The microscopic Hamiltonian can be written in the form

H = ( H_0 − μ(x)     Δ(x)
      Δ*(x)         −H_0* + μ(x) ),   (2)

with r = (x, y) and

H_0 = (1/2m) (−i∇ − eA(x))² + V(x),   (3)

using units where ħ = 1. Here, μ(x) = μ_QH Θ(−x) + μ_SC Θ(x) accounts for the drop of the chemical potential measured from the band bottom in the 2DEG and the superconductor, μ_QH and μ_SC, respectively; Δ(x) = ΔΘ(x) is the superconducting order parameter with amplitude Δ (that we will choose to be real in the following); the potential V(x) = V_0 δ(x) with strength V_0 models an interface barrier; and Θ(x) is the Heaviside function. Note that we neglect self-consistency of the order parameter. Furthermore, we assume that the magnetic field in the superconductor is screened. Thus, choosing the Landau gauge that preserves translational invariance along the interface, we set A(x) = BxΘ(−x) û_y. The wave functions can then be written in the form

Ψ(r) = (e^{i k_y y}/√L_y) ψ_{k_y}(x),   (4)

where L_y is the length of the system along the y direction and ψ_{k_y} is the transverse wave function associated with the longitudinal wave vector k_y. Following [21], we can determine the CAES by writing the wave functions ψ^QH_{k_y}(x) in the half-space x < 0 and ψ^SC_{k_y}(x) in the half-space x > 0, and matching them at the interface to obtain an eigenstate of Eq. (2) at energy E. In the QH region, one obtains

ψ^QH_{k_y}(x) = c^QH_+ (1, 0)ᵀ χ_+(x) + c^QH_− (0, 1)ᵀ χ_−(x),   (5)

with

χ_±(x) = N_± U( −(μ_QH ± E)/ω_c , −(√2/l_B)(x ∓ k_y l_B²) ),   (6)

where U(a, z) are parabolic cylinder functions that vanish as z → −∞ (see Ref. [22] for the formal definition), and N_± are normalization coefficients such that ∫_{−∞}^0 dx |χ_±(x)|² = 1. Here we introduced the cyclotron frequency ω_c = eB/m and the magnetic length l_B = 1/√(eB).
Restricting ourselves to the regime |E| < Δ, the wave functions in the SC region take the form [23]

ψ^SC_{k_y}(x) = c^SC_+ (1/√2) (γ, 1)ᵀ φ(x) + c^SC_− (1/√2) (γ*, 1)ᵀ φ*(x),   (7)

with

φ(x) = √(2 Im q) e^{iqx},  q² = (k_F^SC)² − k_y² + 2imΔ√(1 − ε²),   (8)

and γ = ε + i√(1 − ε²), where ε = E/Δ and k_F^SC = √(2mμ_SC). The matching procedure, ψ^QH_{k_y}(0) = ψ^SC_{k_y}(0) and ∂_x ψ^SC_{k_y}(0) − ∂_x ψ^QH_{k_y}(0) = 2mV_0 ψ^SC_{k_y}(0), yields the following secular equation for the energy E(k_y) [5]:

s(E, k_y) ≡ GH (c² + d²) + G′H′ + d (G′H + GH′) + c√(1 − ε²) (G′H − GH′) = 0,   (9)

with the shorthand notations c = Re q, d = Im q + 2mV_0, G = χ_+(0), G′ = χ′_+(0), H = χ_−(0), and H′ = χ′_−(0), where the primes denote derivatives with respect to x.

When the filling factor ν ≡ 2μ_QH/ω_c is in the range 1 < ν < 3, the chemical potential lies between the first and second (spin-degenerate) Landau levels of the QH region, and one obtains a single pair of CAES. An example of the spectrum is shown in Fig. 2, where we considered an ideal interface, i.e., μ_QH = μ_SC and V_0 = 0. At low energies, we see the two linearly dispersing CAES with energies E_±(k_y) = v_CAES (k_y ± k_0). The Fermi momentum k_0 appearing in the interference term for the downstream conductance, cf. Eq. (1), can be obtained by solving s(0, ∓k_0) = 0, which in general has to be done numerically. We observe that, as long as Δ ≪ μ_QH, μ_SC, the momentum k_0 does not depend on Δ. In the limit of a large interface barrier Z ≡ 2mV_0/k_F^QH ≫ 1 and ν → 3, we find k_0 l_B ≪ 1, and an analytical solution is possible, namely k_0 ≈ (3 − ν)√π/(4 l_B) [9]. In Fig. 3, we show the evolution of k_0 as a function of the barrier strength Z for various values of the filling factor. Typically, k_0 decreases with increasing ν, except for a small region of intermediate values of Z and fillings ν close to three. The velocity of the low-energy states is given as

v_CAES = − [∂_{k_y} s(E, k_y) / ∂_E s(E, k_y)] |_{E=0, |k_y|=k_0}.
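The large-barrier limit quoted above can be tabulated directly. The following snippet (illustrative only, and valid only for Z ≫ 1 and ν → 3, where k_0 l_B ≪ 1) evaluates k_0 ≈ (3 − ν)√π/(4 l_B) and confirms that it decreases monotonically toward zero as the filling factor approaches three:

```python
import math

def k0_large_Z(nu, l_B=1.0):
    """Large-barrier (Z >> 1), nu -> 3 limit of the Fermi momentum:
    k0 ≈ (3 - nu) sqrt(pi) / (4 l_B)."""
    return (3.0 - nu) * math.sqrt(math.pi) / (4.0 * l_B)

for nu in (2.7, 2.8, 2.9):
    print(nu, k0_large_Z(nu))
# k0 l_B stays well below one in this regime, consistent with the expansion,
# and decreases monotonically as the filling factor approaches three.
```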
One may also compute the electron and hole content of the states, ψ(x) = (ψ_e(x), ψ_h(x)). We define

f_h = ∫_{−∞}^{∞} dx |ψ_h|² = 1 − ∫_{−∞}^{∞} dx |ψ_e|².   (10)

In particular, the result for the states at the Fermi level reads

f_h^+ = 1 − c_0² H_0² [1 + (1/4q_0″)(G_0′² + (g_0 + q_0″ G_0)²/|q_0|²)] × { g_0² + c_0² H_0² [1 + (1/2q_0″)(G_0′² + g_0²/c_0²)] }^{−1},   (11)

where g_0 = G_0′ + d_0 G_0 and q_0″ = Im q_0. Furthermore, the subscript 0 indicates that the previously introduced quantities have to be taken at E = 0 and k_y = −k_0. Particle-hole symmetry implies f_h^− = 1 − f_h^+. In the limit Z → ∞, we recover pure electron and hole states, f_h^+ = 0 and f_h^− = 1. By contrast, at Z = 0 and in the limit Δ → 0, one finds an equal repartition between electron and hole components, f_h^+ = f_h^− = 1/2. As an illustrative example, we represent the hole content f_h^+ of the quasielectron CAES as a function of the barrier strength Z for various values of the filling factor ν in Fig. 4.

B. Tight-binding simulation and scattering probabilities

We now turn to the scattering probabilities at the corners where the QH-vacuum interface and the QH-SC interface meet. In addition to the system parameters, such a corner can be characterized by two angles, as shown in Fig. 1: the angle θ_QH,i that the QH-vacuum interface forms with the continuation of the QH-SC interface, and the angle θ_SC,i that the SC-vacuum interface forms with the continuation of the QH-SC interface. To ensure that there is no overlap, the angles must satisfy θ_QH,i + θ_SC,i > 0. To compute the scattering probabilities τ_i as a function of these angles and system parameters, we perform tight-binding simulations with a discretized version of the Hamiltonian (3) on a square lattice using the Kwant software [24].
Introducing the Nambu spinor ψ_i = (c_i, c_i†)ᵀ, where c_i† (c_i) is the operator that creates (annihilates) an electron at the position r_i = (x_i, y_i), the second-quantized tight-binding Hamiltonian reads

H_TB = Σ_i ψ_i† [(4t − μ_i + V_i) σ_z + Δ_i σ_x] ψ_i + Σ_{⟨i,j⟩} ψ_i† t e^{iφ_ij σ_z} σ_z ψ_j,   (12)

where σ_{x/z} are Pauli matrices in Nambu space, and ⟨i, j⟩ denotes pairs of nearest-neighbor sites. The barrier potential is given as V_i = V_0 δ_{x_i,0} Θ(L/2 − |y_i|), where δ_{i,j} is the Kronecker delta. In the QH region, μ_i = μ_QH and Δ_i = 0, whereas in the SC region, μ_i = μ_SC and Δ_i = Δ. Using a Peierls substitution, the hopping matrix element t = 1/(2ma²), where a is the lattice spacing, acquires a field-dependent phase [25],

φ_ij = −(πB/φ_0) (x_i + x_j)(y_j − y_i) Θ(−(x_i + x_j)/2),   (13)

with φ_0 the flux quantum. This lattice model matches the continuum model as long as the hopping energy is the largest energy scale, Δ, μ_QH, μ_SC ≪ t. We further make the realistic assumption Δ ≪ μ_QH ≤ μ_SC.

As the conversion probability from electron to quasihole at the first corner is equal to the conversion probability from quasielectron to hole at the second corner when the parameters are chosen the same [8], τ_1(θ_QH, θ_SC) = τ_2(θ_QH, θ_SC) ≡ τ(θ_QH, θ_SC), it is sufficient to simulate the first QH-SC corner. The python code is available on Zenodo [26]. When not specified, we set t = 1 and μ_SC = t/20.

Figure 5 shows the dependence of τ on the angles for μ_QH = μ_SC = 10Δ. In Fig. 5a, θ_QH = 90° is fixed while θ_SC varies. We see a weak dependence of τ on θ_SC for angles up to 90°. This is not surprising, as the propagation of the chiral edge states does not involve the SC-vacuum interface. The residual effect of θ_SC on the scattering probability is due to the modified decay of the edge state wave function into the bulk in the vicinity of the corner.
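Returning to the lattice model: a quick consistency check on the Peierls phases of Eq. (13) is that, summed around a single lattice plaquette deep inside the QH region, they yield −2π times the enclosed flux Ba² in units of φ_0, while a plaquette inside the field-free superconductor picks up no phase. The sketch below (plain Python; site coordinates and field value are illustrative) verifies this:

```python
import math

def peierls_phase(ri, rj, B, phi0=1.0):
    """Peierls phase of Eq. (13) for a hop from site ri to site rj.
    The Heaviside factor confines the field to the QH region, (x_i + x_j)/2 < 0."""
    (xi, yi), (xj, yj) = ri, rj
    theta = 1.0 if (xi + xj) / 2 < 0 else 0.0
    return -math.pi * B / phi0 * (xi + xj) * (yj - yi) * theta

def loop_phase(x, y, a, B):
    """Sum of hopping phases counterclockwise around the plaquette at (x, y)."""
    corners = [(x, y), (x + a, y), (x + a, y + a), (x, y + a), (x, y)]
    return sum(peierls_phase(corners[i], corners[i + 1], B) for i in range(4))

a, B = 1.0, 0.01
print(loop_phase(-10.0, 0.0, a, B))  # QH bulk: -2*pi*B*a^2/phi0
print(loop_phase(+10.0, 0.0, a, B))  # SC bulk: 0, the field is screened
```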
As shown in Appendix B, in this regime τ also shows a more pronounced dependence on the value of Δ, which controls the decay length in the superconductor. We illustrate this in Fig. 6 by plotting the probability density |ψ_e(r)|² − |ψ_h(r)|² of an incoming electron state; it can be seen that it is vanishingly small within the SC region for angles θ_SC < 90°. By contrast, τ decreases as θ_QH is increased. This is shown in Fig. 5b, where θ_SC = 90° is fixed while θ_QH varies. The stronger sensitivity of τ to θ_QH can be understood as stemming from the fact that this angle directly determines the propagation direction of the edge state and thus the projection of the momentum of the incoming state onto the direction of the interface.

A more realistic interface is obtained when allowing for different values of μ_QH and μ_SC, as well as for an interface barrier Z ≠ 0. As an example, in Fig. 7 we show the evolution of τ as a function of θ_QH with μ_SC = 2μ_QH and Z = 0.7. The behavior is qualitatively similar, though the variation with the angle is less pronounced. The stronger variation with ν reflects the stronger variation of f_h^+ at intermediate values of Z, shown in Fig. 4.

IV. ONE-DIMENSIONAL MODEL

Effective one-dimensional models are very useful to obtain a qualitative understanding of the edge state physics. They have been extensively used in recent works [14,15,17,19,20] to describe the CAES. In this section, we address the question of how to incorporate the effects discussed in the previous sections into such an effective model. The starting point is the one-dimensional Bogoliubov-de Gennes Hamiltonian,

H = ( −(i/2){v(y), ∂_y} − μ̃(y)     Δ̃(y)
      Δ̃*(y)                       −(i/2){v(y), ∂_y} + μ̃(y) ),   (14)

where y denotes the coordinate along the QH edge, Δ̃(y) are the induced superconducting correlations, v(y) is the edge state velocity in the absence of superconducting correlations, and μ̃(y) is an effective chemical potential. Furthermore, {·, ·} is the anticommutator.
Choosing all the parameters to be independent of y allows one to extract the zero-energy momentum k_0, the velocity v_CAES, as well as the hole content f_h^± of the CAES, as introduced in Sec. III A. Diagonalizing H, one finds

E_±(k_y) = v k_y ± √(μ̃² + Δ̃²)  and  f_h^± = [1 ± μ̃/√(μ̃² + Δ̃²)]/2.

To match the results of Sec. III A, we thus set v = v_CAES,

μ̃ = −v_CAES k_0 (1 − 2f_h^+),   (15)
Δ̃ = 2 v_CAES k_0 √(f_h^+(1 − f_h^+)).   (16)

The simplest model often used to describe scattering at the corner consists of choosing a step function for the induced correlations, Δ̃(y) = Δ̃Θ(y). Matching of the wave functions at the position of the step, y = 0, directly yields the conversion probability of an electron into a quasihole:

τ_0 = f_h^+.   (17)

This clearly is not sufficient to correctly describe the scattering, if only because it does not depend on the geometry of the contact point. Furthermore, it can be shown that choosing a different velocity v_vac ≠ v and/or effective chemical potential μ̃_vac ≠ μ̃ for the QH-vacuum interface at y < 0 does not modify this result.

To obtain a conversion probability τ ≠ τ_0, one needs to include a spatial variation of the induced correlations Δ̃(y) in the vicinity of y = 0. We thus consider a more general model with a barrier region, −L_b/2 < y < L_b/2, characterized by the parameters v(y) = v_b, μ̃(y) = μ_b, and Δ̃(y) = Δ_b e^{iφ_b}. Note that a relative superconducting phase between the barrier and the bulk is allowed, as time-reversal symmetry is broken by the applied field. Solving the Schrödinger equation in the three regions (QH-vacuum interface at y < −L_b/2, barrier, and QH-SC interface at y > L_b/2), matching the solutions at y = ±L_b/2, and solving the resulting system, we obtain

τ = (√τ_0 cos β_b + √(1 − τ_0) sin β_b)² − 4√(τ_0(1 − τ_0)) sin β_b cos β_b cos²[(φ_b − δ_b)/2],   (18)

with α_b = √(μ_b² + Δ_b²) L_b/v_b, sin β_b = sin α_b Δ_b/√(μ_b² + Δ_b²), and tan δ_b = cot α_b √(μ_b² + Δ_b²)/μ_b.
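The parameter matching and the barrier formula can be checked numerically. The sketch below (plain Python; all parameter values are illustrative, and the branch choices for β_b and δ_b are our assumptions) first verifies that the effective parameters of Eqs. (15) and (16) reproduce the target k_0 and f_h^+ through the diagonalization quoted above, and then implements Eq. (18), which collapses to the step-model result τ = τ_0 of Eq. (17) when Δ_b = 0:

```python
import math

# Round trip for the uniform 1D model: pick target CAES data, build the
# effective parameters of Eqs. (15)-(16), and recover the inputs from
# E±(k) = v k ± sqrt(mu~^2 + De~^2) and f± = (1 ± mu~/sqrt(mu~^2 + De~^2))/2.
v, k0, fh = 1.3, 0.4, 0.2                      # illustrative values
mu_t = -v * k0 * (1 - 2 * fh)                  # Eq. (15)
De_t = 2 * v * k0 * math.sqrt(fh * (1 - fh))   # Eq. (16)
gap = math.sqrt(mu_t ** 2 + De_t ** 2)
print(gap / v)               # ≈ 0.4, recovers k0
print((1 + mu_t / gap) / 2)  # ≈ 0.2, recovers fh

def tau_barrier(tau0, mu_b, De_b, L_b, v_b, phi_b):
    """Conversion probability of Eq. (18) for the barrier model.
    Assumes the positive branch for cos(beta_b) and resolves delta_b with atan2."""
    m = math.sqrt(mu_b ** 2 + De_b ** 2)
    alpha = m * L_b / v_b
    sin_b = math.sin(alpha) * De_b / m
    cos_b = math.sqrt(1 - sin_b ** 2)
    delta = math.atan2(math.cos(alpha) * m, math.sin(alpha) * mu_b)
    return ((math.sqrt(tau0) * cos_b + math.sqrt(1 - tau0) * sin_b) ** 2
            - 4 * math.sqrt(tau0 * (1 - tau0)) * sin_b * cos_b
              * math.cos((phi_b - delta) / 2) ** 2)

# With De_b = 0 the barrier cannot convert electrons and holes,
# and Eq. (18) reduces to the step-model result tau = tau0.
print(tau_barrier(0.2, mu_b=0.5, De_b=0.0, L_b=1.0, v_b=1.0, phi_b=0.3))
```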
If Δ_b ≠ 0, τ ≠ τ_0 is possible, and the model has sufficient parameters to obtain an arbitrary value of τ for a given τ_0. Thus, in principle, this effective one-dimensional model can be used to describe an arbitrary geometry. However, there is no straightforward way to estimate its parameters. As a consequence, a full two-dimensional model is necessary to determine the downstream conductance even in simple geometries.

For illustration, in Fig. 8 we show the downstream conductance as a function of the length of the QH-SC interface obtained from a full tight-binding simulation of the structure shown in Fig. 1. Here the same parameters were used as in Fig. 7, with ν = 2.8. It is compared with the result of an effective 1D model, where we set L_b = ξ/10, with ξ = v_F^SC/Δ the BCS coherence length, v_b = v_CAES, and μ_b = μ_SC. We use a numerical minimization procedure to find the values of Δ_b and φ_b that give the scattering probabilities τ_1, τ_2 obtained from the tight-binding model. The fitting parameters for Fig. 8 were Δ_b1 = 10.08Δ, φ_b1 = 3.382 and Δ_b2 = 0.32Δ, φ_b2 = 3.002. (Note that the choice is not unique.) In addition, we adjust the scattering phase φ_12 appearing in Eq. (1) so that the effective model matches the simulation at large L. A small mismatch between the values of k_0 can be attributed to lattice effects. Furthermore, deviations are visible at small lengths, when the two corners cannot be treated independently, as assumed in Eq. (1).

V. FURTHER CONSIDERATIONS

In addition to the difficulty of determining their parameters, effective 1D models have other obvious limitations. The effective 1D model only describes the topologically protected chiral edge states. As can be seen in Fig. 2, in a full 2D calculation, additional subgap states may appear. While for the parameters chosen in Fig. 2 these states are close to the gap edge, they may cross the Fermi level in other parameter regimes. An example is shown in Fig. 9.
We studied their parameter dependence and found that zero-energy crossings only happen for ν close to three and close to ideal interfaces. In experimentally relevant regimes, they are not expected to play a role, as discussed in Appendix A. Note that additional in-gap states may appear as well when the interface is smooth. This question has been addressed in Ref. [18].

The downstream conductance at finite temperature is more likely affected by these nonchiral states. Furthermore, at finite T, the linear approximation for the dispersion of the CAES may not be sufficient. Namely, as long as k_B T ≪ Δ and continuum contributions may be neglected, the downstream conductance G_d(T) takes the form

G_d(T) ≈ G_0 ∫_{−Δ}^{Δ} dE [1 − 2P_h(E)] / [4k_B T cosh²(E/2k_B T)],   (19)

where P_h(E) is given by Eq. (1) upon replacing 2k_0 with δk(E) = k_qe(E) − k_qh(E), where k_qe/qh(E) = E/v ± k_0, and using the transmission probabilities τ_i at energy E. If δk varies significantly with energy on the scale k_B T, this leads to an averaging of the oscillations of the downstream conductance. Numerically, we find that the effect is small in experimentally relevant parameter regimes; see Appendix C.

VI. CONCLUSION

In this paper, we have studied the downstream conductance mediated by CAES in QH-SC junctions. In particular, we found that the geometry plays an important role. This limits the applicability of simple effective 1D models that are often used to describe such systems. We showed that the most general effective 1D model, containing a complex pairing potential localized in the region where the QH-vacuum edge meets the QH-SC edge, allows one to model an arbitrary electron-hole conversion probability; however, there is no clear prescription as to how its parameters have to be chosen. We note that the geometry dependence may be exploited to devise asymmetric junctions, where the overall electron-hole conversion probability is enhanced and the average downstream conductance can become negative.
This may be a way to obtain clearer signatures of the Andreev conversion at the QH-SC interface.

Our work concentrated on the clean case. It will be interesting to explore how these features are modified by disorder. Disorder as well as vortices modify the propagation phase along the interface and therefore change the interference pattern. However, this effect alone modifies neither the minimal and maximal values nor the average value of the downstream conductance, which are determined by the electron-hole conversion at the corners. Thus, the geometrical effects are expected to be robust as long as the disorder does not introduce significant electron-hole scattering along the interface. On the other hand, vortices may lead to the loss of quasiparticles, thus decreasing the overall value of the downstream conductance. It is less clear how this affects the repartition between quasielectrons and quasiholes. Further studies are also needed to better characterize geometries with a narrow superconducting finger, such that crossed Andreev reflections and cotunneling across the finger come into play.

ACKNOWLEDGMENTS

We thank X. Waintal for help with Kwant. A.D. gratefully acknowledges interesting discussions with A. Bondarev. Furthermore, we acknowledge support from the French Agence Nationale de la Recherche (ANR) through Grants No. ANR-17-PIRE-0001 and No. ANR-21-CE30-0035.

Appendix A: Additional non-chiral edge states

As discussed in the main text, see Fig.
9, additional nonchiral edge states may cross the Fermi level in certain parameter regimes as ν approaches three. Using the continuum model of Sec. III A, we may determine the value ν_c above which such states are present as a function of the system parameters. To do so, we need to solve the secular equation, Eq. (9), at E = 0 and determine the value ν_c at which a second solution with k_y > k_0 appears. The results are shown in Fig. 10 as a function of μ_QH/Δ for an ideal interface, as well as a function of the interface barrier strength Z and the mismatch μ_SC/μ_QH at Δ = μ_QH × 10^−6. We see that at Δ/μ_QH ≪ 1, additional zero-energy states appear for ν > ν_c ≈ 2.63 in the case of an ideal interface. An interface barrier, as well as a potential mismatch, push that critical value up. It reaches three at μ_SC/μ_QH ≈ 3.73 or Z ≈ 0.65. Beyond these values, one never finds additional zero-energy states, which is likely the case in experiments.

Appendix B: Dependence of the electron-hole conversion probability on the superconducting gap

In the main text, we show the dependence of the electron-hole conversion probability at the corners on various parameters. Here we complement our study with results on the dependence on the superconducting gap Δ. In particular, we compare two different geometries, namely θ_QH = 90°, θ_SC = 45° in Fig. 11a and θ_QH = 90°, θ_SC = 135° in Fig. 11b. As shown in Appendix A, nonchiral edge states may appear upon decreasing Δ. Here we restrict ourselves to values of ν such that these states are absent in the range of values of Δ plotted. (In particular, we show results for ν = 2.75 rather than ν = 2.8 as in the main text.) For θ_SC = 45° (Fig. 11a), the electron-hole conversion probability depends on Δ only very weakly. This is consistent with the analytic results of Sec. III A, which show that the properties of the edge states are almost independent of Δ in the considered parameter regime. For θ_SC = 135° (Fig.
11b), a stronger dependence is seen, in particular for ν close to one and three. For angles θ_SC > 90°, the decay length of the edge state in the superconductor plays a more important role. Namely, as the decaying wave function may reach the superconductor-vacuum interface, a stronger dependence of τ on Δ, which controls the decay length in the superconductor, is expected. The modified decay is illustrated in Fig. 12.

Appendix C: Downstream conductance at finite temperature

As the downstream conductance at finite temperature involves an integral over the hole conversion probabilities at different energies, it is important to know the dependence on energy of the parameters determining the hole conversion probability. In particular, if the momentum mismatch δk or the phase φ_12 strongly varies with energy, the oscillations of the conductance should be averaged out upon increasing the temperature. The momentum mismatch δk(E) can be obtained from the continuum model. We find that, even beyond the regime where the edge state spectrum is linear, the variation of δk remains small. We illustrate our findings in Fig. 13. Here the same parameters as in Fig. 8 were used. The spectrum is shown in Fig. 13a. Additional nonchiral edge states are visible at energies |E| ≳ Δ/2. The relative deviations of δk(E) from δk(0) = 2k_0 are shown in Fig. 13b. For small enough energies, the deviations are small, implying a nearly constant period of the oscillations. Figures 13c and 13d show the energy dependence of the conversion probabilities τ_1 and τ_2. Again, the variation is weak up to the energy where additional subgap states appear. Note that this is consistent with what one would obtain from our effective 1D model, where there is no energy dependence. The scattering phase φ_12 (not shown) remains approximately constant in this regime as well. These findings suggest that the zero-temperature results obtained for the downstream conductance are robust as long as k_B T ≪ Δ.
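This expectation can also be checked directly on Eq. (19). The sketch below (plain Python with simple trapezoidal integration; parameter values are illustrative) confirms that for an energy-independent P_h(E) = p, the thermal window integrates to tanh(Δ/2k_BT), so that G_d(T) = G_0(1 − 2p) tanh(Δ/2k_BT) and the oscillation pattern is not washed out:

```python
import math

def G_d_finite_T(Ph_of_E, Delta, kBT, G0=1.0, n=20001):
    """Thermal average of Eq. (19) by trapezoidal integration over [-Delta, Delta]."""
    Es = [-Delta + 2 * Delta * i / (n - 1) for i in range(n)]
    dE = Es[1] - Es[0]
    def integrand(E):
        return (1 - 2 * Ph_of_E(E)) / (4 * kBT * math.cosh(E / (2 * kBT)) ** 2)
    vals = [integrand(E) for E in Es]
    integral = dE * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return G0 * integral

# Sanity check: for an energy-independent P_h = p, Eq. (19) gives
# G_d(T) = G0 (1 - 2p) tanh(Delta / (2 kBT)).
Delta, kBT, p = 1.0, 0.2, 0.3
num = G_d_finite_T(lambda E: p, Delta, kBT)
ana = (1 - 2 * p) * math.tanh(Delta / (2 * kBT))
print(num, ana)  # the two agree to high accuracy
```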
This is confirmed by a full tight-binding simulation, shown in Fig. 14. For k_B T/Δ = 0.1, the result is almost unaffected. By contrast, at the larger temperature k_B T/Δ = 0.5, a clear suppression of the amplitude of the oscillations is observed, while the mean value increases as energies close to Δ start to contribute, where the variations of δk become non-negligible and τ_i → 0.

FIG. 2. Energy spectrum of the states along the QH-SC interface obtained from Eq. (9). The crossings of the CAES with the Fermi level are indicated by red lines. Here the parameters are μ_QH = μ_SC = 10Δ, ν = 2.4, and V_0 = 0.

FIG. 3. Fermi momentum k_0 as a function of barrier strength Z for various values of the filling factor ν at μ_QH = μ_SC = 10Δ. Except for a small region of intermediate Z and ν close to three, the momentum k_0 decreases with increasing ν.

FIG. 4. Hole content f_h^+ of the quasielectron CAES versus the barrier strength Z for various values of the filling factor ν at μ_QH = μ_SC = 10Δ. While the hole content is close to 1/2 at Z = 0, it vanishes for Z ≫ 1. Interestingly, it is enhanced in an intermediate region for ν > 2.

FIG. 5. Conversion probability τ for various values of the filling factor ν as a function of (a) the SC angle θ_SC with θ_QH = 90° and (b) the QH angle θ_QH with θ_SC = 90°. The parameters are μ_QH = μ_SC = 10Δ and Z = 0. To minimize lattice effects, we only show commensurate angles. The solid lines are a guide to the eye.

FIG. 6. Probability density |ψ_e(r)|² − |ψ_h(r)|² of an incoming electron state for θ_SC = 45° and θ_QH = 90°. The interference of CAES along the QH-SC interface (black line) can be clearly seen. Note that the wave function does not have any weight in the vicinity of the SC-vacuum boundary. The parameters are ν = 2, μ_QH = μ_SC = 10Δ, and Z = 0.

FIG. 7. Conversion probability τ versus the angle θ_QH for a nonideal interface at various values of ν. Here θ_SC = 90°, μ_SC = 2μ_QH = 20Δ, and Z = 0.7. As in Fig. 5, we only show commensurate angles, and the solid lines are a guide to the eye.

FIG. 8. Conductance oscillations as a function of the length L of an asymmetric junction, θ_QH,1 = 0 and θ_QH,2 = 90°, whereas θ_SC,1 = θ_SC,2 = 90°. We compare a full tight-binding simulation (TB) with the results of an effective one-dimensional model, where the parameters have been chosen as discussed in Sec. IV. Here, ν = 2.8, μ_SC = 2μ_QH = 20Δ, and Z = 0.7. The scattering phase φ_12 in Eq. (1) is adjusted to match the results of the tight-binding simulation at large L.

FIG. 9. Spectrum with additional nonchiral zero-energy edge states. Here we set ν = 2.8, μ_QH = μ_SC = 20Δ, and Z = 0. These additional states appear for ν close to three and close to ideal interfaces.

FIG. 10. Plots of ν_c, indicating the appearance of additional nonchiral edge states at the Fermi level, as a function of different parameters. (a) Dependence of ν_c on μ_QH/Δ for an ideal interface, μ_SC = μ_QH and Z = 0. In the limit μ_QH/Δ → ∞, the critical value tends to ν_c ≈ 2.63. (b) Dependence of ν_c on the mismatch μ_SC/μ_QH at Δ = μ_QH × 10^−6 and Z = 0. As ν_c reaches three, the additional nonchiral subgap states disappear at moderate values of the mismatch. (c) Dependence of ν_c on the barrier strength Z at Δ = μ_QH × 10^−6 and μ_SC = μ_QH. As ν_c reaches three, the additional nonchiral subgap states disappear at moderate values of the barrier strength.

FIG. 11. Dependence of the electron-hole conversion probability τ on the superconducting gap Δ. The parameters are μ_QH = μ_SC, Z = 0, and θ_QH = 90°. (a) At θ_SC = 45°, the electron-hole conversion probability depends only very weakly on Δ in the regime Δ ≪ μ_QH. (b) At θ_SC = 135°, a stronger dependence is seen. This can be attributed to the observation that, for angles θ_SC > 90°, the superconductor-vacuum interface comes into play and may modify the decay, as illustrated in Fig. 12.

FIG. 12. Probability density |ψ_e(r)|² − |ψ_h(r)|² of an incoming electron state for θ_SC = 135° and θ_QH = 90°. Other parameters are ν = 2.75, μ_SC = μ_QH, and Z = 0. (a) μ_QH/Δ = 10. (b) μ_QH/Δ = 20. The modified decay in the superconductor and the effect of the superconductor-vacuum interface can be clearly seen.

FIG. 13. Energy dependence of various parameters necessary to determine the downstream conductance at finite temperature. Parameters are the same as in Fig. 8. (a) Energy spectrum. (b) Variation of the relative momentum difference |δk(E) − 2k_0|/2k_0 of the pair of CAES. (c) Conversion probability τ_1 = τ(θ_QH = 0, θ_SC = 90°) and (d) τ_2 = τ(θ_QH = 90°, θ_SC = 90°). The variations in panels (b)-(d) are seen to be small as long as |E| ≪ Δ.

FIG. 14. Downstream conductance at different temperatures. The zero-temperature result is shown by blue dots. At k_BT = Δ/10 (orange line), there is almost no change. By contrast, a clear reduction of the amplitude of the oscillations is observed at k_BT = Δ/2 (green line). Parameters are the same as in Fig. 8.

FIG. 1. QH-SC setup: the edge of the quantum Hall region is in contact with a grounded superconductor over a finite length L. The geometry of the corners at the beginning and end of that region can be characterized by two angles each: θ_QH,i and θ_SC,i. Both the QH-vacuum and QH-SC interfaces host chiral edge states that can be probed by measuring the differential downstream conductance G_d = ∂I/∂V, where V is the voltage applied to the upstream reservoir and I is the current flowing into the downstream reservoir.
While (quasi)electron and (quasi)hole states have opposite directions of quasi-momentum along the interface, they have the same propagation direction.
of Birmingham\nBirminghamUnited Kingdom", "Department of Physics\nBogazici University\nIstanbul", "Department of Physics Engineering\nGaziantep University\nGaziantepTurkey", "INFN Sezione di Bologna\n", "Physikalisches Institut\nUniversity of Bonn\nBonnGermany", "Department of Physics\nBoston University\nBostonMAUnited States of America", "Department of Physics\nBrandeis University\nWalthamMAUnited States of America", "EE/IF\nUniversidade Federal do Rio De Janeiro COPPE\nRio de Janeiro\n", "Instituto de Fisica\nFederal University of Sao Joao del Rei (UFSJ)\nSao Joao del Rei", "Universidade de Sao Paulo\nSao PauloBrazil", "Physics Department\nBrookhaven National Laboratory\nUpton NYUnited States of America", "National Institute of Physics and Nuclear Engineering\nBucharest", ") University Politehnica Bucharest\nBucharest", "West University in Timisoara\nTimisoaraRomania", "Departamento de Física\nUniversidad de Buenos Aires\nBuenos AiresArgentina", "Cavendish Laboratory\nUniversity of Cambridge\nCambridgeUnited Kingdom", "Department of Physics\nCarleton University\nOttawaONCanada", "Departamento de Física, Pontificia Universidad Católica de Chile\nDepartamento de Física\nCERN\nGenevaSantiago; (b)Switzerland", "Institute of High Energy Physics\nDepartment of Modern Physics\nUniversidad Técnica\nChinese Academy of Sciences\nBeijing", "Department of Physics\nSchool of Physics\nUniversity of Science and Technology of China\nNanjing University\nJiangsuAnhui; (c)", "Shandong University\nShandong", "Shanghai Jiao Tong University\nShanghaiChina", "Laboratoire de Physique Corpusculaire, Clermont Université and Université Blaise Pascal and CNRS/IN2P3, Clermont-Ferrand\nFrance", "Nevis Laboratory\nColumbia University\nIrvingtonNYUnited States of America", "Niels Bohr Institute\nUniversity of Copenhagen\nKobenhavnDenmark", "Dipartimento di Fisica\na) INFN Gruppo Collegato di Cosenza, Laboratori Nazionali di Frascati; (b\nUniversità della Calabria\nRendeItaly", "Faculty of 
Physics and Applied Computer Science\nUniversity of Science and Technology\nKrakow", "Smoluchowski Institute of Physics\nJagiellonian University\nKrakowPoland", "The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences\nKrakowPoland", "Physics Department\nDallas TX, United States of America 41 Physics Department\nat Dallas, Richardson TX, United States of America 42 DESY, Hamburg and Zeuthen\nSouthern Methodist University\nUniversity of Texas\nGermany", "Institut für Experimentelle Physik IV\nTechnische Universität Dortmund\nDortmundGermany", "Institut für Kern-und Teilchenphysik\nTechnische Universität Dresden\nDresdenGermany", "Department of Physics\nSUPA -School of Physics and Astronomy\nDuke University\nDurham NC, United States of America 46", "University of Edinburgh\nEdinburghUnited Kingdom", "INFN Laboratori Nazionali di Frascati\nFrascatiItaly", "Fakultät für Mathematik und Physik\nAlbert-Ludwigs-Universität\nFreiburgGermany", "Section de Physique\nDipartimento di Fisica\n50 (a) INFN Sezione di Genova; (b)\nUniversité de Genève\nGenevaSwitzerland", "51 (a) E. Andronikashvili Institute of Physics, Iv. 
Javakhishvili\nUniversità di Genova\nGenovaItaly", "High Energy Physics Institute\n52 II Physikalisches Institut\nTbilisi State University\nTbilisi State University\nTbilisiTbilisi; (b)Georgia", "Justus-Liebig-Universität Giessen\nGiessenGermany", "SUPA -School of Physics and Astronomy\n54 II Physikalisches Institut\nUniversity of Glasgow\nGlasgowUnited Kingdom", "Georg-August-Universität\nGöttingenGermany", "Laboratoire de Physique Subatomique et de Cosmologie, Université Grenoble-Alpes, CNRS/IN2P3, Grenoble\nFrance", "Department of Physics\nHampton University\nHampton VAUnited States of America", "Institut für technische Informatik\nLaboratory for Particle Physics and Cosmology\nHarvard University\nCambridgeMAUnited States", "Ruprecht-Karls-Universität Heidelberg\nMannheimGermany", "Faculty of Applied Information Science\nHiroshima Institute of Technology\nHiroshimaJapan", "Department of Physics\nIndiana University\nBloomington INUnited States of America", "Institut für Astro-und Teilchenphysik\nLeopold-Franzens-Universität, Innsbruck\nAustria", "University of Iowa\nIowa CityIAUnited States of America", "Department of Physics and Astronomy\nIowa State University\nAmesIAUnited States of America", "Joint Institute for Nuclear Research, JINR Dubna, Dubna\nRussia", "KEK, High Energy Accelerator Research Organization, Tsukuba\nJapan", "Graduate School of Science\nKobe University\nKobeJapan", "Faculty of Science\nKyoto University\nKyotoJapan", "Kyoto University of Education\nKyotoJapan", "Department of Physics\nKyushu University\nFukuokaJapan", "Instituto de Física La Plata, Universidad Nacional de La Plata and CONICET\nLa PlataArgentina", "Physics Department\nDipartimento di Matematica e Fisica\nLancaster, United Kingdom 72 (a) INFN Sezione di Lecce; (b)\nLancaster University\nUniversità del Salento\nLecceItaly", "Oliver Lodge Laboratory\nUniversity of Liverpool\nLiverpoolUnited Kingdom", "Department of Physics\nJožef Stefan Institute and University of 
Ljubljana\nLjubljanaSlovenia", "School of Physics and Astronomy\nMary University of London\nLondonQueenUnited Kingdom", "Department of Physics\nRoyal Holloway University of London\nSurreyUnited Kingdom", "Department of Physics and Astronomy\nUniversity College London\nLondonUnited Kingdom", "Louisiana Tech University\nRustonLAUnited States of America", "Laboratoire de Physique Nucléaire et de Hautes Energies, UPMC and Université Paris-Diderot and CNRS/IN2P3\n80 Fysiska institutionen, Lunds universitet, LundParisFrance, Sweden", "Departamento de Fisica Teorica\nUniversidad Autonoma de Madrid\nC-15MadridSpain", "Institut für Physik\nUniversität Mainz\nMainzGermany", "School of Physics and Astronomy\nUniversity of Manchester\nManchesterUnited Kingdom", "CPPM, Aix-Marseille Université and CNRS/IN2P3, Marseille\nFrance", "Department of Physics\nUniversity of Massachusetts\nAmherstMAUnited States of America", "Department of Physics\nMcGill University\nMontrealQCCanada", "School of Physics\nUniversity of Melbourne\nVictoriaAustralia", "Department of Physics\nThe University of Michigan\nAnn Arbor MIUnited States of America", "Department of Physics and Astronomy\nDipartimento di Fisica\n91 B.I. Stepanov Institute of Physics\nEast Lansing MI, United States of America 90 (a) INFN Sezione di Milano; (b)\nMichigan State University\nUniversità di Milano\nItaly", "National Academy of Sciences of Belarus\nMinskRepublic of Belarus", "Department of Physics\nNational Scientific and Educational Centre for Particle and High Energy Physics, Minsk, Republic of Belarus 93\nMassachusetts Institute of Technology\nCambridge MAUnited States of America", "Group of Particle Physics\n95 P.N. 
Lebedev Institute of Physics, Academy of Sciences, Moscow\nUniversity of Montreal\nMontrealQCCanada, Russia", "Institute for Theoretical and Experimental Physics (ITEP)\nMoscowRussia", "Moscow Engineering and Physics Institute (MEPhI)\nMoscowRussia", "D.V.Skobeltsyn Institute of Nuclear Physics, M.V.Lomonosov\nMoscow State University\nMoscowRussia", "Fakultät für Physik\nMax-Planck-Institut für Physik (Werner-Heisenberg-Institut)\nNagasaki Institute of Applied Science, Nagasaki\nGraduate School of Science and Kobayashi-Maskawa Institute\nLudwig-Maximilians-Universität München\n100, 101, 102München, MünchenGermany, Germany, Japan", "Dipartimento di Fisica\n103 (a) INFN Sezione di Napoli; (b)\nNagoya University\nNagoyaJapan", "Department of Physics and Astronomy\nUniversità di Napoli\n104NapoliItaly", "Albuquerque NM, United States of America 105 Institute for Mathematics, Astrophysics and Particle Physics\nNikhef National Institute for Subatomic Physics\nUniversity of New Mexico\nRadboud University Nijmegen/Nikhef\n106NijmegenNetherlands", "Department of Physics\nUniversity of Amsterdam\n107AmsterdamNetherlands", "DeKalb IL, United States of America 108 Budker Institute of Nuclear Physics, SB RAS, Novosibirsk\nDepartment of Physics\nNorthern Illinois University\n109Russia", "Faculty of Science\nNew York University\n111New York, NYUnited States of America", "Homer L. 
Dodge Department of Physics and Astronomy\nOkayama University\n112OkayamaJapan", "Department of Physics\nUniversity of Oklahoma\n113NormanOKUnited States of America", "Oklahoma State University\n114StillwaterOKUnited States of America", "Czech Republic 115 Center for High Energy Physics\nPalacký University\nRCPTMOlomouc", "Graduate School of Science\nLAL, Université Paris-Sud and CNRS/IN2P3, Orsay\nUniversity of Oregon\n116, 117EugeneORUnited States of America, France", "Department of Physics\nOsaka University\n118OsakaJapan", "Department of Physics\nUniversity of Oslo\n119OsloNorway", "Dipartimento di Fisica\n120 (a) INFN Sezione di Pavia; (b)\nOxford University\nOxfordUnited Kingdom", "Department of Physics\nUniversità di Pavia\n121PaviaItaly", "Dipartimento di Fisica E. Fermi\nDepartment of Physics and Astronomy\nPhiladelphia PA, United States of America 122 Petersburg Nuclear Physics Institute, Gatchina, Russia 123 (a) INFN Sezione di Pisa; (b)\nUniversity of Pennsylvania\nUniversità di Pisa\n124PisaItaly", "Department of Physics\nInstitute of Physics\nPittsburgh PA, United States of America 125 (a) Laboratorio de Instrumentacao e Fisica Experimental de Particulas -LIP, Lisboa; (b) Faculdade de Ciências, Universidade de Lisboa, Lisboa; (c)\nCentro de Física Nuclear da Universidade de Lisboa, Lisboa; (e) Departamento de Fisica, Universidade do Minho, Braga; (f ) Departamento de Fisica Teorica y del Cosmos and CAFPE, Universidad de Granada, Granada (Spain); (g) Dep Fisica and CEFITEC of Faculdade de Ciencias e Tecnologia\nUniversity of Pittsburgh\nUniversity of Coimbra\nUniversidade Nova de Lisboa\n126CaparicaCoimbra; (d)Portugal", "Faculty of Mathematics and Physics\n129 State Research Center Institute for High Energy Physics\nParticle Physics Department\nPhysics Department\nAcademy of Sciences of the Czech Republic, Praha, Czech Republic 127 Czech Technical University in Prague, Praha, Czech Republic 128\nCharles University in Prague\nRutherford Appleton 
Laboratory130, 131Praha, DidcotCzech Republic, Russia, United Kingdom", "University of Regina\n132ReginaSKCanada", "133 (a) INFN Sezione di Roma; (b) Dipartimento di Fisica, Sapienza Università di Roma, Roma, Italy 134 (a) INFN Sezione di Roma Tor Vergata; (b) Dipartimento di Fisica, Università di Roma Tor Vergata, Roma, Italy 135 (a) INFN Sezione di Roma Tre; (b) Dipartimento di Matematica e Fisica, Università Roma Tre, Roma, Italy 136 (a) Faculté des Sciences Ain Chock, Réseau Universitaire de Physique des Hautes Energies -Université Hassan II, Casablanca; (b) Centre National de l'Energie des Sciences Techniques Nucleaires, Rabat; (c) Faculté des Sciences Semlalia, Université Cadi Ayyad, LPHEA-Marrakech; (d) Faculté des Sciences, Université Mohamed Premier and LPTPM, Oujda; (e) Faculté des sciences\nRitsumeikan University\nKusatsuShigaJapan", "DSM/IRFU (\nSanta Cruz Institute for Particle Physics\nInstitut de Recherches sur les Lois Fondamentales de l'Univers), CEA Saclay (Commissariatà l'Energie Atomique et aux Energies Alternatives)\nUniversité Mohammed V-Agdal\nGif-sur-Yvette137, 138RabatMorocco, France", "Department of Physics\nUniversity of California Santa Cruz\nSanta Cruz139CAUnited States of America", "Department of Physics and Astronomy\nUniversity of Washington\n140SeattleWAUnited States of America", "Department of Physics\nUniversity of Sheffield\n141SheffieldUnited Kingdom", "Fachbereich Physik\nShinshu University\n142NaganoJapan", "Department of Physics\nUniversität Siegen\n143SiegenGermany", "Faculty of Mathematics, Physics & Informatics\nSLAC National Accelerator Laboratory, Stanford CA, United States of America 145 (a)\nSimon Fraser University\n144BurnabyBCCanada", "Department of Subnuclear Physics\nDepartment of Physics\nInstitute of Experimental Physics of the Slovak Academy of Sciences, Kosice, Slovak Republic 146 (a)\nComenius University\nBratislava", "Cape Town\nDepartment of Physics\nSchool of Physics\nDepartment of Physics\nUniversity of 
Cape Town\nUniversity of Johannesburg\nUniversity of the Witwatersrand\nJohannesburgJohannesburg; (c)South Africa", "Physics Department\nStockholm University\nOskar Klein Centre, Stockholm148Sweden", "Departments of Physics & Astronomy and Chemistry\nRoyal Institute of Technology\n149StockholmSweden", "Department of Physics and Astronomy\nStony Brook University\nStony Brook150NYUnited States of America", "School of Physics\nUniversity of Sussex\n151BrightonUnited Kingdom", "Institute of Physics, Academia Sinica, Taipei\nDepartment of Physics, Technion: Israel Institute of Technology\nRaymond and Beverly Sackler School of Physics and Astronomy\nUniversity of Sydney\n152, 153, 154Sydney, HaifaAustralia, Taiwan, Israel", "Department of Physics\nTel Aviv University\nTel Aviv155Israel", "International Center for Elementary Particle Physics\nDepartment of Physics\nAristotle University of Thessaloniki\n156ThessalonikiGreece", "Graduate School of Science and Technology\nThe University of Tokyo\n157Japan", "Department of Physics\nTokyo Metropolitan University\n158TokyoJapan", "Department of Physics\nTokyo Institute of Technology\n159TokyoJapan", "Department of Physics and Astronomy\nUniversity of Toronto\n160Toronto, VancouverON, BC; (b)Canada", "Faculty of Pure and Applied Sciences\nYork University\n161TorontoONCanada", "Department of Physics and Astronomy\nUniversity of Tsukuba\n162TsukubaJapan", "Tufts University\n163 Centro de InvestigacionesMedfordMAUnited States of America", "Department of Physics and Astronomy\nUniversidad Antonio Narino\n164BogotaColombia", "Dipartimento di Chimica, Fisica e Ambiente\nDepartment of Physics\nIrvine CA, United States of America 165 (a) INFN Gruppo Collegato di Udine, Sezione di Trieste, Udine; (b) ICTP, Trieste; (c)\nUniversity of California Irvine\nUniversità di Udine\n166UdineItaly", "Department of Physics and Astronomy\nUniversity of Illinois\nUrbana IL\n167United States of America", "Instituto de Física Corpuscular (IFIC) and 
Departamento de Física Atómica, Molecular y Nuclear and Departamento de Ingeniería Electrónica and Instituto de Microelectrónica de Barcelona (IMB-CNM), University of Valencia and CSIC, Valencia\nUniversity of Uppsala\n168UppsalaSweden, Spain", "The ATLAS Collaboration 169 Department of Physics\nDepartment of Physics and Astronomy\nUniversity of British Columbia\n170VancouverBCCanada", "Department of Physics\nUniversity of Victoria\nVictoria BC171Canada", "University of Warwick\n172CoventryUnited Kingdom", "Department of Particle Physics\nWaseda University\n173TokyoJapan", "Department of Physics\nThe Weizmann Institute of Science\n174RehovotIsrael", "Fakultät für Physik und Astronomie\nFachbereich C Physik\nUniversity of Wisconsin\nMadison WI, Julius-Maximilians-Universität175, 176WürzburgUnited States of America, Germany", "Department of Physics\nBergische Universität Wuppertal\n177WuppertalGermany", "CT, United States of America 178 Yerevan Physics Institute\nCentre de Calcul de l'Institut National de Physique Nucléaire et de Physique des Particules (IN2P3)\nYale University\n179New Haven, Yerevan, VilleurbanneArmenia, France", "Also at Department of Physics, King's College London\nLondonUnited Kingdom", "Also at Particle Physics Department, Rutherford Appleton Laboratory, Didcot, United Kingdom d Also at TRIUMF, Vancouver BC\nCanada", "Also at Department of Physics\nCalifornia State University\nFresnoCAUnited States of America", "Also at\nTomsk State University\nTomskRussia", "Also at CPPM, Aix-Marseille Université and CNRS/IN2P3, Marseille\nFrance", "Also at Università di Napoli Parthenope, Napoli\nItaly", "Also at Institute of Particle Physics (IPP)\nCanada", "Also at Department of Physics, St. Petersburg State\nPolytechnical University\nSt. 
PetersburgRussia", "Also at Chinese\nUniversity of Hong Kong\nChina", "Also at Department of Financial and Management Engineering\nUniversity of the Aegean\nChiosGreece", "Department of Physics\nAlso at Louisiana Tech University, Ruston LA, United States of America n Also at Institucio Catalana de Recerca i Estudis Avancats\nICREA\nBarcelonaSpain", "The University of Texas at Austin\nAustin TXUnited States of America", "Also at Institute of Theoretical Physics\nIlia State University\nTbilisiGeorgia", "Also at CERN\nGenevaSwitzerland", "Also at Ochadai Academic Production\nOchanomizu University\nTokyoJapan", "Also at Manhattan College\nNew York, NYUnited States of America", "Also at\nNovosibirsk State University\nNovosibirskRussia", "Also at Institute of Physics, Academia Sinica, Taipei\nTaiwan", "Also at LAL, Université Paris-Sud and CNRS/IN2P3, Orsay\nFrance", "Institute of Physics, Academia Sinica, Taipei\nAlso at Academia Sinica Grid Computing\nTaiwan", "Also at Laboratoire de Physique Nucléaire et de Hautes Energies, UPMC and Université Paris-Diderot and CNRS/IN2P3\nParisFrance", "Also at School of Physical Sciences\nNational Institute of Science Education and Research\nBhubaneswarIndia", "ab Also at Section de Physique\nAlso at Dipartimento di Fisica, Sapienza Università di Roma, Roma, Italy aa Also at Moscow Institute of Physics and Technology State University\nUniversité de Genève\nDolgoprudny, GenevaRussia, Switzerland", "af Also at Faculty of Physics, M.V.Lomonosov\nac Also at International School for Advanced Studies (SISSA), Trieste, Italy ad Also at Department of Physics and Astronomy, University of South Carolina, Columbia SC, United States of America ae Also at School of Physics and Engineering\nSun Yat-sen University\nGuangzhouChina", "ag Also at Moscow Engineering and Physics Institute (MEPhI)\nDepartment of Physics\nah Also at Institute for Particle and Nuclear Physics, Wigner Research Centre for Physics, Budapest\nMoscow State University\nMoscow, 
MoscowRussia, Russia, Hungary", "Department of Physics\nOxford University\nOxfordUnited Kingdom", "ak Also at Institut für Experimentalphysik\nNanjing University\nJiangsuChina", "Department of Physics\nUniversität Hamburg\nHamburgGermany", "Ann Arbor MI, United States of America am Also at Discipline of Physics\nThe University of Michigan\nUniversity of KwaZulu-Natal\nDurbanSouth Africa", "Department of Physics, Kuala Lumpur\nUniversity of Malaya\nMalaysia", "Deceased\n" ]
[ "America" ]
Double-differential three-jet production cross-sections are measured in proton-proton collisions at a centre-of-mass energy of √ s = 7 TeV using the ATLAS detector at the Large Hadron Collider. The measurements are presented as a function of the three-jet mass (m jjj ), in bins of the sum of the absolute rapidity separations between the three leading jets (|Y * |). Invariant masses extending up to 5 TeV are reached for 8 < |Y * | < 10. These measurements use a sample of data recorded using the ATLAS detector in 2011, which corresponds to an integrated luminosity of 4.51 fb −1 . Jets are identified using the anti-k t algorithm with two different jet radius parameters, R = 0.4 and R = 0.6. The dominant uncertainty in these measurements comes from the jet energy scale. Next-toleading-order QCD calculations corrected to account for non-perturbative effects are compared to the measurements. Good agreement is found between the data and the theoretical predictions based on most of the available sets of parton distribution functions, over the full kinematic range, covering almost seven orders of magnitude in the measured cross-section values.Abstract Double-differential three-jet production crosssections are measured in proton-proton collisions at a centre-of-mass energy of √ s = 7 TeV using the ATLAS detector at the Large Hadron Collider. The measurements are presented as a function of the three-jet mass (m jjj ), in bins of the sum of the absolute rapidity separations between the three leading jets (|Y * |). Invariant masses extending up to 5 TeV are reached for 8 < |Y * | < 10. These measurements use a sample of data recorded using the ATLAS detector in 2011, which corresponds to an integrated luminosity of 4.51 fb −1 . Jets are identified using the anti-k t algorithm with two different jet radius parameters, R = 0.4 and R = 0.6. The dominant uncertainty in these measurements comes from the jet energy scale. 
Next-to-leading-order QCD calculations corrected to account for non-perturbative effects are compared to the measurements. Good agreement is found between the data and the theoretical predictions based on most of the available sets of parton distribution functions, over the full kinematic range, covering almost seven orders of magnitude in the measured cross-section values.
10.1140/epjc/s10052-015-3363-3
[ "https://arxiv.org/pdf/1411.1855v2.pdf" ]
118,585,006
1411.1855
18c469895b4781e9ac3f2fb0c4392dcb876dd214
EUROPEAN ORGANISATION FOR NUCLEAR RESEARCH (CERN) Measurement of three-jet production cross-sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector The ATLAS Collaboration Measurement of three-jet production cross-sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector The ATLAS Collaboration 7 Nov 2014 Santa Federico Valparaíso, ChileMaría Department of Physics Gazi University Ankara Vinca Institute of Nuclear Sciences University of Belgrade BelgradeSerbia Department of Physics Dogus University Istanbul Dipartimento di Fisica e Astronomia Università di Bologna BolognaItaly Juiz de Fora Federal University of Juiz de Fora (UFJF) Physics Department National Institute for Research and Development of Isotopic and Molecular Technologies Cluj Napoca Also at Institute of Physics Academy of Sciences BakuAzerbaijan, Azerbaijan Marian Physics Department SUNY Albany AlbanyNYUnited States of America Department of Physics University of Alberta Edmonton ABCanada Department of Physics Ankara University Ankara Division of Physics TOBB University of Economics and Technology Ankara Turkish Atomic Energy Authority AnkaraTurkey LAPP CNRS/IN2P3 Université de Savoie Annecy-le-VieuxFrance High Energy Physics Division Argonne National Laboratory ArgonneILUnited States of America Department of Physics University of Arizona Tucson AZ United States of America Department of Physics The University of Texas at Arlington Arlington TXUnited States of America Physics Department University of Athens AthensGreece Physics Department National Technical University of Athens ZografouGreece Institute of Physics Azerbaijan Academy of Sciences BakuAzerbaijan Institut de Física d'Altes Energies and Departament de Física Universitat Autònoma de Barcelona BarcelonaSpain Institute of Physics University of Belgrade Belgrade Department for Physics and Technology University of Bergen BergenNorway Physics Division Lawrence Berkeley National Laboratory University 
of California BerkeleyCAUnited States of America Department of Physics Humboldt University BerlinGermany Albert Einstein Center for Fundamental Physics and Laboratory for High Energy Physics University of Bern BernSwitzerland School of Physics and Astronomy University of Birmingham BirminghamUnited Kingdom Department of Physics Bogazici University Istanbul Department of Physics Engineering Gaziantep University GaziantepTurkey INFN Sezione di Bologna Physikalisches Institut University of Bonn BonnGermany Department of Physics Boston University BostonMAUnited States of America Department of Physics Brandeis University WalthamMAUnited States of America EE/IF Universidade Federal do Rio De Janeiro COPPE Rio de Janeiro Instituto de Fisica Federal University of Sao Joao del Rei (UFSJ) Sao Joao del Rei Universidade de Sao Paulo Sao PauloBrazil Physics Department Brookhaven National Laboratory Upton NYUnited States of America National Institute of Physics and Nuclear Engineering Bucharest ) University Politehnica Bucharest Bucharest West University in Timisoara TimisoaraRomania Departamento de Física Universidad de Buenos Aires Buenos AiresArgentina Cavendish Laboratory University of Cambridge CambridgeUnited Kingdom Department of Physics Carleton University OttawaONCanada Departamento de Física, Pontificia Universidad Católica de Chile Departamento de Física CERN GenevaSantiago; (b)Switzerland Institute of High Energy Physics Department of Modern Physics Universidad Técnica Chinese Academy of Sciences Beijing Department of Physics School of Physics University of Science and Technology of China Nanjing University JiangsuAnhui; (c) Shandong University Shandong Shanghai Jiao Tong University ShanghaiChina Laboratoire de Physique Corpusculaire, Clermont Université and Université Blaise Pascal and CNRS/IN2P3, Clermont-Ferrand France Nevis Laboratory Columbia University IrvingtonNYUnited States of America Niels Bohr Institute University of Copenhagen KobenhavnDenmark 
Dipartimento di Fisica a) INFN Gruppo Collegato di Cosenza, Laboratori Nazionali di Frascati; (b Università della Calabria RendeItaly Faculty of Physics and Applied Computer Science University of Science and Technology Krakow Smoluchowski Institute of Physics Jagiellonian University KrakowPoland The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences KrakowPoland Physics Department Dallas TX, United States of America 41 Physics Department at Dallas, Richardson TX, United States of America 42 DESY, Hamburg and Zeuthen Southern Methodist University University of Texas Germany Institut für Experimentelle Physik IV Technische Universität Dortmund DortmundGermany Institut für Kern-und Teilchenphysik Technische Universität Dresden DresdenGermany Department of Physics SUPA -School of Physics and Astronomy Duke University Durham NC, United States of America 46 University of Edinburgh EdinburghUnited Kingdom INFN Laboratori Nazionali di Frascati FrascatiItaly Fakultät für Mathematik und Physik Albert-Ludwigs-Universität FreiburgGermany Section de Physique Dipartimento di Fisica 50 (a) INFN Sezione di Genova; (b) Université de Genève GenevaSwitzerland 51 (a) E. Andronikashvili Institute of Physics, Iv. 
The ATLAS Collaboration (full author list and institutional affiliations omitted; the extracted affiliation block is unrecoverable).
EUROPEAN ORGANISATION FOR NUCLEAR RESEARCH (CERN)

Measurement of three-jet production cross-sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector

The ATLAS Collaboration

Nov 2014. Submitted to: Eur. Phys. J. C. Reproduction of this article or parts of it is allowed as specified in the CC-BY-3.0 license.

Keywords: QCD · jet · LHC · PDF

Abstract: Double-differential three-jet production cross-sections are measured in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV using the ATLAS detector at the Large Hadron Collider. The measurements are presented as a function of the three-jet mass (m_jjj), in bins of the sum of the absolute rapidity separations between the three leading jets (|Y*|). Invariant masses extending up to 5 TeV are reached for 8 < |Y*| < 10. These measurements use a sample of data recorded using the ATLAS detector in 2011, which corresponds to an integrated luminosity of 4.51 fb^−1. Jets are identified using the anti-k_t algorithm with two different jet radius parameters, R = 0.4 and R = 0.6. The dominant uncertainty in these measurements comes from the jet energy scale. Next-to-leading-order QCD calculations corrected to account for non-perturbative effects are compared to the measurements. Good agreement is found between the data and the theoretical predictions based on most of the available sets of parton distribution functions, over the full kinematic range, covering almost seven orders of magnitude in the measured cross-section values.

Introduction

Collimated jets of hadrons are a characteristic feature of high-energy particle interactions. In the theory of strong interactions, quantum chromodynamics (QCD), jets can be interpreted as the result of the fragmentation of partons produced in a scattering process. In high-energy particle collisions two main phases can be distinguished.
In the perturbative phase, partons with high transverse momentum (p_T) are produced in a hard-scattering process at a scale Q. This phase is described by a perturbative expansion in QCD. In the transition to the second (non-perturbative) phase, these partons emit additional gluons and produce quark-antiquark pairs. The non-perturbative jet evolution is an interplay between the hadronisation process and the underlying event. The hadronisation process governs the transition from partons to hadrons, and the underlying event represents initial-state radiation, multiple parton interactions and colour-reconnection effects [1]. In spite of these phenomena, highly collimated sprays of particles, collectively identified as hadron jets, are observed in the final state. The effects of both hadronisation and the underlying event vary strongly with the jet radius parameter and are most pronounced at low p_T. They are accounted for using phenomenological models that are tuned to the data. The ATLAS Collaboration has measured the inclusive jet cross-sections at 7 TeV [2] and at 2.76 TeV [3] centre-of-mass energies in pp collisions for jets defined by the anti-k_t algorithm [4] with two jet radius parameters, R = 0.4 and R = 0.6. Recent inclusive jet [5] and dijet [6] cross-section measurements at 7 TeV centre-of-mass energy in pp collisions have exploited improved jet energy calibration procedures [7], leading to smaller systematic uncertainties compared to those achieved in Refs. [2,3]. Similar measurements at 7 TeV centre-of-mass energy in pp collisions [8,9] have been carried out by the CMS Collaboration. These measurements test perturbative QCD (pQCD) at very short distances and have provided constraints on the gluon momentum distribution within protons at large momentum fraction. The impact of higher-order effects on the inclusive jet cross-section ratio of anti-k_t R = 0.5 and R = 0.7 jets has been studied in Ref. [10].
The inclusive three-jet to two-jet ratio [11] is used to determine the strong coupling constant. Theoretical predictions of the multi-jet cross-sections in pp collisions at 7 TeV centre-of-mass energy have been tested in Refs. [12,13]. Previous measurements of three-jet cross-sections in pp collisions were performed by the DØ Collaboration [14]. The measurements were compared to predictions, and agreement between data and theory was found within the uncertainties. In this paper, measurements of double-differential three-jet production cross-sections are presented as a function of the three-jet mass (m_jjj) and the sum of the absolute rapidity separations between the three leading jets (|Y*|). The measurements are corrected for experimental effects and reported at the particle level. The three-jet mass distributions test the dynamics of the underlying 2 → 3 scattering process. The distributions are sensitive to both the transverse momentum (p_T) spectra of the three leading jets and their angular correlations, since a massive three-jet system can be built either from high-p_T jets or from jets with large rapidity separation. Binning in |Y*| allows events with m_jjj originating from these different regions of phase space to be separated. The analysis presented in this paper tests the description of multi-jet events in next-to-leading-order (NLO) QCD and uses two different values of the jet radius parameter, R = 0.4 and R = 0.6, since three-jet cross-sections depend on the jet radius even at leading order (LO) in the perturbative expansion. The NLO QCD calculations, corrected to account for non-perturbative effects, are compared to the measured cross-sections. The measurements also provide constraints on the proton's parton distribution functions (PDFs) beyond those from inclusive and dijet cross-sections, since they probe a different region of phase space in proton momentum fraction and squared momentum transfer (x, Q²). The content of this paper is structured as follows.
The ATLAS detector is briefly described in Sect. 2, followed by the definition of the observables and a description of the Monte Carlo (MC) samples in Sects. 3 and 4, respectively. The trigger, data selection and jet calibration are presented in Sect. 5. Data unfolding and experimental uncertainties are described in Sects. 6 and 7. Section 8 describes the theoretical predictions for the measurements in this paper. The cross-section results are presented in Sect. 9 and the conclusions are given in Sect. 10.

The ATLAS experiment

The ATLAS detector is described in detail in Ref. [15]. ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis pointing along the beam axis. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). The rapidity is defined in terms of the energy E and the momentum component p_z along the beam pipe as y = (1/2) ln((E + p_z)/(E − p_z)). The transverse momentum p_T is the component of the momentum transverse to the beam pipe. The inner detector (ID) is used to measure the momenta and trajectories of charged particles. The ID has full coverage in the azimuthal angle φ and covers the pseudorapidity range |η| < 2.5. The ID is immersed in a 2 T magnetic field provided by a superconducting solenoid magnet. The main detector system used for this analysis is the calorimeter. The electromagnetic calorimeters use liquid argon (LAr) as the active detector medium. They employ accordion-shaped electrodes and lead absorbers, and are divided into one barrel (|η| < 1.475) and two end-cap components (1.375 < |η| < 3.2). The technology used for the hadronic calorimeters depends on η.
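As an illustration of the coordinate definitions above, the short sketch below (plain Python, not taken from the paper) computes the pseudorapidity η and the rapidity y from a four-momentum; for a massless particle the two coincide:

```python
import math

def pseudorapidity(px, py, pz):
    """eta = -ln tan(theta/2), theta being the polar angle w.r.t. the beam (z) axis."""
    p = math.sqrt(px * px + py * py + pz * pz)
    theta = math.acos(pz / p)
    return -math.log(math.tan(theta / 2.0))

def rapidity(e, pz):
    """y = (1/2) ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((e + pz) / (e - pz))

# For a massless particle (E = |p|), rapidity equals pseudorapidity:
# e.g. (px, py, pz) = (3, 4, 12) has |p| = 13, so y(13, 12) == eta(3, 4, 12).
```

For massive objects such as calibrated jets, y and η differ, which is why the jet acceptance cut in this analysis is phrased in rapidity (|y| < 3.0) rather than pseudorapidity.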
In the barrel region (|η| < 1.7), the detector is made of scintillator tiles with steel absorbers. In the end-cap region (1.5 < |η| < 3.2), the detector uses LAr and copper. A forward calorimeter consisting of LAr and tungsten/copper absorbers has both electromagnetic and hadronic sections, and extends the coverage to |η| = 4.9. The muon spectrometer has one barrel and two end-cap air-core toroid magnets. Three layers of precision tracking stations provide muon momentum measurements over the range |η| < 2.7. The ATLAS trigger system consists of three levels of event selection: a first level implemented using custom-made electronics, which selects events at a design rate of at most 75 kHz, followed by two successive software-based levels. The level-2 trigger uses fast online algorithms, and the final trigger stage, the Event Filter (EF), uses reconstruction software with algorithms similar to the offline versions.

Cross-section definition

Jets are defined using the anti-k_t algorithm as implemented in the FastJet [16] package, with two different values of the radius parameter: R = 0.4 and R = 0.6. Events containing at least three jets within the rapidity range |y| < 3.0 with p_T > 50 GeV are considered. The leading, subleading and sub-subleading jets are required to have p_T > 150 GeV, p_T > 100 GeV and p_T > 50 GeV, respectively. Three-jet double-differential cross-sections are measured as a function of the three-jet mass

m_jjj = √((p_1 + p_2 + p_3)²)

and the summed absolute rapidity separation of the three leading jets

|Y*| = |y_1 − y_2| + |y_2 − y_3| + |y_1 − y_3|,

where p_i (y_i) are the four-momenta (rapidities) of the three leading jets. The measurements are made in five ranges of |Y*| < 10, in equal steps of two. In each range of |Y*|, a lower limit on the three-jet mass is imposed to avoid the region of phase space affected by the jet p_T cuts.
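The two observables defined above follow directly from the jet four-momenta. The sketch below (plain Python; the (E, px, py, pz) tuple representation is an assumption for illustration, not the analysis code) computes both for the three leading jets:

```python
import math

def three_jet_observables(jets):
    """jets: list of three (E, px, py, pz) four-momenta of the leading jets.
    Returns (m_jjj, |Y*|) as defined in the text."""
    # Invariant mass of the summed four-momentum: m = sqrt(E^2 - |p|^2).
    E, px, py, pz = (sum(j[i] for j in jets) for i in range(4))
    m_jjj = math.sqrt(E * E - px * px - py * py - pz * pz)
    # Rapidity of each jet: y = (1/2) ln((E + pz) / (E - pz)).
    y = [0.5 * math.log((j[0] + j[3]) / (j[0] - j[3])) for j in jets]
    # Summed absolute rapidity separations between the three pairs.
    y_star = abs(y[0] - y[1]) + abs(y[1] - y[2]) + abs(y[0] - y[2])
    return m_jjj, y_star
```

A large m_jjj can arise either from high-p_T jets (small |Y*|) or from widely separated jets (large |Y*|), which is why the measurement is binned in both variables.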
The measurement starts at m_jjj = 380 GeV in the |Y*| < 2 bin, increasing to 1180 GeV for the 8 < |Y*| < 10 bin. The three-jet mass distributions are corrected for detector effects, and the measured cross-sections are defined at the particle level. Here, particle level refers to jets built from produced particles with a proper lifetime longer than 10 ps, including muons and neutrinos from decaying hadrons [17].

Monte Carlo samples

The default MC generator used to simulate events is Pythia 6 [18] with the Perugia 2011 tune [19] and the CTEQ5L PDFs [20]. Here, "tune" refers to a set of model parameters which provide an optimal description of high-energy particle collisions. Data from previous colliders (LEP, Tevatron, etc.), as well as early LHC data, are included in the process of tuning the model parameters [19,21,22]. Pythia 6 is a generator with LO 2 → 2 matrix-element calculations, supplemented by leading-logarithmic calculations of parton showers ordered in p_T. A simulation of the underlying event, including multiple parton interactions, is also included. The Lund string model [23,24] is used to simulate the fragmentation process. The signal reconstruction is affected by multiple proton-proton interactions occurring during the same bunch crossing and by remnants of electronic signals from previous bunch crossings in the detectors (pileup). To simulate pileup, inelastic pp events are generated using Pythia 8 [25] with the 4C tune [26] and the MRST LO** proton PDF set [27]. The number of minimum-bias events overlaid on each signal event is chosen to reproduce the distribution of the average number of simultaneous pp collisions ⟨µ⟩ in an event. During the 2011 data-taking period ⟨µ⟩ changed from 5 to 18 with increasing instantaneous luminosity.
To estimate the uncertainties in the modelling of the hard scattering, hadronisation, the underlying event and parton showers, events are also simulated using Alpgen [28], a multi-leg LO MC generator with up to six final-state partons in the matrix-element calculations, interfaced to Herwig 6.5.10 [29–31] with the AUET2 tune [21] and the CTEQ6L1 PDF set [32] for parton showers, and to Jimmy 4.31 [33] for the underlying event. The output of these event generators is passed to the detector simulation [34], based on Geant4 [35]. Simulated events are digitised [36,37] to model the detector response, and then reconstructed using the same software as used to process the data.

Data selection and jet calibration

This analysis is based on data collected with the ATLAS detector in the year 2011 during periods with stable pp collisions at √s = 7 TeV in which all relevant detector components were operational. The resulting data sample corresponds to an integrated luminosity of 4.51 ± 0.08 fb^−1 [38]. The presence of at least one primary vertex (compatible with the position of the beam spot), reconstructed using two or more tracks with p_T > 500 MeV, is required to reject cosmic-ray events and beam-related backgrounds. The primary vertex with the largest sum of squared transverse momenta of associated tracks is used as the interaction point for the analysis. Due to the high instantaneous luminosity and a limited detector readout bandwidth, a set of single-jet triggers with increasing transverse-energy (E_T) thresholds is used to collect events with jets. Only a fraction of the events that fired a trigger are actually recorded; the reciprocal of this fraction is the prescale factor of that trigger. The triggers with lower E_T thresholds were prescaled with higher factors, and only the trigger with the highest E_T threshold remained unprescaled during the whole data-taking period.
The prescale factors are adjusted to keep the jet yield approximately constant as a function of E_T. An event must pass all three levels of the jet trigger system. The trigger is based on the E_T of jet-like objects. Level-1 provides a fast hardware decision based on the summed E_T of calorimeter towers, using a sliding-window algorithm. Level-2 performs a simple jet reconstruction in a geometric region around the object that fired the Level-1 trigger. Finally, a full jet reconstruction using the anti-k_t algorithm with R = 0.4 is performed over the entire detector by the third trigger level. The trigger efficiencies are determined as a function of m_jjj in each bin of |Y*|, separately for the R = 0.4 and R = 0.6 jet radius parameters. They are evaluated using an unbiased sample of events that fired the jet trigger with a p_T = 30 GeV threshold at the EF level. This trigger is fully efficient in events with a leading jet passing the three-jet analysis requirements. For every |Y*| bin, the full range of three-jet mass is divided into subranges, each filled by only one of the several single-jet triggers. Triggers are used only where the trigger efficiency is above 99%. Moreover, the lower m_jjj bound for each trigger is shifted up by 15% from the 99% efficiency point to avoid any possible bias from the trigger strategy chosen for this measurement. This shift leads to a negligible increase in the statistical error on the measured cross-sections, compared to the total uncertainty. Since the EF reconstructs jets with a radius parameter R = 0.4, the p_T threshold at which the trigger for jets defined with R = 0.6 becomes fully efficient is significantly higher than for R = 0.4 jets. Using the same trigger subranges for both jet sizes would reduce the number of events with anti-k_t R = 0.4 jets.
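The trigger-to-subrange assignment described above (plateau above 99% efficiency, lower bound shifted up by 15%) can be sketched as follows. The trigger names and 99%-efficiency points below are hypothetical placeholders, not the values used in the analysis:

```python
# Hypothetical 99%-efficiency points in m_jjj (GeV) for a few single-jet
# triggers; the real values are trigger-, year- and |Y*|-bin-specific.
eff99 = {"EF_j100": 380.0, "EF_j180": 700.0, "EF_j240": 1000.0}

def mjjj_lower_bound(trigger):
    """Lower m_jjj bound for a trigger: its 99%-efficiency point shifted up by 15%."""
    return 1.15 * eff99[trigger]

def select_trigger(m_jjj):
    """Pick the highest-threshold trigger whose shifted plateau covers m_jjj;
    each m_jjj subrange is thus filled by exactly one trigger."""
    usable = [t for t in eff99 if m_jjj >= mjjj_lower_bound(t)]
    return max(usable, key=lambda t: eff99[t]) if usable else None
```

Using the highest-threshold usable trigger minimises the prescale factor (and hence the event weight) applied in each subrange.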
To take advantage of the lower p_T at which triggers are fully efficient for R = 0.4 jets, different assignments between triggers and m_jjj ranges are used for these jets and for jets reconstructed with R = 0.6. After events are selected by the trigger system, they are fully reconstructed offline. The input objects to the jet algorithm are three-dimensional topo-clusters [39]. Each topo-cluster is constructed from a seed calorimeter cell with energy |E_cell| > 4σ, where σ is the width of the total noise distribution of the cell from both electronics and pileup sources. Neighbouring cells are added to the topo-cluster if they have |E_cell| > 2σ. In the last step, all remaining neighbouring cells are added. A local hadronic calibration (LC), which accounts for inactive material, out-of-cluster losses for pions, and the calorimeter response, is applied to clusters identified as hadronic by their energy-density distribution [40]. The LC improves the topo-cluster energy resolution, and the jet clustering algorithm propagates this improvement to the jet level. Each topo-cluster is treated as a massless particle with an energy E = Σ E_cell and a direction given by the energy-weighted barycentre of the cells in the cluster with respect to the geometrical centre of the ATLAS detector. The four-momentum of an uncalibrated jet is defined as the sum of the four-momenta of the clusters making up the jet. The jet is then calibrated in four steps:

1. An estimated mean additional energy due to pileup is subtracted using a correction derived from MC simulation and validated in situ, as a function of the average number of pp collisions in the same bunch crossing, ⟨µ⟩, the number of primary vertices, N_PV, and the jet η [41].
2. The direction of the jet is corrected such that the jet originates from the selected hard-scatter vertex of the event instead of the geometrical centre of ATLAS.
3. The energy and position of the jet are corrected for instrumental effects (calorimeter non-compensation, additional inactive material, effects of the magnetic field) using correction factors obtained from MC simulation. The jet energy scale is restored on average to that of the particle-level jet. For the calibration, the particle-level jet does not include muons and non-interacting particles.
4. An additional in situ calibration is applied to correct for residual differences between the MC simulation and data, derived by combining the results of dijet, γ-jet, Z-jet, and multi-jet momentum-balance techniques.

The full calibration procedure is described in detail in Ref. [7]. Data-taking in the year 2011 was affected by a readout problem in a region of the LAr calorimeter, causing jets in this region to be poorly reconstructed. In order to avoid a bias in the spectra, events with any of the three leading jets falling in the region −0.88 < φ < −0.5 were rejected. Approximately 15% of events are removed by this requirement. This inefficiency is corrected for using MC simulation (cf. Sect. 6). The three leading jets are required to satisfy the "medium" quality criteria described in Ref. [42], designed to reject cosmic rays, beam-halo particles, and detector noise. More than 5.3 (2.5) × 10^6 three-jet events are selected with radius parameter R = 0.4 (0.6).

Data unfolding

The three-jet cross-sections as a function of m_jjj are obtained by unfolding the data distributions, correcting for detector resolution and inefficiencies. This procedure includes a correction for the undetected presence of muons and neutrinos from hadron decays in jets. The unfolding procedure is based on the iterative, dynamically stabilised (IDS) unfolding method [43]. Further details can be found in Ref. [2]. To account for bin-to-bin migrations, a transfer matrix is built from the MC simulation, relating the particle-level and reconstruction-level three-jet masses.
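The transfer matrix mentioned above can be filled from MC events in which the particle-level and reconstruction-level three-jet masses are matched. The sketch below (NumPy; the pair-list input format is an assumption for illustration) normalises each reconstruction-level column so that A[i, j] is the probability for a reconstructed event in bin j to originate from particle-level bin i:

```python
import numpy as np

def build_transfer(pairs, edges):
    """pairs: iterable of (m_particle, m_reco) for matched MC events.
    edges: common m_jjj bin edges. Returns the column-normalised matrix A."""
    n = len(edges) - 1
    counts = np.zeros((n, n))
    for m_part, m_reco in pairs:
        i = np.searchsorted(edges, m_part, side="right") - 1  # particle-level bin
        j = np.searchsorted(edges, m_reco, side="right") - 1  # reco-level bin
        if 0 <= i < n and 0 <= j < n:
            counts[i, j] += 1.0
    col = counts.sum(axis=0)
    # Normalise each reco column to unit probability; empty columns stay zero.
    return np.divide(counts, col, out=np.zeros_like(counts), where=col > 0)
```

Since migrations across |Y*| bins are negligible, one such matrix per |Y*| bin suffices.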
The reconstruction-level to particle-level event association is done in the m_jjj–|Y*| plane, such that only a requirement on the presence of a three-jet system is made. Since bin-to-bin migrations are mostly due to jet energy smearing of the three-jet mass, and less often due to the jet angular resolution, the migrations across |Y*| bins are negligible and the unfolding is performed separately in each |Y*| bin. The data are unfolded to the particle level using a three-step procedure,

N_i^P = (1/ε_i^P) Σ_j N_j^R · ε_j^R · A_ij ,   (1)

where i (j) is the particle-level (reconstruction-level) bin index, and N_i^P (N_i^R) is the number of particle-level (reconstruction-level) events in bin i. The quantities ε_i^R (ε_i^P) are the fractions of reconstruction-level (particle-level) events matching (associated with) particle-level (reconstruction-level) events in each bin i. These efficiencies are used to correct for the matching inefficiency at the reconstruction and particle level, respectively. The element A_ij of the transfer matrix is the probability for a reconstruction-level event in bin j to be associated with a particle-level event in bin i. It is used to unfold the reconstruction-level spectrum for detector effects. A data-driven closure test is used to evaluate the bias in the unfolded data spectrum shape due to mismodelling of the reconstruction-level spectrum shape in the MC simulation. The transfer matrix is improved through a series of iterations, in which the particle-level distribution from simulation is re-weighted such that the reconstruction-level distribution from simulation matches the data distribution. The modified reconstruction-level MC simulation is unfolded using the original transfer matrix, and the result is compared with the modified particle-level spectrum. The resulting bias is considered as a systematic uncertainty. For the analyses in this paper, one iteration is used, which leads to a bias in the closure tests of less than one percent.
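The three-step procedure of Eq. (1) amounts to a matching-efficiency correction at reconstruction level, a matrix multiplication, and a matching-efficiency correction at particle level. A minimal NumPy sketch (illustrative, not the IDS implementation):

```python
import numpy as np

def unfold(n_reco, A, eff_reco, eff_part):
    """One pass of Eq. (1): N^P_i = (1/eps^P_i) * sum_j A_ij * eps^R_j * N^R_j.

    n_reco:   reconstruction-level spectrum N^R (1D array)
    A:        transfer matrix, A[i, j] = P(reco bin j -> particle bin i)
    eff_reco: matching efficiencies eps^R per reco bin
    eff_part: matching efficiencies eps^P per particle bin
    """
    return (A @ (eff_reco * n_reco)) / eff_part
```

With a diagonal transfer matrix and unit efficiencies the spectrum is returned unchanged, which is a useful sanity check.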
The statistical uncertainties in the unfolded results are estimated using pseudo-experiments. Each event in the data and in the MC simulation is counted n times, where n is sampled from a Poisson distribution with a mean of one. A fluctuated transfer matrix and efficiency corrections are calculated as the average over these pseudo-experiments in MC simulation. Then, each resulting pseudo-experiment of the data spectrum is unfolded using the fluctuated transfer matrix and efficiency corrections. Finally, the covariance matrix between bins of measured m jjj cross-section is calculated using the set of unfolded pseudo-experiments of the data. The random numbers for the pseudo-experiments are generated using unique seeds. The dijet [6] and inclusive jet [5] cross-section measurements use the same unique seeds to evaluate the statistical uncertainties. In this way, the statistical uncertainty and bin-to-bin correlations in both the data and the MC simulation are encoded in the covariance matrix and the statistical correlation between different measurements can be taken into account in combined fits. Experimental uncertainties The uncertainty in the jet energy scale (JES) calibration is the dominant uncertainty in this measurement. The uncertainties in the central region are determined using a combination of the transverse momentum balance techniques, such as Z-jet, γ-jet and multi-jet balance measurements performed in situ. In each of the methods, the uncertainties in the energy of the well-measured objects, e.g. Z/photon or system of low-p T jets, are propagated to the energy of the balancing jet. The JES uncertainty in the central region is propagated to the forward region using transverse momentum balance between a central and a forward jet in events with two jets. The difference in the balance observed between MC simulation samples generated with Pythia and Herwig is treated as an additional uncertainty in the forward region.
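A minimal sketch of the pseudo-experiment procedure for the statistical uncertainty follows. The binned spectrum, the seed and the omission of the unfolding step are simplifications for illustration; counting each event n times with n drawn from a Poisson distribution of mean one is equivalent to Poisson-fluctuating the binned yields.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed, unique seed

spectrum = np.array([400.0, 150.0, 40.0])  # hypothetical binned m_jjj yields

# Each pseudo-experiment fluctuates every bin: a sum of N independent
# Poisson(1) weights is itself Poisson-distributed with mean N.
toys = rng.poisson(lam=spectrum, size=(1000, len(spectrum))).astype(float)

# In the analysis each toy spectrum would be unfolded with a fluctuated
# transfer matrix; here the toys are used directly for brevity.
cov = np.cov(toys, rowvar=False)   # bin-to-bin covariance matrix
stat_unc = np.sqrt(np.diag(cov))   # per-bin statistical uncertainty
```

The recovered per-bin uncertainties approach the Poisson expectation √N as the number of pseudo-experiments grows.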
The total JES uncertainty is described by a set of independent uncertainty sources, each fully correlated in p T . Complete details of the JES derivation and its uncertainties can be found in Ref. [7]. The uncertainty in the p T of each individual jet due to the JES calibration is between 1% and 4% in the central region (|η| < 1.8), and increases to 5% in the forward region (1.8 < |η| < 4.5). The uncertainties due to the JES calibration are propagated to the measured cross-sections using the MC simulation. The energy and p T of each jet in the three-jet sample are scaled up or down by one standard deviation of a given uncertainty component, after which the luminosity-normalised three-jet event yield is measured from the resulting sample. The yields from the nominal sample and the samples where all jets were scaled up and down are unfolded, and the difference between each of these variations and the nominal result is taken as the uncertainty due to that JES uncertainty component. Since the sources of JES calibration uncertainty are uncorrelated with each other by construction, the corresponding uncertainty components in the cross-section are also taken as uncorrelated. Each jet is affected by the additional energy deposited in the calorimeters due to pileup effects. Additional energy due to pileup is subtracted during the jet energy calibration procedure [7]. To check for any residual pileup effects in the measured cross-sections, the luminosity-normalised three-jet yields in all three-jet mass and rapidity-separation bins are split into bins of different pileup conditions under which the data were collected. No statistically significant deviation from the nominal result is observed. The jet energy resolution (JER) is measured in the data using the bisector method in dijet events [44], where good agreement with the MC simulation is observed.
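The per-component propagation and quadrature combination of the JES uncertainty described above can be sketched as follows; the yields and the two uncertainty components are invented numbers, not measured values.

```python
import numpy as np

# Hypothetical luminosity-normalised, unfolded yields for one m_jjj bin:
# nominal, and with all jets scaled up/down by 1 sigma of each of two
# independent JES uncertainty components.
nominal = 100.0
variations = [(104.0, 97.0),    # component 1: (up, down)
              (101.5, 98.8)]    # component 2: (up, down)

# Per-component uncertainty: difference between each variation and nominal.
up_shifts = [up - nominal for up, _ in variations]
down_shifts = [down - nominal for _, down in variations]

# The components are uncorrelated by construction, so the corresponding
# cross-section uncertainties combine in quadrature.
total_up = np.sqrt(np.sum(np.square(up_shifts)))
total_down = np.sqrt(np.sum(np.square(down_shifts)))
```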
The uncertainty in the JER is affected by selection parameters for jets, such as the amount of nearby jet activity, and depends on both jet p T and jet η. Jet angular resolution (JAR) is studied by matching particle-level jets to reconstruction-level jets in simulation. Jets are matched by requiring that the angular distance ∆R = √((∆φ)² + (∆y)²) between the particle-level and reconstruction-level jet is less than the jet radius parameter. The angular resolution is obtained from a Gaussian fit to the distribution of the difference of reconstruction-level and particle-level jet rapidity. The difference between the JAR determined from the nominal MC simulation and that from the Alpgen sample is taken as a systematic uncertainty. The resolution varies between 0.005 radians and 0.03 radians depending on the jet η and p T values. The JAR uncertainty is about 10-15% for p T < 150 GeV and decreases to ∼ 1% for p T > 400 GeV. The jet angular bias is found to be negligible. The JER and JAR uncertainties are propagated to the measured cross-section through the unfolding transfer matrix. The energy and direction of each jet in the MC sample are smeared according to their uncertainties. To avoid being limited by statistical fluctuations this procedure is repeated 1000 times in each event. The average transfer matrix derived from these pseudo-experiments is used to unfold the three-jet yields, and the deviation from the three-jet yield unfolded using the nominal transfer matrix is taken as a symmetrised systematic uncertainty. The uncertainty due to the jet reconstruction inefficiency as a function of jet p T is estimated by comparing the efficiency for reconstructing a calorimeter jet, given the presence of an independently measured track-jet of the same radius, in data and in MC simulation [7,45].
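The ∆R matching criterion used for the JAR studies can be sketched as follows; the jet coordinates are invented, and the Gaussian fit of the rapidity residuals is only indicated in a comment.

```python
import numpy as np

def delta_r(phi1, y1, phi2, y2):
    """Angular distance sqrt((dphi)^2 + (dy)^2), with dphi wrapped to [0, pi]."""
    dphi = np.abs(phi1 - phi2)
    dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)
    return np.hypot(dphi, y1 - y2)

R = 0.4                                    # jet radius parameter
particle = [(0.10, 1.00), (2.50, -0.80)]   # (phi, y) of particle-level jets
reco = [(0.12, 1.03), (2.48, -0.85)]       # (phi, y) of reconstruction-level jets

# A particle-level/reconstruction-level pair is matched if Delta R < R.
pairs = [(p, r) for p in particle for r in reco
         if delta_r(p[0], p[1], r[0], r[1]) < R]

# The JAR is the width of a Gaussian fitted to these rapidity residuals.
residuals = [r[1] - p[1] for p, r in pairs]
```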
Here, a track-jet refers to a jet reconstructed using the anti-k t algorithm, taking as input all tracks associated with the primary vertex with p T > 500 MeV and |η| < 2.5 in the event, assuming they have the mass of a pion. Since this method relies on tracking, its application is restricted to jets with |η| < 1.9 to ensure that both the R = 0.4 and R = 0.6 jets are fully within the tracker acceptance. For jets with p T > 50 GeV, relevant for this analysis, the reconstruction efficiency in both the data and the MC simulation is found to be 100% for this rapidity region, leading to no additional uncertainty. The same efficiency is assumed for the forward region, where jets of a given p T are more energetic and, therefore, their reconstruction efficiency is expected to be at least as good as that of jets in the central region. The efficiencies for single-jet selection using the "medium" criteria agree within 0.25% in data and MC simulation [42]. Because three jets are considered for each event selected for the analysis, a 0.75% systematic uncertainty in the cross-section is assigned. The impact of a possible mis-modelling of the shape of m jjj spectra in MC simulation, introduced through the unfolding as described in Sect. 6, is also included. The luminosity uncertainty is 1.8% [38] and is fully correlated between all data points. The total experimental uncertainty in the three-jet cross-section is summarised in Fig. 1.

Figure 1 Total systematic uncertainty in the three-jet cross-section for anti-k t R = 0.6 jets as a function of m jjj (a) in the |Y * | < 2 and (b) 8 < |Y * | < 10 bins. The bands show the uncertainties due to jet energy scale, jet angular resolution, jet energy resolution and the combined uncertainty due to jet quality selection and unfolding. The outer band represents the total experimental uncertainty.
The total uncertainty ranges from 8-10% at low three-jet mass to 28% at high three-jet mass for the range |Y * | < 6 (see Appendix), and increases slightly for larger |Y * | bins. In the 8 < |Y * | < 10 bin the total uncertainty ranges from 18% to 38%, where it is dominated by the jet energy scale uncertainty component for forward jets. Theoretical predictions and uncertainties The NLO QCD predictions by the parton-level MC cross-section calculator NLOJET++ [46], corrected for hadronisation effects and underlying-event activity using Monte Carlo simulation with the Perugia 2011 tune [19] of Pythia 6, are compared to the measured three-jet cross-sections. Fixed-order predictions The fixed-order QCD calculations are performed with the NLOJET++ program interfaced to APPLgrid [47] for fast convolution with various PDF sets. The renormalisation (Q R ) and factorisation (Q F ) scales are set to the mass of the three-jet system, Q = Q R = Q F = m jjj . The following proton PDF sets are considered for the theoretical predictions: CT 10 [48], GJR 08 [49], MSTW 2008 [50], NNPDF 2.3 [51], HERAPDF 1.5 [52], and ABM 11 [53]. To estimate the uncertainty due to missing higher-order terms in the fixed-order perturbative expansion, the renormalisation scale is varied up and down by a factor of two. The uncertainty due to the dependence of the theoretical predictions on the factorisation scale, which specifies the separation between the short-distance hard scattering and long-distance non-perturbative dynamics, is estimated by varying the factorisation scale up and down by a factor of two. All permutations of these two scale choices are considered, except the cases where the scales are shifted in opposite directions. The maximum deviations from the nominal prediction are taken as the scale uncertainty. The scale uncertainty is generally 10-20% depending on m jjj .
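The scale-uncertainty prescription just described (independent factor-of-two variations of Q R and Q F , excluding opposite-direction shifts, with the envelope taken from the maximum deviations) can be sketched with invented cross-section values:

```python
# Hypothetical NLO cross-sections (pb) for one m_jjj bin, evaluated at
# scale factors (x_R, x_F) multiplying Q = m_jjj; all numbers invented.
sigma = {
    (1.0, 1.0): 50.0,   # nominal: Q_R = Q_F = m_jjj
    (2.0, 2.0): 46.0,
    (0.5, 0.5): 57.0,
    (2.0, 1.0): 47.5,
    (0.5, 1.0): 55.0,
    (1.0, 2.0): 48.5,
    (1.0, 0.5): 53.0,
    # (2.0, 0.5) and (0.5, 2.0) are excluded: the two scales are
    # never shifted in opposite directions.
}

nominal = sigma[(1.0, 1.0)]
deviations = [s - nominal for k, s in sigma.items() if k != (1.0, 1.0)]

# The maximum deviations from the nominal define the scale uncertainty.
scale_up = max(max(deviations), 0.0)
scale_down = min(min(deviations), 0.0)
```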
The multiple uncorrelated uncertainty components of each PDF set, as provided by the various PDF analyses, are also propagated through the theoretical calculations. The PDF groups generally derive these from the experimental uncertainties in the data used in the fits. For the results shown in Sect. 9, the standard Hessian sum in quadrature [54] of the various independent components is calculated taking into account asymmetries of the uncertainty components. The NNPDF 2.3 PDF set is an exception, where uncertainties are expressed in terms of replicas instead of independent components. These replicas represent a collection of equally likely PDF sets, where the data used in the PDF fit were fluctuated within their experimental uncertainties. For the plots shown in Sect. 9, the uncertainties in the NNPDF 2.3 PDF set are evaluated as the RMS of the replicas in each bin of m jjj , producing equivalent PDF uncertainties in the theoretical predictions. These uncertainties are symmetric by construction. Where needed, the uncertainties of PDF sets are rescaled to the 68% confidence level (CL). HERAPDF provides three types of uncertainties: experimental, model and parameterisation. The three uncertainty sources are added in quadrature to get a total PDF uncertainty. The uncertainties in the cross-sections due to the strong coupling, α s , are estimated using two additional proton PDF sets, for which different values of α s are assumed in the fits, such that the effect of the strong coupling value on the PDFs is included. This follows Ref. [55]. The resulting uncertainty is approximately 3% across all three-jet mass and |Y * | ranges considered. The scale uncertainties are dominant in low and intermediate three-jet mass regions, while the PDF uncertainties become dominant at high m jjj . The uncertainties in the theoretical predictions due to those on the PDFs range from 5% at low m jjj to 30% at high three-jet mass for the range of |Y * | values up to four. 
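The two ways of evaluating PDF uncertainties described above (the asymmetric Hessian sum in quadrature over eigenvector pairs, and the symmetric RMS over NNPDF replicas) can be sketched as follows, with invented predictions for a single m jjj bin:

```python
import numpy as np

def hessian_uncertainty(central, eigen_up, eigen_down):
    """Asymmetric Hessian combination: for each eigenvector pair, take the
    largest upward (downward) excursion and sum in quadrature."""
    up = np.sqrt(sum(max(u - central, d - central, 0.0) ** 2
                     for u, d in zip(eigen_up, eigen_down)))
    down = np.sqrt(sum(max(central - u, central - d, 0.0) ** 2
                       for u, d in zip(eigen_up, eigen_down)))
    return up, down

def replica_uncertainty(replicas):
    """NNPDF-style symmetric uncertainty: RMS of the replica predictions."""
    r = np.asarray(replicas)
    return np.sqrt(np.mean((r - r.mean()) ** 2))

# Invented predictions: two Hessian eigenvector pairs and five replicas.
up, down = hessian_uncertainty(100.0, eigen_up=[103.0, 99.0],
                               eigen_down=[98.0, 102.0])
rms = replica_uncertainty([100.0, 104.0, 96.0, 101.0, 99.0])
```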
For the values of |Y * | between four and ten, the PDF uncertainties reach 40-80% at high three-jet mass, depending on the PDF set and the |Y * | value. Non-perturbative effects Non-perturbative corrections (NPC) are evaluated using leading-logarithmic parton-shower generators, separately for each value of the jet radius parameter. The corrections are calculated as bin-by-bin ratios of the three-jet differential cross-section at the particle level, including hadronisation and underlying-event effects, to that at parton level after the parton shower (before the hadronisation process starts) with the underlying-event simulation switched off. The nominal corrections are calculated using Pythia 6 with the Perugia 2011 tune. The non-perturbative corrections as a function of three-jet mass are shown in Fig. 2 for the range |Y * | < 2 for R = 0.4 and R = 0.6 jets. The NPC are smaller than 10% in all m jjj and |Y * | bins. The uncertainties in the non-perturbative corrections, arising from the modelling of the hadronisation process and the underlying event, are estimated as the maximum deviations of the corrections from the nominal ones, using the following configurations: Pythia 8 with the 4C [26] and AU2 [21] tunes using the CTEQ6L1 PDF set [32]; Pythia 6 with the AUET2B [22] tune with CTEQ6L1; and Herwig++ 2.6.3 [56,57] with the UE-EE-3 tune [58] using the CTEQ6L1 set. The uncertainty in the non-perturbative corrections ranges up to ∼ 10%, depending on the three-jet mass, in all |Y * | bins. The total theoretical uncertainty is calculated as a sum in quadrature of the PDF, scale, α s and NPC uncertainties. Cross-section results Measurements of the double-differential three-jet cross-sections as a function of the three-jet mass in various ranges of |Y * | are shown in Figs. 3 and 4 for anti-k t jets with values of the radius parameter R = 0.4 and R = 0.6, respectively.
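The bin-by-bin non-perturbative correction and its uncertainty described above can be sketched as follows; all spectra are invented numbers standing in for the generator-level histograms.

```python
import numpy as np

def npc(particle_level, parton_level):
    """Bin-by-bin non-perturbative correction: particle-level spectrum
    (hadronisation and underlying event on) divided by the parton-level
    spectrum after the shower (underlying event off)."""
    return np.asarray(particle_level) / np.asarray(parton_level)

parton = [100.0, 50.0, 20.0]                   # three m_jjj bins (invented)
nominal = npc([95.0, 48.0, 19.0], parton)      # nominal generator/tune

# Alternative generator/tune configurations.
alternatives = [npc([97.0, 49.5, 19.4], parton),
                npc([93.0, 47.0, 18.5], parton)]

# Uncertainty: maximum deviation of any configuration from the nominal.
npc_unc = np.max([np.abs(alt - nominal) for alt in alternatives], axis=0)
```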
The cross-section decreases rapidly as a function of the three-jet mass. The NLO QCD calculations using NLOJET++ with the CT 10 PDF set corrected for non-perturbative effects are compared to the measured cross-sections. Good agreement between the data and the theoretical predictions is found over the full kinematic range, covering almost seven orders of magnitude in the measured cross-section values. The ratios of the theoretical predictions calculated with various PDF sets to the measured cross-sections are presented in Figs. 5 and 6 for R = 0.4 jets and in Figs. 7 and 8 for R = 0.6 jets. Theoretical calculations that use CT 10, MSTW 2008 and GJR 08 PDFs are compared to data in Figs. 5 and 7, and comparisons to other global PDFs, namely NNPDF 2.3, ABM 11 and HERAPDF 1.5, are presented in Figs. 6 and 8. The three-jet cross-sections are well described by the calculations that use CT 10, NNPDF 2.3, GJR 08, MSTW 2008 and HERAPDF 1.5 PDFs. Disagreement between data and the predictions using ABM 11 PDFs is observed for most of the cross-sections measured with both jet radius parameters. For all PDF sets, the predictions for anti-k t R = 0.4 jets agree well with the measured cross-sections, while the calculations that use the ABM 11 PDF set are systematically below all other theory curves. Theory predictions for anti-k t R = 0.6 jets underestimate the data across the full m jjj -|Y * | plane. This shift is within the experimental and theoretical uncertainties. The jet radius dependence of the theory-to-data ratios is similar for all PDF sets considered, demonstrating that this tendency is independent of the assumptions made in different PDF determinations. Figure 8 The ratio of NLO QCD predictions, obtained by using NLOJET++ with different PDF sets (NNPDF 2.3, ABM 11, HERAPDF 1.5) and corrected for non-perturbative effects, to data as a function of m jjj in bins of |Y * |, as denoted in the legend. The ratios are for jets identified using the anti-k t algorithm with R = 0.6.
The experimental error bands are centered at one and designate the relative statistical (thin dashed line) and total (statistical and systematic uncertainties added in quadrature) experimental uncertainties (thick solid line). The theoretical predictions are represented by thick lines with a hatched or filled band around them. The lines show the central values and the bands represent the total theory uncertainty. Conclusions Cross-section measurements of three-jet production in pp collisions at 7 TeV centre-of-mass energy as a function of the three-jet mass, in bins of the sum of the absolute rapidity separations between the three leading jets, are presented. Jets are reconstructed with the anti-k t algorithm using two values of the radius parameter, R = 0.4 and R = 0.6. The measurements are based on the full data set collected with the ATLAS detector during 2011 data-taking at the LHC, corresponding to an integrated luminosity of 4.51 fb −1 . The measurements are corrected for detector effects and reported at the particle level. The total experimental uncertainty in these measurements is dominated by the jet energy scale calibration uncertainty. The measurement uncertainties are smaller than, or similar to, those in the theoretical predictions. The measurements probe three-jet masses up to ∼ 5 TeV and are well described by perturbative QCD at NLO accuracy across the full m jjj -|Y * | plane. The comparison of NLO QCD predictions corrected for non-perturbative effects to the measured cross-sections is performed using several modern PDF sets. The data are well described by the theoretical predictions when using CT 10, NNPDF 2.3, HERAPDF 1.5, GJR 08 and MSTW 2008 PDFs. The theoretical calculations based on the ABM 11 PDFs are systematically below all the other predictions. Comparison of measured cross-sections to theoretical predictions for two different jet radius parameters shows good agreement for R = 0.4 jets but shifted theory-to-data ratios for R = 0.6 jets.
This shift is covered by the experimental and theoretical uncertainty bands and it has only a minor dependence on the PDF set used. Table 1 Measured double-differential three-jet cross-section, σ, for R = 0.4 jets and |Y * | < 2, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 2 Measured double-differential three-jet cross-section, σ, for R = 0.6 jets and |Y * | < 2, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 3 Measured double-differential three-jet cross-section, σ, for R = 0.4 jets and 2 ≤ |Y * | < 4, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. 
The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 4 Measured double-differential three-jet cross-section, σ, for R = 0.6 jets and 2 ≤ |Y * | < 4, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 5 Measured double-differential three-jet cross-section, σ, for R = 0.4 jets and 4 ≤ |Y * | < 6, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 6 Measured double-differential three-jet cross-section, σ, for R = 0.6 jets and 4 ≤ |Y * | < 6, along with uncertainties in the measurement. 
All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 7 Measured double-differential three-jet cross-section, σ, for R = 0.4 jets and 6 ≤ |Y * | < 8, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 8 Measured double-differential three-jet cross-section, σ, for R = 0.6 jets and 6 ≤ |Y * | < 8, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. 
While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 9 Measured double-differential three-jet cross-section, σ, for R = 0.4 jets and 8 ≤ |Y * | < 10, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. Table 10 Measured double-differential three-jet cross-section, σ, for R = 0.6 jets and 8 ≤ |Y * | < 10, along with uncertainties in the measurement. All uncertainties are given in %, where δ data stat (δ MC stat ) are the statistical uncertainties in the data (MC simulation). The γ components are the uncertainty in the jet energy calibration from the in situ, the pileup, the close-by jet, and flavour components. The u components show the uncertainty for the jet energy and angular resolution, the unfolding, the quality selection, and the luminosity. While all columns are uncorrelated with each other, the in situ, pileup, and flavour uncertainties shown here are the sum in quadrature of multiple uncorrelated components. 
The ATLAS Collaboration

Figure 2 Non-perturbative corrections obtained using various MC generators and tunes for the differential three-jet cross-section as a function of three-jet mass in the range |Y * | < 2 for anti-k t jets with (a) R = 0.4 and (b) R = 0.6.

Figure 3 The three-jet double-differential cross-section as a function of m jjj in bins of |Y * |, as denoted in the legend. The jets are identified using the anti-k t algorithm with R = 0.4. For convenience, the cross-sections are multiplied by the factors indicated in the legend. Also shown is the comparison with the NLOJET++ prediction with the CT 10 PDF set corrected for non-perturbative effects. The statistical uncertainties are smaller than the size of the symbols. Where visible, the sum in quadrature of the statistical and experimental systematic uncertainties is plotted.

Figure 4 The three-jet double-differential cross-section as a function of m jjj in bins of |Y * |, as denoted in the legend. The jets are identified using the anti-k t algorithm with R = 0.6. For convenience, the cross-sections are multiplied by the factors indicated in the legend. Also shown is the comparison with the NLOJET++ prediction with the CT 10 PDF set corrected for non-perturbative effects. The statistical uncertainties are smaller than the size of the symbols. Where visible, the sum in quadrature of the statistical and experimental systematic uncertainties is plotted.

Figure 5 The ratio of NLO QCD predictions, obtained by using NLOJET++ with different PDF sets (CT 10, MSTW 2008, GJR 08) and corrected for non-perturbative effects, to data as a function of m jjj in bins of |Y * |, as denoted in the legend. The ratios are for jets identified using the anti-k t algorithm with R = 0.4.
The experimental error bands are centered at one and designate the relative statistical (thin dashed line) and total (statistical and systematic uncertainties added in quadrature) experimental uncertainties (thick solid line). The theoretical predictions are represented by thick lines with a hatched or filled band around them. The lines show the central values and the bands represent the total theory uncertainty.

Figure 6 The ratio of NLO QCD predictions, obtained by using NLOJET++ with different PDF sets (NNPDF 2.3, ABM 11, HERAPDF 1.5) and corrected for non-perturbative effects, to data as a function of m jjj in bins of |Y * |, as denoted in the legend. The ratios are for jets identified using the anti-k t algorithm with R = 0.4. The experimental error bands are centered at one and designate the relative statistical (thin dashed line) and total (statistical and systematic uncertainties added in quadrature) experimental uncertainties (thick solid line). The theoretical predictions are represented by thick lines with a hatched or filled band around them. The lines show the central values and the bands represent the total theory uncertainty.

Figure 7 The ratio of NLO QCD predictions, obtained by using NLOJET++ with different PDF sets (CT 10, MSTW 2008, GJR 08) and corrected for non-perturbative effects, to data as a function of m jjj in bins of |Y * |, as denoted in the legend. The ratios are for jets identified using the anti-k t algorithm with R = 0.6. The experimental error bands are centered at one and designate the relative statistical (thin dashed line) and total (statistical and systematic uncertainties added in quadrature) experimental uncertainties (thick solid line). The theoretical predictions are represented by thick lines with a hatched or filled band around them. The lines show the central values and the bands represent the total theory uncertainty.

References

1. R. Field, Min-Bias and the Underlying Event at the LHC, arXiv:1202.0901 [hep-ph].
2.
ATLAS Collaboration, Measurement of inclusive jet and dijet production in pp collisions at √ s = 7 TeV using the ATLAS detector, Phys. Rev. D 86 (2012) 014022, arXiv:1112.6297 [hep-ex].
3. ATLAS Collaboration, Measurement of the inclusive jet cross section in pp collisions at √ s = 2.76 TeV and comparison to the inclusive jet cross section at √ s = 7 TeV using the ATLAS detector, Eur. Phys. J. C 73 (2013) 2509, arXiv:1304.4739 [hep-ex].
4. M. Cacciari, G. P. Salam, and G. Soyez, The Anti-k(t) jet clustering algorithm, JHEP 0804 (2008) 063, arXiv:0802.1189 [hep-ph].
5. ATLAS Collaboration, Measurement of the inclusive jet cross-section in proton-proton collisions at √ s = 7 TeV using 4.5 fb −1 of data with the ATLAS detector, arXiv:1410.8857 [hep-ex].
6. ATLAS Collaboration, Measurement of dijet cross sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector, JHEP 1405 (2014) 059, arXiv:1312.3524 [hep-ex].
7. ATLAS Collaboration, Jet energy measurement and its systematic uncertainty in proton-proton collisions at √ s = 7 TeV with the ATLAS detector, arXiv:1406.0076 [hep-ex].
8. CMS Collaboration, Measurements of differential jet cross sections in proton-proton collisions at √ s = 7 TeV with the CMS detector, Phys. Rev. D 87 (2013) 112002, arXiv:1212.6660 [hep-ex].
9. CMS Collaboration, Measurement of the inclusive production cross sections for forward jets and for dijet events with one forward and one central jet in pp collisions at √ s = 7 TeV, JHEP 1206 (2012) 036, arXiv:1202.0704 [hep-ex].
10. CMS Collaboration, Measurement of the ratio of inclusive jet cross sections using the anti-k T algorithm with radius parameters R=0.5 and 0.7 in pp collisions at √ s = 7 TeV, arXiv:1406.0324 [hep-ex].
11. CMS Collaboration, Measurement of the ratio of the inclusive 3-jet cross section to the inclusive 2-jet cross section in pp collisions at √ s = 7 TeV and first determination of the strong coupling constant in the TeV range, Eur. Phys. J.
C 73 (2013) 2604, arXiv:1304.7498 [hep-ex].
12. ATLAS Collaboration, Measurement of multi-jet cross sections in proton-proton collisions at a 7 TeV center-of-mass energy, Eur. Phys. J. C 71 (2011) 1763, arXiv:1107.2092 [hep-ex].
13. CMS Collaboration, Measurement of four-jet production in proton-proton collisions at √ s = 7 TeV, Phys. Rev. D 89 (2014) 092010, arXiv:1312.6440 [hep-ex].
14. V. M. Abazov et al., Measurement of three-jet differential cross sections dσ 3jet /dM 3jet in pp collisions at √ s = 1.96 TeV, Phys. Lett. B 704 (2011), arXiv:1104.1986 [hep-ex].

Acknowledgements
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently. We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and

15. ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3 (2008) S08003.
16. M. Cacciari, G. Salam, and G. Soyez, FastJet User Manual, Eur. Phys. J. C 72 (2012) 1896, arXiv:1111.6097 [hep-ph].
17. C. Buttar, J. D'Hondt, M. Kramer, G. Salam, M. Wobisch, et al., Standard Model Handles and Candles Working Group: Tools and Jets Summary Report, arXiv:0803.0678 [hep-ph].
18. T. Sjostrand, S. Mrenna, and P. Z.
Skands, PYTHIA 6.4 Physics and Manual, JHEP 0605 (2006) 026, arXiv:hep-ph/0603175 [hep-ph]. Tuning Monte Carlo Generators: The Perugia Tunes. P Z Skands, 10.1103/PhysRevD.82.074018arXiv:1005.3457Phys. Rev. D. 8274018hep-phP. Z. Skands, Tuning Monte Carlo Generators: The Perugia Tunes, Phys. Rev. D 82 (2010) 074018, arXiv:1005.3457 [hep-ph]. Global QCD analysis of parton structure of the nucleon: CTEQ5 parton distributions. H Lai, 10.1007/s100529900196arXiv:hep-ph/9903282Eur.Phys.J. C. 12375hep-phH. Lai et al., Global QCD analysis of parton structure of the nucleon: CTEQ5 parton distributions, Eur.Phys.J. C 12 (2000) 375, arXiv:hep-ph/9903282 [hep-ph]. New ATLAS event generator tunes to 2010 data. ATL-PHYS-PUB-2011-008ATLAS Collaboration, New ATLAS event generator tunes to 2010 data, No. ATL-PHYS-PUB-2011-008. . Geneva , Geneva, 2011. https://cds.cern.ch/record/1345343. ATLAS tunes of PYTHIA 6 and Pythia 8 for MC11. ATL-PHYS-PUB-2011-009ATLAS Collaboration, ATLAS tunes of PYTHIA 6 and Pythia 8 for MC11, No. ATL-PHYS-PUB-2011-009. . Geneva , Geneva, Jul, 2011. https://cds.cern.ch/record/1363300. B Andersson, G Gustafson, G Ingelman, T Sjostrand, 10.1016/0370-1573(83)90080-7Parton fragmentation and string dynamics. 9731B. Andersson, G. Gustafson, G. Ingelman, and T. Sjostrand, Parton fragmentation and string dynamics, Physics Reports 97 (1983) 31. Semiclassical Models for Gluon Jets and Leptoproduction Based on the Massless Relativistic String. B Andersson, G Gustafson, 10.1007/BF01577421Z. Phys. C. 3223B. Andersson and G. Gustafson, Semiclassical Models for Gluon Jets and Leptoproduction Based on the Massless Relativistic String, Z. Phys. C 3 (1980) 223. A Brief Introduction to PYTHIA 8.1. T Sjostrand, S Mrenna, P Z Skands, 10.1016/j.cpc.2008.01.036Comput. Phys. Commun. 178852T. Sjostrand, S. Mrenna, and P. Z. Skands, A Brief Introduction to PYTHIA 8.1, Comput. Phys. Commun. 178 (2008) 852. Interleaved Parton Showers and Tuning Prospects. 
R Corke, T Sjostrand, 10.1007/JHEP03(2011)032arXiv:1011.1759JHEP. 110332hep-phR. Corke and T. Sjostrand, Interleaved Parton Showers and Tuning Prospects, JHEP 1103 (2011) 032, arXiv:1011.1759 [hep-ph]. Different PDF approximations useful for LO Monte Carlo generators. A Sherstnev, R Thorne, arXiv:0807.2132hep-phA. Sherstnev and R. Thorne, Different PDF approximations useful for LO Monte Carlo generators, arXiv:0807.2132 [hep-ph]. ALPGEN, a generator for hard multiparton processes in hadronic collisions. M L Mangano, M Moretti, F Piccinini, R Pittau, A D Polosa, 10.1088/1126-6708/2003/07/001arXiv:hep-ph/0206293JHEP. 03071hep-phM. L. Mangano, M. Moretti, F. Piccinini, R. Pittau, and A. D. Polosa, ALPGEN, a generator for hard multiparton processes in hadronic collisions, JHEP 0307 (2003) 001, arXiv:hep-ph/0206293 [hep-ph]. . G Corcella, I Knowles, G Marchesini, S Moretti, K Odagiri, arXiv:hep-ph/0210213HERWIG 6.5 release note. hep-phG. Corcella, I. Knowles, G. Marchesini, S. Moretti, K. Odagiri, et al., HERWIG 6.5 release note, arXiv:hep-ph/0210213 [hep-ph]. HERWIG: A Monte Carlo event generator for simulating hadron emission reactions with interfering gluons. Version 5.1 -April. G Marchesini, 10.1016/0010-4655(92)90055-4Comput. Phys. Commun. 67465G. Marchesini et al., HERWIG: A Monte Carlo event generator for simulating hadron emission reactions with interfering gluons. Version 5.1 -April 1991, Comput. Phys. Commun. 67 (1992) 465. HERWIG 6: An Event generator for hadron emission reactions with interfering gluons (including supersymmetric processes). G Corcella, 10.1088/1126-6708/2001/01/010arXiv:hep-ph/0011363JHEP. 010110hep-phG. Corcella et al., HERWIG 6: An Event generator for hadron emission reactions with interfering gluons (including supersymmetric processes), JHEP 0101 (2001) 010, arXiv:hep-ph/0011363 [hep-ph]. New generation of parton distributions with uncertainties from global QCD analysis. J Pumplin, 10.1088/1126-6708/2002/07/012arXiv:hep-ph/0201195JHEP. 
020712hep-phJ. Pumplin et al., New generation of parton distributions with uncertainties from global QCD analysis, JHEP 0207 (2002) 012, arXiv:hep-ph/0201195 [hep-ph]. Multiparton interactions in photoproduction at HERA. J Butterworth, J R Forshaw, M Seymour, 10.1007/s002880050286arXiv:hep-ph/9601371Z.Phys. C. 72637hep-phJ. Butterworth, J. R. Forshaw, and M. Seymour, Multiparton interactions in photoproduction at HERA, Z.Phys. C 72 (1996) 637, arXiv:hep-ph/9601371 [hep-ph]. The ATLAS Simulation Infrastructure. G Aad, ATLAS Collaboration10.1140/epjc/s10052-010-1429-9arXiv:1005.4568Eur.Phys.J. C. 70823physics.ins-detATLAS Collaboration, G. Aad et al., The ATLAS Simulation Infrastructure, Eur.Phys.J. C 70 (2010) 823, arXiv:1005.4568 [physics.ins-det]. GEANT4: A Simulation toolkit. S Agostinelli, GEANT4 Collaboration10.1016/S0168-9002(03)01368-8Nucl.Instrum.Meth. A. 506250GEANT4 Collaboration, S. Agostinelli et al., GEANT4: A Simulation toolkit, Nucl.Instrum.Meth. A 506 (2003) 250. ATLAS: Detector and physics performance technical design report. Geneva1ATLAS Collaboration, ATLAS: Detector and physics performance technical design report. Volume 1, Geneva, 1999. http://inspirehep.net/record/511648. ATLAS: Detector and physics performance technical design report. Geneva2ATLAS Collaboration, ATLAS: Detector and physics performance technical design report. Volume 2, Geneva, 1999. http://inspirehep.net/record/511649. Improved luminosity determination in pp collisions at √ s = 7 TeV using the ATLAS detector at the LHC. 10.1140/epjc/s10052-013-2518-3arXiv:1302.4393Eur. Phys. J. C. 732518hep-exImproved luminosity determination in pp collisions at √ s = 7 TeV using the ATLAS detector at the LHC, Eur. Phys. J. C 73 (2013) 2518, arXiv:1302.4393 [hep-ex]. ATL-LARG-PUB-2008-002Calorimeter Clustering Algorithms: Description and Performance. GenevaATLAS Collaboration, Calorimeter Clustering Algorithms: Description and Performance, No. ATL-LARG-PUB-2008-002. Geneva, Apr, 2008. 
https://cds.cern.ch/record/1099735. Local Hadronic Calibration. ATL-LARG-PUB-2009-001. GenevaATLAS Collaboration, Local Hadronic Calibration, No. ATL-LARG-PUB-2009-001. Geneva, Jun, 2008. https://cds.cern.ch/record/1112035. Pile-up corrections for jets from proton-proton collisions at √ s = 7 TeV in ATLAS in 2011. ATLAS-CONF-2012-064GenevaATLAS Collaboration, Pile-up corrections for jets from proton-proton collisions at √ s = 7 TeV in ATLAS in 2011, No. ATLAS-CONF-2012-064. Geneva, Jul, 2012. https://cds.cern.ch/record/1459529. Selection of jets produced in proton-proton collisions with the ATLAS detector using 2011 data. ATLAS-CONF-2012-020GenevaATLAS Collaboration, Selection of jets produced in proton-proton collisions with the ATLAS detector using 2011 data, No. ATLAS-CONF-2012-020. Geneva, Mar, 2012. https://cds.cern.ch/record/1430034/. B Malaescu, arXiv:1106.3107An Iterative, Dynamically Stabilized(IDS) Method of Data Unfolding. physics.data-anB. Malaescu, An Iterative, Dynamically Stabilized(IDS) Method of Data Unfolding, arXiv:1106.3107 [physics.data-an]. Jet energy resolution in proton-proton collisions at √ s = 7 TeV recorded in 2010 with the ATLAS detector. 10.1140/epjc/s10052-013-2306-0arXiv:1210.6210Eur. Phys. J. C. 732306hep-exATLAS Collaboration, Jet energy resolution in proton-proton collisions at √ s = 7 TeV recorded in 2010 with the ATLAS detector, Eur. Phys. J. C 73 (2013) 2306, arXiv:1210.6210 [hep-ex]. No. ATLAS-CONF-2010-054Jet energy resolution and selection efficiency relative to track jets from in-situ techniques with the ATLAS Detector Using Proton-Proton Collisions at a Center of Mass Energy √ s = 7 TeV. GenevaATLAS Collaboration, Jet energy resolution and selection efficiency relative to track jets from in-situ techniques with the ATLAS Detector Using Proton-Proton Collisions at a Center of Mass Energy √ s = 7 TeV, No. ATLAS-CONF-2010-054. Geneva, Jul, 2010. https://cds.cern.ch/record/1281311. 
Next-to-leading order calculation of three jet observables in hadron hadron collision. Z Nagy, 10.1103/PhysRevD.68.094002arXiv:hep-ph/0307268Phys. Rev. D. 6894002hep-phZ. Nagy, Next-to-leading order calculation of three jet observables in hadron hadron collision, Phys. Rev. D 68 (2003) 094002, arXiv:hep-ph/0307268 [hep-ph]. A posteriori inclusion of parton density functions in NLO QCD final-state calculations at hadron colliders: The APPLGRID Project. T Carli, 10.1140/epjc/s10052-010-1255-0arXiv:0911.2985Eur. Phys. J. C. 66503hep-phT. Carli et al., A posteriori inclusion of parton density functions in NLO QCD final-state calculations at hadron colliders: The APPLGRID Project, Eur. Phys. J. C 66 (2010) 503, arXiv:0911.2985 [hep-ph]. New parton distributions for collider physics. H.-L Lai, 10.1103/PhysRevD.82.074024arXiv:1007.2241Phys. Rev. D. 8274024hep-phH.-L. Lai et al., New parton distributions for collider physics, Phys. Rev. D 82 (2010) 074024, arXiv:1007.2241 [hep-ph]. On the role of heavy flavor parton distributions at high energy colliders. M Gluck, P Jimenez-Delgado, E Reya, C Schuck, 10.1016/j.physletb.2008.04.063arXiv:0801.3618Phys. Lett. B. 664133hep-phM. Gluck, P. Jimenez-Delgado, E. Reya, and C. Schuck, On the role of heavy flavor parton distributions at high energy colliders, Phys. Lett. B 664 (2008) 133, arXiv:0801.3618 [hep-ph]. Parton distributions for the LHC. A D Martin, W J Stirling, R S Thorne, G Watt, 10.1140/epjc/s10052-009-1072-5arXiv:0901.0002Eur. Phys. J. C. 63189hep-phA. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Parton distributions for the LHC, Eur. Phys. J. C 63 (2009) 189, arXiv:0901.0002 [hep-ph]. Parton distributions with LHC data. R Ball, 10.1016/j.nuclphysb.2012.10.003arXiv:1207.1303Nucl. Phys. B. 867244hep-phR. Ball et al., Parton distributions with LHC data, Nucl. Phys. B 867 (2013) 244, arXiv:1207.1303 [hep-ph]. H1prelim-10-142. ZEUS-prel-10-018HERAPDF 1.5"HERAPDF 1.5." H1prelim-10-142, ZEUS-prel-10-018. 
https://www.desy.de/h1zeus/combined_results/ index.php?do=proton_structure. Parton Distribution Functions and Benchmark Cross Sections at NNLO. J S. Alekhin, S Blumlein, Moch, 10.1103/PhysRevD.86.054009arXiv:1202.2281Phys. Rev. D. 8654009hep-phS. Alekhin, J. Blumlein, and S. Moch, Parton Distribution Functions and Benchmark Cross Sections at NNLO, Phys. Rev. D 86 (2012) 054009, arXiv:1202.2281 [hep-ph]. Uncertainties of predictions from parton distribution functions. 2. The Hessian method. J Pumplin, 10.1103/PhysRevD.65.014013arXiv:hep-ph/0101032Phys. Rev. D. 6514013hep-phJ. Pumplin et al., Uncertainties of predictions from parton distribution functions. 2. The Hessian method, Phys. Rev. D 65 (2001) 014013, arXiv:hep-ph/0101032 [hep-ph]. Uncertainty induced by QCD coupling in the CTEQ global analysis of parton distributions. H.-L Lai, 10.1103/PhysRevD.82.054021arXiv:1004.4624Phys. Rev. D. 8254021hep-phH.-L. Lai et al., Uncertainty induced by QCD coupling in the CTEQ global analysis of parton distributions, Phys. Rev. D 82 (2010) 054021, arXiv:1004.4624 [hep-ph]. Herwig++ Physics and Manual. M Bahr, 10.1140/epjc/s10052-008-0798-9arXiv:0803.0883Eur. Phys. J. C. 58hep-phM. Bahr et al., Herwig++ Physics and Manual, Eur. Phys. J. C 58 (2008) 639-707, arXiv:0803.0883 [hep-ph]. . S Gieseke, D Grellscheid, K Hamilton, A Papaefstathiou, S Platzer, 2S. Gieseke, D. Grellscheid, K. Hamilton, A. Papaefstathiou, S. Platzer, et al., Herwig++ 2.5 Colour reconnections in Herwig++. S Gieseke, C Rohr, A Siodmok, 10.1140/epjc/s10052-012-2225-5arXiv:1206.0041Eur. Phys. J. C. 722225hep-phS. Gieseke, C. Rohr, and A. Siodmok, Colour reconnections in Herwig++, Eur. Phys. J. C 72 (2012) 2225, arXiv:1206.0041 [hep-ph]. A.I. Etienvre 137 , E. Etzion 154 , H. Evans 60 , A. Ezhilov 122 , L. Fabbri 20a,20b , G. Facini 31 , R.M. Fakhrutdinov 129 , S. Falciano 133a , R.J. Falla 77 , J. Faltova 128 , Y. Fang 33a , M. Fanti 90a,90b , A. Farbin 8 , A. Farilla 135a , T. Farooque 12 , S. 
Farrell 15 , S.M. Farrington 171 , P. Farthouat 30 , F. Fassi 136e , P. Fassnacht 30 , D. Fassouliotis 9 , A. Favareto 50a,50b , L. Fayard 116 , P. Federic 145a , O.L. Fedin 122,j , W. Fedorko 169 , M. Fehling-Kaschek 48 , S. Feigl 30 , L. Feligioni 84 , C. Feng 33d , E.J. Feng 6 , H. Feng 88 , A.B. Fenyuk 129 , S. Fernandez Perez 30 , S. Ferrag 53 , J. Ferrando 53 , A. Ferrari 167 , P. Ferrari 106 , R. Ferrari 120a , D.E. Ferreira de Lima 53 , A. Ferrer 168 , D. Ferrere 49 , C. Ferretti 88 , A. Ferretto Parodi 50a,50b , M. Fiascaris 31 , F. Fiedler 82 , A. Filipčič 74 , M. Filipuzzi 42 , F. Filthaut 105 , M. Fincke-Keeler 170 , K.D. Finelli 151 , M.C.N. Fiolhais 125a,125c , L. Fiorini 168 , A. Firan 40 , A. Fischer 2 , J. Fischer 176 , W.C. Fisher 89 , E.A. Fitzgerald 23 , M. Flechl 48 , I. Fleck 142. Lj. Simic 13a , S. Simion 116 , E. Simioni 82 , B. Simmons 77 , R. Simoniello 90a,90b , M. Simonyan. E.N. Thompson 35 , P.D. Thompson 18 , P.D. Thompson 159 , R.J. Thompson 83 , A.S. Thompson 53 , L.A. Thomsen 36 , E. Thomson 121B.J. O'Brien; J.Y.C. Tam; i , J. Therhaag25J. Veatch. Veloso 125a,125c , S. Veneziano 133a , A. Ventura 72a,72b , D. Ventura 85 , M. Venturi 170 , N. Venturi 159 , A. Venturini 23 , V. Vercesi 120a , M. Verducci 133a,133b , W. Verkerke 106 , J.C. Vermeulen 106 , A. Vest 44 , M.C. Vetterli 143,d , O. Viazlo 80 , I. Vichou 166 , T. Vickey 146c,ai , O.E. Vickey Boeriu 146c , G.H.A. Viehhauser 119 , S. Viel 169 , R. Vigne 30 , M. Villa 20a,20b , M. Villaplana Perez 90a,90b , E. Vilucchi 47 , M.G. Vincter 29 , V.B. Vinogradov 64 , J. Virzi 15 , I. Vivarelli 150 , F. Vives Vaque 3 , S. Vlachos 10 , D. Vladoiu 99 , M. Vlasak 127 , A. Vogel 21 , M. Vogel 32a , P. Vokac 127 , G. Volpi 123a,123b , M. Volpi 87 , H. von der Schmitt 100 , H. von Radziewski 48 , E. von Toerne 21 , V. Vorobel 128 , K. Vorobev 97 , M. Vos 168 , R. Voss 30 , J.H. Vossebeld 73 , N. Vranjes 137 , M. Vranjes Milosavljevic 13a , V. Vrba 126 , M. Vreeswijk 106 , T. Vu Anh 48 , R. 
Vuillermet 30 , I. Vukotic 31 , Z. Vykydal 127 , P. Wagner 21 , W. Wagner 176 , H. Wahlberg 70 , S. Wahrmund 44 , J. Wakabayashi 102 , J. Walder 71 , R. Walker 99 , W. Walkowiak 142 , R. Wall 177 , P. Waller 73 , B. Walsh 177 , C. Wang 152,aj , C. Wang 45 , F. Wang 174 , H. Wang 15 , H. Wang 40 , J. Wang 42 , J. Wang 33a , K. Wang 86 , R. Wang 104 , S.M. Wang 152 , T. Wang 21 , X. Wang 177 , C. Wanotayaroj 115 , A. Warburton 86 , C.P. Ward 28 , D.R. Wardrope 77 , M. Warsinsky 48 , A. Washbrook 46 , C. Wasicki 42 , P.M. Watkins 18 , A.T. Watson 18 , I.J. Watson 151 , M.F. Watson 18 , G. Watts 139 , S. Watts 83 , B.M. Waugh 77 , S. Webb 83 , M.S. Weber 17 , S.W. Weber 175 , J.S. Webster 31 , A.R. Weidberg 119 , P. Weigell 100 , B. Weinert 60 , J. Weingarten 54 , C. Weiser 48 , H. Weits 106 , P.S. Wells 30 , T. Wenaus 25 , D. Wendland 16 , Z. Weng 152,ae , T. Wengler 30 , S. Wenig 30 , N. Wermes 21 , M. Werner 48 , P. Werner 30 , M. Wessels 58a , J. Wetter 162 , K. Whalen 29 , A. White 8 , M.J. White 1 , R. White 32b , S. White 123a,123b , D. Whiteson 164 , D. Wicke 176 , F.J. Wickens 130 , W. Wiedenmann 174 , M. Wielers 130 , P. Wienemann 21 , C. Wiglesworth 36 , L.A.M. Wiik-Fuchs 21 , P.A. Wijeratne 77 , A. Wildauer 100 , M.A. Wildt 42,ak , H.G. Wilkens 30 , J.Z. Will 99 , H.H. Williams 121 , S. Williams 28 , C. Willis 89 , S. Willocq 85 , A. Wilson 88 , J.A. Wilson 18 , I. Wingerter-Seez 5 , F. Winklmeier 115 , B.T. Winter 21 , M. Wittgen 144 , T. Wittig 43 , J. Wittkowski 99 , S.J. Wollstadt 82 , M.W. Wolter 39 , H. Wolters 125a,125c , B.K. Wosiek 39 , J. Wotschack 30 , M.J. Woudstra 83 , K.W. Wozniak 39 , M. Wright 53 , M. Wu 55 , S.L. Wu 174 , X. Wu 49 , Y. Wu 88 , E. Wulf 35 , T.R. Wyatt 83 , B.M. Wynne 46 , S. Xella 36 , M. Xiao 137 , D. Xu 33a , L. Xu 33b,al , B. Yabsley 151 , S. Yacoob 146b,am , R. Yakabe 66 , M. Yamada 65 , H. Yamaguchi 156 , Y. Yamaguchi 117 , A. Yamamoto 65 , K. Yamamoto 63 , S. Yamamoto 156 , T. Yamamura 156 , T. Yamanaka 156 , K. 
Yamauchi 102 , Y. Yamazaki 66 , Z. Yan 22 , H. Yang 33e , H. Yang 174 , U.K. Yang 83 , Y. Yang 110 , S. Yanush 92 , L. Yao 33a , W-M. Yao 15 , Y. Yasu 65 , E. Yatsenko 42 , K.H. Yau Wong 21 , J. Ye 40 , S. Ye 25 , I. Yeletskikh 64 , A.L. Yen 57 , E. Yildirim 42 , M. Yilmaz 4b , R. Yoosoofmiya 124 , K. Yorita 172 , R. Yoshida 6 , K. Yoshihara 156 , C. Young 144 , C.J.S. Young 30 , S. Youssef 22 , D.R. Yu 15 , J. Yu 8 , J.M. Yu 88 , J. Yu 113 , L. Yuan 66 , A. Yurkewicz 107 , I. Yusuff 28,an , B. Zabinski 39 , R. Zaidan 62 , A.M. Zaitsev 129,aa , A. Zaman 149 , S. Zambito 23 , L. Zanello 133a,133b , D. Zanzi 100 , C. Zeitnitz 176 , M. Zeman 127 , A. Zemla 38a , K. Zengel 23 , O. Zenin 129 , T.Ženiš 145a , D. Zerwas 116 , G. Zevi della Porta 57 , D. Zhang 88 , F. Zhang 174 , H. Zhang 89 , J. Zhang 6 , L. Zhang 152 , X. Zhang 33d , Z. Zhang 116 , Z. Zhao 33b , A. Zhemchugov 64 , J. Zhong 119 , B. Zhou 88 , L. Zhou 35 , N. Zhou 164 , C.G. Zhu 33d , H. Zhu 33a , J. Zhu 88 , Y. Zhu 33b , X. Zhuang 33a , K. Zhukov 95 , A. Zibell 175 , D. Zieminska 60 , N.I. Zimine 64 , C. Zimmermann 82 , R. Zimmermann 21 , S. Zimmermann 21 , S. Zimmermann 48 , Z. Zinonos 54 , M. Ziolkowski 142 , G. Zobernig 174 , A. Zoccoli 20a,20b , M. zur Nedden 16 , G. Zurzolo 103a,103b , V. Zutshi 107 , L. Zwalinski 30

G. Aad 84 , B. Abbott 112 , J. Abdallah 152 , S. Abdel Khalek 116 , O. Abdinov 11 , R. Aben 106 , B. Abi 113 , M. Abolins 89 , O.S. AbouZeid 159 , H. Abramowicz 154 , H. Abreu 153 , R. Abreu 30 , Y. Abulaiti 147a,147b , B.S. Acharya 165a,165b,a , L. Adamczyk 38a , D.L. Adams 25 , J. Adelman 177 , S. Adomeit 99 , T. Adye 130 , T. Agatonovic-Jovin 13a , J.A. Aguilar-Saavedra 125a,125f , M. Agustoni 17 , S.P. Ahlen 22 , F. Ahmadov 64,b , G. Aielli 134a,134b , H. Akerstedt 147a,147b , T.P.A. Åkesson 80 , G. Akimoto 156 , A.V. Akimov 95 , G.L. Alberghi 20a,20b , J. Albert 170 , S. Albrand 55 , M.J. Alconada Verzini 70 , M. Aleksa 30 , I.N. Aleksandrov 64 , C. Alexa 26a , G. Alexander 154 , G.
Alexandre 49 , T. Alexopoulos 10 , M. Alhroob 165a,165c , G. Alimonti 90a , L. Alio 84 , J. Alison 31 , B.M.M. Allbrooke 18 , L.J. Allison 71 , P.P. Allport 73 , J. Almond 83 , A. Aloisio 103a,103b , A. Alonso 36 , F. Alonso 70 , C. Alpigiani 75 , A. Altheimer 35 , B. Alvarez Gonzalez 89 , M.G. Alviggi 103a,103b , K. Amako 65 , Y. Amaral Coutinho 24a , C. Amelung 23 , D. Amidei 88 , S.P. Amor Dos Santos 125a,125c , A. Amorim 125a,125b , S. Amoroso 48 , N. Amram 154 , G. Amundsen 23 , C. Anastopoulos 140 , L.S. Ancu 49 , N. Andari 30 , T. Andeen 35 , C.F. Anders 58b , G. Anders 30 , K.J. Anderson 31 , A. Andreazza 90a,90b , V. Andrei 58a , X.S. Anduaga 70 , S. Angelidakis 9 , I. Angelozzi 106 , P. Anger 44 , A. Angerami 35 , F. Anghinolfi 30 , A.V. Anisenkov 108 , N. Anjos 125a , A. Annovi 47 , A. Antonaki 9 , M. Antonelli 47 , A. Antonov 97 , J. Antos 145b , F. Anulli 133a , M. Aoki 65 , L. Aperio Bella 18 , R. Apolle 119,c , G. Arabidze 89 , I. Aracena 144 , Y. Arai 65 , J.P. Araque 125a , A.T.H. Arce 45 , J-F. Arguin 94 , S. Argyropoulos 42 , M. Arik 19a , A.J. Armbruster 30 , O. Arnaez 30 , V. Arnal 81 , H. Arnold 48 , M. Arratia 28 , O. Arslan 21 , A. Artamonov 96 , G. Artoni 23 , S. Asai 156 , N. Asbah 42 , A. Ashkenazi 154 , B.Åsman 147a,147b , L. Asquith 6 , K. Assamagan 25 , R. Astalos 145a , M. Atkinson 166 , N.B. Atlay 142 , B. Auerbach 6 , K. Augsten 127 , M. Aurousseau 146b , G. Avolio 30 , G. Azuelos 94,d , Y. Azuma 156 , M.A. Baak 30 , A. Baas 58a , C. Bacci 135a,135b , H. Bachacou 137 , K. Bachas 155 , M. Backes 30 , M. Backhaus 30 , J. Backus Mayes 144 , E. Badescu 26a , P. Bagiacchi 133a,133b , P. Bagnaia 133a,133b , Y. Bai 33a , T. Bain 35 , J.T. Baines 130 , O.K. Baker 177 , P. Balek 128 , F. Balli 137 , E. Banas 39 , Sw. Banerjee 174 , A.A.E. Bannoura 176 , V. Bansal 170 , H.S. Bansil 18 , L. Barak 173 , S.P. Baranov 95 , E.L. Barberio 87 , D. Barberis 50a,50b , M. Barbero 84 , T. Barillari 100 , M. Barisonzi 176 , T. Barklow 144 , N. 
Barlow 28 , B.M. Barnett 130 , R.M. Barnett 15 , Z. Barnovska 5 , A. Baroncelli 135a , G. Barone 49 , A.J. Barr 119 , F. Barreiro 81 , J. Barreiro Guimarães da Costa 57 , R. Bartoldus 144 , A.E. Barton 71 , P. Bartos 145a , V. Bartsch 150 , A. Bassalat 116 , A. Basye 166 , R.L. Bates 53 , J.R. Batley 28 , M. Battaglia 138 , M. Battistin 30 , F. Bauer 137 , H.S. Bawa 144,e , M.D. Beattie 71 , T. Beau 79 , P.H. Beauchemin 162 , R. Beccherle 123a,123b , P. Bechtle 21 , H.P. Beck 17 , K. Becker 176 , S. Becker 99 , M. Beckingham 171 , C. Becot 116 , A.J. Beddall 19c , A. Beddall 19c , S. Bedikian 177 , V.A. Bednyakov 64 , C.P. Bee 149 , L.J. Beemster 106 , T.A. Beermann 176 , M. Begel 25 , K. Behr 119 , C. Belanger-Champagne 86 , P.J. Bell 49 , W.H. Bell 49 , G. Bella 154 , L. Bellagamba 20a , A. Bellerive 29 , M. Bellomo 85 , K. Belotskiy 97 , O. Beltramello 30 , O. Benary 154 , D. Benchekroun 136a , K. Bendtz 147a,147b , N. Benekos 166 , Y. Benhammou 154 , E. Benhar Noccioli 49 , J.A. Benitez Garcia 160b , D.P. Benjamin 45 , J.R. Bensinger 23 , K. Benslama 131 , S. Bentvelsen 106 , D. Berge 106 , E. Bergeaas Kuutmann 16 , N. Berger 5 , F. Berghaus 170 , J. Beringer 15 , C. Bernard 22 , P. Bernat 77 , C. Bernius 78 , F.U. Bernlochner 170 , T. Berry 76 , P. Berta 128 , C. Bertella 84 , G. Bertoli 147a,147b , F. Bertolucci 123a,123b , C. Bertsche 112 , D. Bertsche 112 , M.I. Besana 90a , G.J. Besjes 105 , O. Bessidskaia 147a,147b , M. Bessner 42 , N. Besson 137 , C. Betancourt 48 , S. Bethke 100 , W. Bhimji 46 , R.M. Bianchi 124 , L. Bianchini 23 , M. Bianco 30 , O. Biebel 99 , S.P. Bieniek 77 , K. Bierwagen 54 , J. Biesiada 15 , M. Biglietti 135a , J. Bilbao De Mendizabal 49 , H. Bilokon 47 , M. Bindi 54 , S. Binet 116 , A. Bingul 19c , C. Bini 133a,133b , C.W. Black 151 , J.E. Black 144 , K.M. Black 22 , D. Blackburn 139 , R.E. Blair 6 , J.-B. Blanchard 137 , T. Blazek 145a , I. Bloch 42 , C. Blocker 23 , W. Blum 82, * , U. Blumenschein 54 , G.J. Bobbink 106 , V.S. 
Bobrovnikov 108 , S.S. Bocchetta 80 , A. Bocci 45 , C. Bock 99 , C.R. Boddy 119 , M. Boehler 48 , T.T. Boek 176 , J.A. Bogaerts 30 , A.G. Bogdanchikov 108 , A. Bogouch 91, * , C. Bohm 147a , J. Bohm 126 , V. Boisvert 76 , T. Bold 38a , V. Boldea 26a , A.S. Boldyrev 98 , M. Bomben 79 , M. Bona 75 , M. Boonekamp 137 , A. Borisov 129 , G. Borissov 71 , M. Borri 83 , S. Borroni 42 , J. Bortfeldt 99 , V. Bortolotto 135a,135b , K. Bos 106 , D. Boscherini 20a , M. Bosman 12 , H. Boterenbrood 106 , J. Boudreau 124 , J. Bouffard 2 , E.V. Bouhova-Thacker 71 , D. Boumediene 34 , C. Bourdarios 116 , N. Bousson 113 , S. Boutouil 136d , A. Boveia 31 , J. Boyd 30 , I.R. Boyko 64 , J. Bracinik 18 , A. Brandt 8 , G. Brandt 15 , O. Brandt 58a , U. Bratzler 157 , B. Brau 85 , J.E. Brau 115 , H.M. Braun 176, * , S.F. Brazzale 165a,165c , B. Brelier 159 , K. Brendlinger 121 , A.J. Brennan 87 , R. Brenner 167 , S. Bressler 173 , K. Bristow 146c , T.M. Bristow 46 , D. Britton 53 , F.M. Brochu 28 , I. Brock 21 , R. Brock 89 , C. Bromberg 89 , J. Bronner 100 , G. Brooijmans 35 , T. Brooks 76 , W.K. Brooks 32b , J. Brosamer 15 , E. Brost 115 , J. Brown 55 , P.A. Bruckman de Renstrom 39 , D. Bruncko 145b , R. Bruneliere 48 , S. Brunet 60 , A. Bruni 20a , G. Bruni 20a , M. Bruschi 20a , L. Bryngemark 80 , T. Buanes 14 , Q. Buat 143 , F. Bucci 49 , P. Buchholz 142 , R.M. Buckingham 119 , A.G. Buckley 53 , S.I. Buda 26a , I.A. Budagov 64 , F. Buehrer 48 , L. Bugge 118 , M.K. Bugge 118 , O. Bulekov 97 , A.C. Bundock 73 , H. Burckhart 30 , S. Burdin 73 , B. Burghgrave 107 , S. Burke 130 , I. Burmeister 43 , E. Busato 34 , D. Büscher 48 , V. Büscher 82 , P. Bussey 53 , C.P. Buszello 167 , B. Butler 57 , J.M. Butler 22 , A.I. Butt 3 , C.M. Buttar 53 , J.M. Butterworth 77 , P. Butti 106 , W. Buttinger 28 , A. Buzatu 53 , M. Byszewski 10 , S. Cabrera Urbán 168 , D. Caforio 20a,20b , O. Cakir 4a , P. Calafiura 15 , A. Calandri 137 , G. Calderini 79 , P. Calfayan 99 , R. Calkins 107 , L.P. 
Caloba 24a , D. Calvet 34 , S. Calvet 34 , R. Camacho Toro 49 , S. Camarda 42 , D. Cameron 118 , L.M. Caminada 15 , R. Caminal Armadans 12 , S. Campana 30 , M. Campanelli 77 , A. Campoverde 149 , V. Canale 103a,103b , A. Canepa 160a , M. Cano Bret 75 , J. Cantero 81 , R. Cantrill 125a , T. Cao 40 , M.D.M. Capeans Garrido 30 , I. Caprini 26a , M. Caprini 26a , M. Capua 37a,37b , R. Caputo 82 , R. Cardarelli 134a , T. Carli 30 , G. Carlino 103a , L. Carminati 90a,90b , S. Caron 105 , E. Carquin 32a , G.D. Carrillo-Montoya 146c , J.R. Carter 28 , J. Carvalho 125a,125c , D. Casadei 77 , M.P. Casado 12 , M. Casolino 12 , E. Castaneda-Miranda 146b , A. Castelli 106 , V. Castillo Gimenez 168 , N.F. Castro 125a , P. Catastini 57 , A. Catinaccio 30 , J.R. Catmore 118 , A. Cattai 30 , G. Cattani 134a,134b , V. Cavaliere 166 , D. Cavalli 90a , M. Cavalli-Sforza 12 , V. Cavasinni 123a,123b , F. Ceradini 135a,135b , B. Cerio 45 , K. Cerny 128 , A.S. Cerqueira 24b , A. Cerri 150 , L. Cerrito 75 , F. Cerutti 15 , M. Cerv 30 , A. Cervelli 17 , S.A. Cetin 19b , A. Chafaq 136a , D. Chakraborty 107 , I. Chalupkova 128 , P. Chang 166 , B. Chapleau 86 , J.D. Chapman 28 , D. Charfeddine 116 , D.G. Charlton 18 , C.C. Chau 159 , C.A. Chavez Barajas 150 , S. Cheatham 86 , A. Chegwidden 89 , S. Chekanov 6 , S.V. Chekulaev 160a , G.A. Chelkov 64,f , M.A. Chelstowska 88 , C. Chen 63 , H. Chen 25 , K. Chen 149 , L. Chen 33d,g , S. Chen 33c , X. Chen 146c , Y. Chen 66 , Y. Chen 35 , H.C. Cheng 88 , Y. Cheng 31 , A. Cheplakov 64 , R. Cherkaoui El Moursli 136e , V. Chernyatin 25, * , E. Cheu 7 , L. Chevalier 137 , V. Chiarella 47 , G. Chiefari 103a,103b , J.T. Childers 6 , A. Chilingarov 71 , G. Chiodini 72a , A.S. Chisholm 18 , R.T. Chislett 77 , A. Chitan 26a , M.V. Chizhov 64 , S. Chouridou 9 , B.K.B. Chow 99 , D. Chromek-Burckhart 30 , M.L. Chu 152 , J. Chudoba 126 , J.J. Chwastowski 39 , L. Chytka 114 , G. Ciapetti 133a,133b , A.K. Ciftci 4a , R. Ciftci 4a , D. Cinca 53 , V. Cindro 74 , A. 
Ciocio 15 , P. Cirkovic 13b , Z.H. Citron 173 , M. Citterio 90a , M. Ciubancan 26a , A. Clark 49 , P.J. Clark 46 , R.N. Clarke 15 , W. Cleland 124 , J.C. Clemens 84 , C. Clement 147a,147b , Y. Coadou 84 , M. Cobal 165a,165c , A. Coccaro 139 , J. Cochran 63 , L. Coffey 23 , J.G. Cogan 144 , J. Coggeshall 166 , B. Cole 35 , S. Cole 107 , A.P. Colijn 106 , J. Collot 55 , T. Colombo 58c , G. Colon 85 , G. Compostella 100 , P. Conde Muiño 125a,125b , E. Coniavitis 48 , M.C. Conidi 12 , S.H. Connell 146b , I.A. Connelly 76 , S.M. Consonni 90a,90b , V. Consorti 48 , S. Constantinescu 26a , C. Conta 120a,120b , G. Conti 57 , F. Conventi 103a,h , M. Cooke 15 , B.D. Cooper 77 , A.M. Cooper-Sarkar 119 , N.J. Cooper-Smith 76 , K. Copic 15 , T. Cornelissen 176 , M. Corradi 20a , F. Corriveau 86,i , A. Corso-Radu 164 , A. Cortes-Gonzalez 12 , G. Cortiana 100 , G. Costa 90a , M.J. Costa 168 , D. Costanzo 140 , D. Côté 8 , G. Cottin 28 , G. Cowan 76 , B.E. Cox 83 , K. Cranmer 109 , G. Cree 29 , S. Crépé-Renaudin 55 , F. Crescioli 79 , W.A. Cribbs 147a,147b , M. Crispin Ortuzar 119 , M. Cristinziani 21 , V. Croft 105 , G. Crosetti 37a,37b , C.-M. Cuciuc 26a , T. Cuhadar Donszelmann 140 , J. Cummings 177 , M. Curatolo 47 , C. Cuthbert 151 , H. Czirr 142 , P. Czodrowski 3 , Z. Czyczula 177 , S. D'Auria 53 , M. D'Onofrio 73 , M.J. Da Cunha Sargedas De Sousa 125a,125b , C. Da Via 83 , W. Dabrowski 38a , A. Dafinca 119 , T. Dai 88 , O. Dale 14 , F. Dallaire 94 , C. Dallapiccola 85 , M. Dam 36 , A.C. Daniells 18 , M. Dano Hoffmann 137 , V. Dao 48 , G. Darbo 50a , S. Darmora 8 , J.A. Dassoulas 42 , A. Dattagupta 60 , W. Davey 21 , C. David 170 , T. Davidek 128 , E. Davies 119,c , M. Davies 154 , O. Davignon 79 , A.R. Davison 77 , P. Davison 77 , Y. Davygora 58a , E. Dawe 143 , I. Dawson 140 , R.K. Daya-Ishmukhametova 85 , K. De 8 , R. de Asmundis 103a , S. De Castro 20a,20b , S. De Cecco 79 , N. De Groot 105 , P. de Jong 106 , H. De la Torre 81 , F. De Lorenzi 63 , L. De Nooij 106 , D. 
De Pedis 133a , A. De Salvo 133a , U. De Sanctis 150 , A. De Santo 150 , J.B. De Vivie De Regie 116 , W.J. Dearnaley 71 , R. Debbe 25 , C. Debenedetti 138 , B. Dechenaux 55 , D.V. Dedovich 64 , I. Deigaard 106 , J. Del Peso 81 , T. Del Prete 123a,123b , F. Deliot 137 , C.M. Delitzsch 49 , M. Deliyergiyev 74 , A. Dell'Acqua 30 , L. Dell'Asta 22 , M. Dell'Orso 123a,123b , M. Della Pietra 103a,h , D. della Volpe 49 , M. Delmastro 5 , P.A. Delsart 55 , C. Deluca 106 , S. Demers 177 , M. Demichev 64 , A. Demilly 79 , S.P. Denisov 129 , D. Derendarz 39 , J.E. Derkaoui 136d , F. Derue 79 , P. Dervan 73 , K. Desch 21 , C. Deterre 42 , P.O. Deviveiros 106 , A. Dewhurst 130 , S. Dhaliwal 106 , A. Di Ciaccio 134a,134b , L. Di Ciaccio 5 , A. Di Domenico 133a,133b , C. Di Donato 103a,103b , A. Di Girolamo 30 , B. Di Girolamo 30 , A. Di Mattia 153 , B. Di Micco 135a,135b , R. Di Nardo 47 , A. Di Simone 48 , R. Di Sipio 20a,20b , D. Di Valentino 29 , F.A. Dias 46 , M.A. Diaz 32a , E.B. Diehl 88 , J. Dietrich 42 , T.A. Dietzsch 58a , S. Diglio 84 , A. Dimitrievska 13a , J. Dingfelder 21 , C. Dionisi 133a,133b , P. Dita 26a , S. Dita 26a , F. Dittus 30 , F. Djama 84 , T. Djobava 51b , M.A.B. do Vale 24c , A. Do Valle Wemans 125a,125g , D. Dobos 30 , C. Doglioni 49 , T. Doherty 53 , T. Dohmae 156 , J. Dolejsi 128 , Z. Dolezal 128 , B.A. Dolgoshein 97, * , M. Donadelli 24d , S. Donati 123a,123b , P. Dondero 120a,120b , J. Donini 34 , J. Dopke 130 , A. Doria 103a , M.T. Dova 70 , A.T. Doyle 53 , M. Dris 10 , J. Dubbert 88 , S. Dube 15 , E. Dubreuil 34 , E. Duchovni 173 , G. Duckeck 99 , O.A. Ducu 26a , D. Duda 176 , A. Dudarev 30 , F. Dudziak 63 , L. Duflot 116 , L. Duguid 76 , M. Dührssen 30 , M. Dunford 58a , H. Duran Yildiz 4a , M. Düren 52 , A. Durglishvili 51b , M. Dwuznik 38a , M. Dyndal 38a , J. Ebke 99 , W. Edson 2 , N.C. Edwards 46 , W. Ehrenfeld 21 , T. Eifert 144 , G. Eigen 14 , K. Einsweiler 15 , T. Ekelof 167 , M. El Kacimi 136c , M. Ellert 167 , S. Elles 5 , F. 
[]
[ "Impact of the wave-like nature of Proca stars on their gravitational-wave emission", "Impact of the wave-like nature of Proca stars on their gravitational-wave emission" ]
[ "Nicolas Sanchis-Gual \nDepartamento de Astronomía y Astrofísica\nUniversitat de València\nDr. Moliner 5046100BurjassotValència)Spain\n\nDepartamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA)\nCampus de Santiago3810-183AveiroPortugal\n", "Juan Calderón Bustillo \nInstituto Galego de Física de Altas Enerxías\nUniversidade de Santiago de Compostela\n15782Santiago de Compostela, GaliciaSpain\n\nDepartment of Physics\nThe Chinese University of Hong Kong\nShatin, Hong KongN.T\n", "Carlos Herdeiro \nDepartamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA)\nCampus de Santiago3810-183AveiroPortugal\n", "Eugen Radu \nDepartamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA)\nCampus de Santiago3810-183AveiroPortugal\n", "José A Font \nDepartamento de Astronomía y Astrofísica\nUniversitat de València\nDr. Moliner 5046100BurjassotValència)Spain\n\nObservatori Astronòmic\nUniversitat de València\nC/ Catedrático José Beltrán 246980PaternaValència)Spain\n", "Samson H W Leong \nDepartment of Physics\nThe Chinese University of Hong Kong\nShatin, Hong KongN.T\n", "Alejandro Torres-Forné \nDepartamento de Astronomía y Astrofísica\nUniversitat de València\nDr. Moliner 5046100BurjassotValència)Spain\n" ]
[ "Departamento de Astronomía y Astrofísica\nUniversitat de València\nDr. Moliner 5046100BurjassotValència)Spain", "Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA)\nCampus de Santiago3810-183AveiroPortugal", "Instituto Galego de Física de Altas Enerxías\nUniversidade de Santiago de Compostela\n15782Santiago de Compostela, GaliciaSpain", "Department of Physics\nThe Chinese University of Hong Kong\nShatin, Hong KongN.T", "Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA)\nCampus de Santiago3810-183AveiroPortugal", "Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA)\nCampus de Santiago3810-183AveiroPortugal", "Departamento de Astronomía y Astrofísica\nUniversitat de València\nDr. Moliner 5046100BurjassotValència)Spain", "Observatori Astronòmic\nUniversitat de València\nC/ Catedrático José Beltrán 246980PaternaValència)Spain", "Department of Physics\nThe Chinese University of Hong Kong\nShatin, Hong KongN.T", "Departamento de Astronomía y Astrofísica\nUniversitat de València\nDr. Moliner 5046100BurjassotValència)Spain" ]
[]
We present a systematic study of the dynamics and gravitational-wave emission of head-on collisions of spinning vector boson stars, known as Proca stars. To this aim we build a catalogue of about 800 numerical-relativity simulations of such systems. We find that the wave-like nature of bosonic stars has a large impact on the gravitational-wave emission. In particular, we show that the initial relative phase ∆ε = ε1 − ε2 of the two complex fields forming the stars (or equivalently, the relative phase at merger) strongly impacts both the emitted gravitational-wave energy and the corresponding mode structure. This leads to a non-monotonic dependence of the emission on the frequency of the secondary star ω2, for fixed frequency ω1 of the primary. This phenomenology, which has not been found for the case of black-hole mergers, reflects the distinct ability of the Proca field to interact with itself in both constructive and destructive manners. We postulate this may serve as a smoking gun to shed light on the possible existence of these objects.
10.1103/physrevd.106.124011
[ "https://export.arxiv.org/pdf/2208.11717v3.pdf" ]
251,800,057
2208.11717
35e40da85915e25382a932521af94fb92753091c
Impact of the wave-like nature of Proca stars on their gravitational-wave emission Nicolas Sanchis-Gual Departamento de Astronomía y Astrofísica Universitat de València Dr. Moliner 5046100BurjassotValència)Spain Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA) Campus de Santiago3810-183AveiroPortugal Juan Calderón Bustillo Instituto Galego de Física de Altas Enerxías Universidade de Santiago de Compostela 15782Santiago de Compostela, GaliciaSpain Department of Physics The Chinese University of Hong Kong Shatin, Hong KongN.T Carlos Herdeiro Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA) Campus de Santiago3810-183AveiroPortugal Eugen Radu Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA) Campus de Santiago3810-183AveiroPortugal José A Font Departamento de Astronomía y Astrofísica Universitat de València Dr. Moliner 5046100BurjassotValència)Spain Observatori Astronòmic Universitat de València C/ Catedrático José Beltrán 246980PaternaValència)Spain Samson H W Leong Department of Physics The Chinese University of Hong Kong Shatin, Hong KongN.T Alejandro Torres-Forné Departamento de Astronomía y Astrofísica Universitat de València Dr. Moliner 5046100BurjassotValència)Spain Impact of the wave-like nature of Proca stars on their gravitational-wave emission (Dated: March 23, 2023) We present a systematic study of the dynamics and gravitational-wave emission of head-on collisions of spinning vector boson stars, known as Proca stars. To this aim we build a catalogue of about 800 numerical-relativity simulations of such systems. We find that the wave-like nature of bosonic stars has a large impact on the gravitational-wave emission. 
In particular, we show that the initial relative phase ∆ε = ε1 − ε2 of the two complex fields forming the stars (or equivalently, the relative phase at merger) strongly impacts both the emitted gravitational-wave energy and the corresponding mode structure. This leads to a non-monotonic dependence of the emission on the frequency of the secondary star ω2, for fixed frequency ω1 of the primary. This phenomenology, which has not been found for the case of black-hole mergers, reflects the distinct ability of the Proca field to interact with itself in both constructive and destructive manners. We postulate this may serve as a smoking gun to shed light on the possible existence of these objects.

I. INTRODUCTION

Gravitational waves (GWs) provide information about the strong-field regime of gravity and can potentially reveal the true nature and structure of astrophysical compact objects. Their analysis could help unveil the classical and quantum essence of black holes, as well as the interior of neutron stars through the dense-matter equation of state, a long-standing open issue. Moreover, theoretical proposals for dark or "exotic" compact objects (ECOs) [1] could be probed through the study of their GW signals, as long as those could be distinguished from the signals produced by black holes and neutron stars. Such investigations require a deep understanding of the emitted GWs and, in particular, rely on theoretical waveform templates against which observational data can be compared. As an example, the detection of GWs from compact binary coalescences -the sources so far observed by Advanced LIGO and Advanced Virgo [2][3][4][5][6][7][8] -and the source parameter inference thereof, rely on the matched filtering of the data against waveform templates (or approximants). This makes the production of waveform catalogues of physically motivated exotic compact objects an endeavour both well timed and worth pursuing.
Amongst all proposed exotic objects that can reach a compactness comparable to that of black holes, bosonic stars stand out as one of the simplest and best-motivated models [9,10]. Bosonic stars with masses in the astrophysical black hole range, from stellar-origin to supermassive objects, are made of ultralight fundamental bosonic fields that could account for (part of) dark matter. Triggered by this central open issue in theoretical physics -the nature of dark matter -the study of bosonic stars has earned quite some attention in recent years. From a particle physics perspective, ultralight bosonic particles can emerge in the string axiverse [11,12] or in simple extensions of the Standard Model of particles [13]. Bosonic stars are asymptotically flat (although non-asymptotically flat generalizations exist), stationary and solitonic, i.e. horizonless and everywhere regular equilibrium spacetime geometries, describing self-gravitating lumps of bosonic particles. In their simplest guise, they emerge by minimally coupling the complex, massive Klein-Gordon equation -for scalar boson stars -or the complex Proca equations -for vector boson stars, aka Proca stars (PSs) [14] -to Einstein's gravity. Bosonic stars can be either static, in which case the simplest solutions are spherically symmetric (but see also [15,16]), or spinning [17] (thus stationary but non-static), in which case they have a non-spherical morphology which depends on the scalar or vector model. In all cases, the bosonic field oscillates periodically at a well-defined frequency ω, which determines the mass, angular momentum (in spinning solutions) and compactness of the star. The dynamical robustness of bosonic stars has been established for some models in well-identified regions of the parameter space (see [18] for a review), making them viable dark-matter candidates. The case of non-spinning spherically symmetric bosonic stars is firmly established.
The fundamental solutions (those with the minimum number of nodes of the bosonic field across the star) are perturbatively stable in a range of frequencies between the Newtonian limit (where they become non-compact) and the maximal-mass solution. Additionally, they exhibit a non-fine-tuned dynamical formation mechanism known as gravitational cooling [19,20]. On the contrary, the case of spinning bosonic stars has been shown to be more subtle [21]. In particular, while the fundamental PS solutions have been found to be stable in the simplest model, where the Proca field has only a mass term (no self-interactions), scalar boson stars are prone to non-axisymmetric perturbations that can trigger the development of instabilities akin to the bar-mode instability found in neutron stars [22], in the corresponding model without self-interactions [23]. The above findings support using the fundamental solutions of the simplest Proca model as a robust starting point to test the true nature of dark compact objects. In particular, this model appears as the most suitable choice to conduct dynamical studies aimed at gauging, through GW information, the potential astrophysical significance, if any, of an appealing ECO model. First, and promising, steps have recently been taken. Pursuing this route, [24] found that waveforms from numerical-relativity simulations of head-on collisions of PSs can fit the signal GW190521 as well as those from quasi-circular binary-black-hole (BBH) mergers, even being slightly preferred from a Bayesian-statistics viewpoint. Moreover, the development of a larger numerical catalogue of PS mergers, together with new data-analysis techniques [25], has led to a more systematic study of several LIGO-Virgo-KAGRA (LVK) high-mass events in O3 under the PS collision scenario [24] and to the first population studies of these objects [26]. The present paper complements those recent works.
Here, we report on our catalogue of nearly 800 numerical-relativity simulations of head-on collisions of PSs used to obtain the results presented in [24][25][26]. Furthermore, we discuss additional numerical simulations we carried out to explore the impact of the wave-like nature of PSs on their GW emission. We find that the emission at merger dramatically depends on the relative phase of the complex field of each star. This has a major impact on both the net energy emission through GWs and the corresponding mode structure. Since this relative phase is an intrinsic parameter of PSs, absent in BBH mergers, the potential measurement of the GW modulation discussed in this work could serve as a smoking gun for the existence of PSs. The remainder of this paper is organized as follows. Section II briefly describes the formalism needed to perform numerical simulations of PS mergers. The procedure we follow to obtain initial data for the simulations is outlined in Section III, as well as the specific numerical setups employed. We report and analyze our results in Section IV. Finally, our conclusions are presented in Section V along with some remarks on possible pathways for future research. Henceforth, units with G = c = 1 are used.

II. FORMALISM

We investigate the dynamics of a complex Proca field by solving numerically the Einstein-(complex, massive) Proca system, described by the action $S = \int d^4x \sqrt{-g}\,\mathcal{L}$, where the Lagrangian density depends on the Proca potential $A$ and field strength $F = dA$. It reads

$$\mathcal{L} = \frac{R}{16\pi} - \frac{1}{4} F_{\alpha\beta}\bar{F}^{\alpha\beta} - \frac{1}{2}\mu^2 A_\alpha \bar{A}^\alpha \,. \quad (1)$$

Above, the bar denotes complex conjugation, $R$ is the Ricci scalar, and $\mu$ is the Proca-field mass. The stress-energy tensor of the Proca field is given by

$$T_{\alpha\beta} = -F_{\mu(\alpha}\bar{F}^{\mu}{}_{\beta)} - \frac{1}{4} g_{\alpha\beta} F_{\mu\nu}\bar{F}^{\mu\nu} + \mu^2 \left[ A_{(\alpha}\bar{A}_{\beta)} - \frac{1}{2} g_{\alpha\beta} A_\mu \bar{A}^\mu \right], \quad (2)$$

where $g_{\alpha\beta}$ is the spacetime metric, with $g = \det g_{\alpha\beta}$, and the parenthesis denotes index symmetrization. Using the standard 3+1 split (see e.g.
[27] for details), the Proca field is split into the following 3+1 quantities:

$$A_\mu = X_\mu + n_\mu X_\phi \,, \quad (3)$$
$$X_i = \gamma^{\mu}{}_{i} A_\mu \,, \quad (4)$$
$$X_\phi = -n^\mu A_\mu \,, \quad (5)$$

where $n^\mu$ is the timelike unit vector, $\gamma^{\mu}{}_{\nu} = \delta^{\mu}{}_{\nu} + n^\mu n_\nu$ is the operator projecting spacetime quantities onto the spatial hypersurfaces, $X_i$ is the vector potential, and $X_\phi$ is the scalar potential. The fully non-linear Einstein-Proca system can be written as [27]:

$$\partial_t \gamma_{ij} = -2\alpha K_{ij} + \mathcal{L}_\beta \gamma_{ij} \,, \quad (6)$$
$$\partial_t X_i = -\alpha \left( E_i + D_i X_\phi \right) - X_\phi D_i \alpha + \mathcal{L}_\beta X_i \,, \quad (7)$$
$$\partial_t E^i = \alpha \left( K E^i + D^i Z + \mu^2 X^i + \epsilon^{ijk} D_j B_k \right) - \epsilon^{ijk} B_j D_k \alpha + \mathcal{L}_\beta E^i \,, \quad (8)$$
$$\partial_t K_{ij} = -D_i D_j \alpha + \alpha \left( R_{ij} - 2 K_{ik} K^{k}{}_{j} + K K_{ij} \right) + 2\alpha \left[ E_i E_j - \frac{1}{2}\gamma_{ij} E^k E_k + B_i B_j - \frac{1}{2}\gamma_{ij} B^k B_k - \mu^2 X_i X_j \right] + \mathcal{L}_\beta K_{ij} \,, \quad (9)$$
$$\partial_t X_\phi = -X^i D_i \alpha + \alpha \left( K X_\phi - D_i X^i - Z \right) + \mathcal{L}_\beta X_\phi \,, \quad (10)$$
$$\partial_t Z = \alpha \left( D_i E^i + \mu^2 X_\phi - \kappa Z \right) + \mathcal{L}_\beta Z \,, \quad (11)$$

where $\alpha$ is the lapse function, $\beta$ is the shift vector, $\gamma_{ij}$ is the spatial metric, $K_{ij}$ is the extrinsic curvature (with $K = K^{i}{}_{i}$), $D_i$ is the covariant 3-derivative, $\mathcal{L}_\beta$ is the Lie derivative (along the shift-vector direction), and $\kappa$ is a damping parameter that helps stabilize the numerical evolution. Moreover, the three-dimensional "electric" $E^i$ and "magnetic" $B^i$ fields are also introduced in the previous equations in analogy with Maxwell's theory:

$$E_i = \gamma^{\mu}{}_{i} F_{\mu\nu} n^\nu \,, \qquad B_i = \gamma^{\mu}{}_{i} \, {}^{\star}F_{\mu\nu} n^\nu = \epsilon_{i}{}^{jk} D_j X_k \,, \quad (12)$$

with $E^\mu n_\mu = B^\mu n_\mu = 0$ and $\epsilon_{ijk}$ the three-dimensional Levi-Civita tensor. The system of equations is closed by two constraint equations, namely, the Hamiltonian constraint and the momentum constraint, which are given by:

$$\mathcal{H} = R - K_{ij} K^{ij} + K^2 - 2 \left[ E_i E^i + B_i B^i + \mu^2 \left( X_\phi^2 + X_i X^i \right) \right] = 0 \,, \quad (13)$$
$$\mathcal{M}_i = D^j K_{ij} - D_i K - 2 \left[ \epsilon_{ijk} E^j B^k + \mu^2 X_\phi X_i \right] = 0 \,. \quad (14)$$

III. INITIAL DATA AND NUMERICS

A.
The stationary PS solutions

Following the conventions in [14], we consider an axially symmetric and stationary line element

$$ds^2 = -e^{2F_0} dt^2 + e^{2F_1} \left( dr^2 + r^2 d\theta^2 \right) + e^{2F_2} r^2 \sin^2\theta \left( d\varphi - \frac{W}{r} dt \right)^2 , \quad (15)$$

where $F_0$, $F_1$, $F_2$, and $W$ are functions of $(r, \theta)$. Here, $r, \theta, \varphi$ can be taken as spherical coordinates (in fact spheroidal), with the usual range, while $t$ is the time coordinate. The spinning PS solutions of the Einstein-Proca system have been discussed in [14] with these conventions and e.g. in [28] for a slightly different version of (15) with $W/r \to W$. The ansatz for the Proca field is:

$$A = \left( \frac{H_1}{r} dr + H_2 \, d\theta + i H_3 \sin\theta \, d\varphi + i V dt \right) e^{i(\bar{m}\varphi - \omega t + \epsilon)} \,, \quad (16)$$

with $\bar{m} \in \mathbb{Z}^+$ and $\epsilon$ the initial phase of the star. The domain of existence and the compactness of the solutions of the Einstein-Proca equations describing the fundamental spinning PSs are shown in Fig. 1. These solutions have $\bar{m} = 1$ and are nodeless (i.e. $A_0$ has no nodes). The frequency range of the solutions of interest varies between ω/µ = 1 (Newtonian limit) and ω/µ ∼ 0.562 (maximal-mass solution). As the latter is approached, the PS solutions become ultra-compact, i.e. they develop a light ring pair [29] for ω/µ ≲ 0.711. This creates a spacetime instability [30], which motivates us to avoid this region of the parameter space. The compactness is defined as

$$\mathrm{Compactness} = \frac{2 M_{99}}{R_{99}} \,, \quad (17)$$

where $R_{99}$ is the perimetral radius that contains 99% of the star's mass, $M_{99}$. Bosonic stars do not have a surface at which a discontinuity of the energy density occurs, i.e. a surface outside which the energy density is zero (in contrast with a fluid star). We remark that for all PS solutions reported in the literature so far, the line element (15) possesses a reflection symmetry with respect to the θ = π/2 plane.
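The compactness definition above is straightforward to evaluate; the following is a minimal plain-Python sketch (the numerical values in the example are purely illustrative, not taken from Fig. 1):

```python
def compactness(m99, r99):
    """Compactness as defined in Eq. (17): 2*M99/R99, where R99 is the
    perimetral radius enclosing 99% of the star's mass, M99.
    A Schwarzschild black hole (R = 2M) has compactness 1 by this measure."""
    return 2.0 * m99 / r99

# A star with M99 = 0.7 and R99 = 14 (units of 1/mu) is far from the
# black-hole limit, while compactness(1.0, 2.0) saturates it.
print(compactness(1.0, 2.0))  # -> 1.0
```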
The translation between the functions above, $F_0$, $F_1$, $F_2$, $W$, $V$, $H_1$, $H_2$, $H_3$, and the initial values for the metric and the 3+1 Proca field variables is given as follows:

$$\alpha = e^{F_0} \,, \qquad \beta^\varphi = \frac{W}{r} \,, \quad (18)$$
$$\gamma_{rr} = e^{2F_1} \,, \qquad \gamma_{\theta\theta} = e^{2F_1} r^2 \,, \qquad \gamma_{\varphi\varphi} = e^{2F_2} r^2 \sin^2\theta \,, \quad (19)$$
$$X_\phi = -n^\mu A_\mu \,, \quad (20)$$
$$X_i = \gamma^{\mu}{}_{i} A_\mu \,, \quad (21)$$
$$E^i = -\frac{\gamma^{ij}}{\alpha} \left[ D_j (\alpha X_\phi) + \partial_t X_j \right] . \quad (22)$$

B. Binary head-on data

As initial data for the head-on simulations we consider a superposition of two PSs, with both stars described by the same Proca field, following [24,[31][32][33][34][35][36] (see also [37,38]):

• $A(x^i) = A^{(1)}(x^i - x_0) + A^{(2)}(x^i + x_0)$,
• $\gamma_{ij}(x^i) = \gamma^{(1)}_{ij}(x^i - x_0) + \gamma^{(2)}_{ij}(x^i + x_0) - \gamma^{\rm flat}_{ij}(x^i)$,
• $\alpha(x^i) = \alpha^{(1)}(x^i - x_0) + \alpha^{(2)}(x^i + x_0) - 1$,

where superscripts (1) and (2) label the stars and $\pm x_0$ indicates their initial positions. The stars are initially separated by a coordinate distance Dµ = ∆xµ = 40 (x₀µ = ±20). We note that the solutions are not boosted and that these initial data introduce (small) constraint violations [31]. Figure 2 shows the dependence of the $L^2$-norm of the Hamiltonian and momentum constraints, Eqs. (13) and (14), with D, at the initial time. The values of the $L^2$-norm are $\mathcal{O}(10^{-4})$ or better. The error decreases with separation, reaching a fairly constant value for Dµ ≳ 20 (particularly visible for the momentum constraint), see [39]. Each star is defined by its oscillation frequency, ω1/µ and ω2/µ. For the initial catalogue used in [26], comprising ∼ 800 initial models, we fix the phase difference ∆ε between the stars to zero. Here, we also explore the impact of varying this relative phase on the gravitational-wave emission. Equal-mass cases correspond to ω1 = ω2 = ω; correspondingly, ω1 ≠ ω2 for unequal-mass binaries. Moreover, since we assume there is a single Proca field describing both stars, these also share a common value of the boson mass µ.

C.
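The superposition rules in the bullet list above can be sketched pointwise in plain Python (in the actual simulations these are 3D grid functions; here single floats stand in for one grid point, and a diagonal flat-metric component equal to 1 is assumed):

```python
def superpose(star1, star2):
    """Superposed head-on initial data (Section III B): the fields of the
    two stars add linearly, and the flat background, which is counted twice
    in the plain sum, is subtracted once. star1/star2 are dicts with the
    Proca potential 'A', a 3-metric component 'gamma', and the lapse
    'alpha' at one grid point (illustrative scalars)."""
    return {
        "A": star1["A"] + star2["A"],
        "gamma": star1["gamma"] + star2["gamma"] - 1.0,  # minus flat metric
        "alpha": star1["alpha"] + star2["alpha"] - 1.0,  # minus flat lapse
    }

star1 = {"A": 0.10, "gamma": 1.20, "alpha": 0.90}  # star centred at +x0
star2 = {"A": 0.05, "gamma": 1.10, "alpha": 0.95}  # star centred at -x0
data = superpose(star1, star2)
```

Far from both stars each field tends to its flat value, so the superposed data correctly approach flat space there; close to either star the overlap is what introduces the small constraint violations noted in the text.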
Parameter space

Our main catalogue of 759 simulations is depicted in a compact way in Fig. 3. Each axis in this plot labels the frequencies of the two stars, ω1/µ and ω2/µ. For the equal-mass models, placed on the diagonal, we run simulations using a uniform grid in frequencies in the range ω1/µ = ω2/µ = ω/µ ∈ [0.8000, 0.9300] with ∆ω = 0.0025. For the unequal-mass cases, we fix the oscillation frequency of the primary star, ω1/µ, and then vary the frequency of the secondary star, ω2/µ. Both frequencies range from 0.8000 to 0.9300, with a resolution of ∆ω1/µ = 0.01 for ω1/µ and of ∆ω2/µ = 0.0025 for ω2/µ. As mentioned before, in all of these cases the two stars have null relative phase ∆ε = 0 at the start of the simulation. As we show below, this initial set of simulations revealed unexpected non-trivial interactions between the stars described by the same Proca field, due to their wave-like nature. As a result, we also build an additional set of models to study the impact of the relative phase of the stars both on the dynamics and on the GW emission. The effect of this parameter is studied in two (implicit and explicit) ways. First, for some selected cases we vary the initial star separation at which the simulation is started keeping ∆ε = 0, which, for the cases with ω1 ≠ ω2, translates into a varying relative phase at merger. This, however, also causes a variation in the velocity of the two stars at merger, whose effect mixes with that of the varying phase. Therefore, in order to explicitly isolate the impact of the relative phase change, for a few selected cases of Fig. 3 we explicitly vary ∆ε in a uniform grid ∆ε ∈ [0, 2π] with step δ∆ε = π/6.

D. Numerics

To carry out the numerical evolutions we use the publicly available Einstein Toolkit [40,41], which uses the Cactus framework and mesh refinement. The method of lines is employed to integrate the time-dependent differential equations.
In particular, we use a fourth-order Runge-Kutta scheme for this task. The left-hand-side of the Einstein equations is solved using the MacLachlan code [42,43], which is based on the 3+1 Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation. On the other hand, the Proca evolution equations, Eqs. (6)- (11), are solved using the code described and available in [44][45][46]. We extended the code to take into account a complex field [21,33]. Technical details, assessment of the code, and convergence tests can be found in [21,33,44]. We use a fixed numerical grid with 7 refinement levels, with the following structure {(320, 48, 48, 24, 24, 6, 2)/µ, (4, 2, 1, 0.5, 0.25, 0.125, 0.0625)/µ}, where the first set of numbers indicates the spatial domain of each level and the second set indicates the resolution. The simulations are performed using equatorial-plane symmetry. To extract gravitational radiation we employ the Newman-Penrose (NP) formalism [47] as described in [44]. We compute the NP scalar Ψ 4 expanded into spin-weighted spherical harmonics of spin weight s = −2. IV. RESULTS We have performed 759 simulations of head-on collisions of spinning PSs starting at rest at fixed initial distance, Dµ = 40. We explore both equal-mass and unequal-mass cases to produce a first systematic study of the GW signals emitted in collisions of these objects. Stationary fundamental bosonic stars are described by the oscillation frequency ω/µ of the field, which determines the dimensionless mass M µ and angular momentum of the star Jµ 2 , besides its compactness. Further specifying the boson particle mass µ determines the corresponding physical quantities M, J (see below). Thus, µ can be set as a fundamental scale of the system and all quantities can be simply rescaled. Alternatively, we can trivially rescale the simulations to any fixed total mass, which in turn determines the mass of the boson. 
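The decomposition of Ψ4 into spin-weighted spherical harmonics mentioned above can be illustrated for the dominant (ℓ, m) = (2, 2) mode. The sketch below is a minimal pure-Python stand-in for the extraction step (it is not the Einstein Toolkit code); it uses the standard closed form of the s = −2, (ℓ, m) = (2, 2) harmonic:

```python
import cmath
import math

def sY22(theta, phi):
    """Spin-weight s = -2 spherical harmonic with (l, m) = (2, 2):
    -2Y_22 = sqrt(5/(64*pi)) * (1 + cos(theta))**2 * exp(2i*phi)."""
    return math.sqrt(5.0 / (64.0 * math.pi)) * (1.0 + math.cos(theta)) ** 2 \
        * cmath.exp(2j * phi)

def project_22(psi4, n_theta=200, n_phi=200):
    """Mode coefficient C_22 = integral over the sphere of
    psi4(theta, phi) * conj(-2Y_22) * sin(theta), via a midpoint rule."""
    dth, dph = math.pi / n_theta, 2.0 * math.pi / n_phi
    c22 = 0j
    for i in range(n_theta):
        th = (i + 0.5) * dth
        w = math.sin(th) * dth * dph
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            c22 += psi4(th, ph) * sY22(th, ph).conjugate() * w
    return c22

# Projecting the harmonic onto itself recovers its unit norm.
print(abs(project_22(sY22)))  # close to 1
```

In a production pipeline the same projection is done on extraction spheres at several radii, for each (ℓ, m), at every output time.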
We also remark that, in contrast with black holes, the angular momentum of PSs is quantized by the relation $J = \bar{m} Q$, where $Q$ is the Noether charge of the star, which counts the number of bosonic particles. This means that an infinitesimal loss/gain in angular momentum must be accompanied by a corresponding loss/gain of particles. We restrict to the case of mergers of dynamically stable $\bar{m} = 1$ spinning PSs. For our range of frequencies the PS models have masses and angular momenta that vary from (ω/µ, Mµ, Jµ²) = (0.9300, 0.622, 0.637) to (0.8000, 0.946, 1.008). All of these mergers lead to a post-merger remnant that is compact enough to collapse into a Kerr black hole. Therefore, our waveform catalogue is well suited for the analysis of LVK GW events under the PS merger scenario.

A. Single star

The dynamical robustness and formation of spinning PSs were addressed in [21,22]. Here we illustrate the stability properties of these objects. We consider the case of a single isolated spinning PS. We fix the oscillation frequency to ω/µ = 0.90, the mass to Mµ = 0.726 and the angular momentum
5 and 6 show the energy density of the Proca field at the equatorial plane (z=0) for two families of collisions respectively characterised by primary-star frequencies, namely ω 1 /µ = 0.8300 and ω 1 /µ = 0.9100, and four illustrative secondary-star frequencies ω 2 /µ. These figures exemplify the dynamics of all PS binaries in our dataset. In particular, we note that the collisions are not strictly head-on since the objects do not follow a straight line. Instead, the trajectories of both stars are curved due to the frame-dragging induced by the stars spins. All mergers lead to the formation of a Kerr black hole with a faint Proca field remnant around the horizon, therefore storing a small fraction of the initial Proca mass and angular momentum [35,48]. The final black holes not always form promptly as for some values of the PS parameters the collisions exhibit the formation of a transient hypermassive PS. The collisions produce a burst of GWs, similar to the signals from head-on collisions of black holes [24,50]. We note that the gravitational waveform sourced by head-on collisions is fundamentally different from that produced in orbital binary mergers. First, it is obviously much shorter as there is no inspiral phase preceding the merger. Second, the radiated energy is significantly lower (only around a 0.2% of the initial energy of the system, when in orbital mergers it reaches a few percent) due to the slow velocities of the two objects at merger, caused by the fact that we release the stars from rest at very short distances. Third, while the GW emission from orbital mergers is vastly dominated by the quadrupole = 2, m = ±2 modes, that from head-on mergers exhibits an ( , m) = (2, 0) mode, equally dominating [24,31,33]. Fig. 7 shows the dominant = m = 2 mode of the Newman-Penrose scalar Ψ 4 in the equal-mass case, for six different PS models. The frequency of the GWs increases with increasing ω/µ, i.e., with decreasing mass and compactness of the PSs. 
The morphology of the waveforms changes as well: the less compact the stars, the longer the pre-collapse signal before black-hole formation, which corresponds to the peak emission and it is followed by the ringdown phase. For high ω/µ collisions, the transient hypermassive PS that results from the merger has a total mass that is closer (as ω/µ grows) to the maximum mass that defines the linear stability limit of such objects, therefore surviving for a longer time as it emits GWs before collapsing to a black hole. Fig. 8 shows the = m = 2 and = m = 3 modes of Ψ 4 for one equal-mass and five unequal-mass PS binary mergers, with fixed ω 1 /µ = 0.8300 and varying ω 2 /µ. The waveforms look similar to those for the equal-mass cases in terms of shape, duration, and frequency. However, they also exhibit important differences. First, while in equal-mass collisions odd-m modes (e.g. the = m = 3 mode) are almost completely suppressed (modulo numerical noise) due to the symmetries of the problem compared to the dominant = m = 2 (see top middle panel of Fig. 8), these are triggered for unequal-mass systems and can have a significant contribution (see also [35]). In addition, and most importantly, the morphology of the = m = 2 mode manifests a clear non-monotonic dependence on the frequency of the secondary star ω 2 /µ for fixed ω 1 /µ. In particular, the waveform amplitude varies periodically as we increase ω 2 /µ from 0.8000 to 0.9300. For example, for a value of ω 1 /µ = 0.8300, we find that the amplitude maxima correspond to ω 2 /µ equal to 0.8000, 0.8300, 0.8600, 0.8900, and 0.9225, while the minima are found when ω 2 /µ is equal to 0.8150, 0.8450, 0.8750, and 0.9100. This effect is not present in mergers of other types of compact objects as binary black holes or binary neutron stars. The non-trivial dependence of the gravitational radiation with ω 2 /µ for fixed ω 1 /µ becomes more evident when studying the total emitted energy from the GW luminosity, given by Fig. 
9 shows the total GW energy as a function of ω/µ or ω2/µ for the equal-mass (top left panel) and three illustrative unequal-mass cases, corresponding to fixed values of ω1/µ = {0.8300, 0.8950, 0.9100} (top right, bottom left, and bottom right panels of Fig. 9, respectively). In the equal-mass case the emitted energy decreases for decreasing ω/µ, reaching a minimum at ω/µ ∼ 0.8625 and increasing onwards. While naively one would expect that the emitted energy would primarily depend on the total mass and compactness of the stars, the described trend depends in a non-trivial way on the dynamics of the binary system, the trajectories followed by the stars due to frame-dragging, and the masses and angular momenta of the PSs. On the other hand, the unequal-mass cases yield interesting results already hinted at above. We find that the GW energy displays a distinctive oscillatory pattern as a function of ω2/µ for fixed ω1/µ. As Fig. 9 shows, the energy maxima are located at intervals of ∆ω_max/µ = (ω1 − ω2)/µ ∼ 0.03 k and the minima are located at intervals of ∆ω_min/µ ∼ 0.015 (2k + 1), with k ∈ ℤ. The values of these two intervals between maxima or minima, ∆ω_min/µ and ∆ω_max/µ, are completely independent of ω1/µ. This result can be explained by the wave-like nature of PSs and their fundamental oscillation frequency, which leads to an interference between the different frequencies in the unequal-mass case. The interference behaviour was already found in equal-mass head-on collisions of scalar boson stars with a non-zero initial phase difference [31,32,51,52], but its impact on the GW emission was not systematically explored. Here, the emitted energy is computed by integrating the GW luminosity,

$$L_{\rm GW} = \frac{dE}{dt} = \lim_{r \to \infty} \frac{r^2}{16\pi} \sum_{l=2}^{\infty} \sum_{m=-l}^{l} \left| \int_{-\infty}^{t} dt' \, \Psi_4^{lm} \right|^2 . \quad (25)$$

C. The role of the relative phase at merger

To explain the GW emission pattern, we assume that at the time of the collision we have a linear superposition of both stars (same Proca field) oscillating at different frequencies.
Then, removing the $\bar{m}\varphi$-dependence, which will not affect the interference, and the initial phase $\epsilon$, it can be shown that

$$\mathrm{Re}(A) \sim \cos(\omega_1 t) + \cos(\omega_2 t) = 2 \cos\!\left[\frac{(\omega_1 + \omega_2)}{2} t\right] \cos\!\left[\frac{(\omega_1 - \omega_2)}{2} t\right],$$
$$\mathrm{Im}(A) \sim \sin(\omega_1 t) + \sin(\omega_2 t) = 2 \sin\!\left[\frac{(\omega_1 + \omega_2)}{2} t\right] \cos\!\left[\frac{(\omega_1 - \omega_2)}{2} t\right]. \quad (26)$$

Therefore, the square of the amplitude of the Proca field will be given by

$$|A|^2 = \mathrm{Re}(A)^2 + \mathrm{Im}(A)^2 \sim 4 \cos^2\!\left[\frac{(\omega_1 - \omega_2)}{2} t\right] = 2 \left[ 1 + \cos\big( (\omega_1 - \omega_2) t \big) \right]. \quad (27)$$

(Figure caption: $\Psi_4$ for six unequal-mass PS collisions with fixed ω1/µ = 0.8300. For animations of the full set of GW signals from unequal-mass collisions see [49].)

Since the initial separation between the stars is the same for all cases, Dµ = 40, the time of the collision is also approximately the same, t_col µ ∼ 210. This is precisely the time at which the maximum (constructive interference) of the envelope in Eqs. (26) and (27) is reached,

$$\left[ 1 + \cos\big( (\omega_1 - \omega_2) t_{\rm col} \big) \right]_{\rm max} = 2 \;\Rightarrow\; (\omega_1 - \omega_2)_{\rm max} \, t_{\rm col} = 2 k \pi , \quad (28)$$

if ∆ω_max/µ = (ω1 − ω2)/µ ∼ 0.03 k. On the other hand, the minimum (destructive interference) for the same time t_col is found for

$$\left[ 1 + \cos\big( (\omega_1 - \omega_2) t_{\rm col} \big) \right]_{\rm min} = 0 \;\Rightarrow\; (\omega_1 - \omega_2)_{\rm min} \, t_{\rm col} = (2k + 1) \pi , \quad (29)$$

which gives ∆ω_min/µ = (ω1 − ω2)/µ ∼ 0.015 (2k + 1). This simple linear analysis explains the periodicity between maxima and minima observed in Fig. 9, which therefore depends on the initial distance between the stars. This analysis, however, must be regarded as an approximation, since the emission also depends on other factors such as the dynamics of the collision, the radius of the stars and the time of merger, which could give rise to some additional features in the GW energy, as hinted by the bottom right panel of Fig. 9. Thus, we anticipate that an increase in Dµ will increase t_col and will decrease both ∆ω_max and ∆ω_min.
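The extrema spacings implied by the beat envelope, and their dependence on the collision time, are easy to check numerically. In the sketch below the spacings follow directly from the envelope 1 + cos((ω1 − ω2) t_col); the Newtonian free-fall estimate for t_col is our own rough stand-in (assumed star masses of ~0.75/µ each), not the value measured in the relativistic simulations:

```python
import math

def fall_time(d, m_total):
    """Newtonian free-fall time from rest at separation d for total mass
    m_total (half-period of a degenerate radial two-body orbit); a crude
    stand-in for the collision time seen in the simulations."""
    return math.pi * math.sqrt(d ** 3 / (8.0 * m_total))

def extrema_spacing(t_col):
    """Spacings in (omega1 - omega2)/mu between consecutive GW-energy
    maxima and minima implied by the envelope 1 + cos((w1 - w2) * t_col)."""
    return 2.0 * math.pi / t_col, math.pi / t_col  # (maxima, minima)

# D*mu = 40 and two stars of mass ~0.75/mu each (illustrative values):
t_est = fall_time(40.0, 1.5)            # ~230, same ballpark as t_col*mu ~ 210
d_max, d_min = extrema_spacing(210.0)   # using the measured t_col*mu ~ 210
print(round(d_max, 4), round(d_min, 4))  # -> 0.0299 0.015
```

With t_col µ ≈ 210 this reproduces the observed spacings ∆ω_max/µ ≈ 0.03 and ∆ω_min/µ ≈ 0.015 between consecutive maxima and minima.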
Accordingly, if the whole merger takes more time to reach the collapse, the factor ∆ω/µ will be low enough that its period becomes longer than the life of the transient hypermassive PS. Depending on the amplitude of the envelope, the GW emission could be critically affected. Gravitational radiation greatly depends on the distribution and amplitude of the energy density. The square of the amplitude of the Proca field is proportional to the energy density (see Eq. (2)) and can be related to the amplitude of the GW emission. To illustrate this, Fig. 10 shows the total energy emitted for the models with fixed ω1/µ = 0.8700 computed from the simulations, together with the estimated value of the Proca field amplitude from Eq. (27) and a variation of Eq. (27), as functions of ω2/µ at t_col = 210. Removing the drift of the energy due to dynamics and variations in the total mass, we find an excellent overall agreement, in particular in the location of the maxima and minima. We note that we do not find null GW emission when ∆ω/µ = (2k + 1) 0.015, probably because there is no perfect cancellation of the Proca field during the whole merger process. We stress that while this linear argument is a remarkably good approximation, it is not really valid to explain a complete destructive interference of the stars.

D. The role of the initial relative distance

We now explore the impact of (implicitly) varying the relative phase at merger by changing the time of the collision t_col µ. To this end, we place the stars at two additional initial separations, namely Dµ = 30 and 45. We repeat the simulations with these setups for binaries with fixed primary frequency ω1/µ = 0.8000 and secondary frequency in the interval ω2/µ ∈ [0.8000, 0.9300], varied in steps of ∆ω2/µ = 0.0025. Our results are shown in Fig. 11. The top left panel corresponds to the energy radiated in GWs.
This exhibits the same global decreasing trend and periodic oscillations, with local maxima and minima as a function of ω2/µ, for all values of the initial separation distance. However, ∆ω_max/µ and ∆ω_min/µ are found to depend on Dµ (and t_col µ): the new collision times are t_col µ ∼ 135 for Dµ = 30 and t_col µ ∼ 250 for Dµ = 45, and the spacing of the maxima changes accordingly (top left panel of Fig. 11). In addition, the top right panel of Fig. 11 shows the GW energy emitted by an unequal-mass binary with ω1/µ = 0.8000 and ω2/µ = 0.8450 as a function of the initial separation. The GW energy does not depend monotonically on the distance but instead displays an oscillatory pattern. Moreover, the bottom panels show the ℓ = m = 2 gravitational waveforms for two unequal-mass cases and three initial separations. These two plots illustrate that the initial distance is an important parameter of the system, as it can change the morphology and energy of the emitted GWs for the same binary stars.

E. The role of the initial phases

The fact that the initial separation plays an important role in the dynamics and interactions of the two PSs raises the question of whether the initial phase of the stars may also cause a similar effect. Note that up to now we have kept the same phase for both stars (zero initial phase difference), as we have focused on the simplest possible scenario. Recall that while the energy density of PSs is axisymmetric, their real and imaginary parts are not. Therefore, different phases lead to different orientations of the real and imaginary parts at the time of the collision, which in turn yields different results that could potentially reveal the inner complex structure of these stars (for instance, the dipolar distribution of the real and imaginary parts of the Proca field for an m = 1 spinning star). To test this idea, we perform several simulations of a binary with ω1/µ = 0.8000 and ω2/µ = 0.8450, varying the initial phase ε in Eq. (16).
To check that the key parameter at play is the relative phase of the stars and not their global ones, we first vary the phase of both stars, keeping the phase difference always equal to zero, ∆ε = 0 with ε1 = ε2. Fig. 12 shows the time evolution of the energy density (leftmost column) and the real part of the scalar potential X_φ for different values of the phase, ε = {0, π/4, π/2} (remaining columns). The first column shows that even when the orientation of the components of the Proca field (in this case the scalar potential) is different, there is no change at the level of the energy density. No differences are found in the dynamics of the binary, the final object, or the gravitational waveform. These are all completely independent of the initial phase. Therefore, the inner structure and dipolar distribution (m = 1) of the real and imaginary parts of the star do not play a role in the collisions. We note that the real part of the scalar potential shows an m = 5 distribution after the collapse and black hole formation (as discussed in [48]; see also [35]) that could trigger the development of the superradiant instability, depending on the final spin of the black hole. However, this would happen on a timescale beyond current computational capabilities. Next, we study the effect of varying the initial separation in the equal-mass case. The top panel of Fig. 13 shows the total GW energy emitted in equal-mass PS head-on collisions as a function of the stars' frequency for four initial distances, Dµ = {30, 35, 40, 45}; correspondingly, the bottom panel of Fig. 13 exhibits the GW energy as a function of distance for an equal-mass binary with ω/µ = 0.8000. In both cases we observe the same trend discussed in the top left panel of Fig. 9, together with the corresponding oscillatory pattern for fixed ω/µ. We note that, unlike the unequal-mass case, the top panel of Fig. 13 lacks the maxima and minima arising from the constructive and destructive interferences, as in the equal-mass case we always have ω1 = ω2. Finally, we explore how the relative phase ∆ε = |ε1 − ε2| impacts the GW energy.
Again, even if we change the initial phase, the initial energy density of the stars is independent of the phase. However, the relative phase will change the interference pattern and the dynamics of the Proca field at the time of the collision. From the amplitude of the Proca field,

|A|² = Re(A)² + Im(A)² ∼ 4 cos²[((ω1 − ω2) t + ∆ε) / 2] = 2 [1 + cos((ω1 − ω2) t + ∆ε)],   (30)

we see that varying ∆ε produces a similar effect to changing the initial distance separation (and t_col). The stars merge with a different internal configuration, producing a different GW emission. This is indeed what we find, as shown in Fig. 14, where we plot the GW energy for one equal-mass case (ω/µ = 0.8000) and one unequal-mass case (ω1/µ = 0.8000, ω2/µ = 0.8450), together with the analytical fit from Eq. (30), taking into account that there is no perfect destructive interference that would lead to zero emission. Compared to the ∆ε = 0 situation, the most luminous collision now emits about 25% more energy in the form of GWs in the equal-mass case and about 35% more in the unequal-mass case. The relative phase ∆ε also alters the mode-emission structure of the source and the frequency content of the modes (or, equivalently, their morphology). In particular, the left panel of Fig. 15 shows the frequency content, by means of the amplitude of the Fourier transform, of the quadrupole ℓ = m = 2 mode of an unequal-mass PS merger as a function of ∆ε. Variations of this parameter have an influence not only on the amplitude of the mode, thereby impacting the observability of the source, but also greatly modify its frequency content. This suggests that this effect (or rather the parameter ∆ε) could actually be measurable in a Bayesian parameter inference framework.
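A direct consequence of Eq. (30) is that a phase offset ∆ε is degenerate with a shift of (ω1 − ω2)t_col: the amplitude at merger is unchanged if the phase offset is traded for an equivalent frequency offset. The sketch below illustrates this (the numerical values are purely illustrative; names are ours):

```python
import numpy as np

def amp_sq(delta_omega, t, delta_eps=0.0):
    """Squared Proca-field amplitude at time t, Eq. (30)."""
    return 2.0 * (1.0 + np.cos(delta_omega * t + delta_eps))

T_COL = 210.0            # approximate collision time for D*mu = 40
dw, deps = 0.0450, 0.7   # illustrative frequency difference and phase offset

# Trading the phase offset for a frequency offset leaves the merger amplitude unchanged
equivalent_dw = dw + deps / T_COL
```

In the equal-mass limit (dw = 0) the amplitude at merger is controlled purely by ∆ε, ranging from fully constructive (∆ε = 0) to fully destructive (∆ε = π) in this linear picture.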
The effect of ∆ε in equal-mass mergers is particularly useful to understand the potential impact of this parameter in GW data analysis, as a possible smoking gun to distinguish PS mergers from vanilla black-hole mergers (equal masses and aligned, or zero, spins). In this situation, for the case of black-hole mergers, odd-m emission modes are exactly suppressed due to the symmetry of the source. The same is true, as expected, for the case of PSs when we set ∆ε = 0. The right panel of Fig. 15, however, shows that the introduction of ∆ε ≠ 0 activates the ℓ = m = 3 mode during merger and ringdown. This reflects the fact that the phase difference between the stars breaks the symmetry of the source. While at the moment we cannot perform simulations for the case of quasi-circular PS mergers (for lack of constraint-satisfying initial data), we anticipate that this effect would lead to an inconsistency between the binary parameters inferred from the inspiral stage and the corresponding ringdown emission modes of the final black hole, if the source were assumed at face value to be a black-hole merger. Moreover, such a signature would represent a smoking gun of the non-black-hole nature of the merging objects. We leave the quantitative exploration of this possibility for future work.

V. CONCLUSIONS

Black holes and neutron stars are widely considered the most plausible compact objects populating the Universe. Theoretical proposals for other types of compact objects, dubbed dark or "exotic" compact objects (ECOs), have, however, also been put forward (see e.g. [1] and references therein). The brand new field of gravitational-wave astronomy potentially offers the intriguing opportunity to probe those theoretical proposals.
In particular, the study and characterization of the GWs from collisions of ECOs (the building of waveform template banks) seems a key requisite towards that goal, as such datasets allow for direct comparisons with the signals produced in mergers of black holes and neutron stars. The expectation is that the distinct nature of the different families of compact objects is somewhat encoded in the GW signals each member of the class emits, hence offering a way to single them out. In order to identify the specific and subtle signatures of each type of object in their GW emission, it is crucial to produce accurate signal models that can be compared to the data collected by detectors and that can also reveal new specific phenomenology. Presently, numerical relativity offers the most accurate way to do so, particularly in the highly non-linear, strong-gravity situations produced when two compact objects merge. In this paper we have presented a catalogue of nearly 800 simulations of head-on mergers of PSs. We recently used this dataset to search for signatures of these objects in existing LIGO-Virgo data [24, 26]. Here, we have performed a systematic study of the properties and gravitational-wave emission of these physical systems. Our study has revealed that the relative phase of the two PSs, an intrinsic parameter of bosonic stars that is absent for the case of black-hole mergers, has a strong impact on the GW emission. This parameter, which reflects the wave-like nature of the PSs by controlling the way the Proca field interacts with itself, impacts not only the amplitude of the emission modes (and therefore the total emitted energy) but also the frequency content of the signal and its mode structure. Interestingly, these findings suggest that such an intrinsic parameter of PS binaries could be measurable.
As a particular illustration, we have shown here that the asymmetry induced by phase differences in an equal-mass PS head-on collision can trigger odd-parity (odd-m) modes during the merger-ringdown stage, which are completely suppressed for the case of equal-mass (and equal-spin) binary black hole mergers. We argue that this may evidence the non-black-hole nature of the merging objects. The LVK event GW190521 has represented the first example of a GW signal that can be explained both in the classic framework of binary black hole mergers and in the less common framework of PS mergers [24]. However, to conclusively probe the existence of the latter class of ECOs will require either the accumulation of small pieces of evidence in favour of this scenario, through the systematic comparison of signals to waveform catalogues, and/or the observation of a signal with distinct signatures that cannot be reproduced by black-hole mergers, by current or future LIGO-Virgo-KAGRA detectors or by third-generation detectors such as the Einstein Telescope [53]. On the one hand, the GW catalogue we have discussed in this paper represents the first step towards such systematic comparisons. On the other hand, our results suggest that the wave-like nature of PSs, via the impact of the relative phase parameter ∆ε on the GW emission, might serve as a distinct smoking gun for the existence of these objects. In this work we have focused on the particular case of head-on collisions due to its technical and computational simplicity. In the future we plan to extend the catalogue to eccentric and orbital quasi-circular mergers of bosonic stars. This will help us to firmly establish whether the GW interference patterns found are specific to, or can be amplified by, the geometry of the collisions considered in this paper, and thus gauge the potential imprint they may actually have in the GW emission.
Jµ² = 0.750. We evolve the star up to a time tµ = 8000. The top panel of Fig. 4 shows the evolution of the amplitude of the real part of X_φ at the end of the simulation, together with the analytical value. The numerical result is in excellent agreement with the analytical estimate. In the middle panel we show, for three different resolutions, namely a fixed grid with four refinement levels and dx = {0.8, 0.4, 0.2}/µ in the finest level, the time evolution of the Proca energy and angular momentum given by the Komar integrals.

We briefly comment here on the convergence analysis we carried out to assess the quality of our simulations. In Fig. 16 we plot the gravitational waves from an equal-mass (ω1/µ = ω2/µ = 0.8000) and an unequal-mass (ω1/µ = 0.8000 and ω2/µ = 0.8450) collision, using four different resolutions with dx = {0.046875, 0.0625, 0.09375, 0.125}/µ in the finest level. We obtain fourth-order convergence. The initial transient is due to spurious radiation in the initial data, which is not constraint-satisfying and, therefore, does not converge with resolution.

FIG. 1: Top panel: Sequence of equilibrium configurations of nodeless fundamental spinning m = 1 PSs. The PSs develop a pair of light rings for ω/µ ≲ 0.711 and an ergo-region for ω/µ ≲ 0.602. The maximal mass is attained at ω/µ ≈ 0.562. Bottom panel: compactness of the Proca stars as a function of the oscillation frequency ω/µ in the range considered in this study.

FIG. 2: Hamiltonian and momentum constraint violations of the initial data as a function of the distance Dµ for the equal-mass model with ω/µ = 0.8000. The vertical black dashed line corresponds to our choice of the initial separation Dµ = 40 for our set of simulations, in the roughly constant region of the L2-norm.

FIG. 3: Main dataset of PS binaries discussed in this work, labelled by the value of the frequencies ω1/µ and ω2/µ of each star. In all cases, the stars are released from rest at a distance of Dµ = 40 and have the same initial phase.

FIG. 4: Top panel: Evolution of the amplitude of the real part of X_φ. The solid red line corresponds to the analytical value cos ωt with ω = 0.90 and the blue circles to the numerical solution. Middle panel: Evolution of the total Proca energy and angular momentum for the model with ω/µ = 0.90 for three different resolutions. Bottom panel: same as the middle panel but for the minimum value of the lapse function α.

FIG. 5: Equatorial (xy) plane snapshots of the energy density in log scale during the evolution of the collisions of spinning PSs for different models with fixed ω1/µ = 0.8300 and varying the frequency ω2/µ of the secondary star. Time runs from top to bottom and is given in code units with G = c = µ = 1.

FIG. 6: Same as Fig. 5 but for PS models with ω1/µ = 0.9100 and different values of ω2/µ.

FIG. 7: ℓ = m = 2 mode of rΨ4 for six equal-mass PS collisions of increasing frequency. For an animation of the full set of GW signals from equal-mass collisions see [49].

FIG. 8: ℓ = m = 2 (blue lines) and ℓ = m = 3 (red lines) modes of rΨ4 for six unequal-mass PS collisions with fixed ω1/µ = 0.8300. For animations of the full set of GW signals from unequal-mass collisions see [49].

FIG. 9: Total GW energy as a function of ω/µ in the equal-mass case (top left panel), and three unequal-mass cases: ω1/µ = 0.8300 (top right panel), ω1/µ = 0.8950 (bottom left panel) and ω1/µ = 0.9100 (bottom right panel).

FIG. 10: Total GW energy for PS head-on collisions with fixed ω1/µ = 0.8700 and varying ω2/µ. The magenta lines correspond to the behaviour of the estimated square of the Proca field amplitude computed from Eq. (27) (dashed line) and from the formula 0.0047 cos((0.87 − ω2)t_col) + 0.001 (solid line) as a function of ω2/µ at t_col = 210. We fit the analytic expressions to the peak of the model with ω1/µ = ω2/µ = 0.8700.

FIG. 11: Top left panel: Total GW energy for PS head-on collisions with fixed ω1/µ = 0.8000 and varying ω2/µ for three different initial distances Dµ = [30, 40, 45]. Top right panel: Total GW energy for the unequal-mass case with ω1/µ = 0.8000 and ω2/µ = 0.8450 as a function of the distance Dµ. Bottom left panel: ℓ = m = 2 waveforms for ω1/µ = 0.8950. Bottom right panel: ℓ = m = 2 waveforms for ω1/µ = 0.9100.

FIG. 12: Equatorial (xy) plane snapshots of the energy density (left column) and the real part of the scalar potential X_φ (remaining columns) taken during the time evolution of the collisions of spinning PSs with ω1/µ = 0.8000 and ω2/µ = 0.8450, changing the initial phase of the stars. Time runs from top to bottom and is given in code units with G = c = µ = 1.

FIG. 13: Top panel: GW energy as a function of frequency for equal-mass head-on collisions for four different initial distances Dµ = [30, 35, 40, 45]. Bottom panel: GW energy as a function of the initial distance Dµ for the equal-mass case with ω/µ = 0.8000.

FIG. 14: Top panel: GW energy for head-on collisions with fixed ω/µ = 0.8000 and non-zero phase difference ∆ε between the stars. The magenta line depicts the behaviour of the square of the Proca field amplitude with ∆ε as computed from Eq. (30). Bottom panel: Same as the top panel but for the unequal-mass case with ω1/µ = 0.8000 and ω2/µ = 0.8450.

FIG. 15: Left panel: Absolute value of the Fourier transform of the quadrupole mode of Ψ4 for an unequal-mass PS head-on collision as a function of the relative phase ∆ε of the two stars. A clear dependence of both the mode amplitude and frequency content on ∆ε is observed. Right panel: Absolute value of the ℓ = m = 3 mode of Ψ4 for an equal-mass collision as a function of ∆ε. When ∆ε = 0 this mode is completely suppressed (as in the black-hole merger case for equal masses and aligned spins), while variations of this parameter trigger this mode during merger and ringdown.

FIG. 16: Convergence study for the equal-mass case with ω1/µ = ω2/µ = 0.8000 and the unequal-mass case with ω1/µ = 0.8000 and ω2/µ = 0.8450. The first and third panels show the respective rΨ4 (ℓ = m = 2) for different simulation resolution levels. The second and fourth panels show the difference between different resolutions scaled for fourth-order convergence.

[1] V. Cardoso and P. Pani, Living Reviews in Relativity 22, 4 (2019), 1904.05363.
[2] B. P. Abbott et al., Physical Review Letters 116, 061102 (2016), 1602.03837.
[3] B. Abbott et al. (Virgo, LIGO Scientific), Phys. Rev. Lett. 119, 161101 (2017), 1710.05832.
[4] B. P. Abbott et al. (Virgo, LIGO Scientific), Phys. Rev. Lett. 119, 141101 (2017), 1709.09660.
[5] B. P. Abbott et al.
(LIGO Scientific, Virgo), Phys. Rev. X 9, 031040 (2019), 1811.12907.
[6] R. Abbott et al. (LIGO Scientific, Virgo) (2020), 2010.14527.
[7] R. Abbott et al., Physical Review Letters 125, 101102 (2020).
[8] R. Abbott et al., arXiv preprint arXiv:2111.03606 (2021).
[9] D. J. Kaup, Phys. Rev. 172, 1331 (1968).
[10] R. Ruffini and S. Bonazzola, Phys. Rev. 187, 1767 (1969).
[11] A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper, and J. March-Russell, Phys. Rev. D 81, 123530 (2010), 0905.4720.
[12] A. Arvanitaki and S. Dubovsky, Phys. Rev. D 83, 044026 (2011), 1004.3558.
[13] F. F. Freitas, C. A. Herdeiro, A. P. Morais, A. Onofre, R. Pasechnik, E. Radu, N. Sanchis-Gual, and R. Santos, Journal of Cosmology and Astroparticle Physics 2021, 047 (2021).
[14] R. Brito, V. Cardoso, C. A. R. Herdeiro, and E. Radu, Phys. Lett. B 752, 291 (2016), 1508.05395.
[15] C. A. R. Herdeiro, J. Kunz, I. Perapechka, E. Radu, and Y. Shnir, Phys. Lett. B 812, 136027 (2021), 2008.10608.
[16] N. Sanchis-Gual, F. Di Giovanni, C. Herdeiro, E. Radu, and J. A. Font, Physical Review Letters 126, 241105 (2021).
[17] C. Herdeiro, I. Perapechka, E. Radu, and Y. Shnir, Phys. Lett. B 797, 134845 (2019), 1906.05386.
[18] S. L. Liebling and C. Palenzuela, Living Reviews in Relativity 20, 1 (2017).
[19] E. Seidel and W.-M. Suen, Physical Review Letters 72, 2516 (1994).
[20] F. Di Giovanni, N. Sanchis-Gual, C. A. R. Herdeiro, and J. A. Font, Phys. Rev. D 98, 064044 (2018), 1803.04802.
[21] N. Sanchis-Gual, F. Di Giovanni, M. Zilhão, C. Herdeiro, P. Cerdá-Durán, J. A. Font, and E. Radu, Physical Review Letters 123, 221101 (2019).
[22] F. Di Giovanni, N. Sanchis-Gual, P. Cerdá-Durán, M. Zilhão, C. Herdeiro, J. A. Font, and E. Radu, Phys. Rev. D 102, 124009 (2020), 2010.05845.
[23] N. Siemonsen and W. E. East, Phys. Rev. D 103, 044022 (2021), 2011.08247.
[24] J. Calderón Bustillo, N. Sanchis-Gual, A. Torres-Forné, J. A. Font, A. Vajpeyi, R. Smith, C. Herdeiro, E. Radu, and S. H. W. Leong, Phys. Rev. Lett. 126, 081101 (2021), 2009.05376.
[25] J. C. Bustillo, I. C. F. Wong, N. Sanchis-Gual, S. H. W. Leong, A. Torres-Forne, K. Chandra, J. A. Font, C. Herdeiro, E. Radu, and T. G. F. Li, Gravitational-wave parameter inference with the Newman-Penrose scalar (2022), arXiv:2205.15029.
[26] J. C. Bustillo, N. Sanchis-Gual, S. H. W. Leong, K. Chandra, A. Torres-Forne, J. A. Font, C. Herdeiro, E. Radu, I. C. F. Wong, and T. G. F. Li, Searching for vector boson-star mergers within LIGO-Virgo intermediate-mass black-hole merger candidates (2022), arXiv:2206.02551.
[27] N. Sanchis-Gual, C. Herdeiro, E. Radu, J. C. Degollado, and J. A. Font, Phys. Rev. D 95, 104028 (2017).
[28] C. Herdeiro, E. Radu, and H. Runarsson, Class. Quant. Grav. 33, 154001 (2016), 1603.02687.
[29] P. V. Cunha, E. Berti, and C. A. Herdeiro, Physical Review Letters 119, 251102 (2017).
[30] P. V. P. Cunha, C. Herdeiro, E. Radu, and N. Sanchis-Gual (2022), 2207.13713.
[31] C. Palenzuela, I. Olabarrieta, L. Lehner, and S. L. Liebling, Physical Review D 75, 064005 (2007).
[32] M. Bezares, C. Palenzuela, and C. Bona, Physical Review D 95, 124005 (2017).
[33] N. Sanchis-Gual, C. Herdeiro, J. A. Font, E. Radu, and F. Di Giovanni, Phys. Rev. D 99, 024017 (2019).
[34] V. Jaramillo, N. Sanchis-Gual, J. Barranco, A. Bernal, J. C. Degollado, C. Herdeiro, M. Megevand, and D. Núñez, arXiv preprint arXiv:2202.00696 (2022).
[35] M. Bezares, M. Bošković, S. Liebling, C. Palenzuela, P. Pani, and E. Barausse, Phys. Rev. D 105, 064067 (2022), 2201.06113.
[36] N. Sanchis-Gual, M. Zilhão, and V. Cardoso (2022), 2207.05494.
[37] T. Helfer, U. Sperhake, R. Croft, M. Radia, B.-X. Ge, and E. A. Lim, Classical and Quantum Gravity 39, 074001 (2022).
[38] R. Croft, T. Helfer, B.-X. Ge, M. Radia, T. Evstafyeva, E. A. Lim, U. Sperhake, and K. Clough, arXiv preprint arXiv:2207.05690 (2022).
[39] C. Palenzuela, L. Lehner, and S. L. Liebling, Physical Review D 77, 044036 (2008).
[40] Einstein Toolkit, URL http://einsteintoolkit.org (2012).
[41] F. Löffler, Classical Quantum Gravity 29, 115001 (2012).
[42] D. Brown, P. Diener, O. Sarbach, E. Schnetter, and M. Tiglio, Physical Review D 79, 044023 (2009).
[43] C. Reisswig, C. D. Ott, U. Sperhake, and E. Schnetter, Physical Review D 83, 064008 (2011).
[44] M. Zilhão, H. Witek, and V. Cardoso, Class. Quant. Grav. 32, 234003 (2015), 1505.00797.
[45] H. Witek and M. Zilhão, Canuda, https://bitbucket.org/canuda/.
[46] H. Witek, M. Zilhao, G. Bozzola, M. Elley, G. Ficarra, T. Ikeda, N. Sanchis-Gual, and H. Silva (2021), URL https://doi.org/10.5281/zenodo.3565474.
[47] E. Newman and R. Penrose, J. Math. Phys. 3, 566 (1962).
[48] N. Sanchis-Gual, M. Zilhão, C. Herdeiro, F. Di Giovanni, J. A. Font, and E. Radu, Physical Review D 102, 101504 (2020).
[49] Gravitational-wave emission videos, http://gravitation.web.ua.pt/node/3922 (2022).
[50] J. C. Bustillo, N. Sanchis-Gual, A. Torres-Forné, and J. A. Font, Physical Review Letters 126, 201101 (2021).
[51] C.-W. Lai, arXiv preprint gr-qc/0410040 (2004).
[52] D.-I. Choi, K. C. Lai, M. W. Choptuik, E. W. Hirschmann, S. L. Liebling, and F. Pretorius, preprint (2010).
[53] S. Hild et al., Classical and Quantum Gravity 28, 094013 (2011), 1012.0908.
[]
[ "Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Disease-oriented image embedding with pseudo-scanner standardization for content-based image retrieval on 3D brain MRI", "Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Disease-oriented image embedding with pseudo-scanner standardization for content-based image retrieval on 3D brain MRI", "Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Disease-oriented image embedding with pseudo-scanner standardization for content-based image retrieval on 3D brain MRI", "Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Disease-oriented image embedding with pseudo-scanner standardization for content-based image retrieval on 3D brain MRI" ]
[ "Hayato Arai \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Yuto Onga \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Kumpei Ikuta \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Yusuke Chayama \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Hitoshi Iyatomi [email protected] \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "ANDKenichi Oishi [email protected] \nDepartment of Radiology and Radiological Science\nJohns Hopkins University School of Medicine\n21205BaltimoreMDUSA\n", "Hayato Arai \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Yuto Onga \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Kumpei Ikuta \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Yusuke Chayama \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "Hitoshi Iyatomi [email protected] \nDepartment of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan\n", "ANDKenichi Oishi [email protected] \nDepartment of Radiology and Radiological Science\nJohns Hopkins University School of Medicine\n21205BaltimoreMDUSA\n" ]
[ "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Radiology and Radiological Science\nJohns Hopkins University School of Medicine\n21205BaltimoreMDUSA", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Applied Informatics\nGraduate School of Science and Engineering\nHosei University\n184-8584TokyoJapan", "Department of Radiology and Radiological Science\nJohns Hopkins University School of Medicine\n21205BaltimoreMDUSA" ]
[]
To build a robust and practical content-based image retrieval (CBIR) system that is applicable to a clinical brain MRI database, we propose a new framework -Disease-oriented image embedding with pseudo-scanner standardization (DI-PSS) -that consists of two core techniques, data harmonization and a dimension reduction algorithm. Our DI-PSS uses skull stripping and CycleGAN-based image transformations that map to a standard brain followed by transformation into a brain image taken with a given reference scanner. Then, our 3D convolutional autoencoders (3D-CAE) with deep metric learning acquire a low-dimensional embedding that better reflects the characteristics of the disease. The effectiveness of our proposed framework was tested on the T1-weighted MRIs selected from the Alzheimer's Disease Neuroimaging Initiative and the Parkinson's Progression Markers Initiative. We confirmed that our PSS greatly reduced the variability of low-dimensional embeddings caused by different scanners and datasets. Compared with the baseline condition, our PSS reduced the variability in the distance from Alzheimer's disease (AD) to clinically normal (CN) and Parkinson disease (PD) cases by 15.8-22.6% and 18.0-29.9%, respectively. These properties allow DI-PSS to generate lower dimensional representations that are more amenable to disease classification. In AD and CN classification experiments based on spectral clustering, PSS improved the average accuracy and macro-F1 by 6.2% and 10.7%, respectively. Given the potential of the DI-PSS for harmonizing images scanned by MRI scanners that were not used to scan the training data, we expect that the DI-PSS is suitable for application to a large number of legacy MRIs scanned in heterogeneous environments.
10.1109/access.2021.3129105
[ "https://export.arxiv.org/pdf/2108.06518v1.pdf" ]
237,091,371
2108.06518
593efd4a5dfb9ff6c5defcd411aebbf19d9944d7
Disease-oriented image embedding with pseudo-scanner standardization for content-based image retrieval on 3D brain MRI

Hayato Arai, Yuto Onga, Kumpei Ikuta, Yusuke Chayama, and Hitoshi Iyatomi (Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, 184-8584 Tokyo, Japan), and Kenichi Oishi (Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 21205 Baltimore, MD, USA), for the Alzheimer's Disease Neuroimaging Initiative and the Parkinson's Progression Markers Initiative

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. 10.1109/ACCESS.2021.DOI

Corresponding author: Hitoshi Iyatomi. Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report.
A complete listing of ADNI investigators can be found at:

INDEX TERMS: ADNI, CBIR, convolutional autoencoders, CycleGAN, data harmonization, data standardization, metric learning, MRI, PPMI
I. INTRODUCTION

In the new era of Open Science [1], data sharing has become increasingly crucial for efficient and fair development of science and industry. Especially in the field of medical image science, various datasets have been released and used for the development of new methods and benchmarks. There have been attempts to create publicly open databases consisting of medical images, demographic data, and clinical information, such as ADNI, AIBL, PPMI, 4RTN, PING, ABCD, and UK Biobank. In the near future, clinical images acquired with medical indications will become available for research use. Big data, consisting of large amounts of brain magnetic resonance (MR) images and corresponding medical records, could provide new evidence for the diagnosis and treatment of various diseases.

arXiv:2108.06518v1 [cs.CV] 14 Aug 2021

Clearly, search technology is essential for the practical and effective use of such big data. Currently, text-based searching is widely used for the retrieval of brain MR images. However, since this approach requires skills and experience during retrieval and data registration, there is a strong demand from the field to realize content-based image retrieval (CBIR) [2]. To build a CBIR system that is feasible for brain MR imaging (MRI) databases, obtaining an appropriate and robust low-dimensional representation of the original MR images that reflects the characteristics of the disease in focus is extremely important. Various methods have been proposed, including those based on classical feature description [3]-[5], anatomical phenotypes [6], and deep learning techniques [7]-[9]. The latter two techniques [8], [9] acquire similar low-dimensional representations for similar disease data by introducing the idea of distance metric learning [10], [11]. Their low-dimensional representations adequately capture disease characteristics rather than individual variations seen on gyrification patterns in the brain.
However, the application of these methods to a heterogeneous database containing MRIs from various scanners and scan protocols is hampered by scanner and protocol bias, which is not negligible. In brain MRI, such non-biological experimental variations (i.e., magnetic field strength, scanner manufacturer, reconstruction method) resulting from differences in scanner characteristics and protocols can affect the images in various ways and have a significant impact on the subsequent processing [9], [12]-[16]. Wachinger et al. [16] analyzed 35,320 MR images from 17 open datasets and performed the 'Name That Dataset' test, that is, guessing which dataset an image comes from based on the image alone. They reported a prediction accuracy of 71.5% based only on volume and thickness information from 70% of the training data. This is evidence that clear dataset-specific features remain. Removing those variabilities is essential in multi-site and long-term studies and for building a robust CBIR system.

There has been an increase in recent research on data harmonization, i.e., eliminating or reducing variation that is not intrinsically related to the brain's biological features. Perhaps the most straightforward image harmonization approach is to reduce the variations in the intensity profile [17], [18]. In the methods in both [17] and [18], correction of the luminance distribution for each sub-region reduces the variability of the underlying statistics between images, whereas histogram equalization reduces the variability of neuroradiological features. However, these methods are limited to approximating rudimentary statistics that can be calculated from images, and they are based on the assumption that the intensity histogram is similar among images. This assumption is invalid when images that contain pathological findings that affect the intensity profile are included.
While some improvement in unintended image variability can be expected, the effect on practical tests that utilize data from multiple sites is unknown.

In the field of genomics, Johnson et al. [19] proposed an empirical Bayes-based correction method to reduce batch effects, which are non-biological differences originating from each batch of micro-array experiments obtained from multiple tests. This effective statistical bias reduction method is now called ComBat, and it has recently been published as a tool for MRI harmonization [20]. This tool has been applied in several studies [14], [16], [21], [22]. The ComBat-based methods standardize each cortical region based on an additive and multiplicative linear transform to compensate for variability. Some limitations of these models have been pointed out, such as the following: (i) they might be insufficient for complex multi-site and area-level mapping, (ii) the assumption of certain prior probabilities (Gaussian or inverse gamma) is not always appropriate, and (iii) they are susceptible to outliers [23].

Recently, advancements in machine learning techniques [23]-[26] have provided practical solutions for MR image harmonization. DeepHarmony [24] uses a fully convolutional U-net to perform the harmonization of scanners. The researchers used an MRI dataset of multiple sclerosis patients in a longitudinal clinical setting to evaluate the effect of protocol changes on atrophy measures in a clinical study. As a result, DeepHarmony confirmed a significant improvement in the consistency of volume quantification across scanning protocols. This study was practical in that it aimed to directly standardize MR images using deep learning to achieve long-term, multi-institutional quantitative diagnosis. However, this model requires "traveling head" data (participants scanned using multiple MRI scanners) to train the model. Zhao et al.
[23] attempted to standardize a group of MR images of infants taken at multiple sites into a reference group using CycleGAN [27], which has a U-net structure in the generator. The experiment validated the evaluation of cortical thickness with several indices (i.e., ROI (region-of-interest)-based measures and the distribution of low-dimensional representations). They argued that the retention of the patient's age group was superior to ComBat in evaluating group differences.

Moyer et al. [25] proposed a sophisticated training technique to reconstruct bias-free MR images by acquiring a low-dimensional representation independent of the scanner and condition. Their method is an hourglass-type unsupervised learning model based on variational autoencoders (VAE) with an encoder-decoder configuration. The input x and output x' are the same MR images, and their low-dimensional representation is z (i.e., x → z → x'). The model is trained with the constraint that z and site- and scanner-specific information s are orthogonal (actually relaxed), such that the s in z is eliminated. They demonstrated the advantages of their method on diffusion MRI, but their technological framework is applicable to other modalities.

Dinsdale et al. [26] also proposed a data harmonization method based on the idea of domain adaptation [28]. Their model uses adversarial learning, where the feature extractor consisting of convolutional neural networks (CNN) following the input is branched into a fully connected net for the original task (e.g., segmentation and classification) and other fully connected nets for domain discriminators (e.g., scanner type or site prediction) to make the domain unknown while improving the accuracy of the original task. They have confirmed its effectiveness in age estimation and segmentation tasks. The methods developed by Moyer et al. and Dinsdale et al.
aim to generate a low-dimensional representation with "no site information", and they are highly practical and generalizable techniques for data harmonization. Nevertheless, for CBIR, a method that is applicable to a large number of legacy images is necessary. Here, it is not realistic to collect images from each site and train the model to harmonize them. Practically, a method that can convert images that are heterogeneous in terms of scanners and scan parameters into images as if scanned in a given pseudo-"standard" environment, simply by applying a trained model, is highly desired.

In this paper, we propose a novel framework called disease-oriented image embedding with pseudo-scanner standardization (DI-PSS) to obtain a low-dimensional representation of MR images for practical CBIR implementation. The PSS, the key element of the proposal, corrects the bias caused by different scanning environments and converts the images so that it is as if the same equipment had scanned them. Our experiments on the ADNI and PPMI datasets, consisting of MR images captured by three manufacturers' MRI systems, confirmed that the proposed DI-PSS plays an important role in realizing CBIR. The highlights of this paper's contributions are as follows:

• To the best of the authors' knowledge, this is the first study of the acquisition and quantitative evaluation of an effective low-dimensional representation of brain MR images for CBIR, including scanner harmonization.

• Our DI-PSS framework reduces undesirable differences caused by differences in scanning environments (e.g., scanner, protocol, dataset) by converting MR images to images taken on a predefined pseudo-standard scanner, and a deep network using metric learning acquires a low-dimensional representation that better represents the characteristics of the disease.
• DI-PSS provides appropriately good low-dimensional representations for images from other vendors' scanners, diseases, and datasets that are not used for learning image harmonization. This is an important feature for practical and robust CBIR, which applies to a large amount of legacy MRIs scanned in heterogeneous environments.

II. CLARIFICATION OF THE ISSUES ADDRESSED IN THIS PAPER

A. OVERLOOKING THE PROBLEM

We begin by presenting the issues to be solved in this paper. As mentioned above, to realize CBIR for brain MRI, Onga et al. proposed a new technique called disease-oriented data concentration with metric learning (DDCML), which acquires low-dimensional representations of 3D brain MR images that are focused on disease features rather than the features of the subject's brain shape [9]. DDCML is composed of 3D convolutional autoencoders (3D-CAE) effectively combined with deep metric learning. Thanks to its metric learning, DDCML could acquire reasonable low-dimensional representations for unlearned diseases according to their severity, demonstrating the feasibility of CBIR for brain MR images. However, we found that such representations are highly sensitive to differences in datasets (i.e., differences in imaging environments, scanners, protocols, etc.), which is a serious challenge for CBIR.

Figure 1 shows the low-dimensional distribution obtained by DDCML and visualized by t-SNE [29]. Here, DDCML was trained on Alzheimer's disease (AD) and healthy cases (clinically normal; CN) in the ADNI2 dataset, and evaluated on ADNI2 cases not used for training as well as healthy control (Control, equivalent to CN) and Parkinson's disease (PD) cases in the untrained PPMI dataset. From the perspective of CBIR, it is desirable to obtain similar low-dimensional representations for CN and Control. However, it can be confirmed that the obtained low-dimensional representations are more affected by the differences in the environment (dataset) than by the disease.
As mentioned above, differences in imaging environments, including scanners, are a major problem in multi-center and time-series analysis, and inconsistent low-dimensional representations caused by such differences in datasets are a fatal problem for CBIR implementation. The purpose of this paper is to reduce these differences and to obtain a low-dimensional representation that better captures the characteristics of the disease and is suitable for appropriate CBIR.

B. OUR DATA HARMONIZATION STRATEGY FOR REALIZING CBIR

In studies dealing with multi-site and long-term data, it is undoubtedly important to reduce non-biological bias originating from differences among sites and datasets. Since the methods of Moyer et al. [25] and Dinsdale et al. [26] are theoretical and straightforward learning methods that utilize images of the target site to achieve data harmonization, their robustness to unexpected input (i.e., from another site or dataset) is questionable. Therefore, in principle, the images of all target sites (scanners, protocols) need to be learned in advance. Since CBIR requires more consideration of the use of images taken in the past, the number of environments that need to be addressed can be larger than for general data harmonization. It will be more difficult to implement a harmonization method that learns all the data of multiple environments in advance. Therefore, in contrast to their approaches, we aim to achieve data harmonization by converting images taken in each environment into images that can be regarded as having been taken in one predetermined "standard" environment (e.g., the scanner currently used primarily at each site). However, in addition to the problems described above, it is practically impossible to build an image converter for each environment.
With this background, we have developed a framework that combines CycleGAN, which realizes robust image transformation, with deep metric learning to achieve a certain degree of harmonization even for images from untrained environments. In this paper, we validate the feasibility of our framework, which converts MR images captured in various environments into pseudo-standard-environment images using only one type of image converter.

III. DISEASE-ORIENTED IMAGE EMBEDDING WITH PSEUDO-SCANNER STANDARDIZATION (DI-PSS)

The aim of this study is to obtain a low-dimensional embedding of brain MRI that is independent of the MRI scanner and individual characteristics but dependent on the pathological features of the brain, to realize a practical CBIR system for brain MRI. To accomplish this, we propose the DI-PSS framework, which is composed of the three following components: (1) pre-processing, (2) PSS, and (3) embedding acquisition.

A. THE PRE-PROCESSING COMPONENT (SKULL STRIPPING WITH GEOMETRY AND INTENSITY NORMALIZATION)

The pre-processing component performs the processing necessary for the subsequent image scanner standardization and low-dimensional embedding acquisition. Specifically, for all 3D brain MR image data, skull stripping was performed using a multi-atlas label-fusion algorithm implemented in the MRICloud [30]. The skull-stripped images were linearly aligned to the JHU-MNI space using a 12-parameter affine transformation function implemented in the MRICloud, resulting in aligned brain images. This feature makes a significant contribution to the realization of the proposed PSS in the next stage. It is important to note here that, since brain volume information is the feature that contributes most to the prediction of the dataset [16], the alignment to a standard brain with this skull stripping technique should also contribute to the harmonization of the data.
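The 12-parameter affine alignment above is performed inside the MRICloud, but its structure can be illustrated with a minimal sketch. The decomposition below into three translations, three rotations, three scales, and three shears is a generic assumption for illustration, not the MRICloud's actual parameterization:

```python
import numpy as np

def affine_matrix(tx, ty, tz, rx, ry, rz, sx, sy, sz, hxy, hxz, hyz):
    """Compose a 4x4 homogeneous matrix from 12 affine parameters
    (translation, rotation in radians, scale, shear), as used when
    linearly registering a brain image to a template space."""
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]
    cx, sx_ = np.cos(rx), np.sin(rx)
    cy, sy_ = np.cos(ry), np.sin(ry)
    cz, sz_ = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx_], [0, sx_, cx]])
    Ry = np.array([[cy, 0, sy_], [0, 1, 0], [-sy_, 0, cy]])
    Rz = np.array([[cz, -sz_, 0], [sz_, cz, 0], [0, 0, 1]])
    R = np.eye(4); R[:3, :3] = Rz @ Ry @ Rx
    S = np.diag([sx, sy, sz, 1.0])          # per-axis scaling
    H = np.eye(4); H[0, 1], H[0, 2], H[1, 2] = hxy, hxz, hyz  # shears
    return T @ R @ S @ H
```

Applying the resulting matrix to every voxel coordinate (followed by interpolation) yields the aligned brain image.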
In addition, because the intensity and contrast of brain MR images are arbitrarily determined, there is a large inter-image variation. In brain MR image processing using machine learning, variation in the average intensity confounds the results. Therefore, we standardized the intensity so that the average intensity value of each case was within mean µ = 18 and margin = 1.0 by performing an iterative gamma correction process, as in previous studies [9], [31].

B. THE PSS COMPONENT

1) The concept of PSS

The proposed PSS is an image conversion scheme that converts a given raw MR image into a synthesized image that looks like an MR image scanned by a standard scanner and protocol. Since there are numerous combinations of scanners and scan parameters, building scanner- and parameter-specific converters is not practical. Therefore, in our PSS scheme, we only construct a 1:1 image conversion model (i.e., the PSS network) that converts images from a particular scanner Y to a standard scanner X. That is, this particular PSS network is also used to convert images captured by other scanners (Z_1, Z_2, ...). This strategy is in anticipation of the generalizability of the PSS network, backed by advanced deep learning techniques. In this paper, we evaluate the robustness of the image transformations provided by PSS on MR images taken by other vendors' scanners and on images in different datasets.

Figure 2 gives an overview of the PSS network that realizes the PSS. The PSS network makes effective use of CycleGAN [27], which has achieved excellent results in 1:1 image transformation. Here, training of CycleGAN generally requires a lot of training data, especially in the case of 3D data, because the degree of freedom of the model parameters is large. However, it is difficult to collect such a large amount of supervised labeled 3D MRI data.
Since the position of any given slice is almost the same in our setting thanks to the MRICloud skull stripping process, a 3D image can be treated as a set of 2D images containing position information. With these advantages, our PSS suppresses both problems: the overwhelmingly insufficient amount of training data and the high degree of freedom of the transformation network. In sum, arbitrary slices are cut out from the input 3D image and converted, using the PSS network based on common (2D) CycleGAN, to slices corresponding to the same position in the 3D image of the target domain. Note that the PSS process is performed using the trained generator G_X.

2) Implementation of the PSS network

The structure of the PSS network that realizes the proposed PSS is explained according to the CycleGAN syntax, with images captured by the standard scanner as domain X and images captured by a certain different scanner as domain Y. Generator G_Y transforms (generates) an image y' = G_Y(x) with the features of domain Y from an image x of the original domain X. Discriminator D_Y determines the authenticity of a real image y belonging to domain Y versus a generated y' = G_Y(x). Similarly, the conversion from domain Y to domain X is performed by generator G_X, and discriminator D_X judges the authenticity of the image. The goal of this model is to learn the maps between the two domains X and Y given as training data. Note here again that we use the trained module G_X (which maps Y to X) as the image converter. The training of the model proceeds by repeating the transformation of the training data samples x_i ∈ X and y_j ∈ Y. The overall objective function of the PSS network, L_PSS, to be minimized consists of the three following loss components: the adversarial loss (L_GAN), the cycle consistency loss (L_cyc), and the identity mapping loss (L_identity).
This is expressed as follows:

L_PSS(G_Y, G_X, D_Y, D_X) = L_GAN(G_Y, D_Y) + L_GAN(G_X, D_X) + λ_1 L_cyc + λ_2 L_identity.   (1)

The adversarial loss (L_GAN) is defined based on the competition between the generator, which tries to produce the desired other-domain image, and the discriminator, which sees through the fake generated image; minimizing this loss implies a refinement of both. From the point of view of image transformation, the minimization of this loss means that the probability distribution generated by the generator is closer to the probability distribution of the counterpart domain, which means that a higher quality image can be obtained. This loss is defined in both directions, X → Y and Y → X, expressed in order as follows:

L_GAN(G_Y, D_Y) = E_{y∼p_data(y)}[(D_Y(y) − 1)^2] + E_{x∼p_data(x)}[(D_Y(G_Y(x)))^2],   (2)

L_GAN(G_X, D_X) = E_{x∼p_data(x)}[(D_X(x) − 1)^2] + E_{y∼p_data(y)}[(D_X(G_X(y)))^2].   (3)

The cycle consistency loss (L_cyc) is a constraint that guarantees mutual transformation is possible by cycling through the two generators:

L_cyc(G_X, G_Y) = E_{x∼p_data(x)}[||G_X(G_Y(x)) − x||_1] + E_{y∼p_data(y)}[||G_Y(G_X(y)) − y||_1].   (4)

Finally, the identity mapping loss (L_identity) is a constraint that maintains the original image features, performing no transformation when an image of the destination domain is input:

L_identity(G_X, G_Y) = E_{x∼p_data(x)}[||G_X(x) − x||_1] + E_{y∼p_data(y)}[||G_Y(y) − y||_1].   (5)

It has been confirmed that introducing this constraint suppresses the learning of features that are not important in either domain, such as unneeded tints. Here, λ_1 and λ_2 are hyper-parameters, and we set λ_1 = 10.0 and λ_2 = 0.5 as in the original setting.

3) The embedding acquisition component

In the embedding acquisition component, the low-dimensional embedding of 3D brain MRI images is obtained by our embedding network after the PSS process.
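Before moving on, the cycle-consistency and identity terms of the PSS objective (Eqs. (4) and (5)) can be illustrated numerically. The toy generators and images below are hypothetical stand-ins for the trained networks, used only to show how the terms combine with λ_1 = 10.0 and λ_2 = 0.5:

```python
import numpy as np

def l1(a, b):
    """Mean L1 distance between two images."""
    return np.abs(a - b).mean()

def cycle_and_identity_losses(x, y, G_X, G_Y, lam1=10.0, lam2=0.5):
    """Weighted cycle-consistency (Eq. 4) and identity (Eq. 5) terms
    for single samples x (standard domain X) and y (source domain Y).
    G_X maps Y -> X and G_Y maps X -> Y."""
    l_cyc = l1(G_X(G_Y(x)), x) + l1(G_Y(G_X(y)), y)
    l_idt = l1(G_X(x), x) + l1(G_Y(y), y)
    return lam1 * l_cyc + lam2 * l_idt

# Toy generators: exact inverses, so the cycle term vanishes, but the
# identity term does not, because each generator also alters images
# that are already in its target domain.
G_Y = lambda im: im * 2.0      # X -> Y (hypothetical)
G_X = lambda im: im / 2.0      # Y -> X (hypothetical)
x = np.ones((4, 4)); y = 2.0 * np.ones((4, 4))
loss = cycle_and_identity_losses(x, y, G_X, G_Y)
```

Here the cycle term is zero while the identity term is penalized, illustrating why both constraints are needed.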
Our embedding network is a 3D-CAE model consisting of encoders and decoders with distance metric learning, following Onga et al.'s DDCML [9]. Distance metric learning is a learning technique that reduces the Euclidean distance between feature representations with the same label and increases the distance between feature representations with different labels. Thanks to the introduction of metric learning, 3D-CAE has been found to yield embeddings that are more focused on disease features. According to Hoffer's criteria [11], the distance distribution in the low-dimensional embedding space for input x for class i (i ∈ 1, · · · , c, where c is the number of types of disease labels in the dataset) is calculated by

P(x; x_1, · · · , x_c)_i = exp(−||f(x) − f(x_i)||^2) / Σ_{j=1}^{c} exp(−||f(x) − f(x_j)||^2).   (6)

Here, x_i (i ∈ 1, · · · , c) is randomly sampled data from each class i, and f denotes the operation of the encoder (i.e., the encoder part of the 3D-CAE in our implementation). This probability can be thought of as the probability that the data x belong to each class i. The loss function L_dist is calculated by the cross-entropy between the c-dimensional vector P described above and the c-dimensional one-hot vector I(x) with a bit set for the class to which x belongs:

L_dist(x, x_1, · · · , x_c) = H(I(x), P(x; x_1, · · · , x_c)).   (7)

Here, H(I(x), P(x; x_1, · · · , x_c)) takes a small value when the probability that the element firing in I(x) belongs to the class it represents is high, whereas it takes a large value when the probability is low. Thus, L_dist drives the sampled data to be distributed closer to the same class and farther from different classes in the low-dimensional feature space.
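Equations (6) and (7) can be sketched directly on toy embeddings. The encoder output f(x) is assumed to be precomputed here, and the reference embeddings per class are made up for illustration:

```python
import numpy as np

def class_probs(fx, f_refs):
    """Eq. (6): softmax over negative squared Euclidean distances from
    the embedding f(x) to one sampled reference embedding per class."""
    d2 = np.array([np.sum((fx - fr) ** 2) for fr in f_refs])
    e = np.exp(-d2)
    return e / e.sum()

def l_dist(fx, f_refs, true_class):
    """Eq. (7): cross-entropy between the one-hot class indicator and
    the distance-based class probabilities."""
    p = class_probs(fx, f_refs)
    return -np.log(p[true_class])

# Two classes with hypothetical reference embeddings; a sample near
# class 0 incurs a small loss for label 0 and a large loss for label 1.
f_refs = [np.array([0.0, 0.0]), np.array([3.0, 4.0])]
near_class0 = np.array([0.1, 0.0])
loss0 = l_dist(near_class0, f_refs, 0)
loss1 = l_dist(near_class0, f_refs, 1)
```

Minimizing this loss therefore pulls same-class embeddings together and pushes different-class embeddings apart.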
Finally, the objective function L_CAE of our low-dimensional embedding acquisition network, consisting of the 3D-CAE and metric learning, is expressed by the following equation:

L_CAE = L_RMSE + α L_dist(x, x_1, · · · , x_c).   (8)

Here, L_RMSE is the pixel-wise root mean square error normalized by image size in the CAE image reconstruction. Furthermore, α is a hyper-parameter set to 1/3 based on the results of preliminary experiments.

IV. EXPERIMENTS

In CBIR, cases of the same disease should be able to acquire similar low-dimensional representations, regardless of the individual, scanner, or protocol. We investigated the effectiveness of the proposed DI-PSS by quantitatively evaluating how PSS changes the distribution of embeddings within and between data groups (i.e., combinations of scanner type and disease). In addition, we compared the clustering performance of the obtained embeddings against diseases with and without PSS.

A. DATASET

In this experiment, we used the ADNI2 and PPMI datasets, in which the vendor information of the scanners (Siemens [SI], GE Medical Systems [GE], Philips Medical Systems [PH]) was recorded along with the disease information. Statistics of the datasets used in the experiment are shown in Table 1. We used Alzheimer's disease (ADNI-AD or AD) and clinically normal cases (ADNI-CN) from the ADNI2 dataset with vendor information. From the PPMI dataset, we used two types of labeled images, Parkinson's disease (PD) and Control. We did not utilize the scanner information for this dataset, in order to evaluate the versatility of the proposed method. Note that ADNI-CN and Control can be considered medically equivalent. Furthermore, PD is known to show little or no difference in MRI from healthy cases [32], [33]. The ADNI and PPMI are longitudinal studies that include multiple time points, and the datasets contain multiple scans for each participant. To avoid duplication, one MRI was randomly selected from each participant.
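The one-scan-per-participant selection described above can be sketched as follows; the participant and scan identifiers are made up for illustration:

```python
import random

def one_scan_per_participant(scans, seed=0):
    """Randomly keep one scan per participant to avoid duplication in
    longitudinal data.  `scans` is a list of (participant_id, scan_id)
    pairs; returns a dict mapping each participant to one scan."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    by_subject = {}
    for pid, sid in scans:
        by_subject.setdefault(pid, []).append(sid)
    return {pid: rng.choice(sids) for pid, sids in by_subject.items()}

scans = [("s1", "a"), ("s1", "b"), ("s2", "c"), ("s3", "d"), ("s3", "e")]
selected = one_scan_per_participant(scans)
```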
The MRICloud (https://mricloud.org/) was used to skull-strip the T1-weighted MRIs and affine-transform them to the JHU-MNI atlas [34]. A neurologist with more than 20 years of experience in brain MRI research performed the quality control of the MRIs and removed MRIs that the MRICloud did not appropriately pre-process. Due to the neural network model used in the experiments, the skull-stripped and affine-transformed brain MR images were converted to 160×160×192 pixels after cropping the background area. Training and evaluation of the PSS network and embedding network were performed using five-fold cross validation. In the evaluation experiments described below, the evaluation data of each fold are not included in the training data for either the PSS network or the embedding network. Note that even skilled and experienced neuroradiologists cannot separate PD from CN or Control by visual inspection of the T1-weighted images. Therefore, we did not expect these two conditions to be separable by unsupervised clustering methods even after applying the DI-PSS.

Figures 3a and 3b show the architectures of the generators (G_X, G_Y) and the discriminators (D_X, D_Y), respectively, of the PSS network. They are basically the same as the original CycleGAN for 2D images. Since the role of PSS is to reduce the bias caused by variations in scanners and scan parameters, disease-related anatomical variations should be minimized in the training images. Therefore, we used only ADNI-CN cases, in which disease features do not appear in the brain structure, to train the PSS network. In this experiment, we chose the Siemens scanner as the standard scanner because it has the largest market share, and we chose the GE scanner as the specific vendor of the image conversion source. In other words, our PSS network is designed to convert CN images taken by GE scanners in the ADNI2 dataset (CN_GE) to synthetic images similar to those scanned by the Siemens scanners (CN_SI).
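The five-fold protocol described above, in which each held-out fold is excluded from the training of both the PSS network and the embedding network, can be sketched as a simple index partition. This is a minimal sketch of the fold structure, not the actual data loader:

```python
def five_fold_splits(n_cases):
    """Partition case indices 0..n_cases-1 into 5 folds; each fold in
    turn is held out for evaluation, and only the remaining four folds
    are used to train the PSS network and the embedding network."""
    folds = [list(range(f, n_cases, 5)) for f in range(5)]
    return [(sorted(set(range(n_cases)) - set(test)), test)
            for test in folds]

splits = five_fold_splits(10)
```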
We evaluated the applicability of the PSS to diseased brain MRIs (AD and PD), as well as its generalizability to non-GE scanners (see Section IV.D). B. DETAIL OF THE PSS NETWORK AND ITS TRAINING In the PSS network, we used coronal images for training. The number of training images in each fold of the PSS network is (93+92)×4/5 (five-fold CV)×192 (slices). C. DETAIL OF THE EMBEDDING NETWORK AND ITS TRAINING Figure 4 shows the architecture of our 3D-CAE-based embedding network, which embeds each 3D brain MR image into a 150-dimensional vector. The size of the MRIs handled by the embedding network is halved on each side, as in DDCML [9], to improve learning efficiency. Note that the compression ratio of our embedding network is (80×80×96):150 = 4,096:1. The embedding network was trained and evaluated on the ADNI2 and PPMI datasets with the five-fold cross-validation strategy. As mentioned above, PD cannot be distinguished from CN on images even by skilled neuroradiologists, so when training the 3D-CAE to obtain low-dimensional representations, two-class metric learning is used so that the representations of AD and (CN + Control) are separated. The low-dimensional representations of brain MR images are acquired by five-fold cross-validation of the 3D-CAE. In addition to AD, CN, and Control in each test fold, the low-dimensional representation of PD, which was not included in the training, is analyzed to quantitatively verify the effectiveness of the proposed DI-PSS. D. EVALUATION OF THE PSS To evaluate the effectiveness of the proposed DI-PSS framework, we evaluate the following three elements: 1) changes in MR images, 2) distribution of the embedding, and 3) clustering performance of the embedding. In (1), we assess how the images are changed by our scanner standardization.
We quantitatively evaluate the difference between the original (raw) image and the synthetic image with the peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), and structural similarity (SSIM). To ensure that the evaluation is not affected by differences in brain size, these evaluations were performed on brain regions only. Although MRICloud, which is used for skull stripping in this experiment, already normalizes each brain to a standard size and thereby reduces differences in brain size between cases, this restriction was adopted for a more rigorous evaluation. In (2), we quantitatively examine the effect of PSS by analyzing the distribution of the obtained low-dimensional representations. Specifically, for each category (e.g., CN_SI, AD_GE) we investigate (i) the variation (i.e., standard deviation) of the embedding and (ii) the mean and standard deviation of the distance from each embedding to the centroid of a different category, where the distance between the centroids of ADNI-CN_Siemens (CN_SI) and ADNI-AD_Siemens (AD_SI) is normalized to 1. In addition, we visualize those distributions in 2D space using t-SNE [29] as supplemental results for intuitive understanding. In (3), we evaluate the separability of the resulting embeddings. In this study, we performed spectral clustering [35] to assess their potential quality for CBIR. For the spectral clustering, we used a normalized graph Laplacian based on 10-nearest-neighbor graphs with a general Gaussian-type similarity measure. We set the number of clusters to two (AD vs. CN + Control + PD), which is the number of disease categories to be classified. Here, the consistency of distances between embedded data across folds is ensured by normalizing the per-fold distance between the CN_SI and AD_SI centroids to 1, as mentioned above. The clustering performance was evaluated using two methodologies.
The first is evaluation with six commonly used criteria (i.e., silhouette score, homogeneity, completeness, V-measure, adjusted Rand index [ARI], and adjusted mutual information [AMI]) implemented in the scikit-learn machine learning library (https://scikit-learn.org/). The other is the diagnostic capability based on the clustering results. Here, as in other clustering evaluations in the literature, we permute the cluster labels so that each fold yields the optimal clustering result and then sum the results over folds.

V. RESULTS

A. CHANGES IN MR IMAGES BY PSS

Figure 5 shows examples of MR images converted by PSS into images as if taken on the pseudo-standard (= Siemens) scanner, together with the visualized differences. Table 2 summarizes the statistics of the degree of change in the images within the brain regions. Here, the background region was excluded from the calculation to eliminate the effect of differences in brain size. For the ADNI dataset, the differences obtained by the PSS image transformation were not significant between CN, AD, and scanner vendors, although the Philips scanners showed less variation on average. For the PPMI dataset, which was not used for training, the change in the images due to PSS is clearly larger than for ADNI (approx. 1.5× in RMSE). In all categories, the amount of change due to PSS varied from case to case, but the PSS processing did not cause any visually unnatural changes in the images. Figure 6 shows the cumulative intensity changes of images by PSS in each category. Here, background areas other than the brain are also included in the evaluation. The number of pixels whose intensity did not change under PSS exceeds 80% for all categories, indicating that no undesired intensity changes occurred in the background (as also seen in Figures 5 and 6). There is no significant difference in the distribution of intensity change by vendor, and the PPMI dataset has a larger amount of intensity change overall.

B. DISTRIBUTION OF LOW-DIMENSIONAL EMBEDDED DATA

1) Distance between centers of the data distribution by category

Table 3 shows the variation (standard deviation; SD) of the 150-dimensional embedded representation in each category. Again, note that the CN_SI-AD_SI centroid distance was normalized to 1. The average reduction in SD over all data by PSS was 8.27%. Table 4 shows the statistics of distances from each embedding to the centroid of a different category. This shows the distribution of the data considering the direction of variation, which is more practical for CBIR applications. With PSS, the average distance between centroids across categories is almost unchanged, but the variability is greatly reduced for all categories.

2) Visualization of the distribution of the embedding

Figures 7a and 7b show scatter plots, obtained with t-SNE [29], of the embeddings of the test data without and with PSS, respectively, in an arbitrary fold. Specifically, this is a scatter plot of the AD, CN, and Control test cases (data excluded from training in the five-fold cross-validation) along with the PD cases, on which the model was never trained. Here, PD has been randomly subsampled to 1/5 for better visualization. Without PSS (baseline; 3D-CAE + metric learning), AD and CN are properly separated, but the distribution of Control + PD (i.e., the difference in datasets) is separated from that of CN to a discernible degree (left). With PSS, the distribution of Control + PD becomes closer to that of CN, and the separation between AD and the other categories improves (right).

FIGURE 5 (caption, continued): In each category, from left to right: the original image, the PSS-processed image, and the difference between them.
FIGURE 6: Cumulative intensity changes of MR images by PSS. (a) overall view, (b) enlarged view.
C.
CLUSTERING PERFORMANCE OF THE EMBEDDING

In this section, we compare the separation ability of the obtained low-dimensional embeddings of MR images with and without PSS (baseline). Table 5 summarizes the clustering performance evaluated with six commonly used criteria: the silhouette score (silh), homogeneity score (homo), completeness score (comp), V-measure (the harmonic mean of homogeneity and completeness; V), ARI, and AMI, as implemented in the scikit-learn library. For each criterion, 1 is the best score and 0 corresponds to random clustering. It can be confirmed that PSS improved the clustering ability on all evaluation items. Table 6 summarizes the clustering performance evaluated by diagnostic ability. Table 6(a) is a confusion matrix. Here, the numbers of CN, Control, and AD cases are summed over the folds of the cross-validation. In each fold, we tested all PD cases (not included in the training), and their number was divided by five and rounded to the nearest whole number. Tables 6(b) and 6(c) summarize the diagnostic performance calculated from Table 6(a) without and with PD cases, respectively. It can be confirmed that PSS enhances the separation of AD from the other categories (i.e., CN, Control, and PD) in the low-dimensional representation. PSS improved the diagnostic performance by about 6.2 points (from 73.7% to 79.9%) in micro-accuracy and about 10.7 points (from 63.8% to 74.5%) in macro-F1. The specificity for PD was also improved by 6.1 points (from 69.7% to 75.8%).

VI. DISCUSSION

A. CHANGES ON MR IMAGES BY PSS

Our PSS network transforms healthy cases taken with GE scanners into images resembling those taken with Siemens scanners. As can be seen from Figure 6 and Table 2, the amount of change in the images due to PSS was almost the same for AD and CN images in the ADNI dataset, including the Philips cases. The amount of conversion for the PPMI dataset was larger than that for the ADNI dataset.
This is thought to be due to the process of absorbing differences between the datasets that exist in the images but are invisible to the eye. However, in all cases, the converted images have a natural appearance without destruction of the brain structure. This is objectively confirmed by the fact that SSIM, which evaluates structural similarity on the image, maintains a high value. As discussed in detail below, PSS can reduce disease-specific variation in the resulting low-dimensional embedding, absorb differences among datasets and scanner vendors, and improve the separability of diseases. Given these factors, we can conclude that this PSS transformation was done properly. B. CONTRIBUTIONS OF DI-PSS FOR CBIR This section discusses the effects of our DI-PSS framework from the perspective of CBIR implementation. 1) Distribution of embedding Based on the results in Tables 3 and 4, we first discuss the effectiveness of the proposed DI-PSS. From Table 3, PSS reduces the intra-cluster variability for all data categories. In particular, the SDs of ADNI-CN and ADNI-AD, which were scanned by scanners from three different companies within the same dataset, are reduced by 6.9% and 6.1%, respectively. This indicates that PSS reduces the differences caused by different scanners. In addition, the SD of ALL_CN, which combines ADNI-CN with Control from the different PPMI dataset, is also reduced by 7.2%, which clearly shows that the proposed PSS can absorb differences between datasets. This benefit can also be seen in Figure 7. The reduction of PD variability by PSS is more pronounced (−14.7%) than the others, and it is ultimately the category with the lowest variability. This is discussed later in this section. From Table 4, PSS also succeeds in reducing the variability from each piece of data to the centers of all the different clusters (inter-cluster variability). What is noteworthy here is the degree of decrease in the standard deviation, which reached an average of 22.6%.
This ability to reduce not only the variability of data within the same category, but also the directional variability toward different data categories, is an important feature for CBIR. In this experiment, we only built an image transformer (i.e., the PSS network) that converts CN_GE to CN_SI cases, but we could confirm that desirable harmonization also extends to categories not included in the training. This strongly suggests that the strategy we have adopted, namely not having to build image harmonizers for all scanner types, may provide sufficient harmonization effects for many types of scanners. Incidentally, the distances between PD and CN (ADNI-CN vs. PD and ALL-CN vs. PD) are smaller than the distances between other categories. This supports the validity of the assumption we made in our experiment that PD and CN are outwardly indistinguishable, and that they can therefore be treated as the same class. In contrast, if we look closely, we can see that the centroid distances between PD and CN (0.249→0.269) and between PD and Control (0.256→0.297) are slightly increased by PSS, and Table 3 shows that the variation of PD is greatly reduced by PSS. From this, we can say that PSS is moving the PDs into a smaller group away from CN and Control. This can be taken as an indication that the model trained with DI-PSS tends to consider PD as a distinct class that is potentially separable from the CN category. Since the size of the dataset for this experiment was limited, we would like to run tests with a larger dataset in the future.
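The centroid distances quoted above (e.g., 0.249→0.269 for PD vs. CN) are expressed in units of the per-fold CN_SI-AD_SI centroid distance. A minimal sketch of that computation (helper names are hypothetical; the paper does not show its implementation):

```python
import numpy as np

def centroid(embeddings, labels, category):
    # Mean embedding of all cases in one category (e.g., "CN_SI").
    return embeddings[labels == category].mean(axis=0)

def normalized_centroid_distance(embeddings, labels, cat_a, cat_b):
    # Distance between two category centroids, expressed in units of the
    # CN_SI-AD_SI centroid distance (normalized to 1 per fold, as in Tables 3-4).
    unit = np.linalg.norm(centroid(embeddings, labels, "CN_SI")
                          - centroid(embeddings, labels, "AD_SI"))
    d = np.linalg.norm(centroid(embeddings, labels, cat_a)
                       - centroid(embeddings, labels, cat_b))
    return d / unit
```

Fixing the unit per fold makes distances comparable across cross-validation folds, which is what allows the per-category statistics of Table 4 to be aggregated.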
2) Separability of the embedding for CBIR Thanks to the harmonization of scanners by PSS, the proposed DI-PSS not only reduces the variability of the low-dimensional representations of each disease category, which could not be reduced by deep metric learning alone as adopted in DDCML [9], but also reduces the differences among datasets, resulting in a significant improvement in the clustering performance of the low-dimensional representations. The PD data differ from the ADNI data used for training, and thus constitute a dataset unknown to our model. The improvement of clustering performance by the proposed DI-PSS for PD as well is an important and noteworthy result for the realization of CBIR. C. VALIDITY OF THE MODEL ARCHITECTURE The recently proposed data harmonization methods for brain MR images by Moyer et al. [25] and Dinsdale et al. [26] have been reported to be not only logically justified but also very effective. However, as mentioned above, these methods are difficult to apply to CBIR applications because images from all scanners are theoretically needed to train the model. Our DI-PSS is a new proposal to address these problems. Although DI-PSS only learned the transformation from CN_GE to CN_SI, the improvement of the properties of the obtained embeddings was confirmed even for combinations that included other companies' scanners, such as the Philips scanner, and for disease categories (AD) that were not included in the training. These results are evidence of proper data harmonization. We attribute this to the combination of MRICloud, an advanced skull-stripping pipeline that performs geometric and volumetric alignment, with the generic style-transformation capability of CycleGAN and the distance metric learning that make up the proposed framework.
Experiments with large-scale data from more diverse disease classes are needed, but in this experiment, we could confirm the possibility of obtaining effective scanner standardization by building a single model that translates images into those of a standard scanner. D. LIMITATIONS OF THIS STUDY The number of data and the diversity of their acquisition conditions in these experiments are limited. There is also a limit to the number of diseases we considered. In the future, verification using more data is essential. VII. CONCLUSION In this paper, we proposed a novel and effective MR image embedding method, DI-PSS, which is intended for application to CBIR. DI-PSS achieves data harmonization by transforming MR images to look like those captured with a predefined standard scanner, reducing the bias caused by variations in scanners and scan protocols, and obtaining a low-dimensional representation that preserves disease-related anatomical features. DI-PSS did not require training data containing MRIs from all scanners and protocols; one set of image converters (i.e., CN_GE to CN_SI) was sufficient to train the model. In the future, we will continue the validation with more extensive and diverse data.

FIGURE 1: Plots of low-dimensional representations of 3D MRI obtained from different datasets. The impact of different scanners (CN⇔Control; they are medically equivalent) is greater than the impact of the disease (AD⇔CN).
FIGURE 2: Overview of the pseudo-scanner standardization (PSS) network.
FIGURE 3: Architecture of (a) generators (G_X, G_Y) and (b) discriminators (D_X, D_Y) in the PSS network.
(a) conv: kernel size (f×f), stride size (s), padding size (p), × # of kernels + instance norm + ReLU*
(b) convA: kernel size (f×f), stride size (s), padding size (p), × # of kernels + LeakyReLU
convB: kernel size (f×f), stride size (s), padding size (p), × # of kernels + instance norm + LeakyReLU
convC: kernel size (f×f), stride size (s), padding size (p), × # of kernels
FC: fully connected layer (400→1)

FIGURE 4: Architecture of the embedding network. conv: kernel 3×3, stride size = 1, padding size = 1, × (# of kernels) + ReLU; deconv: kernel 3×3, stride size = 1, padding size = 1, × (# of kernels) + ReLU; average pooling, up-sampling (bi-linear interpolation): 2×2×2:1.

FIGURE 5: Example of image change by PSS (in coronal plane).

FIGURE 7: Distribution of embedding visualized with t-SNE [29]: (left) baseline (3D-CAE + metric learning), (right) baseline with PSS.

TABLE 1: Dataset used in our study.†
dataset  diagnosis  vendor   label    #used  #patients  #total
ADNI     CN         Siemens  CN_SI    92     103        439
                    GE       CN_GE    93     101        494
                    Philips  CN_PH    27     27         119
         AD         Siemens  AD_SI    80     84         254
                    GE       AD_GE    80     92         302
                    Philips  AD_PH    20     24         73
PPMI     Control    n/a      Control  75     75         114
         PD         n/a      PD       149    149        338
†: ADNI-CN and Control can be considered medically equivalent. There are no PD-related anatomical features observable on T1-weighted MRI.

TABLE 2: Summary of image changes by PSS
dataset  label    PSNR (dB)     RMSE           SSIM
ADNI     CN_SI    31.52 ± 2.85  7.17 ± 2.57    0.9743 ± 0.0048
         CN_GE    31.67 ± 2.45  6.94 ± 2.13    0.9748 ± 0.0041
         CN_PH    32.18 ± 2.65  6.58 ± 2.02    0.9747 ± 0.0038
         AD_SI    31.64 ± 3.04  7.13 ± 2.77    0.9746 ± 0.0043
         AD_GE    31.65 ± 2.52  6.98 ± 2.32    0.9750 ± 0.0044
         AD_PH    32.33 ± 2.13  6.36 ± 1.63    0.9751 ± 0.0031
PPMI     Control  30.16 ± 5.47  9.81 ± 8.32    0.9596 ± 0.0346
         PD       29.40 ± 5.94  11.60 ± 10.89  0.9539 ± 0.0473

TABLE 3: Variation (SD) of the embedding in each category†
dataset  label    #data  baseline  with PSS  −SD (%)
ADNI     CN_SI    92     0.697     0.648     7.12
         CN_GE    93     0.784     0.716     8.66
         CN_PH    27     0.622     0.619     0.48
         ADNI-CN  212    0.753     0.701     6.91
         AD_SI    80     0.863     0.783     9.24
         AD_GE    80     0.849     0.806     5.09
         AD_PH    20     0.771     0.706     8.46
         ADNI-AD  180    0.876     0.823     6.09
PPMI     Control  75     0.607     0.554     8.74
         PD       149    0.603     0.515     14.71
both     CN       92     0.759     0.704     7.20
         all      616    0.755     0.693     8.27
†: Distance between CN_SI and AD_SI was normalized to 1.

TABLE 4: Mean and variability of embedding across categories of data†
from     to       mean (baseline)  SD (baseline)  mean (with PSS)  SD (with PSS)  −SD (%)
ADNI-CN  AD       0.879            0.669          0.890            0.541          19.1
AD       ADNI-CN                   0.875                           0.729          16.6
Control  AD       1.354            0.745          1.329            0.537          28.0
AD       Control                   0.907                           0.702          22.6
PD       Control  0.256            0.469          0.297            0.312          33.5
Control  PD                        0.414                           0.368          11.2
ADNI-CN  PD       0.364            0.609          0.362            0.474          22.2
PD       ADNI-CN                   0.373                           0.255          31.4
AD       PD       1.164            0.939          1.091            0.770          18.0
PD       AD                        0.620                           0.434          29.9
CN       AD       0.996            0.753          0.997            0.583          22.6
AD       CN                        0.917                           0.773          15.8
CN       PD       0.249            0.593          0.269            0.466          21.4
PD       CN                        0.349                           0.264          24.2
†: Distance between CN_SI and AD_SI was normalized to 1.

TABLE 5: Clustering performance evaluated with common criteria†
          silh   homo   comp   V      ARI    AMI
baseline  0.236  0.220  0.301  0.250  0.251  0.241
+PSS      0.246  0.301  0.351  0.324  0.387  0.317
†: A score of 1 is best in each category; 0 is the score for random clustering.
TABLE 6: Evaluation of clustering ability by diagnostic ability.
(a) Confusion matrix
                   baseline                    with PSS
                   CN+Control (+PD)  AD        CN+Control (+PD)  AD
CN+Control         284               3         274               13
(+PD)              (+104)            (+45)     (+113)            (+36)
AD                 114               66        75                105

(b) Clustering performance (PD cases excluded)
          CN+Control                  AD                        accuracy  macro-F1
          precision  recall  F1       precision  recall  F1
baseline  71.36      98.95   82.92    95.65      36.67   53.01  74.9      68.0
+PSS      78.51      95.47   86.16    88.98      58.33   70.47  81.1      78.3

(c) Clustering performance (PD cases included)
          CN+Control                  AD                        accuracy  macro-F1  specificity of PD
          precision  recall  F1       precision  recall  F1
baseline  77.29      88.99   82.73    57.89      36.67   44.90  73.7      63.8      69.7
+PSS      83.77      88.76   86.19    68.18      58.33   62.87  79.9      74.5      75.8

ACKNOWLEDGMENT
This research was supported in part by the Ministry of Education, Science, Sports and Culture of Japan (JSPS KAKENHI), Grant-in-Aid for Scientific Research (C), 21K12656, 2021-2023.

REFERENCES
M. Woelfle, P. Olliaro, and M. H. Todd, "Open science is a research accelerator," Nature Chemistry, vol. 3, no. 10, pp. 745-748, 2011.
A. Kumar, J. Kim, W. Cai, M. Fulham, and D. Feng, "Content-based medical image retrieval: a survey of applications to multidimensional and multimodality data," Journal of Digital Imaging, vol. 26, no. 6, pp. 1025-1039, 2013.
Z. Tu and X. Bai, "Auto-context and its application to high-level vision tasks and 3D brain image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1744-1757, 2010.
M. Huang, W. Yang, M. Yu, Z. Lu, Q. Feng, and W. Chen, "Retrieval of brain tumors with region-specific bag-of-visual-words representations in contrast-enhanced MRI images," Computational and Mathematical Methods in Medicine, vol. 2012, p. 280538, 2012.
S. Murala and Q. M. J. Wu, "Local mesh patterns versus local binary patterns: Biomedical image indexing and retrieval," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 3, pp. 929-938, 2014.
A. V. Faria, K. Oishi, S. Yoshida, A. Hillis, M. I. Miller, and S. Mori, "Content-based image retrieval for brain MRI: An image-searching engine and population-based analysis to utilize past clinical data for future diagnosis," NeuroImage: Clinical, vol. 7, pp. 367-376, 2015.
K. Kruthika, Rajeswari, and H. Maheshappa, "CBIR system using capsule networks and 3D CNN for Alzheimer's disease diagnosis," Informatics in Medicine Unlocked, vol. 14, pp. 59-68, 2019.
Z. N. K. Swati, Q. Zhao, M. Kabir, F. Ali, Z. Ali, S. Ahmed, and J. Lu, "Content-based brain tumor retrieval for MR images using transfer learning," IEEE Access, vol. 7, pp. 17809-17822, 2019.
Y. Onga, S. Fujiyama, H. Arai, Y. Chayama, H. Iyatomi, and K. Oishi, "Efficient feature embedding of 3D brain MRI images for content-based image retrieval with deep metric learning," pp. 3764-3769, 2019.
B. Alipanahi, M. Biggs, and A. Ghodsi, "Distance metric learning vs. Fisher discriminant analysis," Proceedings of the 23rd National Conference on Artificial Intelligence, vol. 2, pp. 598-603, 2008.
E. Hoffer and N. Ailon, "Semi-supervised deep learning by metric embedding," arXiv preprint, 1611.01449, 2016.
K. A. Clark, R. P. Woods, D. A. Rottenberg, A. W. Toga, and J. C. Mazziotta, "Impact of acquisition protocols and processing streams on tissue segmentation of T1 weighted MR images," NeuroImage, vol. 29, no. 1, pp. 185-202, 2006.
X. Han, J. Jovicich, D. Salat, A. van der Kouwe, B. Quinn, S. Czanner, E. Busa, J. Pacheco, M. Albert, R. Killiany, P. Maguire, D. Rosas, N. Makris, A. Dale, B. Dickerson, and B. Fischl, "Reliability of MRI-derived measurements of human cerebral cortical thickness: the effects of field strength, scanner upgrade and manufacturer," NeuroImage, vol. 32, no. 1, pp. 180-194, 2006.
M. Yu, K. A. Linn, P. A. Cook, M. L. Phillips, M. McInnis, M. Fava, M. H. Trivedi, R. T. Shinohara, and Y. I. Sheline, "Statistical harmonization corrects site effects in functional connectivity measures from multi-site fMRI data," Human Brain Mapping, vol. 39, no. 11, pp. 4213-4227, 2018.
K. Oishi, J. Chotiyanonta, D. Wu, M. I. Miller, and S. Mori, "Developmental trajectories of the human embryologic brain regions," Neuroscience Letters, vol. 708, p. 134342, 2019.
C. Wachinger, A. Rieckmann, and S. Pölsterl, "Detect and correct bias in multi-site neuroimaging datasets," Medical Image Analysis, vol. 67, p. 101879, 2021.
Y. Gao, J. Pan, Y. Guo, J. Yu, J. Zhang, D. Geng, and Y. Wang, "Optimised MRI intensity standardisation based on multi-dimensional sub-regional point cloud registration," Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 7, no. 5-6, pp. 594-603, 2019.
H. Um, F. Tixier, D. Bermudez, J. O. Deasy, R. J. Young, and H. Veeraraghavan, "Impact of image preprocessing on the scanner dependence of multi-parametric MRI radiomic features and covariate shift in multi-institutional glioblastoma datasets," Physics in Medicine and Biology, vol. 64, no. 16, p. 165011, 2019.
W. E. Johnson, C. Li, and A. Rabinovic, "Adjusting batch effects in microarray expression data using empirical Bayes methods," Biostatistics, vol. 8, no. 1, pp. 118-127, 2006.
J.-P. Fortin, D. Parker, B. Tunç, T. Watanabe, M. A. Elliott, K. Ruparel, D. R. Roalf, T. D. Satterthwaite, R. C. Gur, R. E. Gur, R. T. Schultz, R. Verma, and R. T. Shinohara, "Harmonization of multi-site diffusion tensor imaging data," bioRxiv, 2017. [Online]. Available: http://biorxiv.org/content/early/2017/03/15/116541
J.-P. Fortin, D. Parker, B. Tunç, T. Watanabe, M. A. Elliott, K. Ruparel, D. R. Roalf, T. D. Satterthwaite, R. C. Gur, R. E. Gur, R. T. Schultz, R. Verma, and R. T. Shinohara, "Harmonization of multi-site diffusion tensor imaging data," NeuroImage, vol. 161, pp. 149-170, 2017.
J.-P. Fortin, N. Cullen, Y. I. Sheline, W. D. Taylor, I. Aselcioglu, P. A. Cook, P. Adams, C. Cooper, M. Fava, P. J. McGrath, M. McInnis, M. L. Phillips, M. H. Trivedi, and M. M. Weissman, "Harmonization of cortical thickness measurements across scanners and sites," NeuroImage, vol. 167, pp. 104-120, 2017.
F. Zhao, Z. Wu, L. Wang, W. Lin, S. Xia, D. Shen, and G. Li, "Harmonization of infant cortical thickness using surface-to-surface cycle-consistent adversarial networks," Med Image Comput Comput Assist Interv, vol. 11767, pp. 475-483, 2019.
C. Zhao, J. C. Reinhold, A. Carass, K. C. Fitzgerald, E. S. Sotirchos, S. Saidha, J. Oh, D. L. Pham, P. A. Calabresi, P. C. M. van Zijl, and J. L. Prince, "DeepHarmony: A deep learning approach to contrast harmonization across scanner changes," Magnetic Resonance Imaging, vol. 64, pp. 160-170, 2019.
D. Moyer, G. Ver Steeg, C. M. W. Tax, and P. M. Thompson, "Scanner invariant representations for diffusion MRI harmonization," Magnetic Resonance in Medicine, vol. 84, no. 4, pp. 2174-2189, 2020.
N. K. Dinsdale, M. Jenkinson, and A. I. L. Namburete, "Deep learning-based unlearning of dataset bias for MRI harmonisation and confound removal," NeuroImage, vol. 228, p. 117689, 2021.
J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," pp. 2242-2251, 2017.
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, "Domain-adversarial training of neural networks," Journal of Machine Learning Research, vol. 17, pp. 1-35, 2016.
L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, no. 86, pp. 2579-2605, 2008.
S. Mori, D. Wu, C. Ceritoglu, Y. Li, A. Kolasny, M. A. Vaillant, A. V. Faria, K. Oishi, and M. I. Miller, "MRICloud: Delivering high-throughput MRI neuroinformatics as cloud-based software as a service," Computing in Science & Engineering, vol. 18, no. 5, pp. 21-35, 2016.
H. Arai, Y. Chayama, H. Iyatomi, and K. Oishi, "Significant dimension reduction of 3D brain MRI using 3D convolutional autoencoders," pp. 5162-5165, 2018.
R. B. Postuma, D. Berg, M. Stern, W. Poewe, C. W. Olanow, W. Oertel, J. Obeso, K. Marek, I. Litvan, A. E. Lang, G. Halliday, C. G. Goetz, T. Gasser, B. Dubois, P. Chan, B. R. Bloem, C. H. Adler, and G. Deuschl, "MDS clinical diagnostic criteria for Parkinson's disease," Movement Disorders, vol. 30, no. 12, pp. 1591-1601, 2015.
F. J. Meijer, B. Goraj, B. R. Bloem, and R. A. Esselink, "Clinical application of brain MRI in the diagnostic work-up of parkinsonism," Journal of Parkinson's Disease, vol. 7, pp. 211-217, 2017.
K. Oishi, A. Faria, H. Jiang, X. Li, K. Akhter, J. Zhang, J. T. Hsu, M. I. Miller, P. C. M. van Zijl, M. Albert, C. G. Lyketsos, R. Woods, A. W. Toga, G. B. Pike, P. Rosa Neto, A. Evans, J. Mazziotta, and S. Mori, "Atlas-based whole brain white matter analysis using large deformation diffeomorphic metric mapping: application to normal elderly and Alzheimer's disease participants," NeuroImage, vol. 46, no. 2, pp. 486-499, 2009.
A. Y. Ng, M. I. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic.
Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, pp. 849- 856, 2001.
Observables in a Noncommutative Approach to the Unification of Quanta and Gravity. A Finite Model

Leszek Pysiak, Michael Heller†, Zdzisław Odrzygóźdź, Wiesław Sasin

Department of Mathematics and Information Science, Warsaw University of Technology, Plac Politechniki 1, 00-661 Warsaw, Poland
† Vatican Observatory, V-00120 Vatican City State. Correspondence address: ul. Powstańców Warszawy 13/94, 33-110 Tarnów, Poland ([email protected])

arXiv:gr-qc/0410010 (2 Oct 2004). DOI: 10.1007/s10714-005-0041-z

Abstract. We further develop a noncommutative model unifying quantum mechanics and general relativity proposed in Gen. Rel. Grav. (2004) 36, 111-126. Generalized symmetries of the model are defined by a groupoid $\Gamma$ given by the action of a finite group on a space $E$. The geometry of the model is constructed in terms of suitable (noncommutative) algebras on $\Gamma$. We investigate observables of the model, especially its position and momentum observables. This is not a trivial task, since the model is based on a noncommutative geometry and has strong nonlocal properties. We show that, in the position representation of the model, the position observable is a coderivation of a corresponding coalgebra, "coparallelly" to the well known fact that the momentum observable is a derivation of the algebra. We also study the momentum representation of the model. It turns out that, in the case of the algebra of smooth, quickly decreasing functions on $\Gamma$, the model in its "quantum sector" is nonlocal, i.e., there are no nontrivial coderivations of the corresponding coalgebra, whereas in its "gravity sector" such coderivations do exist. They are investigated.
Introduction

Noncommutative geometry is not only a quickly developing branch of mathematics [7,9,17,18], but it also finds interesting applications in physics. Within its conceptual framework several approaches have been elaborated to unify relativity and quanta (see, for example, [2,3,4,6,19,20]), and noncommutative methods penetrate ever more deeply into superstring and M-theories (first indication [8], first result [21], reviews [14,16]). In a series of works [10,11,12] we have proposed a scheme for unifying general relativity and quantum mechanics based on a noncommutative algebra related to a transformation groupoid which plays the role of generalized symmetries of the model. In [13] we tested the scheme by constructing a simplified (but still mathematically interesting) model and computing many of its details. In the present work, we discuss observables of this model, in particular its position and momentum observables. It is common knowledge that the algebra structure of observables is a crucial ingredient of the standard formulation of quantum mechanics; our analysis discloses that a coalgebra structure is also implicitly present in the standard approach.

The present paper is thus a continuation of [13] (we will refer to it as the "previous work"), but, for the reader's convenience, we briefly summarize its main features and results. We construct our transformation groupoid in the following way. Let $\tilde E$ be a differential manifold and $\tilde G$ a group acting smoothly and freely on $\tilde E$. In consequence, we have the bundle $(\tilde E, \pi_M, M = \tilde E/\tilde G)$. We can think of it as the frame bundle, with $\tilde G$ the Lorentz group, over space-time $M$. Now, we choose a finite subgroup $G$ of $\tilde G$, a cross section $S : M \to \tilde E$ of the above bundle (it need not be continuous), and define $E = \bigcup_{x \in M} S(x)G$. The fact that $G$ acts freely (to the right) on $E$ allows us to define the transformation groupoid structure on the Cartesian product $\Gamma = E \times G$ (for details see the previous work).
The choice of the cross section $S : M \to \tilde E$ can be thought of as the choice of a gauge for our model. To be more precise, every $\gamma \in \Gamma$ can be presented in the form $\gamma = (S(x)g, \bar g)$ where $g, \bar g \in G$. The set of all $\gamma$'s with beginning at $p \in E$ is denoted by $\Gamma_p$ (a "fiber" of $\Gamma$ over $p$). We define the Hilbert space

$$L^2(\Gamma_p) = \Big\{u : \Gamma_p \to \mathbb C : \sum_{g \in G} |u(S(x)g_0, g)|^2 < \infty\Big\}$$

with the scalar product

$$\langle u, v\rangle = \sum_{g \in G} \bar u(S(x)g_0, g)\, v(S(x)g_0, g).$$

If $L_h : L^2(\Gamma_p) \to L^2(\Gamma_p)$ is a left translation, it is straightforward to show that $L_h$ is a unitary operator with respect to the above scalar product. It transforms $S(x)$ into $S(x)h$ (for each fibre independently). In this sense, our choice of the gauge is unique up to unitary transformations.

The groupoid $\Gamma$ is a key structure of our model. It represents a space, the elements of which are symmetry operations of the model. The algebra $A = C^\infty(\Gamma, \mathbb C)$ of smooth complex valued functions on $\Gamma$ (if necessary, we shall assume that they vanish at infinity) plays the role of an algebraic counterpart of this symmetry space. In the previous work, we reconstructed the geometry of the groupoid $\Gamma = E \times G$ (including generalized Einstein's equations) in terms of this algebra.

The Cartesian product structure of the groupoid $\Gamma$ has enabled us to consider its two natural components. By projecting the full geometry onto the $E$-direction we recover the usual space-time geometry and, consequently, standard general relativity. It is a remarkable fact that this can equivalently be achieved by suitably averaging elements of the algebra $A$ (see Section 3 below). On the other hand, the regular representation $\pi_p : A \to \mathrm{End}(H_p)$ of the groupoid algebra $A$ in a Hilbert space $H_p$, for $p \in E$, leads to the $G$-component of the model, which can be considered as its quantum sector. In the present work, we develop the model by considering its observables, especially its position and momentum observables. This is not a trivial task.
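To make the groupoid algebra concrete, the following Python sketch builds a small transformation groupoid $\Gamma = E \times G$ and checks that the convolution product of functions on it is associative. The choices $G = S_3$, a two-point base $X$, and the dictionary encoding of functions on $\Gamma$ are illustrative assumptions, not part of the paper; the convolution formula $(a * b)(p, g) = \sum_h a(p, h)\, b(p\cdot h, h^{-1}g)$ is one standard convention.

```python
import itertools
import random

# S3 as permutations of {0,1,2}; mul(p, q) is the composition "p after q".
G = list(itertools.permutations(range(3)))
def mul(p, q):  return tuple(p[q[i]] for i in range(3))
def inv(p):     return tuple(sorted(range(3), key=lambda i: p[i]))

# E = X x G with the free right G-action (x, h).g = (x, h g).
X = [0, 1]
E = [(x, h) for x in X for h in G]
def act(p, g):  return (p[0], mul(p[1], g))

# Groupoid Gamma = E x G with convolution of a, b : Gamma -> C:
# (a * b)(p, g) = sum_h a(p, h) * b(p.h, h^{-1} g).
Gamma = [(p, g) for p in E for g in G]
def conv(a, b):
    return {(p, g): sum(a[(p, h)] * b[(act(p, h), mul(inv(h), g))] for h in G)
            for (p, g) in Gamma}

random.seed(0)
def rand_fn():
    return {gam: complex(random.uniform(-1, 1), random.uniform(-1, 1))
            for gam in Gamma}

a, b, c = rand_fn(), rand_fn(), rand_fn()
lhs, rhs = conv(conv(a, b), c), conv(a, conv(b, c))
assert all(abs(lhs[gam] - rhs[gam]) < 1e-9 for gam in Gamma)  # associativity
```

Associativity is what makes this convolution an algebra product; commutativity fails whenever $G$ is non-Abelian, which is the source of the noncommutativity of $A$.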
The model is based on a noncommutative algebra $A$, and to determine these two observables means to disentangle its local features (essentially rooted in the center of the algebra $A$) from its nonlocal properties.

The organization of our material runs as follows. In Section 2, we present the preliminaries of the model and study some properties of the subalgebra $A_{proj} \subset A$ which is isomorphic to the center of $A$ and serves to reconstruct the standard geometry of space-time $M$. In Section 3, we formulate the eigenvalue equation for observables of the model, and show that by averaging functions belonging to $A_{proj}$ we obtain functions on $M$. The position observable is discussed in Section 4. We show that, in the position representation of the model, the position observable is a coderivation of a corresponding coalgebra, whereas the momentum observable is a derivation of the algebra (as is well known). This remains valid in ordinary quantum mechanics (see Example 4.1 below). In Section 5, we study derivations and coderivations of the group algebra and its dual in the case of a finite group, and, in Section 6, we extend this analysis to the algebra and the dual coalgebra on the groupoid $\Gamma = E \times G$. To do so, we limit ourselves to the case $M = \mathbb R^n$, and consider the space $S$ of smooth, quickly decreasing functions on $\Gamma$ (the Schwartz space). It can be equipped with an algebra structure in two ways, which gives us two algebras, $S_1$ and $S_2$, on $\Gamma$. We then extend the space $S$ on $\Gamma$ to a distribution space $S'$ and obtain, correspondingly, two coalgebras $S'_1$ and $S'_2$ on $\Gamma$. We show that $S'_2$ defines the position representation of our model, and $S'_1$ its momentum representation. We also demonstrate that a coderivation of the coalgebra $S'_2$ is the position observable in the position representation, whereas a coderivation of the coalgebra $S'_1$ is the position observable in the momentum representation.
It turns out that in the $G$-component of the model there is no localization (there exist no nontrivial corresponding coderivations). This remains in agreement with the "noncommutative paradigm" of this aspect of the model. In the $E$-component of the model, the corresponding coderivations do exist and the local properties are preserved. Finally, in Section 7, we present the position and momentum observables in the elegant language of a sheaf of algebras on the groupoid $\Gamma$.

Preliminaries

Let, as usual, $\Gamma = E \times G$, with $G$ a finite group, be a transformation groupoid. We shall consider the algebra $A = C^\infty_0(\Gamma, \mathbb C)$ of smooth, complex valued functions on $\Gamma$ vanishing at infinity, i.e., functions vanishing at infinity on every connected component of $\Gamma$ diffeomorphic with $M$. Hermitian elements of this algebra are candidates for being observables of the model. However, a true observable should leave some trace in space-time $M$, where it can be registered by a measuring device. This is guaranteed by the following construction. Let $C^\infty_0(M)$ denote the algebra of smooth functions on $M$ vanishing at infinity. We define $A_{proj} = \mathrm{pr}^*(C^\infty_0(M))$ where $\mathrm{pr} = \pi_M \circ \pi_E$. $A_{proj}$ is an algebra without unit. Let $(C^\infty_0(M))_M$ denote the set of complex functions $f : M \to \mathbb C$ such that, for any such function and for any $x \in M$, there exists an open neighborhood $U$ of $x$ and a function $\varphi \in C^\infty_0(M)$ satisfying the condition $f|U = \varphi|U$. We will say that functions of $(C^\infty_0(M))_M$ are localized to $M$.

Lemma 2.1. $(C^\infty_0(M))_M = C^\infty(M)$, where $C^\infty(M)$ denotes the algebra of all smooth functions on $M$.

Proof. The inclusion $(C^\infty_0(M))_M \subset C^\infty(M)$ is obvious. It is enough to show that every smooth function $f : M \to \mathbb C$ belongs to $(C^\infty_0(M))_M$. But this is indeed the case: for any $x \in M$ there exists an open neighborhood $U$ of $x$ and a function $\varphi \in C^\infty_0(M)$ such that $\varphi|U = 1$; of course $\varphi f \in C^\infty_0(M)$, and $f|U = \varphi f|U$.
✷

Although the algebra $A_{proj}$ does not contain constant functions, we can always, on the strength of the above lemma, recover them locally. Let us also notice that $(A_{proj})_\Gamma$ is a subalgebra of the algebra $(C^\infty_0(\Gamma, \mathbb C))_\Gamma$. Therefore, we can safely assume that the Hermitian elements of $A_{proj}$ represent observables of the model.

Let us now consider the regular representation $\pi_p : A \to B(H_p)$ of the algebra $A$ in the Hilbert space $H_p = L^2(\Gamma_p)$ given by

$$(\pi_p(a)\psi)(\gamma) = (a * \psi)(\gamma) = \sum_{\gamma_1 \in \Gamma_p} a(\gamma \circ \gamma_1^{-1})\,\psi(\gamma_1)$$

for $\gamma \in \Gamma_p$. Let further $I_p : L^2(G) \to L^2(\Gamma_p)$ be the obvious isomorphism of Hilbert spaces. For every $a \in A$ we define $\bar\pi_p(a) = I_p^{-1} \circ \pi_p(a) \circ I_p$; clearly, $\bar\pi_p(a) \in B(L^2(G))$.

Lemma 2.2. $L_{g_0}^{-1}\,\bar\pi_p(a)\,L_{g_0} = \bar\pi_{pg_0}(a)$ for every $g_0 \in G$.

Proof. Let $\psi_p \in L^2(\Gamma_p)$ and $\psi \in L^2(G)$, with $\psi_p = I_p(\psi)$. We compute

$$(\bar\pi_{pg_0}(a)\psi_{pg_0})(\gamma) = (a * \psi_{pg_0})(\gamma) = \sum_{\gamma_1 \in \Gamma_{pg_0}} a(\gamma \circ \gamma_1^{-1})\psi_{pg_0}(\gamma_1) = \sum_{g_1 \in G} a(pg_0g_1, g_1^{-1}g)\psi(g_1) = \sum_{g_1' \in G} a(pg_1', g_1'^{-1}g_0g)\psi(g_0^{-1}g_1'),$$

where in the last step we made the substitution $g_0g_1 \to g_1'$. Now, let $\tilde\psi = L_{g_0}\psi$, and compute

$$(\pi_p(a)\tilde\psi_p)(\gamma) = \sum_{\gamma_1 \in \Gamma_p} a(\gamma \circ \gamma_1^{-1})\tilde\psi_p(\gamma_1) = \sum_{g_1 \in G} a(pg_1, g_1^{-1}g)\tilde\psi(g_1) = \sum_{g_1 \in G} a(pg_1, g_1^{-1}g_0g)\psi(g_1),$$

where in the last step we substituted $g \to g_0g$. Applying $L_{g_0}^{-1}$ to the left hand side thus yields the right hand side. ✷

We now define the norm in the algebra $A$ by

$$\|a\| = \sup_{p \in E} \|\pi_p(a)\|.$$

The above lemma can be written in the form $\pi_{S(x)g} = L_g^{-1} \circ \pi_{S(x)} \circ L_g$, hence $\|\pi_{S(x)g}(a)\| = \|\pi_{S(x)}(a)\|$. We see that the supremum is effectively taken over $M$, i.e., $\|a\| = \sup_{x \in M} \|\pi_{S(x)}(a)\|$. For a fixed $x \in M$ we have a finite dimensional vector space (of matrices), and in such a space all norms are equivalent. In the matrix representation we write $\pi_{S(x)}(a) = (M_a(x))_{i,j}$, and $\|a\| = \sup_x \sum_{i,j} |M_a(x)_{i,j}|$. Since the functions $a$ vanish at infinity, this supremum is finite.
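Lemma 2.2 can be checked numerically. The sketch below assumes $G = S_3$, takes $E$ to be a single fibre identified with $G$ itself, uses the matrix form $(\bar\pi_p(a)\psi)(g) = \sum_{g_1} a(pg_1, g_1^{-1}g)\psi(g_1)$ read off from the computation above, and takes the left translation to be $(L_{g_0}\psi)(g) = \psi(g_0^{-1}g)$; these conventions are assumptions made for the illustration.

```python
import itertools
import numpy as np

# S3 as permutations of {0,1,2}; mul(p, q) = p o q.
G = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):    return tuple(sorted(range(3), key=lambda i: p[i]))

# One fibre of E: points p taken in G itself, right action p.g = p g.
rng = np.random.default_rng(0)
a = {(p, g): complex(*rng.standard_normal(2)) for p in G for g in G}

def rep(p):
    """Matrix of pi_p(a) on L^2(G): (pi_p(a) psi)(g) = sum_{g1} a(p g1, g1^{-1} g) psi(g1)."""
    M = np.zeros((len(G), len(G)), dtype=complex)
    for g in G:
        for g1 in G:
            M[idx[g], idx[g1]] = a[(mul(p, g1), mul(inv(g1), g))]
    return M

def left_translation(g0):
    """(L_{g0} psi)(g) = psi(g0^{-1} g): a permutation (hence unitary) matrix."""
    L = np.zeros((len(G), len(G)))
    for g in G:
        L[idx[g], idx[mul(inv(g0), g)]] = 1.0
    return L

p, g0 = G[2], G[4]
L = left_translation(g0)
assert np.allclose(L.T @ L, np.eye(len(G)))            # L_{g0} is unitary
assert np.allclose(L.T @ rep(p) @ L, rep(mul(p, g0)))  # Lemma 2.2
```

Since $L$ is a permutation matrix, $L^{-1} = L^T$, so the last assertion is exactly $L_{g_0}^{-1}\bar\pi_p(a)L_{g_0} = \bar\pi_{pg_0}(a)$.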
We now complete the algebra $A$ in the above norm to a $C^*$-algebra. From now on we shall always consider the algebra $A$ as a $C^*$-algebra.

A function $\psi \in L^2(\Gamma)$ is said to be $G$-invariant if $\psi(p_1, g_1) = \psi(p_2, g_2)$ whenever there exists $g \in G$ such that $p_2 = p_1g$. The set of all $G$-invariant functions will be denoted by $L^2_G(\Gamma)$; it is evidently isomorphic with $L^2(M)$. The fact that $A$ is a $C^*$-algebra allows us to employ in our model the algebraic quantization method.

Eigenvalue Equation

Let $a$ be a Hermitian element of $A_{proj}$, and $\psi \in L^2_G(\Gamma)$. The eigenvalue equation for the observable $a$ assumes the form

$$\pi_p(a)\psi = \lambda_p\psi$$

where $\lambda_p$ is the eigenvalue of $\pi_p(a)$. Here $p \in \pi_M^{-1}(x)$, $x \in M$. For simplicity we consider a nondegenerate case. We compute

$$(\pi_p(a)\psi)(\gamma) = (\psi * a)(\gamma) = \sum_{g_1 \in G} \psi(p, g_1)\,a(pg_1, g_1^{-1}g) = \sum_{g_1 \in G} \psi(x)\,\tilde a(x) = |G|\,\psi(x)\,\tilde a(x),$$

where $\gamma \in \Gamma_p$, and we have introduced the abbreviations $\psi(x) = \psi(p, g)$ for every $g \in G$, and $\tilde a(x) = a(p, g)$ for every $p \in \pi_M^{-1}(x)$ and $g \in G$. If we further denote $(\pi_p(a)\psi)(x) = (\pi_p(a)\psi)(\gamma)$, where $\gamma \in \Gamma_p$ and $p \in \pi_M^{-1}(x)$, then we finally have

$$(\pi_p(a)\psi)(x) = |G|\,\psi(x)\,\tilde a(x).$$

Hence, the eigenvalue of this observable is $\lambda_p = |G|\cdot\tilde a(x)$ for every $p \in \pi_M^{-1}(x)$. As we can see, the observable $a \in A$ indeed leaves a trace in space-time $M$.

In the previous work we have shown that the transition from the noncommutative geometry on the groupoid $\Gamma$ to the classical geometry on the manifold $M$ can be done with the help of a suitable averaging procedure applied to elements of the groupoid algebra. The following lemma establishes the equivalence of this averaging method with the one using the subalgebra $A_{proj}$.

Lemma 3.1. The averaging of a function belonging to $A_{proj}$ gives a function of $C^\infty_0(M)$.

Proof. Let $A_a$ be the matrix representation of $a \in A$. Then its averaging is $\langle A_a\rangle = \frac{1}{|G|}\mathrm{Tr}(A_a)$. If $a \in A_{proj}$ then $a = \mathrm{pr}^*f$ for some $f \in C^\infty_0(M)$. By averaging we obtain

$$\langle A_{\mathrm{pr}^*f}\rangle = \frac{1}{|G|}\cdot f\cdot \mathrm{Tr}[1] = f,$$

where $[1]$ denotes the matrix with all entries equal to one; of course, $\mathrm{Tr}[1] = |G|$.
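The two facts just derived, that $\pi_p(\mathrm{pr}^*f)$ acts on $G$-invariant states as multiplication by $|G|\cdot f(x)$ and that the averaging $\frac{1}{|G|}\mathrm{Tr}$ recovers $f(x)$, can be seen in a toy computation. Here $|G| = 6$, the matrix of $\pi_p(\mathrm{pr}^*f)$ at a fixed $x$ is $f(x)\cdot[1]$ (the all-ones matrix scaled by $f(x)$, as in the proof of Lemma 3.1), and the numerical values are arbitrary stand-ins.

```python
import numpy as np

n = 6                           # |G|, e.g. G = S3
c = 0.7 - 0.2j                  # value f(x) of some f in C_0^inf(M) at a chosen x
M_a = c * np.ones((n, n))       # matrix of pi_p(pr* f): all entries equal f(x)

psi = np.full(n, 1.3 + 0.1j)    # a G-invariant state: constant on the fibre

# pr* f acts on G-invariant states as multiplication by |G| * f(x) ...
assert np.allclose(M_a @ psi, n * c * psi)
# ... and the averaging (1/|G|) Tr recovers f(x).
assert abs(np.trace(M_a) / n - c) < 1e-12
```

A non-invariant $\psi$ would generally not be an eigenvector, which is why the eigenvalue equation is posed on $L^2_G(\Gamma)$.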
✷

Position Operator as a Coderivation

In this section we consider the position observable of our model. The projection $\mathrm{pr} : \Gamma \to M$, $\mathrm{pr} = \pi_M \circ \pi_E$, which is clearly connected with localization in space-time $M$, is not a numerical function (it has no values in $\mathbb R$ or $\mathbb C$), and consequently it does not belong to the algebra $A$. However, in the four-dimensional case, if we choose a local coordinate map $x = (x^\mu)$, $\mu = 0, 1, 2, 3$, in $M$, then the projection $\mathrm{pr}$ determines four observables in the domain $D_x$ of $x$:

$$\mathrm{pr}^\mu = x^\mu \circ \mathrm{pr}.$$

We thus have the system of four position observables $\mathrm{pr} = (\mathrm{pr}^0, \mathrm{pr}^1, \mathrm{pr}^2, \mathrm{pr}^3)$. For a fixed $\mu$ one has $\mathrm{pr}^\mu \in A_{proj}|_{D_x}$, and it is Hermitian. Let us notice that the projection $\mathrm{pr} : \Gamma \to M$ contains, in a sense, the information about all possible local observables $\mathrm{pr}^\mu$. This can be regarded as a "noncommutative formulation" of the fact that there is no absolute position but only position with respect to a local coordinate system.

From the previous work (Section 7) it follows that in the matrix representation of the algebra $A$ one has, in the local map,

$$(\pi_q(\mathrm{pr}^1))(\xi) = \xi^T \cdot M_{\mathrm{pr}^1} = x^1 \cdot \xi, \quad x \in M,$$

where $\xi^T$ is $\xi \in \mathbb C^n$ transposed, and $M_{\mathrm{pr}^1}$ denotes the matrix corresponding to the projection $\mathrm{pr}^1$. We see that the position observable in the "quantum sector" of our model has the same form as in ordinary quantum mechanics. This indicates that we are working in the position representation of the model.

We shall now demonstrate the connection between the position operator and the coderivation of a coalgebra. Let $(A, +, m)$ be an associative (not necessarily commutative) algebra over the field of complex numbers $\mathbb C$, where $m : A \otimes A \to A$ is the product map. We define the dual space $A^* = \mathrm{Hom}(A, \mathbb C)$, which has the structure of a coalgebra with the coproduct $\Delta : A^* \to A^* \otimes A^*$ given by

$$\Delta(\varphi)(f, g) = \varphi(m(f \otimes g)), \quad f, g \in A,\ \varphi \in A^*.$$

We assume that $(A \otimes A)^* \subset A^* \otimes A^*$. This is always true for finite dimensional algebras.
Rather than considering a completion of the tensor product $A^* \otimes A^*$ (see [1, Chapters 5-6]), we shall slightly modify the coproduct definition (in Section 6). We recall that a derivation $X$ of the algebra $A$ is, by definition, a linear map $X : A \to A$ satisfying the Leibniz rule

$$X \circ m = m \circ (X \otimes \mathrm{id}_A) + m \circ (\mathrm{id}_A \otimes X).$$

The set $\mathrm{Der}A$ of all derivations of the algebra $A$ is an $A$-module. Let $X^* : A^* \to A^*$ be a linear mapping satisfying the following condition

$$\Delta \circ X^* = (X^* \otimes \mathrm{id}_{A^*}) \circ \Delta + (\mathrm{id}_{A^*} \otimes X^*) \circ \Delta,$$

which we shall call the co-Leibniz rule. For $\varphi \in A^*$ it can be written in the form

$$\Delta(X^*(\varphi)) = X^*(\varphi_{(1)}) \otimes \varphi_{(2)} + \varphi_{(1)} \otimes X^*(\varphi_{(2)}).$$

Definition 4.1. Let $A$ be an associative, not necessarily commutative, algebra, and $A^*$ its dual coalgebra. A coderivation of the coalgebra $A^*$ is a linear map $X^* : A^* \to A^*$ satisfying the co-Leibniz rule.

Proposition 4.1. Let $X : A \to A$ be an endomorphism of an algebra $A$ as a linear space, and $X^* : A^* \to A^*$ its adjoint endomorphism, i.e., $(X^*(\varphi))(a) = \varphi(X(a))$, $\varphi \in A^*$, $a \in A$. Then $X$ satisfies the Leibniz rule if and only if $X^*$ satisfies the co-Leibniz rule.

Proof. Let us suppose that $X$ satisfies the Leibniz rule; then

$$(\Delta(X^*\varphi))(a_1, a_2) = (X^*\varphi)(a_1 \cdot a_2) = \varphi(X(a_1 \cdot a_2)) = \varphi(X(a_1)a_2) + \varphi(a_1X(a_2)) = \Delta\varphi(X(a_1), a_2) + \Delta\varphi(a_1, X(a_2)) = \big(((X^* \otimes \mathrm{id}_{A^*}) \circ \Delta + (\mathrm{id}_{A^*} \otimes X^*) \circ \Delta)(\varphi)\big)(a_1, a_2),$$

and similarly in the other direction. ✷

Example 4.1. Let us consider the algebra $A = (L^1(\mathbb R), *)$, where $*$ denotes the convolution. The Fourier transform of $f \in L^1(\mathbb R)$ is

$$\hat f(x) = \int_{\mathbb R} e^{-ipx}f(p)\,dp.$$

We have

$$\int_{\mathbb R} e^{-ipx}\frac{d}{dp}f(p)\,dp = ix\int_{\mathbb R} e^{-ipx}f(p)\,dp$$

or, in abbreviated form, $\widehat{f'}(x) = ix\cdot\hat f(x)$, $x \in \mathbb R$. The dual of $A = L^1(\mathbb R)$ is $A^* = L^\infty(\mathbb R)$ (with pointwise multiplication). Let us denote $(X^*\varphi)(x) = x\varphi(x)$ for $\varphi \in L^\infty(\mathbb R)$; then $X^*$ is the operator adjoint to the operator $X : A \to A$ given by $X = -i\frac{d}{dp}$. Since $\mathbb R$ is an Abelian group, as a coproduct in $L^\infty(\mathbb R)$ we have

$$\Delta\varphi(x_1, x_2) = \varphi(x_1 + x_2)$$

for $\varphi \in L^\infty(\mathbb R)$, $x_1, x_2 \in \mathbb R$. It can be readily checked that $X^*$ is a coderivation of the coalgebra $L^\infty(\mathbb R)$.
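The Fourier identity used in Example 4.1 can be verified numerically. The sketch below approximates $\hat f(x) = \int e^{-ipx}f(p)\,dp$ by a Riemann sum for the Gaussian $f(p) = e^{-p^2}$ (an arbitrary Schwartz-class stand-in) and checks that the transform of $f'$ equals $ix\cdot\hat f(x)$.

```python
import numpy as np

p = np.linspace(-10, 10, 4001)
dp = p[1] - p[0]
f  = np.exp(-p**2)            # a rapidly decreasing test function
fp = -2 * p * np.exp(-p**2)   # its derivative f'(p), computed analytically

def ft(h, x):
    """Riemann-sum approximation of h_hat(x) = int e^{-ipx} h(p) dp."""
    return np.sum(h * np.exp(-1j * p * x)) * dp

for x in (0.3, 1.0, 2.5):
    # integration by parts: (f')^(x) = i x * f_hat(x)
    assert abs(ft(fp, x) - 1j * x * ft(f, x)) < 1e-6
```

Because the integrand and all its derivatives vanish at the endpoints, the Riemann sum converges extremely fast here, so the tolerance is comfortable.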
This coderivation is called the position operator in the position representation. The above example suggests that in noncommutative generalizations of quantum mechanics we can treat coderivations of suitable coalgebras as counterparts of the position operator (in the position representation). We explore this possibility in the following sections.

Derivations and Coderivations on a Finite Group

In this section, we apply the considerations of the previous section to quantum mechanics on a finite group $G$. In this case, we have the group algebra $H = \mathbb CG$ of formal linear combinations of elements of $G$ (which corresponds to the momentum representation) and the algebra $H^* = C(G)$ of complex functions on $G$ with pointwise multiplication (corresponding to the position representation of the model). Let us notice that both $H$ and $H^*$ are bialgebras, with coproducts being conjugations of products of the corresponding dual algebras. We assume that $G$ is a non-Abelian group.

Let us consider the Fourier transformation of the finite group $G$,

$$F : G \to \prod_{\lambda \in \hat G} M_{d_\lambda}(\mathbb C) \equiv M,$$

where $d_\lambda$ is the dimension of the representation from the class $\lambda$, $\lambda \in \hat G$, and $\hat G$ is the dual object of $G$, given by $F(g) = (T_\lambda(g))_{\lambda \in \hat G}$ with $T_\lambda(g) \in M_{d_\lambda}(\mathbb C)$. This is extended by linearity to the whole of the group algebra, $F : \mathbb CG \to M$ [15, Chapter 12]. As is well known, $F$ is an isomorphism of algebras. We can thus replace the algebra $\mathbb CG$ by the corresponding matrix algebra. The latter algebra has only inner derivations, with $\dim(\mathrm{Inn}(M_n(\mathbb C))) = n^2 - 1$.

Lemma 5.1. Let $A$ be a unital algebra, $J$ an ideal defined by a central idempotent, i.e., $J = eA = Ae$, $e^2 = e$, $e \in Z(A)$, and $D$ a derivation of the algebra $A$. Then $DJ \subset J$ and $De = 0$.

Proof. The fact that $e$ is idempotent implies $De = D(e^2) = (2De)e \in J$. From $1 = e + e'$, where $e' = 1 - e$ is also an idempotent, it follows that $ee' = e'e = 0$, and we have $D1 = De + De' = 0$. Therefore $De = -De' \in J' = e'A$; since $J \cap J' = \{0\}$, we get $De = 0$. Consequently, $D(ae) = (Da)e + aDe = (Da)e \in J$ for any $a \in A$, and thus $DJ \subset J$. ✷

Lemma 5.2. Any isomorphism of algebras determines an isomorphism of the corresponding derivation spaces in such a way that inner derivations are transformed into inner derivations. ✷

Theorem 5.1. In the algebra $\mathbb CG$ there are only inner derivations, and in the algebra $C(G)$ there are no nonzero derivations.

Proof. (1) Let $J_{\lambda_0} \subset M$ be the ideal given by $J_{\lambda_0} = \{(0, 0, \ldots, A_{\lambda_0}, 0, \ldots, 0) : A_{\lambda_0} \in M_{d_{\lambda_0}}(\mathbb C)\}$. If $\bar D$ is a derivation of the algebra $M$ then, on the strength of Lemma 5.1, $\bar DJ_{\lambda_0} \subset J_{\lambda_0}$. Since $J_{\lambda_0}$ is isomorphic with $M_{d_{\lambda_0}}(\mathbb C)$, which has only inner derivations, we have $\bar D|_{J_{\lambda_0}} = \mathrm{ad}B_{\lambda_0}$ where $B_{\lambda_0} \in J_{\lambda_0}$. Let now $B = (B_\lambda)_{\lambda \in \hat G}$ be the sequence assembled from these elements.
Then $\bar D = \mathrm{ad}B$ and, on the strength of Lemma 5.2, the algebra $\mathbb CG$ has only inner derivations.

(2) Each $\delta_{g_0}$ is a central idempotent in $C(G)$, i.e., $\delta_{g_0}\cdot\delta_{g_0} = \delta_{g_0}$, and such elements form a basis $\{\delta_g\}$ of $C(G)$. By Lemma 5.1, $D(\delta_g) = 0$ for every $g$, and hence $D(a) = 0$ for every $a \in C(G)$ and any derivation $D$ of $C(G)$. ✷

To sum up, the bialgebra $\mathbb CG$ admits only (nonzero) derivations, and the bialgebra $C(G)$ only (nonzero) coderivations.

Our goal is now to discuss the position observable on a finite group $G$. It is given by a coderivation of the coalgebra $C(G)$, and can be found as the adjoint of a derivation of the algebra $\mathbb CG$. The latter algebra has only inner nonzero derivations; they are of the form

$$X_{g_0}(g) = (\mathrm{ad}\,g_0)(g) = g_0g - gg_0$$

with $g_0 \in G$. Therefore, if $f \in C(G)$, we dually have

$$(X^*_{g_0}(f))(g) = f(X_{g_0}g) = f(g_0g) - f(gg_0).$$

The eigenvalue equation assumes the form $X^*_{g_0}f = \lambda\cdot f$, where $\lambda \in \mathbb C$, or

$$f(g_0g) - f(gg_0) = \lambda\cdot f(g).$$

This equation has nontrivial solutions. For instance, it can easily be seen that the eigenspace corresponding to the eigenvalue $\lambda = 0$ contains the space of central functions (we recall that a function $f$ is central if $f(g) = f(g_0gg_0^{-1})$ for any $g, g_0 \in G$). Therefore, we have a well determined localization on a finite group. This remains in agreement with the fact that $C(G)$ is a commutative algebra.

Localization on a Transformation Groupoid

In this section, we extend the above analysis to the case of the transformation groupoid $\Gamma = E \times G$, where $G$ is a finite (non-Abelian) group. To do this, we limit our considerations to the case when the base space is $M = \mathbb R^n$, and we extend the space of functions on $\Gamma$ to a distribution space on $\Gamma$. Let then $S = S(\Gamma, \mathbb C)$ be the space of smooth, quickly decreasing functions on $\Gamma$, also called the Schwartz space (we recall that the Schwartz space on $\mathbb R^n$ is the vector space $S$ of smooth functions on $\mathbb R^n$ such that, for every $\varphi \in S$, $\varphi$ and its derivatives decrease more rapidly than any power of $1/|x|$, $x \in \mathbb R^n$, as $|x|$ goes to infinity [5, p. 474]).
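The $\lambda = 0$ statement can be tested directly for $G = S_3$: building the matrix of $X^*_{g_0}$ on $C(G)$ and applying it to a class function (here the fixed-point character of the permutation representation, an illustrative choice) gives zero for every $g_0$.

```python
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))  # S3
idx = {g: i for i, g in enumerate(G)}
def mul(p, q): return tuple(p[q[i]] for i in range(3))

def coderivation(g0):
    """Matrix of X*_{g0} on C(G): (X*_{g0} f)(g) = f(g0 g) - f(g g0)."""
    M = np.zeros((len(G), len(G)))
    for g in G:
        M[idx[g], idx[mul(g0, g)]] += 1.0
        M[idx[g], idx[mul(g, g0)]] -= 1.0
    return M

# A central (class) function: f(g) = number of fixed points of g.
f = np.array([sum(g[i] == i for i in range(3)) for g in G], dtype=float)

# Central functions lie in the lambda = 0 eigenspace of every X*_{g0}.
for g0 in G:
    assert np.allclose(coderivation(g0) @ f, 0.0)
```

This works because $gg_0 = g_0^{-1}(g_0g)g_0$, so a class function takes the same value on $g_0g$ and $gg_0$ and the two terms cancel.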
In the following we consistently use the matrix representation of $S$; we thus have $S = S(\mathbb R^n) \otimes M_n(\mathbb C)$. A (nonunital) algebra structure can be introduced in $S$ in two ways:

1. $m_1[(f \otimes A) \otimes (g \otimes B)] = fg \otimes A\cdot B$, for $f, g \in S(\mathbb R^n)$, $A, B \in M_n(\mathbb C)$, where $f$ and $g$ are multiplied pointwise, and $A$ and $B$ are multiplied in the usual matrix way; we write $S_1 = (S, m_1)$.

2. $m_2[(f \otimes A) \otimes (g \otimes B)] = f * g \otimes A\cdot B$, where $*$ denotes the usual convolution of functions; in this case we write $S_2 = (S, m_2)$.

We distinguish two kinds of derivations of the above algebras:

1. Vertical derivations $X_A$, $A \in M_n(\mathbb C)$, are of the form $X_A = \mathrm{id} \otimes \mathrm{ad}A$, i.e., for $f \in S(\mathbb R^n)$, $X_A(f \otimes B) = [1 \otimes A, f \otimes B]$ on the basis elements, extended by linearity.

2. Horizontal derivations are linearly generated by $X_k = D_k \otimes \mathrm{id}_{M_n(\mathbb C)}$, where $k = 1, \ldots, n$ and $D_k = \frac{1}{i}\frac{\partial}{\partial x_k}$, i.e., $X_k(g \otimes B) = \frac{1}{i}\frac{\partial g}{\partial x_k} \otimes B$, extended by linearity.

As the distribution space we take $S' = S'(\mathbb R^n) \otimes M_n(\mathbb C)$, where $S'(\mathbb R^n)$ is the dual of $S(\mathbb R^n)$ (but $S'$ is not the dual of $S$). The space $S'$ has no algebra structure. However, if we slightly modify the usual condition for a coproduct, we can introduce in it (in two ways) a coalgebra structure. The usual coproduct would be a mapping $\Delta : S'(\mathbb R^n) \to S'(\mathbb R^n) \otimes S'(\mathbb R^n)$, whereas we assume

$$\Delta_i : S'(\mathbb R^n) \to (S(\mathbb R^n) \otimes S(\mathbb R^n))', \quad i = 1, 2$$

(which is also valid in the usual approach in the finite dimensional case), and for $T \in S'(\mathbb R^n)$ we define

$$(\Delta_iT)(f \otimes g) = T(m_i(f \otimes g)),$$

where the $m_i$ are restricted to the first factors in the corresponding tensor products; in other words, $\Delta_i = m_i^*$, so our coproduct is a dual homomorphism of linear spaces. This shows that our definition is very natural. However, we must accordingly adapt the associativity condition.
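That $X_A = \mathrm{id} \otimes \mathrm{ad}A$ satisfies the Leibniz rule with respect to $m_1$ can be checked on simple tensors: since $X_A(f \otimes B) = f \otimes (AB - BA)$, the rule reduces to $\mathrm{ad}A(BC) = (\mathrm{ad}A\,B)C + B(\mathrm{ad}A\,C)$. A numerical sketch, with random $3 \times 3$ matrices and sampled function values as stand-ins for Schwartz functions:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
x = np.linspace(-2, 2, 7)
f, g = np.exp(-x**2), x * np.exp(-x**2)   # sampled stand-ins for f, g in S(R)

def m1(t1, t2):
    """Product on S1 = S(R) (x) M_n(C): (f (x) B)(g (x) C) = fg (x) BC."""
    (f1, M1), (f2, M2) = t1, t2
    return (f1 * f2, M1 @ M2)

def X_A(t):
    """Vertical derivation X_A = id (x) ad A: f (x) B |-> f (x) (AB - BA)."""
    fv, M = t
    return (fv, A @ M - M @ A)

lhs = X_A(m1((f, B), (g, C)))
r1 = m1(X_A((f, B)), (g, C))
r2 = m1((f, B), X_A((g, C)))
# Leibniz rule: the function legs agree and the matrix legs add up.
assert np.allclose(lhs[0], r1[0]) and np.allclose(r1[0], r2[0])
assert np.allclose(lhs[1], r1[1] + r2[1])
```

The same cancellation, $ABC - BCA = (AB - BA)C + B(AC - CA)$, is what makes $\mathrm{ad}A$ a derivation of any associative algebra.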
We introduce the following notation:

$$\Delta_i\bar\otimes\,\mathrm{id} : (S(\mathbb R^n) \otimes S(\mathbb R^n))' \to (S(\mathbb R^n) \otimes S(\mathbb R^n) \otimes S(\mathbb R^n))',$$

where $\Delta_i\bar\otimes\,\mathrm{id} = (m_i \otimes \mathrm{id})^*$ and, analogously, $\mathrm{id}\,\bar\otimes\Delta_i = (\mathrm{id} \otimes m_i)^*$. With this notation our associativity condition reads

$$(\Delta_i\bar\otimes\,\mathrm{id}) \circ \Delta_i = (\mathrm{id}\,\bar\otimes\Delta_i) \circ \Delta_i.$$

It can easily be checked that the coproducts defined above satisfy this condition. We define the coproduct on $M_n(\mathbb C)$ in the usual way: in the basis $\{E_{ij}\}$ we have $\Delta_{M_n(\mathbb C)}(E_{ij}) = \sum_k E_{ik} \otimes E_{kj}$, extended by linearity. Finally, we set $\bar\Delta_i = \Delta_i \otimes \Delta_{M_n(\mathbb C)}$.

As in Section 4, coderivations of $S'$ are defined to be linear mappings of $S'$ adjoint to derivations of $S$. With the coproduct defined as above, this requires a slight modification of the co-Leibniz rule: if $X$ is a derivation of $S(\mathbb R^n)$, then $X^*$ is a coderivation of $S'(\mathbb R^n)$, i.e., it satisfies the co-Leibniz rule

$$\Delta \circ X^* = (X^*\bar\otimes\,\mathrm{id}) \circ \Delta + (\mathrm{id}\,\bar\otimes X^*) \circ \Delta,$$

where we define $X^*\bar\otimes\,\mathrm{id} = (X \otimes \mathrm{id})^*$, and analogously for $\mathrm{id}\,\bar\otimes X^*$.

Remembering that the distributional derivative of $T$ is defined by $(D_kT)(f) = -T(D_kf)$, we have $X = D_k : S_1(\mathbb R^n) \to S_1(\mathbb R^n)$ with $D_k = \frac{1}{i}\frac{\partial}{\partial x_k}$, and $X^* : S'_1(\mathbb R^n) \to S'_1(\mathbb R^n)$ with $X^* = -D_k$. It can readily be checked that $-D_k$ satisfies the modified co-Leibniz rule above.

Now, we consider the distributional Fourier transform $F^* : S'_1(\mathbb R^n) \to S'_2(\mathbb R^n)$, given by $(F^*T)(f) = T(Ff)$, where $Ff$ is to be understood as

$$(Ff)(x) = \int_{\mathbb R^n} f(t)\,e^{-itx}\,d\mu(t)$$

with $d\mu(t) = dt/(2\pi)^{n/2}$, the normalized Lebesgue measure. $F$ is a continuous algebra isomorphism of $S_1(\mathbb R^n)$ onto $S_2(\mathbb R^n)$, and $F^*$ is a continuous linear isomorphism of $S'_1(\mathbb R^n)$ onto $S'_2(\mathbb R^n)$ which is additionally a coalgebra map, i.e., it satisfies the condition

$$\Delta_2 \circ F^* = (F^* \otimes F^*) \circ \Delta_1.$$

One can check that, for $X^* = -D_k$, one has $F^*(X^*T) = x_k(F^*T)$; that is, for $\tilde T \in S'_2(\mathbb R^n)$, $\tilde X^*\tilde T = x_k\cdot\tilde T$. It can also be verified that $\tilde X^*$ is a coderivation of $S'_2(\mathbb R^n)$.
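The statement that $F$ is an algebra isomorphism of $S_1$ (pointwise product) onto $S_2$ (convolution) has a familiar discrete analogue: the DFT of a pointwise product is $1/N$ times the circular convolution of the DFTs. A quick check with numpy (the length $N = 16$ and the random vectors are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
a, b = rng.standard_normal(N), rng.standard_normal(N)

A, B = np.fft.fft(a), np.fft.fft(b)
# circular convolution of the transforms
conv = np.array([sum(A[m] * B[(k - m) % N] for m in range(N))
                 for k in range(N)])
# discrete analogue of: Fourier maps the pointwise product to convolution
assert np.allclose(np.fft.fft(a * b), conv / N)
```

In the continuum version the factor $1/N$ is absorbed by the normalized measure $d\mu(t) = dt/(2\pi)^{n/2}$, which is why that normalization is used above.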
The question remains whether there exist nonzero vertical coderivations, i.e., coderivations of the form $\bar Y^* = \mathrm{id} \otimes Y^*$, where $Y^*$ is a coderivation of the coalgebra $M_n(\mathbb C)$. Clearly, the answer is "no", since there are no nonzero derivations of $M_n(\mathbb C)^*$ (with the pointwise multiplication).

Now we collect the results and conclusions of this lengthy analysis. First, we make clear that by the position representation of our model we understand the representation in which the position operator has the form of multiplication by a coordinate, and by the momentum representation the one in which the momentum operator has the form of multiplication by a coordinate. In this sense, the coalgebra $S'_2$ defines the position representation of our model, and the coalgebra $S'_1$ its momentum representation, and we have

Corollary 6.1.
1. The operator $\tilde X^*_k \otimes \mathrm{id}_{M_n(\mathbb C)}$ is a coderivation of the coalgebra $S'_2$, and it is the position observable in the position representation.
2. The operator $X^* \otimes \mathrm{id}_{M_n(\mathbb C)}$ is a coderivation of the coalgebra $S'_1$, and it is the position observable in the momentum representation. ✷

We can see that the localization on the groupoid $\Gamma$ comes only from the "horizontal component" of our model, which essentially reflects the space-time geometry, whereas its "vertical component", representing the quantum sector of the model, is entirely nonlocal.

A Sheaf Structure on the Groupoid

In this section, we show how one could elegantly describe the position and momentum observables of our model by exploiting a sheaf structure on the transformation groupoid $\Gamma$. On the Cartesian product $\Gamma = E \times G$ there exists the natural product topology; however, we shall consider a weaker topology in which the open sets are of the form $\pi_E^{-1}(U)$, where $U$ is open in the manifold topology $\tau_E$ on $E$. Every such open set is also open in the topology $\tau_E \times \tau_G$; indeed, every such set is given by $\pi_E^{-1}(U) = U \times G$.
Let A be a functor which associates with an open set U ×G the involutive noncommutative algebra A(U × G) of smooth compactly supported complex valued functions with the ordinary addition and the convolution multiplication. As it can be easily seen, A is a sheaf of noncommutative algebras on the topological space (Γ, π −1 E (τ E )). The projection pr: Γ → M can be locally interpreted as a set of (local) cross sections of the sheaf A (i.e. as a set of position observables). Indeed, for the domain D x of any coordinate map x = (x 0 , x 1 , x 2 , x 3 ), the composition x • pr = (x 0 • pr, x 1 • pr, x 2 • pr, x 3 • pr) is a set of such local cross sections of A on the open set π −1 E (D x × G). The global mapping pr : Γ → M is not a cross section of A. Let us notice that to a measurement result which is not a number but a set of numbers there does not correspond a single observable but rather a set of observables, i. e., a set of (local) cross sections of the sheaf A. Now, we define the derivation morphism of the sheaf A over an open set U ∈ π −1 E (τ E ) as a family of mappings X = (X W ) W ⊂U such that X W : A(W ) → A(W ) is a derivation of the algebra A(W ), and for any W 1 , W 2 open and W 1 ⊂ W 2 ⊂ U, the following diagram commutes A(W 2 ) A(W 2 ) A(W 1 ) A(W 1 ) ✲ ✻ ✲ ❄ ❄ X(W 2 ) X(W 1 ) ρ W 2 W 1 ρ W 2 W 1 where ρ W 1 W 2 is the known restriction homomorphism. The family of all derivation morphisms indexed by open sets is a sheaf of Z(A)-modules where Z(A) denotes the sheaf of centers of the algebras A(U), U ∈ π −1 E (τ e ). Components of the momentum observable∂ µ are cross sections of the sheaf of Z(A)-modules of derivations of the sheaf A over domains of coordinate maps, and the representation π U : A(U) → π U (A(U)), where U ∈ π −1 E (τ E ), transfers the sheaf structure from the groupoid Γ to the family of operator algebras over the topological space (Γ, π −1 E (τ E )). Lemma 3. 
1 1The averaging of a function belonging to A proj gives a function of C ∞ 0 (M). Example 4. 1 1Let us consider the algebra A = (L 1 (R), * ), where * denotes the convolution. The Fourier transformation of f ∈ L 1 (R) giveŝ Lemma 5. 1 1Let A be a unital algebra, J an ideal defined by a central idempotent, i.e., J = eA = Ae, e 2 = e, e ∈ Z(A), and D a derivation of the algebra A. Then DJ ⊂ J, De = 0. Proof The fact that e is idempotent implies De = D(e 2 ) = (2De)e ∈ J. From 1 = 1e + e ′ , where e ′ = 1 − e is also an idempotent, it follows that ee ′ = e ′ e = 0, and we haveD1 = De + De ′ = 0. Therefore, De = −De ′ , and D(ae) = (Da)e ∈ J for any a ∈ A. Thus DJ ⊂ J.✷ Lemma 5.2 Any isomorphism of algebras determines the isomorphism of the corresponding derivation spaces in such a way that the inner derivations are transformed into the inner derivations. ✷ Theorem 5.1 In the algebra CG there are only inner derivations, and in the algebra C(G) there are no nonzero derivations. Geometric Models for Noncommutative Algebras. A Cannas Da Silva, A Weinstein, American Mathematical SocietyBerkeleyCannas da Silva, A. and Weinstein, A. (1999). Geometric Models for Noncommutative Algebras, American Mathematical Society, Berkeley. . A Chamseddine, Int. J. Mod. Phys. 16Chamseddine, A. (2001). Int. J. Mod. Phys. 16, 759-766. . A H Chamseddine, A Connes, Phys. Rev. Lett. 24Chamseddine, A.H. and Connes, A. (1996). Phys. Rev. Lett. 24, 4868- 4871. . A H Chamseddine, G Felder, J Fröhlich, Commun. Math. Phys. 155Chamseddine, A. H., Felder, G. and Fröhlich, J. (1993). Commun. Math. Phys. 155, 205-217. . Y Choquet-Bruhat, C Dewitt-Morette, M Dillard-Bleick, Choquet-Bruhat, Y., DeWitt-Morette, C. and Dillard-Bleick, M. (1982). . Analysis, Manifolds and Physics. Analysis, Manifolds and Physics, North-Holland, Amsterdam -New York -Oxford. . A Connes, Comm. Math. Phys. 182Connes, A. (1996). Comm. Math. Phys. 182, 155-176. Noncommutative Geometry. 
A Connes, Academic PressNew York -LondonConnes, A. (1994). Noncommutative Geometry, Academic Press, New York -London. . A Connes, M Douglas, A Schwarz, J. High Energy Phys. no. 02. paper 03Connes, A., Douglas, M. and Schwarz, A. (1998), J. High Energy Phys. no. 02. paper 03. Elements of Noncommutative Geometry. J M Gracia-Bondía, J C Várilly, H Figueroa, Birkhäuser, Boston -Basel -BerlinGracia-Bondía, J.M., Várilly, J.C. and Figueroa, H. (2000). Elements of Noncommutative Geometry, Birkhäuser, Boston -Basel -Berlin. . M Heller, W Sasin, D Lambert, J. Math. Phys. 38Heller, M. Sasin, W. and Lambert, D. (1997). J. Math. Phys. 38, 5840- 5853. . M Heller, W Sasin, Int. J. Theor. Phys. 38Heller, M. and Sasin, W. (1999). Int. J. Theor. Phys. 38, 1619-1642. . M Heller, W Sasin, Z Odrzygóźdź, J. Math. Phys. 41Heller, M., Sasin, W. and Odrzygóźdź, Z. (2000). J. Math. Phys. 41, 5168-5179. . M Heller, Z Odrzygóźdź, L Pysiak, W Sasin, Gen. Rel. Grav. 36Heller, M., Odrzygóźdź, Z., Pysiak, L. and Sasin, W. (2004). Gen. Rel. Grav. 36, 111-126. Noncommutative Geometry, Matrices and String Theories, Thesis. J L Karczmarek, Princeton UniversityKarczmarek, J.L. (2002). Noncommutative Geometry, Matrices and String Theories, Thesis, Princeton University (http://schwinger.harv- ard.edu/karczmar). Elements of the Representation Theory. A A Kirillov, NaukaMoscowin RussianKirillov, A.A. (1978). Elements of the Representation Theory, Nauka, Moscow (in Russian). . A Konechny, A Schwarz, Phys. Rept. 360Konechny, A. and Schwarz, A. (2002). Phys. Rept. 360, 353-465. An Introduction to Noncommutative Spaces and Their Geometries. G Landi, SpringerBerlin -Heidelberg -New YorkLandi, G. (1997). An Introduction to Noncommutative Spaces and Their Geometries, Springer, Berlin -Heidelberg -New York. J Madore, An Introduction to Noncommutative Differential Geometry and Its Physical Applications. CambridgeCambridge University Press2nd editionMadore, J. (1999). 
An Introduction to Noncommutative Differential Ge- ometry and Its Physical Applications, 2nd edition, Cambridge University Press, Cambridge. . J Madore, J Mourad, J. Math. Phys. 39Madore, J. and Mourad, J. (1998). J. Math. Phys. 39, 424-442. . J Madore, L A Saeger, Class. Quantum Grav. 15Madore, J. and Saeger, L. A. (1998). Class. Quantum Grav . 15, 811-826. . N Seiber, E Witten, J. High Enerhy Phys. 0932Seiber, N. and Witten, E. (1999). J. High Enerhy Phys. no. 09, paper 32.
[]
[ "EXACT CONTINUOUS FRAMES IN HILBERT C * -MODULES", "EXACT CONTINUOUS FRAMES IN HILBERT C * -MODULES" ]
[ "Hadi Ghasemi ", "Tayebe Lal Shateri " ]
[]
[]
In the present paper, we investigate some properties of continuous frames in Hilbert C * -modules. In particular, we give conditions under which removing some elements from a continuous frame destroys the frame property, and conditions under which the remaining set still remains a continuous frame. Finally, we give conditions under which a continuous frame is not a Riesz-type frame.
null
[ "https://export.arxiv.org/pdf/2210.13808v2.pdf" ]
258,297,843
2210.13808
33f44fa92bbe617b198e90896d6c2385b26f443d
EXACT CONTINUOUS FRAMES IN HILBERT C * -MODULES 24 Apr 2023 Hadi Ghasemi Tayebe Lal Shateri arXiv:2210.13808v2 [math.FA] In the present paper, we investigate some properties of continuous frames in Hilbert C * -modules. In particular, we give conditions under which removing some elements from a continuous frame destroys the frame property, and conditions under which the remaining set still remains a continuous frame. Finally, we give conditions under which a continuous frame is not a Riesz-type frame. Introduction And Preliminaries Duffin and Schaeffer [3] introduced the concept of a frame to study some deep problems in nonharmonic Fourier series. After the fundamental paper by Daubechies et al. [2], various generalizations of frames were developed. Frames are now useful in areas such as signal and image processing, filter bank theory, data compression, and sampling theory. We refer to [1] for an introduction to frame theory in Hilbert spaces and its applications. In 2000, Frank and Larson [4] introduced the notion of frames in Hilbert C * -modules as a generalization of frames in Hilbert spaces. It is well known that Hilbert C * -modules generalize Hilbert spaces by allowing the inner product to take values in a C * -algebra rather than in the field of complex numbers. The theory of Hilbert C * -modules has applications in the study of locally compact quantum groups, completely positive maps between C * -algebras, non-commutative geometry, and KK-theory. There are some differences between Hilbert C * -modules and Hilbert spaces. For example, the Riesz representation theorem for continuous linear functionals on Hilbert spaces does not extend to Hilbert C * -modules [14]; likewise, every closed subspace of a Hilbert space has an orthogonal complement, but this is not true for Hilbert C * -modules [9].
Moreover, we know that every bounded operator on a Hilbert space has an adjoint, while there are bounded operators on Hilbert C * -modules which do not have any [10]. Thus it is more difficult to make a discussion of the theory of Hilbert C * -modules than those for Hilbert spaces. This makes the study of the frames for Hilbert C * -modules important and interesting. We refer the readers to [8], for more details on Hilbert C * -modules. The theory of frames has been extended from Hilbert spaces to Hilbert C * -modules, see [11,12,13]. The paper is organized as follows. First, we recall the basic definitions and some notations about Hilbert C * -modules, and we also give some properties of them which we will use in the later sections. Also, we recall the definition of continuous frames in Hilbert C * -modules. In Section 2, we deal to the duals of continuous frames in Hilbert C * -modules and prove some important properties of them. In particular, we show when can remove some elements from a continuous frame so that the remaining set is not a continuous frame and when the remaining set still remains a continuous frame. Finally, we consider the requirements under which a continuous frame has more duals. First, we recall some definitions and basic properties of Hilbert C * -modules. We give only a brief introduction to the theory of Hilbert C * -modules to make our explanations self-contained. For comprehensive accounts, we refer to [8,14]. We now give the definition of Hilbert C *modules. Throughout this paper, A shows a unital C * -algebra. Definition 1.1. A pre-Hilbert module over unital C * -algebra A is a complex vector space U which is also a left A-module equipped with an A-valued inner product ., . : U × U → A which is C-linear and A-linear in its first variable and satisfies the following conditions: (i) f, f ≥ 0, (ii) f, f = 0 iff f = 0, (iii) f, g * = g, f , (iv) af, g = a f, g , for all f, g ∈ U and a ∈ A. 
A pre-Hilbert A-module U is called a Hilbert A-module if U is complete with respect to the topology determined by the norm ‖f‖ = ‖⟨f, f⟩‖^(1/2). By [7, Example 2.46], if A is a C * -algebra, then it is a Hilbert A-module with respect to the inner product ⟨a, b⟩ = ab * , (a, b ∈ A). Example 1.2. [14, Page 237] Let l 2 (A) be the set of all sequences {a n } n∈N of elements of a C * -algebra A such that the series Σ ∞ n=1 a n a * n is convergent in A. Then l 2 (A) is a Hilbert A-module with respect to the pointwise operations and the inner product defined by ⟨{a n } n∈N , {b n } n∈N ⟩ = Σ ∞ n=1 a n b * n . The following lemma gives the Cauchy-Schwarz inequality in Hilbert C * -modules: (i) ‖af‖ ≤ ‖a‖ ‖f‖, (ii) ⟨f, g⟩⟨g, f⟩ ≤ ‖g‖ 2 ⟨f, f⟩, for all f, g ∈ U and a ∈ A.
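Both the C*-valued inner product ⟨a, b⟩ = ab * and the module Cauchy-Schwarz inequality ⟨f, g⟩⟨g, f⟩ ≤ ‖g‖² ⟨f, f⟩ (an inequality between positive elements of A) can be checked numerically for A = M₂(C) viewed as a Hilbert module over itself. A small sketch — the choice of 2×2 matrices, the random samples, and the tolerance are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(a, b):
    # A-valued inner product <a, b> = a b* on A = M_2(C),
    # viewed as a Hilbert A-module over itself.
    return a @ b.conj().T

for _ in range(100):
    f = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    # ||g||^2 = ||<g, g>|| is the squared operator norm of g (largest singular value).
    g_norm_sq = np.linalg.norm(g, 2) ** 2
    # Cauchy-Schwarz in Hilbert C*-modules:
    # ||g||^2 <f, f> - <f, g><g, f> must be positive semidefinite in A.
    diff = g_norm_sq * inner(f, f) - inner(f, g) @ inner(g, f)
    assert np.min(np.linalg.eigvalsh(diff)) > -1e-9
```

The difference equals f(‖g‖² I − g * g)f * , which is positive semidefinite because g * g ⪯ ‖g‖² I; the numeric check confirms this for random samples.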
Let F : Ω → U be a Bessel mapping . Then (i) The synthesis operator or pre-frame operator T F : L 2 (Ω, A) → U weakly defined by T F ϕ, f = Ω ϕ(ω) F (ω), f dµ(ω), (f ∈ U). (1.2) (ii) The adjoit of T , called The analysis operator T * F : U → L 2 (Ω, A) is defined by (T * F f )(ω) = f, F (ω) (ω ∈ Ω). (1.3) The frame operator S F : U → U for a continuous frame F : Ω → U is weakly defined by S F f, f = Ω f, F (ω) F (ω), f dµ(ω), (f ∈ U). (1.4) In [6] it is shown that the pre-frame operator T F : L 2 (Ω, A) → U is well defined, surjective, adjointable and T ≤ √ B . Moreover the analysis operator T * F : U → L 2 (Ω, A) is injective and has closed range. Also S = T T * is positive, adjointable, self-adjoint and invertible and S ≤ B. Main results In this section, we give the consept of duals of continuous frames in Hilbert C * -modules and prove some important properties of continuous frames and their duals. In particular, we show when can remove some elements from a continuous frame so that the remaining set is not a continuous frame and when the remaining set still remains a continuous frame. Definition 2.1. [6] Let F : Ω → U be a continuous Bessel mapping. A continuous Bessel mapping G : Ω → U is called a dual for F if f = Ω f, G(ω) F (ω)dµ(ω) (f ∈ U) or f, g = Ω f, G(ω) F (ω), g dµ(ω) (f, g ∈ U) (2.1) In this case (F, G) is called a dual pair. If T F and T G denote the synthesis operators of F and G, respectively, then (2.1) is equivalent to T F T * G = I U . It is easy to see that S −1 F is a dual for F , which is called canonical dual. Definition 2.2. [6] Let F : Ω → U be a continuous frame for Hilbert C * -module U. If F has only one dual, we call F a Riesz-type frame. In [6], it is shown that a continuous frame F for Hilbert C * -module U over a unital C * -algebra A is a Riesz-type frame if and only if the analysis operator T * F : U → L 2 (Ω, A) is onto. Proposition 2.3. Let F : Ω → U be a Riesz-type frame for Hilbert C * -module U. 
Then F (ω) = 0 for every ω ∈ Ω. Proof. Let G be the canonical dual of F and ω 0 ∈ Ω such that F (ω 0 ) = 0 . Define G 1 : Ω → U where G 1 (ω 0 ) = 0 and G 1 (ω) = G(ω) for all ω = ω 0 . Then G 1 is a continuous Bessel mapping and f = Ω f, G(ω) F (ω)dµ(ω) = {ω 0 } f, G(ω) F (ω)dµ(ω) + Ω\{ω 0 } f, G(ω) F (ω)dµ(ω) = {ω 0 } f, G 1 (ω) F (ω)dµ(ω) + Ω\{ω 0 } f, G 1 (ω) F (ω)dµ(ω) = Ω f, G 1 (ω) F (ω)dµ(ω). Hence G 1 is a dual of F and f is not Riesz-type. The following theorem shows that for each dual pair (F, G) for Hilbert C * -module U, both F and G are continuous frames for U. Theorem 2.4. Let F : Ω → U be a continuous Bessel mapping for Hilbert C * -module U with bound B F . If there exists a continuous Bessel mapping G : Ω → U with bound B G such that for any f, g ∈ U , f, g = Ω f, G(ω) F (ω), g dµ(ω), then F is a continuous frame for U. Proof. For any f ∈ U we have, f, f 2 = Ω f, G(ω) F (ω), f dµ(ω) 2 = { f, G(ω) } ω∈Ω , { f, F (ω) } ω∈Ω 2 ≤ { f, G(ω) } ω∈Ω 2 { f, F (ω) } ω∈Ω 2 = Ω f, G(ω) G(ω), f dµ(ω) Ω f, F (ω) F (ω), f dµ(ω) ≤ B G f, f Ω f, F (ω) F (ω), f dµ(ω) . Hence B −1 G f, f ≤ Ω f, F (ω) F (ω), f dµ(ω) ≤ B F f, f . This shows that F is a continuous frame for U, by [6, Theorem 2.13] Remark 2.5. By the existence of a canonical dual frame for each continuous frame, the converse of the above theorem is obvious. For each f = a 0 0 0 0 b ∈ U, we have Ω f, F (ω) G(ω)dµ(ω) = [0,1] 2ωa 0 0 2ωb    3 2 ω 0 0 0 0 3 2 ω    dµ(ω) = a 0 0 0 0 b [0,1] 3ω 2 dµ(ω) = a 0 0 0 0 b = f. Therefore G is a dual of F . Moreover, both F and G are continuous tight frames for U where A F = B F = 4 3 is the frame bound of F and A G = B G = 3 4 is the frame bound of G. Example 2.7. Assume that A = a 0 0 b : x, y ∈ C which is an unital C * -algebra. We define the inner product ., . : A × A → A (M, N) −→ M(N ) t . A with this inner product is a Hilbert C * -module over itself. Suppose that (Ω, µ) is a measure space where Ω = [0, 1] and µ is the Lebesgue measure. 
Consider the continuous Bessel mappings F, G : Ω → A defined by F (ω) = 2ω 0 0 ω − 1 and G(ω) =    3 2 ω 0 0 ω − 7 3    , for any ω ∈ Ω. For each f = a 0 0 b ∈ A, we have Ω f, F (ω) G(ω)dµ(ω) = [0,1] 2ωa 0 0 (ω − 1)b    3 2 ω 0 0 ω − 7 3    dµ(ω) = a 0 0 b [0,1]   3ω 2 0 0 ω 2 − 10 3 ω + 7 3   dµ(ω) = a 0 0 b 1 0 0 1 = a 0 0 b = f. Therefore F is a dual of G. It is easy to see that both F and G are continuous frames for A In [7], it has stated an important property of a given frame for Hilbert C * -module U and showed when it can removed some elements from a frame so that the set still remains a frame. Now we generalize these properties to the situation of continuous frames for Hilbert C * -module U over a unital C * -algebra A. We need the following lemma for the proof of the next theorem. Lemma 2.8. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with frame operator S and ω 0 ∈ Ω. Then the mapping ψ :Ω −→ A ω −→ F (ω 0 ), S −1 F (ω) belongs to L 2 (Ω, A) and ψ(ω 0 ) = Ω\{ω 0 } ψ(ω)ψ(ω) * dµ(ω) + (ψ(ω 0 )) 2 µ({ω 0 }). Proof. Assume that the upper frame bound of F is B F . Then Ω ψ(ω)ψ(ω) * dµ(ω) = Ω F (ω 0 ), S −1 F (ω) S −1 F (ω), F (ω 0 )dµ(ω) = Ω S −1 F (ω 0 ), F (ω) F (ω), S −1 F (ω 0 ) dµ(ω) ≤ B F S −1 F (ω 0 ), S −1 F (ω 0 ) . Hence Ω ψ(ω)ψ(ω) * dµ(ω) ≤ B F S −1 F (ω 0 ) 2 < ∞, i.e. ψ ∈ L 2 (Ω, A). Also ψ(ω 0 ) = F (ω 0 ), S −1 F (ω 0 ) = Ω F (ω 0 ), S −1 F (ω) F (ω)dµ(ω), S −1 F (ω 0 ) = Ω ψ(ω) F (ω), S −1 F (ω 0 ) dµ(ω) = Ω\{ω 0 } ψ(ω) F (ω), S −1 F (ω 0 ) dµ(ω) + ψ(ω 0 ) F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 }) = Ω\{ω 0 } ψ(ω)ψ(ω) * dµ(ω) + (ψ(ω 0 )) 2 µ({ω 0 }). Moreover, ψ(ω 0 ) is self-adjoint. The following theorem shows when does removing some elements from a continuous frame cause that the remaining set is not a continuous frame. Theorem 2.9. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with frame operator S. 
Let 1 A be the identity element of A and ω 0 ∈ Ω such that 1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 }) is not invertible in A. Then F : Ω \ {ω 0 } → U is not a continuous frame for Hilbert C * -module U. Proof. By the reconstruction formula we have F (ω 0 ) = Ω F (ω 0 ), S −1 F (ω 0 ) F (ω)dµ(ω) = Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω) + F (ω 0 ), S −1 F (ω 0 ) F (ω 0 )µ({ω 0 }), then F (ω 0 )(1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 })) = Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω). Now assume that F : Ω \ {ω 0 } → U be a continuous frame for U. Since S − 1 2 is invertible, then S − 1 2 F : Ω \ {ω 0 } → U is also a continuous frame with the frame bounds C, D > 0. Then for all f ∈ U, C f, f ≤ Ω\{ω 0 } f, S − 1 2 F (ω) S − 1 2 F (ω), f dµ(ω) ≤ D f, f , and for f = S − 1 2 F (ω 0 ) we have C S − 1 2 F (ω 0 ), S − 1 2 F (ω 0 ) ≤ Ω\{ω 0 } S − 1 2 F (ω 0 ), S − 1 2 F (ω) S − 1 2 F (ω), S − 1 2 F (ω 0 ) dµ(ω) ≤ D S − 1 2 F (ω 0 ), S − 1 2 F (ω 0 )f , then C F (ω 0 ), S −1 F (ω 0 ) ≤ Ω\{ω 0 } F (ω 0 ), S −1 F (ω) S −1 F (ω), F (ω 0 ) dµ(ω). Define ψ :Ω −→ A ω −→ F (ω 0 ), S −1 F (ω) By lemma 2.8, ψ ∈ L 2 (Ω, A) and ψ(ω 0 ) = Ω\{ω 0 } ψ(ω)ψ(ω) * dµ(ω) + (ψ(ω 0 )) 2 µ({ω 0 }). Also Cψ(ω 0 ) ≤ Ω\{ω 0 } ψ(ω)ψ(ω) * dµ(ω), then Cψ(ω 0 ) ≤ ψ(ω 0 ) − (ψ(ω 0 )) 2 µ({ω 0 }). So Ct ≤ t − t 2 µ({ω 0 }) holds for any t in σ(ψ(ω 0 )), the spectrum of ψ(ω 0 ). Since 1 A − ψ(ω 0 )µ({ω 0 }) is not invertible and 1 A − ψ(ω 0 )µ({ω 0 }) = ( 1 µ({ω 0 }) 1 A − ψ(ω 0 ))µ({ω 0 }), so 1 µ({ω 0 }) ∈ σ(ψ(ω 0 )). Therefore, C 1 µ({ω 0 }) ≤ 1 µ({ω 0 }) − 1 (µ({ω 0 })) 2 µ({ω 0 }) = 0. This is a contradiction and complete the proof. We need the following lemma to prove the next theorems that states an important property of the canonical dual frame of a given continuous frame for Hilbert C * -modules. Lemma 2.10. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with frame operator S and pre-frame operator T . 
Let f ∈ U and suppose that f = Ω ϕ(ω)F (ω)dµ(ω), for some ϕ ∈ L 2 (Ω, A). Then Ω |ϕ(ω) * | 2 dµ(ω) = Ω f, S −1 F (ω) S −1 F (ω), f dµ(ω) + Ω (ϕ(ω) − f, S −1 F (ω) )(ϕ(ω) * − S −1 F (ω), f )dµ(ω). Proof. For each ω ∈ Ω we can write ϕ(ω) = (ϕ(ω) − f, S −1 F (ω) ) + f, S −1 F (ω) Since F is a frame so, f = Ω f, S −1 F (ω) F (ω)dµ(ω). Then Ω (ϕ(ω) − f, S −1 F (ω) )F (ω)dµ(ω) = 0 =⇒ {(ϕ(ω) − f, S −1 F (ω) )} ω∈Ω ∈ Ker(T ) Also { f, S −1 F (ω) } ω∈Ω = { S −1 F (ω), f } ω∈Ω ∈ R(T * ) and L 2 (Ω, A) = Ker(T ) ⊕ R(T * ). It shows that { f, S −1 F (ω) } ω∈Ω , {(ϕ(ω) − f, S −1 F (ω) )} ω∈Ω = 0. Then 0 = Ω f, S −1 F (ω) (ϕ(ω) − f, S −1 F (ω) ) * dµ(ω) = Ω f, S −1 F (ω) S −1 F (ω), f dµ(ω) − Ω f, S −1 F (ω) ϕ(ω) * dµ(ω). Note that Ω (ϕ(ω) − f, S −1 F (ω) )(ϕ(ω) * − S −1 F (ω), f )dµ(ω) = Ω ϕ(ω)ϕ(ω) * dµ(ω) + Ω ϕ(ω) S −1 F (ω), f dµ(ω) − Ω f, S −1 F (ω) ϕ(ω) * dµ(ω) + Ω f, S −1 F (ω) S −1 F (ω), f dµ(ω). Hence the proof is complete. In the following, we generalize Theorem 2.9 by removing a measurable subset of Ω. Theorem 2.11. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with frame operator S. Also suppose that Ω 1 is a measurable subset of Ω such that 0 < µ(Ω 1 ) < ∞ and f = Ω 1 F (ω)dµ(ω) = 0. If f, S −1 F (ω) = χ Ω 1 (ω) for all ω ∈ Ω, then the mapping F : Ω \ Ω 1 → U is not a continuous frame for Hilbert C * -module U. Proof. We knowe, Ω χ Ω 1 (ω)F (ω)dµ(ω) = Ω 1 F (ω)dµ(ω) = f = Ω f, S −1 F (ω) (ω)F (ω)dµ(ω). Also Ω (χ Ω 1 (ω))(χ Ω 1 (ω)) * dµ(ω) = Ω (χ Ω 1 (ω)) 2 dµ(ω) = µ(Ω 1 ). So by the Lemma 2.10 we have µ(Ω 1 ) = Ω f, S −1 F (ω) S −1 F (ω), f dµ(ω) + Ω (χ Ω 1 (ω) − f, S −1 F (ω) )(χ Ω 1 (ω) − S −1 F (ω), f )dµ(ω). 
Now if χ Ω 1 (ω) = f, S −1 F (ω) for all ω ∈ Ω, then µ(Ω 1 ) = Ω 1 f, S −1 F (ω) S −1 F (ω), f dµ(ω) + Ω\Ω 1 f, S −1 F (ω) S −1 F (ω), f dµ(ω) + Ω (χ Ω 1 (ω) − f, S −1 F (ω) )(χ Ω 1 (ω) − S −1 F (ω), f )dµ(ω) = µ(Ω 1 ) + Ω\Ω 1 f, S −1 F (ω) S −1 F (ω), f dµ(ω) + 0, then { f, S −1 F (ω) } ω∈Ω\Ω 1 , { S −1 F (ω), f } ω∈Ω\Ω 1 = Ω\Ω 1 f, S −1 F (ω) S −1 F (ω), f dµ(ω) = 0. This shows that there exists a nonzero element S −1 f ∈ U such that S −1 f, F (ω) = F (ω), S −1 f = 0, for all ω ∈ Ω \ Ω 1 . If F : Ω \ Ω 1 → U is a continuous frame for U, then Ker(T * F | Ω\Ω 1 ) = {0} and f = 0, which is a contradiction. The following theorem shows when does removing some elements from a continuous frame cause that the remaining set is a continuous frame. Theorem 2.12. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with frame operator S. Let 1 A be the identity element of A and ω 0 ∈ Ω such that 1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 }) is invertible in A. Then F : Ω \ {ω 0 } → U is a continuous frame for Hilbert C * -module U. Proof. By reconstruction formula we have F (ω 0 ) = Ω F (ω 0 ), S −1 F (ω) F (ω)dµ(ω) = Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω) + F (ω 0 ), S −1 F (ω 0 ) F (ω 0 )µ({ω 0 }) then (1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 }))F (ω 0 ) = Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω) =⇒ F (ω 0 ) = (1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 })) −1 Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω). 
Put a := (1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 }))F (ω 0 ) −1 , then by using Lemma 1.4, we have f , F (ω 0 ) F (ω 0 ), f = f, a Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω) a Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω), f = f, Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω) a * a Ω\{ω 0 } F (ω 0 ), S −1 F (ω) F (ω)dµ(ω), f ≤ a 2 Ω\{ω 0 } f, F (ω) S −1 F (ω), F (ω 0 ) dµ(ω) Ω\{ω 0 } F (ω 0 ), S −1 F (ω), F (ω), f dµ(ω) = a 2 { f, F (ω) } ω∈Ω\{ω 0 } , { F (ω 0 ), S −1 F (ω) } ω∈Ω\{ω 0 } { F (ω 0 ), S −1 F (ω) } ω∈Ω\{ω 0 } , { f, F (ω) } ω∈Ω\{ω 0 } ≤ a 2 { F (ω 0 ), S −1 F (ω) } ω∈Ω\{ω 0 } 2 { f, F (ω) } ω∈Ω\{ω 0 } , { f, F (ω) } ω∈Ω\{ω 0 } = a 2 Ω\{ω 0 } F (ω 0 ), S −1 F (ω) S −1 F (ω), F (ω 0 )dµ(ω) Ω\{ω 0 } f, F (ω) F (ω), f dµ(ω). Put k := a 2 Ω\{ω 0 } F (ω 0 ), S −1 F (ω) S −1 F (ω), F (ω 0 ) dµ(ω) , then f, F (ω 0 ) F (ω 0 ), f ≤ k Ω\{ω 0 } f, F (ω) F (ω), f dµ(ω). Since F is a continuous frame so there exsists constant A > 0 such that A f, f ≤ Ω f, F (ω) F (ω), f dµ(ω) = Ω\{ω 0 } f, F (ω) F (ω), f dµ(ω) + f, F (ω 0 ) F (ω 0 ), f µ({ω 0 }) ≤ Ω\{ω 0 } f, F (ω) F (ω), f dµ(ω) + kµ({ω 0 }) Ω\{ω 0 } f, F (ω) F (ω), f dµ(ω) = (1 + kµ({ω 0 })) Ω\{ω 0 } f, F (ω) F (ω), f dµ(ω). This implies that F : Ω \ {ω 0 } → U satisfies the lower frame bound condition with bound A 1+kµ({ω 0 }) . Obviously F : Ω \ {ω 0 } → U satisfies the upper frame bound condition. This completes the proof. Corollary 2.13. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with frame operator S. Let 1 A be the identity element of A and ω 0 ∈ Ω such that 1 A − F (ω 0 ), S −1 F (ω 0 ) µ({ω 0 }) is invertible in A. Then F is not a Riesz-type frame. Proof. By the previous theorem, F : Ω \ {ω 0 } → U is a continuous frame for U. Let G : Ω \ {ω 0 } → U be the canonical dual frame for F : Ω \ {ω 0 } → U and suppose that G(ω 0 ) = 0. 
Then S −1 F = G and for each f, g ∈ U, Ω f, G(ω) F (ω), g dµ(ω) = Ω\{ω 0 } f, G(ω) F (ω), g dµ(ω) + f, G(ω 0 ) F (ω 0 ), g µ({ω 0 }) = f, g + 0 = f, g . Hence G : Ω → U is a dual frame for F : Ω → U and S −1 F = G. By the same argument for the proof of the Theorem 2.12, we can generalize it for an orbitrary dual G of continuous frame F as follows. Remark 2.14. Let F : Ω → U be a continuous frame for Hilbert C * -module U over a unital C * -algebra A with the frame bounds A F , B F . Also suppose that 1 A be the identity element of A and ω 0 ∈ Ω such that 1 A − F (ω 0 ), G(ω 0 ) µ({ω 0 }) is invertible in A where G : Ω → U is an orbitrary dual of F with the frame bounds A G , B G . Then F : Ω \ {ω 0 } → U is a continuous frame for Hilbert C * -module U with the lower frame bound A F 1+ a 2 B G F (ω 0 ) 2 µ({ω 0 }) . 1 2 . 2It was shown in [8] L 2 (Ω, A) is a Hilbert A-module. : a, b ∈ C , and A = x 0 0 y : x, y ∈ C which is an unital C * -algebra. We define the inner product., . : U × U → A (M, N) −→ M(N ) t .This inner product makes U a Hilbert C * -module over A. Suppose that (Ω, µ) is a measure space where Ω = [0, 1] and µ is the Lebesgue measure. Consider the continuous Bessel mappings F, G : Ω → U defined by F (for any ω ∈ Ω. the frame bounds of G. Hadi Ghasemi, Department of Mathematics and Computer Sciences, Hakim Sabzevari University, Sabzevar, P.O. Box 397, IRAN Email address: [email protected] Tayebe Lal Shateri, Department of Mathematics and Computer Sciences, Hakim Sabzevari University, Sabzevar, P.O. Box 397, IRAN Email address: [email protected]; [email protected] An introduction to frames and Riesz bases. O Christensen, BirkhauserBostonO. Christensen, An introduction to frames and Riesz bases, Birkhauser, Boston, 2016. Painless nonothogonal expanisions. I Daubechies, A Grassman, Y Meyer, J. Math. Phys. 27I. Daubechies, A. Grassman and Y. Meyer, Painless nonothogonal expanisions, J. Math. Phys., 27 (1986), 1271-1283. 
A class of nonharmonic Fourier series. R J Duffin, A C Schaeffer, Trans. Amer. Math. Soc. 72R.J. Duffin and A.C. Schaeffer, A class of nonharmonic Fourier series, Trans. Amer. Math. Soc., 72 (1952), 341-366. Frames in Hilbert C * -modules and C * -algebras. M Frank, D R Larson, J. Operator Theory. 48M. Frank and D. R. Larson, Frames in Hilbert C * -modules and C * -algebras, J. Operator Theory, 48 (2002), 273-314. Continuous * -controlled frames in Hilbert C * -modules. H Ghasemi, T L Shateri, 10.22080/cjms.2022.21850.1590Caspian J. Math. Scien. 112H. Ghasemi and T.L. Shateri, Continuous * -controlled frames in Hilbert C * -modules, Caspian J. Math. Scien., 11(2) (2022), 448-460, DOI: 10.22080/cjms.2022.21850.1590. H Ghasemi, T L Shateri, arXiv:2208.06799Continuous Frames in Hilbert C * -Modules. arXiv preprintH. Ghasemi and T.L. Shateri, Continuous Frames in Hilbert C * -Modules, arXiv preprint arXiv:2208.06799 (2022). W Jing, Frames in Hilbert C * -modules. Orlando, FloridaUniversity of Central FloridaPh.D. ThesisW. Jing, Frames in Hilbert C * -modules, Ph.D. Thesis, University of Central Florida Orlando, Florida, 2006. E C Lance, C * -Modules Hilbert, A Toolkit for Operator Algebraist, 144 pages. Cambridge, UKCambridge University Press210E.C. Lance, Hilbert C * -Modules: A Toolkit for Operator Algebraist, 144 pages, vol. 210 of London Mathe- matical Society Lecture Note Series, Cambridge University Press, Cambridge, UK, (1995). Hilbert C * -modules in which all closed submodules are complemented. B Magajna, Proc. Amer. Math. Soc. 1253B. Magajna, Hilbert C * -modules in which all closed submodules are complemented, Proc. Amer. Math. Soc., 125 (3) (1997), 849-852. V M Manuilov, E V Troitsky, C * -Modules Hilbert, Translations of mathematical monographs. American Mathematical SocV. M. Manuilov and E. V. Troitsky, Hilbert C * -Modules: Translations of mathematical monographs, Amer- ican Mathematical Soc, (2005). On controlled frames in Hilbert C * -modules. 
M Rashidi-Kouchi, A Rahimi, 10.1142/S0219691317500382Int. J. Wavelets Multiresolut. Inf. Process. 154175003815 pagesM. Rashidi-Kouchi and A. Rahimi, On controlled frames in Hilbert C * -modules, Int. J. Wavelets Multires- olut. Inf. Process., 15(4) (2017), 1750038 (15 pages), DOI: 10.1142/S0219691317500382. * -controlled frames in Hilbert C * -modules. T L Shateri, 10.1142/S0219691320500800Inter. J. Wavelets Multiresolut. Inf. Process. 19032050080T.L. Shateri, * -controlled frames in Hilbert C * -modules, Inter. J. Wavelets Multiresolut. Inf. Process., 19(03) 2050080 (2021), DOI: 10.1142/S0219691320500800. Some properties of g-frames in Hilbert C * -modules. X-C Xiao, X-M Zeng, Journal of Mathematical Analysis and Applications. 3632X-C Xiao, X-M Zeng. Some properties of g-frames in Hilbert C * -modules, Journal of Mathematical Analysis and Applications 363(2) (2010), 399-408. N E Wegge-Olsen, K-Theory , C * - , algebras: a friendly approach. Oxford University PressN.E. Wegge-Olsen, K-theory and C * -algebras: a friendly approach, Oxford University Press, (1993).
[]
[ "Observation of nonlinear disclination states", "Observation of nonlinear disclination states" ]
[ "Boquan Ren \nSchool of Electronic and Information Engineering\nKey Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique\nXi'an Jiaotong University\n710049Xi'anChina\n", "A A Arkhipova \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n\nFaculty of Physics\nHigher School of Economics\n105066MoscowRussia\n", "Yiqi Zhang \nSchool of Electronic and Information Engineering\nKey Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique\nXi'an Jiaotong University\n710049Xi'anChina\n", "Y V Kartashov \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n", "Hongguang Wang \nSchool of Electronic and Information Engineering\nKey Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique\nXi'an Jiaotong University\n710049Xi'anChina\n", "S A Zhuravitskii \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n\nQuantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia\n", "N N Skryabin \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n\nQuantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia\n", "I V Dyakonov \nQuantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia\n", "A A Kalinkin \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n\nQuantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia\n", "S P Kulik \nQuantum Technology Centre\nFaculty of Physics\nM. V. 
Lomonosov Moscow State University\n119991MoscowRussia\n", "V O Kompanets \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n", "S V Chekalin \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n", "V N Zadkov \nInstitute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia\n\nFaculty of Physics\nHigher School of Economics\n105066MoscowRussia\n" ]
[ "School of Electronic and Information Engineering\nKey Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique\nXi'an Jiaotong University\n710049Xi'anChina", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Faculty of Physics\nHigher School of Economics\n105066MoscowRussia", "School of Electronic and Information Engineering\nKey Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique\nXi'an Jiaotong University\n710049Xi'anChina", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "School of Electronic and Information Engineering\nKey Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique\nXi'an Jiaotong University\n710049Xi'anChina", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Quantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Quantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia", "Quantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Quantum Technology Centre\nFaculty of Physics\nM. V. Lomonosov Moscow State University\n119991MoscowRussia", "Quantum Technology Centre\nFaculty of Physics\nM. V. 
Lomonosov Moscow State University\n119991MoscowRussia", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Institute of Spectroscopy\nRussian Academy of Sciences\n108840Troitsk, MoscowRussia", "Faculty of Physics\nHigher School of Economics\n105066MoscowRussia" ]
[]
Introduction of controllable deformations into periodic materials that lead to disclinations in their structure opens novel routes for construction of higher-order topological insulators hosting topological states at disclinations. Appearance of these topological states is consistent with the bulk-disclination correspondence principle, and is due to the filling anomaly that assigns fractional charges to the boundary unit cells. So far, topological disclination states were observed only in the linear regime, while the interplay between nonlinearity and topology in the systems with disclinations has never been studied experimentally. We report here on the experimental observation of the nonlinear photonic disclination states in waveguide arrays with pentagonal or heptagonal disclination cores inscribed in a transparent optical medium using the fs-laser writing technique. The transition between nontopological and topological phases in such structures is controlled by the Kekulé distortion coefficient r, with the topological phase simultaneously hosting disclination states at the inner disclination core and, spatially separated from them, corner, zero-energy, and extended edge states at the outer edge of the structure. We show that the robust nonlinear disclination states bifurcate from their linear counterparts and that the location of their propagation constants in the gap and, hence, their spatial localization can be controlled by their power. Nonlinear disclination states can be efficiently excited by Gaussian input beams, but only if they are focused into the waveguides belonging to the disclination core, where such topological states reside.
null
[ "https://export.arxiv.org/pdf/2304.11936v1.pdf" ]
258,298,467
2304.11936
71139006be14f92ef1032db793fbaeb3b527b6d6
Observation of nonlinear disclination states Boquan Ren School of Electronic and Information Engineering Key Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique Xi'an Jiaotong University 710049Xi'anChina A A Arkhipova Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia Faculty of Physics Higher School of Economics 105066MoscowRussia Yiqi Zhang School of Electronic and Information Engineering Key Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique Xi'an Jiaotong University 710049Xi'anChina Y V Kartashov Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia Hongguang Wang School of Electronic and Information Engineering Key Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique Xi'an Jiaotong University 710049Xi'anChina S A Zhuravitskii Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia Quantum Technology Centre Faculty of Physics M. V. Lomonosov Moscow State University 119991MoscowRussia N N Skryabin Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia Quantum Technology Centre Faculty of Physics M. V. Lomonosov Moscow State University 119991MoscowRussia I V Dyakonov Quantum Technology Centre Faculty of Physics M. V. Lomonosov Moscow State University 119991MoscowRussia A A Kalinkin Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia Quantum Technology Centre Faculty of Physics M. V. Lomonosov Moscow State University 119991MoscowRussia S P Kulik Quantum Technology Centre Faculty of Physics M. V. 
Lomonosov Moscow State University 119991MoscowRussia V O Kompanets Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia S V Chekalin Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia V N Zadkov Institute of Spectroscopy Russian Academy of Sciences 108840Troitsk, MoscowRussia Faculty of Physics Higher School of Economics 105066MoscowRussia Observation of nonlinear disclination states (Dated: April 25, 2023) Introduction of controllable deformations into periodic materials that lead to disclinations in their structure opens novel routes for construction of higher-order topological insulators hosting topological states at disclinations. Appearance of these topological states is consistent with the bulk-disclination correspondence principle, and is due to the filling anomaly that assigns fractional charges to the boundary unit cells. So far, topological disclination states were observed only in the linear regime, while the interplay between nonlinearity and topology in the systems with disclinations has never been studied experimentally. We report here on the experimental observation of the nonlinear photonic disclination states in waveguide arrays with pentagonal or heptagonal disclination cores inscribed in a transparent optical medium using the fs-laser writing technique. The transition between nontopological and topological phases in such structures is controlled by the Kekulé distortion coefficient r, with the topological phase simultaneously hosting disclination states at the inner disclination core and, spatially separated from them, corner, zero-energy, and extended edge states at the outer edge of the structure. We show that the robust nonlinear disclination states bifurcate from their linear counterparts and that the location of their propagation constants in the gap and, hence, their spatial localization can be controlled by their power.
Nonlinear disclination states can be efficiently excited by Gaussian input beams, but only if they are focused into the waveguides belonging to the disclination core, where such topological states reside. INTRODUCTION Topological systems hosting topologically protected states at their edges or in their corners are attracting considerable attention in diverse areas of physics, including solid-state physics [1,2], mechanics [3], acoustics [4], physics of matter waves [5], exciton-polariton condensates [6], and, particularly, in photonics [7-9]. This attention is connected, in part, with the considerable practical potential of topological systems for construction of transmission lines, switching devices, routers, and lasers resilient to disorder and edge deformations. With the development of topological photonics [7-9], the class of systems where topologically nontrivial states can be encountered has been substantially extended. While many works on photonic topological insulators employed structures periodic in the bulk for demonstration of topologically protected edge states [10-17], it is now realized that topological insulators can also be created using aperiodic structures that possess discrete rotational symmetry, but lack periodicity, such as quasi-crystals [18], fractal structures [19,20], and structures with disclinations [21-23]. The concept of topological insulators with disclinations originates from solid state physics [24-28], where it was predicted that disclinations - crystallographic defects disrupting the lattice structure - can trap fractional "spectral" charges (connected with the local density of states [21,29]) and support localized states of topological origin. Such systems can also be used [30] for realization of higher-order topological insulators hosting so-called zero-dimensional states [31].
The bulk-disclination correspondence principle proposed for these systems, which links the appearance of the disclination states with the topological properties of the spectrum, illustrates the importance of fractional spectral charges as a probe of "crystalline" topology of these systems [21,22,32]. Higher-order topological disclination states typically form at the boundary of the hollow disclination core of the structure. It has been demonstrated that linear lattices with disclinations offer new opportunities for the control of confinement and internal structure of the field, not only in photonics [33,34], but also in acoustics [35-37]. In contrast to the aforementioned achievements reported only in linear media, the impact of nonlinearity on photonic disclination states was addressed theoretically only recently [38], while experimental observation of nonlinear disclination states had not been performed so far. At the same time, nonlinear effects, such as self-action of light, attract more and more attention in topological photonics [9] because they enable all-optical control of the properties of the topological states. New effects of topological origin emerging due to self-action include topological phase transitions [39], nonlinear Thouless pumping [40-43], formation of topological solitons [44-53], development of modulational instabilities of the edge states [54,55] and rich bistability effects [56,57], to name only a few. Nonlinear higher-order topological insulators supporting corner solitons have also been reported theoretically [58] and in experiment [59,60], while a Floquet version of the higher-order nonlinear topological insulator was proposed only recently in [61]. Disclination states appearing in aperiodic structures obtained by specific deformations of periodic arrays formally belong to a class of higher-order topological states.
However, in contrast to the previously considered higher-order insulator geometries with periodic bulk, disclination systems may feature other types of discrete rotational symmetries, not compatible with crystallographic symmetries and not attainable in usual higher-order insulators. One can thus expect that such symmetry properties of the disclination systems should find their manifestation in a completely different structure of their linear eigenmodes, in the properties of nonlinear self-sustained states bifurcating from them, and in their excitation dynamics. Our work is thus aimed at the exploration of the interplay of nonlinear self-action effects and topology in the disclination structures with different discrete rotational symmetries. Here we report on the first experimental observation of nonlinear topological states in disclination arrays with both pentagonal and heptagonal cores, obtained by removing or adding sectors into periodic honeycomb structures, where the topological phase arises due to the Kekulé distortion [12,16,21,23,29] introduced into positions of six sites in each unit cell of the structure. Our disclination arrays are inscribed in nonlinear fused silica, using the fs-laser direct writing technique [10, 59, 62-65]. We observe that when disclination arrays are in the topological phase, one can excite thresholdless disclination solitons existing in a broad range of input powers by a Gaussian beam focused into one of the waveguides on the disclination core. The excitation of the same waveguides in the nontopological regime yields strong diffraction at low powers, while formation of nontopological self-sustained states occurs only above a considerable power threshold. We thus compare the behaviour of nonlinear excitations for different values of the distortion coefficient r.
The results obtained here are relevant for a broad class of nonlinear physical systems, including matter waves, polariton condensates, photonic crystals, atomic vapors, and many others, where potentials with disclinations can be created. They also highlight the potential of these topological structures for realisation of higher-harmonic generation and lasing that may benefit from strong topological state confinement and its resilience to disorder. RESULTS AND DISCUSSIONS Spectra of the arrays with disclinations We consider paraxial propagation of a light beam along the z axis of a medium with the focusing cubic nonlinearity and shallow transverse refractive index modulation that can be described by the nonlinear Schrödinger-like equation for the dimensionless light field amplitude ψ:

i ∂ψ/∂z = −(1/2)(∂²/∂x² + ∂²/∂y²)ψ − R(x, y)ψ − |ψ|²ψ,  (1)

where x, y are the scaled transverse coordinates, z is the propagation distance that plays the same role as time in the Schrödinger equation describing a quantum particle in a potential, and the function R(x, y) describes the disclination array with straight waveguides:

R = p Σ_{m,n} exp[−(x − x_{m,n})²/a_x² − (y − y_{m,n})²/a_y²],

where a_x and a_y are the widths of the waveguides, which are elliptical due to the writing process, x_{m,n} and y_{m,n} are the positions of the waveguide centers (depending on the particular type of introduced disclination), and p is the array depth proportional to the refractive index contrast δn in the structure (see Methods for adopted normalizations). To create topologically nontrivial arrays with disclinations we use a two-step process.
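In simulations, the potential R(x, y) entering Eq. (1) is simply a sum of elliptical Gaussians placed at the waveguide centers. A minimal sketch of its evaluation on a grid (the waveguide positions below are illustrative placeholders, not the actual disclination coordinates):

```python
import numpy as np

def array_potential(X, Y, centers, p=5.0, ax=0.25, ay=0.75):
    """R(x, y) = p * sum_k exp(-(x - xk)^2/ax^2 - (y - yk)^2/ay^2)."""
    R = np.zeros_like(X)
    for xk, yk in centers:
        R += np.exp(-(X - xk) ** 2 / ax**2 - (Y - yk) ** 2 / ay**2)
    return p * R

# Illustrative example: three waveguides on a line (placeholder positions)
x = np.linspace(-8.0, 8.0, 257)           # grid that contains x = 0 exactly
X, Y = np.meshgrid(x, x, indexing="ij")
centers = [(-3.2, 0.0), (0.0, 0.0), (3.2, 0.0)]
R = array_potential(X, Y, centers)        # peaks at the depth p over each waveguide
```

For the real structures, `centers` would be filled with the disclination-array coordinates generated by the two-step construction described next.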
We start from the usual periodic honeycomb waveguide array with identical waveguide spacing d in the entire structure and first introduce a shift of the waveguides in the direction perpendicular to the borders of the unit cell, whose magnitude can be characterized by the Kekulé distortion coefficient r = ℓ_intra/ℓ_inter, with ℓ_intra and ℓ_inter being the intra-cell and inter-cell spacing between waveguides after the shift [12,16,21,23]. Clearly, r = 1 corresponds to the non-deformed structure with ℓ_intra = ℓ_inter = d. As will be shown below, by changing the value of r one can achieve the transition between topologically trivial and nontrivial geometries. On the second step, to create the arrays with a disclination, we remove or add sectors into the honeycomb structure with shifted waveguides. At this step, after removing the sector we deform the unit cells in the remaining structure such that they fill the entire 2π polar angle, while to add the sector we compress the unit cells accordingly (see Methods for the description of the deformation process). In Fig. 1(a), we display the microphotographs of the arrays with a pentagonal disclination core inscribed with the fs-laser in a 10 cm long fused silica sample (the total number of waveguides in this structure is 90), obtained by removing a sector from the honeycomb array, with coordinates of the waveguides obtained using the above two-step process for three different values of the distortion coefficient r. Black hexagons superimposed on the microphotographs are guides for the eye indicating different unit cells of the structure. Similar microphotographs, but for the structure with the heptagonal disclination core (the total number of waveguides is 126) that was obtained by adding a sector into the honeycomb array, are presented in Fig. 1(d), also for three different r values. Topological properties of these structures are controlled by the distortion coefficient r.
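The first step of the construction can be illustrated geometrically. A sketch under the simplifying assumption that the six sites of each Kekulé cell sit at radius s·d from the cell center, with neighboring cell centers 3d apart; then ℓ_intra = s d, ℓ_inter = (3 − 2s)d, so r = s/(3 − 2s) and s = 1 recovers the undistorted honeycomb:

```python
import numpy as np

def kekule_two_cells(s, d=1.0):
    """Six sites per cell at radius s*d around two neighboring Kekulé cell centers (3d apart)."""
    ang = np.arange(6) * np.pi / 3
    ring = s * d * np.stack([np.cos(ang), np.sin(ang)], axis=1)
    return ring, ring + np.array([3.0 * d, 0.0])

def distortion_coefficient(s, d=1.0):
    """r = (min intra-cell spacing) / (min inter-cell spacing)."""
    a, b = kekule_two_cells(s, d)
    intra = min(np.linalg.norm(a[i] - a[j]) for i in range(6) for j in range(i + 1, 6))
    inter = min(np.linalg.norm(p - q) for p in a for q in b)
    return intra / inter

# s > 1 pushes sites outward: inter-cell bonds shorten and r > 1 (topological side)
```

For example, s = 1.2 gives r = 1.2/0.6 = 2, while s = 0.9 gives r = 0.75 < 1 (trivial side).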
One can see that for r < 1 the inter-cell coupling becomes weaker than the intra-cell one, while for r > 1 the situation is reversed, and the inter-cell coupling becomes stronger than the intra-cell one, indicating a possible transition of the disclination array into the higher-order topological insulator phase. This transition is manifested in a qualitative modification of the linear spectrum of eigenmodes supported by these structures. To obtain such modes, we first use the ansatz ψ = u(x, y)e^{ibz}, where b is the propagation constant and u(x, y) is a real function, in Eq. (1) to get the equation

bu = (1/2)(∂²/∂x² + ∂²/∂y²)u + Ru + u³.  (2)

We then omit the last nonlinear term in Eq. (2) and calculate all linear eigenmodes of the system using the plane-wave expansion method. The transformation of the linear spectrum of the array with the pentagonal disclination core with increase of the distortion coefficient r is illustrated in Fig. 1(b). While at r < 1 no localized states are present in the gap between two bulk bands, at r > 1 the spectrum changes qualitatively, with several different types of localized states emerging in the gap. The appearance of such states can be explained by the bulk-disclination correspondence principle, which gives the link between the fractional charge and topologically nontrivial states (see Methods for discussion of the topological invariants). Among these states, five states marked with red color (their number is dictated by the symmetry of the structure) are disclination states residing at the disclination core in the center of the array. Some of these states can be degenerate depending on the value of r, but in general they have different eigenvalues. These states have different phase structure, and their localization at the disclination core increases with the increase of r. To describe the structure of the spectrum in more detail, we chose r = 1.68 [cyan dashed line in Fig.
1(b)] and show eigenvalues of all modes in the interval 0.36 ≤ b ≤ 0.50 in Fig. 1(c). Besides disclination states, in the same gap there appear corner (cyan dots), edge (purple dots), and zero-energy states (green dots), but all of them emerge at the outer edge/corners of the structure due to its finite size, and for this reason they do not hybridize for sufficiently large r with the red disclination states localized on the central disclination core. Calculated intensity distributions of all eigenmodes forming in the gap at r = 1.68 for the arrays with pentagonal and heptagonal disclination cores are presented in the Supplemental Materials - to stress the generality of these results, we present them for even larger structures with 300 (420) waveguides for the pentagonal (heptagonal) cases - while in experiments we use the sufficiently large structures from Fig. 1 that are most compatible with the writing technology. A similar transformation of the linear spectrum with increase of r is observed also in the array with the heptagonal disclination core, see Fig. 1(d). In this structure seven disclination states with different phase structures emerge in the spectrum (some of them are nearly degenerate, so that there are seemingly five red curves) shown in Fig. 1(e). The detailed structure of the spectrum for this case is presented in Fig. 1(f) for r = 1.68, where one can again see that disclination states at the disclination core may coexist with spatially separated from them corner (cyan dots), edge (purple dots), and zero-energy states (green dots) at the outer edge.
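The linear spectra above follow from diagonalizing the linearized Eq. (2). The paper uses the plane-wave expansion method; the finite-difference sketch below, on a 1D toy potential with a single waveguide, is only meant to show the structure of the eigenproblem b u = (1/2)u'' + R u (toy parameters, not the real arrays):

```python
import numpy as np

# Linear modes of the 1D analog of Eq. (2) without the cubic term:  b u = (1/2) u'' + R u
N, L = 600, 30.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
R = 5.0 * np.exp(-(x**2) / 0.5**2)        # single Gaussian "waveguide" (toy parameters)

# second-derivative matrix with Dirichlet boundaries
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / h**2
H = 0.5 * D2 + np.diag(R)

b_vals, modes = np.linalg.eigh(H)          # ascending eigenvalues
b_max = b_vals[-1]                          # guided mode has the largest b (0 < b < max R)
u0 = modes[:, -1]                           # its profile, localized on the waveguide
```

For the 2D disclination arrays one would build the same operator on a 2D grid (or in a plane-wave basis) and look for the cluster of in-gap eigenvalues corresponding to the disclination, corner, edge, and zero-energy states.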
The emergence of disclination states of topological origin at the inner disclination core is consistent with the bulk-disclination correspondence principle [21,22,32] that establishes the link between the fractional disclination charge Q (see Methods for details of topological characterization) and the localized states emerging at the disclination core. For our arrays, Q = 1/2 in the topologically nontrivial phase at r > 1, signaling the appearance of disclination states, while Q = 0 in the nontopological regime, when r < 1 and disclination states are absent. Properties of nonlinear disclination states We now address the properties of stationary nonlinear disclination states, whose shapes are governed by Eq. (2), where we now keep the last nonlinear term. Such states can be found using the Newton relaxation method. By analogy with corner solitons encountered in higher-order topological insulators with periodic bulk [59,60], such nonlinear disclination states can be called disclination solitons. For their theoretical description we adopt the large-scale disclination arrays schematically depicted in Fig. 2. In both pentagonal and heptagonal arrays the families of the nonlinear disclination states bifurcate from linear modes localized at the disclination core. We consider bifurcation from the disclination state with the largest propagation constant (see Supplemental Materials). With the increase of the propagation constant b the power U = ∫∫|ψ|² dxdy of the nonlinear disclination state monotonically grows [see Figs. 2(a) and 2(c) for the pentagonal and heptagonal cases, respectively], while the state first rapidly localizes and already at U ∼ 0.1 concentrates practically on one side of the disclination core [see representative intensity distributions in Figs. 2(b) and 2(d)].
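The Newton relaxation mentioned above solves F(u) = (1/2)∇²u + Ru + u³ − bu = 0 with the Jacobian (1/2)∇² + diag(R + 3u² − b). A 1D sketch on a toy single-waveguide potential (illustrative parameters, not the paper's 2D arrays), seeding Newton with the scaled linear mode near the bifurcation point:

```python
import numpy as np

# Newton relaxation for the 1D analog of Eq. (2):  F(u) = (1/2)u'' + R u + u^3 - b u = 0
N, L = 600, 30.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / h**2
R = 5.0 * np.exp(-(x**2) / 0.5**2)

# linear guided mode to bifurcate from
b_lin, modes = np.linalg.eigh(0.5 * D2 + np.diag(R))
phi = modes[:, -1]
phi = phi / phi[np.argmax(np.abs(phi))]      # peak-normalized linear mode

b = b_lin[-1] + 0.3                          # push the propagation constant into the gap
u = 0.5 * phi                                # seed near the bifurcation
for _ in range(30):
    F = 0.5 * (D2 @ u) + R * u + u**3 - b * u
    J = 0.5 * D2 + np.diag(R + 3.0 * u**2 - b)
    u = u - np.linalg.solve(J, F)

residual = float(np.linalg.norm(0.5 * (D2 @ u) + R * u + u**3 - b * u))
U = float(np.sum(u**2) * h)                  # power U = ∫ u^2 dx of the nonlinear state
```

Increasing b along the family raises both U and the peak amplitude, mirroring the monotonic U(b) dependence described for the disclination solitons.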
This clearly stresses the solitonic nature of such a state, since without nonlinearity its power would be redistributed between different sites at the disclination core due to beating between several disclination states (notice that this process may be slow because the eigenvalues of linear disclination states are close). The peak intensity I_p = max{|ψ|²} of the nonlinear disclination state [blue curves in Figs. 2(a) and 2(c)] grows with b as well. These results illustrate that nonlinear disclination states are modes of topological origin, whose position inside the gap and localization degree strongly depend on their power. Linear stability analysis performed for these nonlinear families shows that they are stable in the entire gap, in both pentagonal and heptagonal structures, thus they can be readily excited in the experiment. Observation of nonlinear disclination states For the observation of nonlinear disclination states we inscribed (see Methods for details of fabrication) the arrays with pentagonal and heptagonal disclination cores with different values of the distortion coefficient r = 0.8, 1.0 and 1.68, to be able to compare dynamics in topologically trivial and nontrivial structures. In experiments, we employed single-waveguide excitations using 280 fs pulses of variable energy E from a 1 kHz fs Ti:sapphire laser at 800 nm central wavelength. The input peak power in the waveguide, defined as the ratio of the pulse energy E to the pulse duration τ and taking into account the losses for matching with the focusing lens, is evaluated as 2.5 kW per 1 nJ. We compare excitations of three different waveguides (nodes) numbered 1 (at the disclination core), 2 (in the bulk), and 3 (in the outer corner), indicated by colored circles in Figs. 1(a) and 1(d). In Fig.
3 we present a comparison of the output intensity distributions for these three types of excitations calculated theoretically (images with white background) using Eq. (1) and measured experimentally (images with maroon background) for both pentagonal and heptagonal disclination arrays without distortion, i.e. with r = 1. In this "borderline" case between topological and nontopological phases, no localized states are present in the spectra of the arrays. For this reason, the excitation of any of the nodes 1, 2, or 3 is accompanied by strong diffraction in the linear regime for E = 10 nJ pulses (in Fig. 3 the position of excitation in each case is marked by a colored ring). Increasing pulse energy (power in theoretical simulations) results in gradual contraction of light towards the excited waveguide. For excitation of node 1 at the disclination core one can observe the formation of a well-localized soliton at the highest shown pulse energy E ∼ 600 nJ, i.e. above a considerable threshold [Figs. 3(a) and 3(d)]. The same pulse energy level in general is not sufficient for soliton formation for excitation in the bulk [Figs. 3(b) and 3(e)] and at the outer corner [Figs. 3(c) and 3(f)], since at this energy level the tendency for light contraction to the excited waveguide only begins. To achieve good localization in these cases one has to increase the pulse energy even further, approximately to the E ∼ 900 nJ level. In Fig. 4 we consider the same three types of excitations in the trivial insulator phase, when the distortion coefficient r = 0.8. According to the Wannier center analysis in each unit cell, the filling anomaly does not occur in this case, and, consequently, no localized corner, edge or disclination states can appear in the linear spectrum of the system, despite the fact that a forbidden gap opens for this value of r [see Figs. 1(b) and 1(e)], i.e. all linear eigenmodes are delocalized bulk modes.
Thus, one again observes diffraction in the linear regime for E = 10 nJ pulses, for both pentagonal [Figs. 4(a)-4(c)] and heptagonal [Figs. 4(d)-4(f)] disclination arrays for all three types of excitations. Moreover, now localization does not occur for excitation at the disclination core even for pulse energies E ∼ 600 nJ that were sufficient for nonlinear localization at r = 1.0. Thus, the pulse energy required for localization at the disclination core tends to increase with decrease of r. For the depicted pulse energies, localization was observed neither for bulk nor for corner excitations (it occurs only around E ∼ 900 nJ). The picture changes qualitatively at r = 1.68 in the topologically nontrivial phase. In this case, the disclination core supports topologically nontrivial localized disclination states, thus the input beam focused into node 1 excites localized states even at the lowest pulse energies E ∼ 10 nJ in both pentagonal [Fig. 5(a)] and heptagonal [Fig. 5(d)] arrays. Notice that even though in this quasi-linear regime single-site excitation leads to simultaneous population of several localized disclination eigenmodes, the beating between them occurs on a scale much larger than the sample length (due to the small difference of propagation constants b of such eigenmodes, see Supplementary Materials) and is therefore not visible in experiment at 10 cm of propagation. Even weak nonlinearity suppresses this beating, leading to the formation of well-localized disclination solitons that exist over a broad range of input pulse energies (powers), as long as the propagation constants of such states remain in the forbidden gap of the spectrum [Figs. 5(a) and 5(d)].
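Excitation dynamics of this kind is commonly integrated with a split-step Fourier scheme for Eq. (1): the diffraction part of each step is applied in Fourier space and the potential plus Kerr part in real space. A minimal 2D sketch (toy single-waveguide potential and illustrative parameters; the scheme conserves the total power Σ|ψ|² to machine precision):

```python
import numpy as np

def split_step(psi, R, dz, steps, dx):
    """Integrate i psi_z = -(1/2)(psi_xx + psi_yy) - R psi - |psi|^2 psi (Eq. (1))."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.shape[0], d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    kinetic = np.exp(-0.5j * (KX**2 + KY**2) * dz)            # exp(-i k^2 dz / 2)
    for _ in range(steps):
        psi = np.fft.ifft2(kinetic * np.fft.fft2(psi))        # diffraction step in k-space
        psi = psi * np.exp(1j * (R + np.abs(psi) ** 2) * dz)  # potential + Kerr step
    return psi

N, L = 128, 25.6
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
R = 5.0 * np.exp(-(X**2) / 0.25**2 - (Y**2) / 0.75**2)        # one elliptical waveguide (toy)
psi0 = np.exp(-(X**2 + Y**2)).astype(complex)                 # Gaussian input beam
psi = split_step(psi0, R, dz=0.01, steps=200, dx=x[1] - x[0])
```

In simulations of the experiment, `R` would be the full disclination-array potential and `psi0` a Gaussian centered on node 1, 2, or 3; the input power then controls whether the output stays delocalized or contracts into a soliton.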
Notice that because for r = 1.68 the gap is already wide, in experiment we do not reach power levels (below the optical damage threshold) at which strong coupling with bulk states occurs. In contrast, when node 2 in the bulk is excited, one observes diffraction, and nonlinear localization does not occur even for pulse energies E ∼ 600 nJ [see Figs. 5(b) and 5(e)]. An interesting situation is encountered for excitation of node 3 [Figs. 5(c) and 5(f)]. This excitation has the largest overlap with the corner states that are also well-localized for this value of r at the outer edge of the array, and it does not excite zero-energy states (because the latter have a different symmetry, see Supplementary Materials). As a result, in this case one observes the formation of nonlinear corner states in both pentagonal and heptagonal disclination arrays, whose localization degree only weakly changes in the considered range of input powers. Theoretical simulations fully support these observations. CONCLUSIONS In conclusion, we have reported on the experimental observation of nonlinear disclination states in disclination lattices inscribed in a transparent nonlinear optical medium. Such states form when the Kekulé distortion of waveguide positions drives the array into the topological phase, where several different types of localized states appear: disclination states residing at the disclination core and, not overlapping with them spatially, corner, edge, and zero-energy states at the outer edge of the structure. The nonlinearity enables strong light localization on one side of the disclination core. Our findings are reported for the pentagonal and heptagonal structures, with symmetry different from previously considered higher-order insulators with periodic bulk (such as C_3, C_4, C_6 ones).
They pave the way for the development of new types of topological lasers based on disclination states, efficient harmonic generation in topologically protected states, and observation of new interesting topological objects, such as topological disclination bound states in the continuum [66]. MATERIALS AND METHODS Fs-laser inscription of the waveguide arrays The waveguide arrays shown in Figs. 1(a,d) were inscribed in 10 cm-long fused silica glass samples (JGS1) using fs laser pulses at the wavelength of 515 nm (duration 280 fs, repetition rate 1 MHz, energy 320 nJ) focused with an aspheric lens (NA = 0.3) under the surface of the sample at depths of 550-1050 µm. Translation of the sample during the writing process of each waveguide was performed by a high-precision air-bearing positioner (Aerotech) with a velocity of 1 mm/s, identical for all waveguides. All such waveguides are elliptical and single-mode, and they exhibit propagation losses not exceeding 0.3 dB/cm at λ = 800 nm. After the waveguide arrays were inscribed, the input/output facets of the sample were optically polished, so that the sample length was shortened to 99 mm. Numerical simulations and normalizations For numerical simulations of evolution and excitation of the nonlinear disclination states, we used the dimensionless continuous nonlinear Schrödinger-like equation (1), in which the transverse coordinates x, y are normalized to the characteristic scale r_0 = 10 µm, the propagation distance z is normalized to the diffraction length kr_0² ≈ 1.14 mm, k = 2πn/λ is the wavenumber in the medium with the background refractive index n (for fused silica n ≈ 1.45), and λ = 800 nm is the working wavelength. Our waveguides are single-mode and elliptical due to the writing process; the dimensionless widths of the waveguides are a_x = 0.25 and a_y = 0.75 (corresponding to 2.5 and 7.5 µm, respectively), but the eigenmode of such waveguides is only slightly elliptical.
The waveguide spacing in the structure without distortion is d = 3.2 (corresponding to 32 µm). The array depth p = k^2 r_0^2 δn/n is proportional to the refractive index contrast δn in the structure. For instance, p = 1.0 corresponds to δn ∼ 1.1 × 10^-4. In the majority of the presented results (unless specifically stated in the caption), we use the depth p = 5.0, which provides the best agreement between experiments and theory.

FIG. 6. Construction of the disclination array from the honeycomb structure. (a) Original honeycomb array with intra-cell separation ℓ_intra and inter-cell separation ℓ_inter. The Kekulé distortion coefficient r = ℓ_intra/ℓ_inter. Each unit cell is indicated by white hexagons. √3d denotes the length of one side of the hexagonal unit cell. (b) Disclination array with the pentagonal core obtained by removing the sector with the Frank angle 2π/6 from the array in (a) and gluing the cutting edges. (c) Disclination array with the heptagonal core obtained by inserting the sector with the Frank angle 2π/6 into the array in (a) and subsequently compressing the unit cells. All arrays are shown within the window −30 ≤ x, y ≤ 30.

Pentagonal and heptagonal disclination arrays were obtained from regular honeycomb arrays using the two-step process described in the main text: at the first stage, shifting the waveguides introduces a controllable Kekulé distortion, quantified by the distortion coefficient r = ℓ_intra/ℓ_inter, where ℓ_intra and ℓ_inter are the intra-cell and inter-cell spacings between waveguides after the shift [see notations in Fig. 6(a)]; at the second stage, one removes or inserts a Frank sector into the array and then expands or contracts the unit cells so as to obtain the resulting disclination structure [Figs. 6(b) and 6(c)]. While deforming the structure we keep the longer axes of all elliptical waveguides parallel to the y axis.
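For readers who want to convert between the dimensionless and physical quantities quoted above, the normalizations can be recomputed directly from λ, n, and r_0. The following short sketch (variable names are ours, not from the paper) reproduces the quoted diffraction length and the p ↔ δn correspondence:

```python
import math

# Parameters quoted in the text
lam = 800e-9   # working wavelength, m
n = 1.45       # background refractive index of fused silica
r0 = 10e-6     # transverse normalization scale, m

k = 2 * math.pi * n / lam   # wavenumber in the medium, 1/m
z_diff = k * r0**2          # diffraction length used to normalize z, m

def delta_n(p, k=k, r0=r0, n=n):
    """Refractive-index contrast for a dimensionless array depth p,
    inverted from p = k^2 r0^2 dn / n."""
    return p * n / (k**2 * r0**2)

print(z_diff * 1e3)   # diffraction length in mm, close to the quoted 1.14
print(delta_n(1.0))   # close to the quoted 1.1e-4
print(delta_n(5.0))   # contrast corresponding to the depth used in most results
```

Running this reproduces kr_0^2 ≈ 1.14 mm and δn ≈ 1.1 × 10^-4 for p = 1.0, consistent with the values in the text.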
Topological indices

The topological properties of disclination arrays can be discussed by analyzing the fractional "charge" carried by each unit cell, which is employed in the established bulk-disclination correspondence principle [21]. Note that the "charge" here is a spectral charge that can be defined through the local density of states. It is an analog of the real charge in electric systems, and it can be used to count the number of states per unit cell below the topological band gap [29]. The spectral charge Q bound to a disclination with a Frank angle Ω is defined by [16, 21-23, 27, 28]

Q = (Ω/2π)(3χ_M/2 − χ_K) modulo 1,    (3)

where the high-symmetry indicators are χ_M = #M_1^(2) − #Γ_1^(2) and χ_K = #K_1^(3) − #Γ_1^(3), which should be calculated directly in the honeycomb array before removing or inserting the Frank sector. Here #Π_q^(n) is the number of bands below the forbidden gap at a high-symmetry point Π = Γ, M, K with the eigenvalue e^{i2π(q−1)/n} (q = 1, ..., n) of the C_n rotation matrix [27], and Ω = 2π/6 for the disclination arrays adopted in this work. For the topologically nontrivial case with r > 1, one finds that (χ_M, χ_K) = (2, 0) for both the pentagonal and the heptagonal disclination array, while for the topologically trivial case with r < 1, (χ_M, χ_K) = (0, 0). Thus, the fractional charge is Q = 1/2 for r > 1 and Q = 0 for r < 1. The fractional charge can also be obtained by counting the number of Wannier centers occupied by each unit cell. The Wannier centers are located at the edges of the unit cell if r > 1 and at its center if r < 1. The unit cell around the pentagonal disclination core has five bulk Wannier centers if r > 1, which give a 5/2 charge per unit cell. If r < 1, the charge per unit cell around the disclination core is 3. See also the Supplementary Materials.

FIG. 1. Disclination arrays and their linear spectra.
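The bookkeeping behind the fractional charge in the "Topological indices" paragraph above is easy to check numerically. The sketch below (our own helper, not code from the paper) evaluates Q = (Ω/2π)(3χ_M/2 − χ_K) mod 1 with exact fractions for the two cases discussed:

```python
from fractions import Fraction

def disclination_charge(chi_M, chi_K, frank_fraction=Fraction(1, 6)):
    """Spectral charge Q = (Omega/2pi) * (3/2 * chi_M - chi_K) modulo 1.
    The Frank angle is passed as Omega/(2pi); 1/6 corresponds to the
    2pi/6 sector removed or inserted in this work."""
    return (frank_fraction * (Fraction(3, 2) * chi_M - chi_K)) % 1

# Topologically nontrivial case, r > 1: (chi_M, chi_K) = (2, 0)
print(disclination_charge(2, 0))   # 1/2
# Topologically trivial case, r < 1: (chi_M, chi_K) = (0, 0)
print(disclination_charge(0, 0))   # 0
```

This reproduces Q = 1/2 for the distorted arrays with r > 1 and Q = 0 for r < 1, in agreement with the Wannier-center counting described above.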
(a) Microphotographs of the fs-laser written waveguide arrays with a pentagonal disclination core for different values of the distortion coefficient r. The orange, blue, and green dotted circles indicate the nodes 1, 2, and 3 that will be used below for probing the excitation dynamics. (b) Propagation constants b of the eigenmodes of the pentagonal disclination array vs distortion coefficient r. Red curves are associated with states residing at the disclination core. (c) Spectrum at r = 1.68. The bands corresponding to the bulk states are shown in gray, while propagation constants of corner, edge, zero-energy, and disclination states are represented by dots of different colors. (d-f) Microphotographs and spectrum for the array with a heptagonal disclination core. The arrangement of panels is the same as in (a)-(c).

FIG. 2. The families of nonlinear disclination states. (a) Peak intensity Ip (blue solid curve) and power U (orange solid curve) of the nonlinear disclination states vs propagation constant b in the array with the pentagonal core at r = 1.68. Gray regions represent the bulk band, while the vertical dotted color lines show propagation constants of linear corner, edge, zero-energy, and disclination states. (b) Intensity distributions of selected nonlinear disclination states with different propagation constants that correspond to the circles in (a). (c,d) The families of nonlinear disclination states and examples of their profiles in the array with the heptagonal core. Intensity distributions in (b) are shown within the window −40 ≤ x, y ≤ 40, while those in (d) are shown within the window −46 ≤ x, y ≤ 46.

The power U of the nonlinear disclination states [Figs. 2(a) and 2(c)] practically linearly increases with b. Even though the propagation constant of the nonlinear disclination state crosses the eigenvalues of linear edge (magenta vertical dashed lines) and corner (cyan vertical dashed lines) states, no coupling with them occurs because they are located at the outer edge of the array. However, when the propagation constant of the nonlinear disclination state penetrates into the bulk band, shown with gray color in Figs. 2(a) and 2(c), coupling with bulk states occurs, which leads to strong expansion over the entire array [see right panels in Figs. 2(b) and 2(d)]. As a result, the power U in the band rapidly increases with increase of b.

FIG. 3. Excitation of nonlinear modes in disclination arrays without distortion, at r = 1.0. Comparison of experimentally measured (maroon background) and theoretically calculated (white background) output intensity distributions in pentagonal (a)-(c) and heptagonal (d)-(f) disclination arrays for different input powers (pulse energies) of single-site Gaussian excitation. Results for beams focused into different nodes 1, 2, and 3 of the structure [indicated by colored circles and corresponding to the circles in Figs. 1(a) and 1(d)] are additionally highlighted with the orange, blue, and green backgrounds. Pulse energies E for the experimental outputs and dimensionless input powers U for theoretical outputs are indicated on each panel. White and black lines are guides for the eye illustrating unit cells of the array. All theoretical panels are shown within the window −30 ≤ x, y ≤ 30 and are obtained for the array depth p = 5.0.

FIG. 4. Excitation of nonlinear modes in the nontopological disclination array with r = 0.8. Comparison of theoretical and experimental output intensity distributions for different input powers in pentagonal (a)-(c) and heptagonal (d)-(f) arrays in the topologically trivial phase. The arrangement of panels is similar to Fig. 3. The array depth is p = 4.9 in (a)-(c) and p = 5.0 in (d)-(f).

FIG. 5. Excitation of nonlinear modes in topological disclination arrays with r = 1.68.
Comparison of experimental and theoretical outputs in pentagonal (a)-(c) and heptagonal (d)-(f) arrays in the topological phase, illustrating the formation of disclination solitons (a),(d) and corner solitons (c),(f) existing for a broad range of powers, and considerable diffraction at all power levels for excitation of node 2. In all cases the array depth is p = 5.0.

AUTHOR CONTRIBUTIONS

All authors made significant contributions to this work.

CONFLICT OF INTEREST

The authors declare no competing interests.

SUPPLEMENTARY INFORMATION

The online version contains supplementary material available at https://doi.org/.

REFERENCES

[1] M. Z. Hasan and C. L. Kane, "Colloquium: Topological insulators," Rev. Mod. Phys. 82, 3045 (2010).
[2] X.-L. Qi and S.-C. Zhang, "Topological insulators and superconductors," Rev. Mod. Phys. 83, 1057 (2011).
[3] S. D. Huber, "Topological mechanics," Nat. Phys. 12, 621 (2016).
[4] H. Xue, Y. Yang, and B. Zhang, "Topological acoustics," Nat. Rev. Mater. 7, 974 (2022).
[5] G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, "Experimental realisation of the topological Haldane model," Nature 515, 237 (2014).
[6] S. Klembt, T. H. Harder, O. A. Egorov, K. Winkler, R. Ge, M. A. Bandres, M. Emmerling, L. Worschech, T. C. H. Liew, M. Segev, C. Schneider, and S. Höfling, "Exciton-polariton topological insulator," Nature 562, 552 (2018).
[7] L. Lu, J. D. Joannopoulos, and M. Soljačić, "Topological photonics," Nat. Photon. 8, 821 (2014).
[8] T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, "Topological photonics," Rev. Mod. Phys. 91, 015006 (2019).
[9] D. Smirnova, D. Leykam, Y. Chong, and Y. Kivshar, "Nonlinear topological photonics," Appl. Phys. Rev. 7, 021306 (2020).
[10] M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, "Photonic Floquet topological insulators," Nature 496, 196 (2013).
[11] G. Q. Liang and Y. D. Chong, "Optical resonator analog of a two-dimensional topological insulator," Phys. Rev. Lett. 110, 203904 (2013).
[12] L.-H. Wu and X. Hu, "Scheme for achieving a topological photonic crystal by using dielectric material," Phys. Rev. Lett. 114, 223901 (2015).
[13] Y. Yang, Z. Gao, H. Xue, L. Zhang, M. He, Z. Yang, R. Singh, Y. Chong, B. Zhang, and H. Chen, "Realization of a three-dimensional photonic topological insulator," Nature 565, 622 (2019).
[14] L. J. Maczewsky, J. M. Zeuner, S. Nolte, and A. Szameit, "Observation of photonic anomalous Floquet topological insulators," Nat. Commun. 8, 13756 (2017).
[15] J. Noh, S. Huang, K. P. Chen, and M. C. Rechtsman, "Observation of photonic topological valley Hall edge states," Phys. Rev. Lett. 120, 063902 (2018).
[16] J. Noh, W. A. Benalcazar, S. Huang, M. J. Collins, K. P. Chen, T. L. Hughes, and M. C. Rechtsman, "Topological protection of photonic mid-gap defect modes," Nat. Photon. 12, 408 (2018).
[17] G. G. Pyrialakos, J. Beck, M. Heinrich, L. J. Maczewsky, N. V. Kantartzis, M. Khajavikhan, A. Szameit, and D. N. Christodoulides, "Bimorphic Floquet topological insulators," Nat. Mater. 21, 634 (2022).
[18] M. A. Bandres, M. C. Rechtsman, and M. Segev, "Topological photonic quasicrystals: Fractal topological spectrum and protected transport," Phys. Rev. X 6, 011016 (2016).
[19] Z. Yang, E. Lustig, Y. Lumer, and M. Segev, "Photonic Floquet topological insulators in a fractal lattice," Light Sci. Appl. 9, 128 (2020).
[20] T. Biesenthal, L. J. Maczewsky, Z. Yang, M. Kremer, M. Segev, A. Szameit, and M. Heinrich, "Fractal photonic topological insulators," Science 376, eabm2842 (2022).
[21] Y. Liu, S. Leung, F.-F. Li, Z.-K. Lin, X. Tao, Y. Poo, and J.-H. Jiang, "Bulk-disclination correspondence in topological crystalline insulators," Nature 589, 381 (2021).
[22] C. W. Peterson, T. Li, W. Jiang, T. L. Hughes, and G. Bahl, "Trapped fractional charges at bulk defects in topological insulators," Nature 589, 376 (2021).
[23] S. Wu, B. Jiang, Y. Liu, and J.-H. Jiang, "All-dielectric photonic crystal with unconventional higher-order topology," Photon. Res. 9, 668 (2021).
[24] A. Rüegg and C. Lin, "Bound states of conical singularities in graphene-based topological insulators," Phys. Rev. Lett. 110, 046401 (2013).
[25] J. C. Y. Teo and T. L. Hughes, "Existence of Majorana-Fermion bound states on disclinations and the classification of topological crystalline superconductors in two dimensions," Phys. Rev. Lett. 111, 047006 (2013).
[26] W. A. Benalcazar, J. C. Y. Teo, and T. L. Hughes, "Classification of two-dimensional topological crystalline superconductors and Majorana bound states at disclinations," Phys. Rev. B 89, 224503 (2014).
[27] W. A. Benalcazar, T. Li, and T. L. Hughes, "Quantization of fractional corner charge in Cn-symmetric higher-order topological crystalline insulators," Phys. Rev. B 99, 245151 (2019).
[28] T. Li, P. Zhu, W. A. Benalcazar, and T. L. Hughes, "Fractional disclination charge in two-dimensional Cn-symmetric topological crystalline insulators," Phys. Rev. B 101, 115115 (2020).
[29] H.-X. Wang, L. Liang, B. Jiang, J. Hu, X. Lu, and J.-H. Jiang, "Higher-order topological phases in tunable C3 symmetric photonic crystals," Photon. Res. 9, 1854 (2021).
[30] C. W. Peterson, T. Li, W. A. Benalcazar, T. L. Hughes, and G. Bahl, "A fractional corner anomaly reveals higher-order topology," Science 368, 1114 (2020).
[31] B. Xie, H.-X. Wang, X. Zhang, P. Zhan, J.-H. Jiang, M. Lu, and Y. Chen, "Higher-order band topology," Nat. Rev. Phys. 3, 520 (2021).
[32] Y. Chen, Y. Yin, Z.-K. Lin, Z.-H. Zheng, Y. Liu, J. Li, J.-H. Jiang, and H. Chen, "Observation of topological p-orbital disclination states in non-Euclidean acoustic metamaterials," Phys. Rev. Lett. 129, 154301 (2022).
[33] Q. Wang, H. Xue, B. Zhang, and Y. D. Chong, "Observation of protected photonic edge states induced by real-space topological lattice defects," Phys. Rev. Lett. 124, 243602 (2020).
[34] B.-Y. Xie, O. You, and S. Zhang, "Photonic topological pump between chiral disclination states," Phys. Rev. A 106, L021502 (2022).
[35] Q. Wang, Y. Ge, H.-x. Sun, H. Xue, D. Jia, Y.-j. Guan, S.-q. Yuan, B. Zhang, and Y. D. Chong, "Vortex states in an acoustic Weyl crystal with a topological lattice defect," Nat. Commun. 12, 3654 (2021).
[36] Y. Deng, W. A. Benalcazar, Z.-G. Chen, M. Oudich, G. Ma, and Y. Jing, "Observation of degenerate zero-energy topological states at disclinations in an acoustic lattice," Phys. Rev. Lett. 128, 174301 (2022).
[37] S.-N. Liang, Z.-H. Qin, H.-Y. Chen, X.-C. Sun, J.-L. Xie, Z.-G. Chen, S.-Y. Yu, C. He, M.-H. Lu, and Y.-F. Chen, "Topological disclination states for surface acoustic waves," Phys. Rev. B 106, 174112 (2022).
[38] B. Ren, H. Wang, Y. V. Kartashov, Y. Li, and Y. Zhang, "Nonlinear photonic disclination states," APL Photon. 8, 016101 (2023).
[39] L. J. Maczewsky, M. Heinrich, M. Kremer, S. K. Ivanov, M. Ehrhardt, F. Martinez, Y. V. Kartashov, V. V. Konotop, L. Torner, D. Bauer, and A. Szameit, "Nonlinearity-induced photonic topological insulator," Science 370, 701 (2020).
[40] M. Jürgensen, S. Mukherjee, and M. C. Rechtsman, "Quantized nonlinear Thouless pumping," Nature 596, 63 (2021).
[41] Q. Fu, P. Wang, Y. V. Kartashov, V. V. Konotop, and F. Ye, "Nonlinear Thouless pumping: Solitons and transport breakdown," Phys. Rev. Lett. 128, 154101 (2022).
[42] Q. Fu, P. Wang, Y. V. Kartashov, V. V. Konotop, and F. Ye, "Two-dimensional nonlinear Thouless pumping of matter waves," Phys. Rev. Lett. 129, 183901 (2022).
[43] M. Jürgensen, S. Mukherjee, C. Jörg, and M. C. Rechtsman, "Quantized fractional Thouless pumping of solitons," Nat. Phys. 19, 420 (2023).
[44] M. J. Ablowitz, C. W. Curtis, and Y.-P. Ma, "Linear and nonlinear traveling edge waves in optical honeycomb lattices," Phys. Rev. A 90, 023813 (2014).
[45] D. Leykam and Y. D. Chong, "Edge solitons in nonlinear photonic topological insulators," Phys. Rev. Lett. 117, 143901 (2016).
[46] M. J. Ablowitz and J. T. Cole, "Tight-binding methods for general longitudinally driven photonic lattices: Edge states and solitons," Phys. Rev. A 96, 043868 (2017).
[47] Y. Lumer, Y. Plotnik, M. C. Rechtsman, and M. Segev, "Self-localized states in photonic topological insulators," Phys. Rev. Lett. 111, 243905 (2013).
[48] Z. Y. Zhang, R. Wang, Y. Q. Zhang, Y. V. Kartashov, F. Li, H. Zhong, H. Guan, K. Gao, F. L. Li, Y. P. Zhang, and M. Xiao, "Observation of edge solitons in photonic graphene," Nat. Commun. 11, 1902 (2020).
[49] S. Mukherjee and M. C. Rechtsman, "Observation of Floquet solitons in a topological bandgap," Science 368, 856 (2020).
[50] S. K. Ivanov, Y. V. Kartashov, A. Szameit, L. Torner, and V. V. Konotop, "Vector topological edge solitons in Floquet insulators," ACS Photon. 7, 735 (2020).
[51] S. K. Ivanov, Y. V. Kartashov, M. Heinrich, A. Szameit, L. Torner, and V. V. Konotop, "Topological dipole Floquet solitons," Phys. Rev. A 103, 053507 (2021).
[52] H. Zhong, S. Xia, Y. Zhang, Y. Li, D. Song, C. Liu, and Z. Chen, "Nonlinear topological valley Hall edge states arising from type-II Dirac cones," Adv. Photon. 3, 056001 (2021).
[53] B. Ren, H. Wang, V. O. Kompanets, Y. V. Kartashov, Y. Li, and Y. Zhang, "Dark topological valley Hall edge solitons," Nanophoton. 10, 3559 (2021).
[54] Y. V. Kartashov and D. V. Skryabin, "Modulational instability and solitary waves in polariton topological insulators," Optica 3, 1228 (2016).
[55] Y. Q. Zhang, Y. V. Kartashov, and A. Ferrando, "Interface states in polariton topological insulators," Phys. Rev. A 99, 053836 (2019).
[56] Y. V. Kartashov and D. V. Skryabin, "Bistable topological insulator with exciton-polaritons," Phys. Rev. Lett. 119, 253904 (2017).
[57] W. Zhang, X. Chen, Y. V. Kartashov, D. V. Skryabin, and F. Ye, "Finite-dimensional bistable topological insulators: From small to large," Laser Photon. Rev. 13, 1900198 (2019).
[58] Y. Zhang, Y. V. Kartashov, L. Torner, Y. Li, and A. Ferrando, "Nonlinear higher-order polariton topological insulator," Opt. Lett. 45, 4710 (2020).
[59] M. S. Kirsch, Y. Zhang, M. Kremer, L. J. Maczewsky, S. K. Ivanov, Y. V. Kartashov, L. Torner, D. Bauer, A. Szameit, and M. Heinrich, "Nonlinear second-order photonic topological insulators," Nat. Phys. 17, 995 (2021).
[60] Z. Hu, D. Bongiovanni, D. Jukić, E. Jajtić, S. Xia, D. Song, J. Xu, R. Morandotti, H. Buljan, and Z. Chen, "Nonlinear control of photonic higher-order topological bound states in the continuum," Light Sci. Appl. 10, 164 (2021).
[61] H. Zhong, Y. V. Kartashov, Y. Li, and Y. Zhang, "π-mode solitons in photonic Floquet lattices," Phys. Rev. A 107, L021502 (2023).
[62] Y. V. Kartashov, A. A. Arkhipova, S. A. Zhuravitskii, N. N. Skryabin, I. V. Dyakonov, A. A. Kalinkin, S. P. Kulik, V. O. Kompanets, S. V. Chekalin, L. Torner, and V. N. Zadkov, "Observation of edge solitons in topological trimer arrays," Phys. Rev. Lett. 128, 093901 (2022).
[63] D. Tan, Z. Wang, B. Xu, and J. Qiu, "Photonic circuits written by femtosecond laser in glass: improved fabrication and recent progress in photonic devices," Adv. Photon. 3, 024002 (2021).
[64] Z. Lin and M. Hong, "Femtosecond laser precision engineering: From micron, submicron, to nanoscale," Ultrafast Sci. 2021, 9783514 (2021).
[65] L. Li, W. Kong, and F. Chen, "Femtosecond laser-inscribed optical waveguides in dielectric crystals: a concise review and recent advances," Adv. Photon. 4, 024002 (2022).
[66] H. Qin, Z. Zhang, Q. Chen, and R. Fleury, "Anomalous Floquet topological disclination states," arXiv:2304.03206 (2023).
[]
[ "TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking", "TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking" ]
[ "Michael Heck [email protected] \nHeinrich Heine University\nDüsseldorfGermany\n", "Carel Van Niekerk \nHeinrich Heine University\nDüsseldorfGermany\n", "Nurul Lubis [email protected] \nHeinrich Heine University\nDüsseldorfGermany\n", "Christian Geishauser [email protected] \nHeinrich Heine University\nDüsseldorfGermany\n", "Hsien-Chin Lin [email protected] \nHeinrich Heine University\nDüsseldorfGermany\n", "Marco Moresi [email protected] \nHeinrich Heine University\nDüsseldorfGermany\n", "Milica Gašić \nHeinrich Heine University\nDüsseldorfGermany\n" ]
[ "Heinrich Heine University\nDüsseldorfGermany", "Heinrich Heine University\nDüsseldorfGermany", "Heinrich Heine University\nDüsseldorfGermany", "Heinrich Heine University\nDüsseldorfGermany", "Heinrich Heine University\nDüsseldorfGermany", "Heinrich Heine University\nDüsseldorfGermany", "Heinrich Heine University\nDüsseldorfGermany" ]
[ "Proceedings of the SIGdial 2020 Conference" ]
Task-oriented dialog systems rely on dialog state tracking (DST) to monitor the user's goal during the course of an interaction. Multidomain and open-vocabulary settings complicate the task considerably and demand scalable solutions. In this paper we present a new approach to DST which makes use of various copy mechanisms to fill slots with values. Our model has no need to maintain a list of candidate values. Instead, all values are extracted from the dialog context on-thefly. A slot is filled by one of three copy mechanisms: (1) Span prediction may extract values directly from the user input; (2) a value may be copied from a system inform memory that keeps track of the system's inform operations; (3) a value may be copied over from a different slot that is already contained in the dialog state to resolve coreferences within and across domains. Our approach combines the advantages of span-based slot filling methods with memory methods to avoid the use of value picklists altogether. We argue that our strategy simplifies the DST task while at the same time achieving state of the art performance on various popular evaluation sets including Multiwoz 2.1, where we achieve a joint goal accuracy beyond 55%.
null
[ "https://www.aclweb.org/anthology/2020.sigdial-1.4.pdf" ]
218,517,128
2005.02877
4f31f0b83155a15163f07eeec8301126a225e155
TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 2020. 2020 Michael Heck [email protected] Heinrich Heine University DüsseldorfGermany Carel Van Niekerk Heinrich Heine University DüsseldorfGermany Nurul Lubis [email protected] Heinrich Heine University DüsseldorfGermany Christian Geishauser [email protected] Heinrich Heine University DüsseldorfGermany Hsien-Chin Lin [email protected] Heinrich Heine University DüsseldorfGermany Marco Moresi [email protected] Heinrich Heine University DüsseldorfGermany Milica Gašić Heinrich Heine University DüsseldorfGermany TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking Proceedings of the SIGdial 2020 Conference the SIGdial 2020 ConferenceAssociation for Computational LinguisticsJuly 2020. 202035 Task-oriented dialog systems rely on dialog state tracking (DST) to monitor the user's goal during the course of an interaction. Multidomain and open-vocabulary settings complicate the task considerably and demand scalable solutions. In this paper we present a new approach to DST which makes use of various copy mechanisms to fill slots with values. Our model has no need to maintain a list of candidate values. Instead, all values are extracted from the dialog context on-thefly. A slot is filled by one of three copy mechanisms: (1) Span prediction may extract values directly from the user input; (2) a value may be copied from a system inform memory that keeps track of the system's inform operations; (3) a value may be copied over from a different slot that is already contained in the dialog state to resolve coreferences within and across domains. Our approach combines the advantages of span-based slot filling methods with memory methods to avoid the use of value picklists altogether. 
Introduction

The increasing popularity of natural language human-computer interaction urges the development of robust and scalable task-oriented dialog systems. In order to fulfill a user goal, a dialog system must be capable of extracting meaning and intent from the user input, and be able to keep and update this information over the continuation of the dialog (Young et al., 2010). This task is called dialog state tracking (DST). Because the next dialog system action depends on the current state of the conversation, accurate DST is absolutely vital.

DST is tasked to extract from the user input information on the different concepts that are necessary to complete the task at hand. For example, in order to recommend a restaurant to a user, the system needs to know their preferences in terms of price, location, etc. These concepts are encapsulated in an ontology, which defines dialog domains (e.g., restaurant or hotel), slots (e.g., price range or location), and values (e.g., cheap or expensive). Solving this information extraction task is a prerequisite for forming a belief over the dialog state.

Traditional approaches to DST operate on a fixed ontology and perform prediction over a pre-defined set of slot-value pairs (Liu and Lane, 2017; Zhong et al., 2018). Such approaches perform very well on datasets which are defined over fairly small ontologies. Applying these methods to more complex datasets, however, reveals various limitations (Ren et al., 2018; Nouri and Hosseini-Asl, 2018). First, it is often difficult to obtain a complete ontology for a task. Second, slot-value pairs outside the ontology or the training data are impossible to capture during test time.
Third, such methods at best scale linearly with the size of the ontology. Most importantly, the idea of fixed ontologies is not sustainable, as in real-world applications they are subject to constant change. Human-computer interactions often need to be defined over multiple domains at the same time, ideally with unrestricted vocabulary.

Recent approaches to multi-domain and open-vocabulary DST extract values from the dialog context directly by predicting value spans in the input (Gao et al., 2019; Chao and Lane, 2019; Zhang et al., 2019). Span prediction is a demonstrably potent method to detect relevant information in utterances, but its major drawback is that it only suits extractive values that are explicitly expressed as a sequence of tokens. This is the reason why span-based methods benefit from the support of a picklist, i.e., a list of value candidates from which a system can choose. Still, these methods fall short when handling the nuanced and subtle phenomena that often occur in natural conversations, such as coreference and value sharing ("I'd like a hotel in the same area as the restaurant.") and implicit choice ("Any of those is ok.").

In this work, we propose a new approach to value independent multi-domain DST:

1. In addition to extracting values directly from the user utterance via span prediction and copy, our model creates and maintains two memories on-the-fly, one for system inform slots and one for the previously seen slots.
2. The system inform memory solves the implicit choice issue by allowing a copy mechanism from concepts mentioned by the system, e.g., values that are offered and recommended.
3. The DS memory allows the use of values already existing in the dialog state to infer new values, which solves the coreference and value sharing problems.

We call this approach TripPy, a triple copy strategy for DST.
Our experimental results show that our model is able to handle out-of-vocabulary and rare values very well during test time, demonstrating good generalization. In a detailed analysis, we take a closer look at each of the model's components to study their particular roles.

Related Work

Dialog state tracking has been of broad interest to the dialog research community, which is reflected by the existence of a series of DST challenges (Henderson et al., 2014; Rastogi et al., 2019). These challenges have consistently pushed the boundaries of DST performance. Current state-of-the-art models have to prove that they work on long, diverse conversations in multiple domains with a high slot count and a principally unrestricted vocabulary (Eric et al., 2019). Dialogs of such complex nature are tough for traditional approaches that rely on the availability of a candidate list, due to scalability and generalization issues (Liu and Lane, 2017; Rastogi et al., 2017).

Span-based approaches recently alleviated both problems to some extent. Here, slot values are extracted from the input directly by predicting start and end positions in the course of the dialog. For instance, Xu and Hu (2018) utilize an attention-based recurrent network with a pointer mechanism to extract values from the context. This extractive approach has its limitations, since many expressible values are not found verbatim in the input, but rather mentioned implicitly or expressed by a variety of rephrasings.

With the assistance of contextual models such as BERT (Devlin et al., 2018), issues arising from expressional variations can be mitigated. Recent work has demonstrated that encoding the dialog context with contextual representations helps span prediction generalize over rephrasings. SUMBT (Lee et al., 2019) utilizes BERT to encode slot IDs and candidate values and learns slot-value relationships appearing in dialogs via an attention mechanism. Dialog context is encoded with recurrence.
BERT-DST (Chao and Lane, 2019) employs contextual representations to encode each dialog turn and feeds them into classification heads for value prediction. The dialog history, however, is not considered for slot filling. In Gao et al. (2019), DST is rendered as a reading comprehension task that is approached with a BERT-based dialog context encoder. A slot carryover prediction model determines whether previously detected values should be kept in the DS for the current turn.

An alternative to span prediction is value generation. TRADE and MA-DST (Kumar et al., 2020) generate a DS from the input using a copy mechanism to combine the distributions over a pre-defined vocabulary and the vocabulary of the current context. SOM-DST (Kim et al., 2019) applies a similar mechanism for value generation, but takes the previous dialog turn as well as the previous DS as input to BERT to predict the current DS. A state operation predictor determines whether a slot actually needs to be updated or not. The downside of generative models is that they tend to produce invalid values, for instance by word repetitions or omissions.

Recently, a hybrid approach called DS-DST has been proposed that makes use of both span-based and picklist-based prediction for slot filling (Zhang et al., 2019). In contrast to generative approaches, picklist-based and span-based methods use existing word sequences to fill slots. DS-DST somewhat alleviates the limitations of span prediction by filling a subset of slots with a picklist method instead.

Recent works seem to reveal a trade-off between the level of value independence in a model and the DST performance. Chao and Lane (2019) and Gao et al. (2019) rely solely on span prediction, but their performance lags behind methods that at least partially rely on a pre-defined list of candidate values. This has been demonstrated impressively by Zhang et al. (2019): their model could not compete when relying on span prediction entirely.
In contrast, when relying solely on their picklist slot-filling method, they achieved the to-date best performance on MultiWOZ 2.1. The proposed dual-strategy approach lies favorably between these two extremes.

To the best of our knowledge, none of the recent approaches to complex DST tasks such as MultiWOZ (Budzianowski et al., 2018; Eric et al., 2019) are value independent in the strict sense. What's more, they tremendously benefit from the use of a value candidate list. Our work tackles this limitation by introducing a triple copy strategy that relies on span prediction as well as memory mechanisms. In contrast to other hybrid approaches such as Zhang et al. (2019), our memory mechanisms create candidate lists of values on-the-fly, with the dialog context as the only source of information, thus avoiding the use of pre-defined picklists. We let the model decide which strategy to choose for each slot at each turn. Our approach differs from Chao and Lane (2019) and Kim et al. (2019) in that we consider the dialog history as context in addition to the current turn. We also differ from approaches like Lee et al. (2019) since we do not employ recurrence. Like Kim et al. (2019), we use auxiliary inputs at each turn, but we do so as a late feature fusion strategy. With our slot-value copy mechanism to resolve coreferring value phrases, we employ a method which is reminiscent of Gao et al. (2019)'s slot carryover, but with the sharp distinction that we copy values between different slots, facilitating value sharing within and across domains.

TripPy: Triple Copy Strategy for DST

Our model expects the following input format to perform dialog state tracking. Let X = {(U_1, M_1), ..., (U_T, M_T)} be the sequence of turns that comprise a dialog of length T. U_t is the user utterance at turn t, and M_t is the system utterance that precedes the user utterance. The task of the model is (1) to determine for every turn whether any of the N domain-slot pairs in S = {S_1, ..., S_N} is present, (2) to predict the values for each S_n, and (3) to track the dialog state DS_t over the course of the dialog, i.e., for t ∈ [1, T].

We employ a triple copy strategy to fill the slots. The intuition is that values are either explicitly expressed by the user, expressed by the system and referred to by the user via confirmation or rejection, or expressed earlier in the dialog as an assignment to another domain-slot pair (coreference). Each of these cases is handled by one of three copy mechanisms. It becomes apparent that slots cannot be filled by exclusively resorting to one particular copy method. Therefore, we employ slot gates that determine at each turn which method to use to fill the respective slot.

Figure 2 depicts our model. We encode the dialog context with a BERT front-end and feed the resulting contextual representations forward to various classification heads to solve the sub-tasks for DST. The aggregate sequence representation is the input to the slot gates. The sequence of token representations is the input to the span predictors.

Context Encoder

We use BERT (Devlin et al., 2018) as a front-end to encode the dialog context at each turn t as

    R_t = BERT([CLS] ⊕ U_t ⊕ [SEP] ⊕ M_t ⊕ [SEP] ⊕ H_t ⊕ [SEP]),    (1)

where H_t = (U_{t-1}, M_{t-1}), ..., (U_1, M_1) is the history of the dialog up to and excluding turn t. The special token [CLS] precedes every input sequence to BERT, and [SEP] separates portions of the input sequence. The output is R_t = [r^CLS_t, r^1_t, ..., r^seqmax_t], where r^CLS_t is a representation of the entire turn including the dialog context H_t. The vectors r^1_t to r^seqmax_t are contextual representations for the sequence of input tokens (including special tokens). Both types of representations are used for the following classification tasks.

Slot Gates

Our model is equipped with a slot gate for each domain-slot pair. This ensures greatest flexibility for multi-domain DST, as there is no restriction as to how many domains might be present in a single turn.
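As an illustration, the input layout of Eq. (1) can be sketched with a toy whitespace tokenizer (the tokenizer and the helper name `encode_turn` are assumptions for this sketch; the paper uses WordPiece tokenization, not shown here):

```python
# Sketch of the TripPy input layout of Eq. (1):
#   [CLS] U_t [SEP] M_t [SEP] H_t [SEP]
# A toy whitespace tokenizer stands in for WordPiece.

def encode_turn(user_utt, system_utt, history_turns, max_len=180):
    """Build the token sequence fed to the encoder at turn t.

    history_turns: list of (user_utt, system_utt) pairs for turns t-1 .. 1.
    """
    tokens = ["[CLS]"] + user_utt.split() + ["[SEP]"]
    tokens += system_utt.split() + ["[SEP]"]
    for hist_user, hist_system in history_turns:
        tokens += hist_user.split() + hist_system.split()
    tokens += ["[SEP]"]
    return tokens[:max_len]  # truncate to the encoder's maximum length

example = encode_turn(
    "i would like a cheap hotel",
    "what area would you prefer ?",
    [("i need a place to stay", "sure , i can help with that")],
)
print(example[0], example[-1])  # [CLS] ... [SEP]
```

Note that in a real pipeline the history is appended oldest-last exactly as above, so truncation at `max_len` drops the oldest context first.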
At each turn t, the slot gates assign each slot S_n to one of the classes in C = {none, dontcare, span, inform, refer}. The first two labels express special cases: none denotes that the slot does not take a value in this turn, and dontcare states that any value is acceptable for this slot. The remaining three labels each denote one of the model's copy mechanisms. span indicates that a value is present in U_t that can be extracted via span prediction. inform indicates that the user refers to a value that has been uttered by the system in M_t. Lastly, refer indicates that the user refers to a value that is already present in DS_t.

The input to the slot gates is r^CLS_t, and the probability distribution over the classes C for domain-slot pair S_n at turn t is

    p^gate_{t,s}(r^CLS_t) = softmax(W^gate_s · r^CLS_t + b^gate_s) ∈ R^5,    (2)

i.e., each slot gate is realized by a trainable linear layer classification head for BERT. Boolean slots, i.e., slots that only take binary values, are treated separately. Here, the list of possible classes is C_bool = {none, dontcare, true, false} and the slot gate probability is

    p^bgate_{t,s}(r^CLS_t) = softmax(W^bgate_s · r^CLS_t + b^bgate_s) ∈ R^4.    (3)

Span-based Value Prediction

For each slot s that is to be filled via span prediction, a domain-slot-specific span prediction layer takes the token representations [r^1_t, ..., r^seqmax_t] of the entire dialog context for turn t as input and projects them as follows:

    [α^s_{t,i}, β^s_{t,i}] = W^span_s · r^i_t + b^span_s ∈ R^2    (4a)
    p^start_{t,s} = softmax(α^s_t)    (4b)
    p^end_{t,s} = softmax(β^s_t)    (4c)
    start^s_t = argmax(p^start_{t,s})    (4d)
    end^s_t = argmax(p^end_{t,s}).    (4e)

Each span predictor is realized by a trainable linear layer classification head for BERT, followed by two parallel softmax layers to predict the start and end position. Note that there is no special handling for erroneously predicting end^s_t < start^s_t; in practice, the resulting span will simply be empty.
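The slot gate head of Eq. (2) and the span head of Eq. (4) can be sketched in a few lines of NumPy. The weights and sizes below are random toy values (BERT-base uses hidden size 768); in the model these heads sit on top of a fine-tuned encoder:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
hidden, seq_len = 8, 6
r_cls = rng.normal(size=hidden)           # aggregate [CLS] representation
R = rng.normal(size=(seq_len, hidden))    # token representations r^1..r^seqmax

# Eq. (2): linear head + softmax over the 5 gate classes
W_gate, b_gate = rng.normal(size=(5, hidden)), np.zeros(5)
p_gate = softmax(W_gate @ r_cls + b_gate)
gate_class = ["none", "dontcare", "span", "inform", "refer"][int(p_gate.argmax())]

# Eq. (4): per-token (alpha, beta) scores; softmax over positions,
# argmax gives the predicted start and end of the value span
W_span, b_span = rng.normal(size=(2, hidden)), np.zeros(2)
scores = R @ W_span.T + b_span            # shape (seq_len, 2)
p_start, p_end = softmax(scores[:, 0]), softmax(scores[:, 1])
start, end = int(p_start.argmax()), int(p_end.argmax())
```

If `end < start`, the extracted span is simply empty, mirroring the behavior described above.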
System Inform Memory for Value Prediction

The system inform memory I_t = {I^1_t, ..., I^N_t} keeps track of all slot values that were informed by the system in dialog turn t. A slot in DS_t needs to be filled by an informed value if the user positively refers to it but does not express the value such that span prediction can be used. E.g., in Figure 1 the slot gate for domain-slot <restaurant,name> should predict inform. The slot is then filled by copying the informed value into the dialog state, i.e., DS^s_t = I^s_t, where s is the index of the respective domain-slot.

DS Memory for Coreference Resolution

The more complex a dialog can be, the more likely it is that coreferences need to be resolved. For instance, the name of a restaurant might very well be the destination of a taxi ride, but the restaurant might not be referred to explicitly upon ordering a taxi within the same conversation. Coreference resolution is challenging due to the rich variety of ways to form referrals, as well as due to the fact that coreferences often span multiple turns. An example of a coreference that can be handled by our model is found in Figure 1.

The third copy mechanism utilizes the DS as a memory to resolve coreferences. If a slot gate predicts that the user refers to a value that has already been assigned to a different slot during the conversation, then the probability distribution over all possible slots that can be referenced is

    p^refer_{t,s}(r^CLS_t) = softmax(W^refer_s · r^CLS_t + b^refer_s) ∈ R^{N+1},    (5)

i.e., for each slot, a linear layer classification head either predicts the slot which contains the referenced value, or none for no reference.

Auxiliary Features

Some recent approaches to neural DST utilize auxiliary input to preserve contextual information. For instance, SOM-DST adds the dialog state to its single-turn input as a means to preserve context across turns.
We already include contextual information in the input to BERT by appending the dialog history H_t. In addition to that, we also create auxiliary features based on the system inform memory and the DS memory. We generate two binary vectors a^inform_t ∈ {0,1}^N and a^ds_t ∈ {0,1}^N that indicate whether (1) a slot has recently been informed (based on the system inform memory), or (2) a slot has already been filled during the course of the dialog (based on the DS memory). These vectors are added to the output of BERT in a late fusion approach, and the slot gate probabilities in Equations 2, 3 and 5 become p^gate_{t,s}(r̂^CLS_t), p^bgate_{t,s}(r̂^CLS_t) and p^refer_{t,s}(r̂^CLS_t), with r̂^CLS_t = r^CLS_t ⊕ a^inform_t ⊕ a^ds_t.

Partial Masking

We partially mask the dialog history H_t by replacing values with BERT's generic [UNK] token. The masking is partial in the sense that it is applied only to past system utterances. For the system utterances, the contained values are known and their masking is straightforward. The idea behind partially masking the history is that the model is compelled to focus on the historical context information rather than the sighting of specific values. This should result in more robust representations r^CLS_t and therefore better overall slot gate performance.

Dialog State Update

We employ the same rule-based update mechanism as Chao and Lane (2019) to track the dialog state across turns. At every turn, we update a slot if a value has been detected which is not none. If a slot value is predicted as none, the slot will not be updated.

Datasets

We train and test our model on four datasets: MultiWOZ 2.1 (Eric et al., 2019), WOZ 2.0 (Wen et al., 2016), sim-M and sim-R (Shah et al., 2018). Among these, MultiWOZ 2.1 is by far the most challenging; it is comprised of over 10000 multi-domain dialogs defined over a fairly large ontology, with 5 domains (train, restaurant, hotel, taxi, attraction) and 30 domain-slot pairs that appear in all portions of the data. The other datasets are single-domain and significantly smaller. Evaluations on these mainly serve as a sanity check to show that we don't overfit to a particular problem. Some slots in sim-M and sim-R show a high out-of-vocabulary rate, making them particularly interesting for evaluating value independent DST. The single-domain datasets come with span labels. However, MultiWOZ 2.1 does not.
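Putting the pieces of Section 3 together, a minimal sketch of how a predicted gate class triggers one of the three copy mechanisms and the rule-based state update. The helper name `fill_slot`, the slot names, and the dict-based dialog state are illustrative assumptions, not the paper's implementation:

```python
def fill_slot(slot, gate_class, dialog_state, user_tokens, span,
              inform_memory, refer_slot):
    """Update one slot of the dialog state according to its gate class.

    span: (start, end) token indices for the span copy mechanism.
    inform_memory: dict slot -> value informed by the system this turn.
    refer_slot: slot whose value is referenced (refer mechanism).
    """
    if gate_class == "none":
        return dialog_state                    # rule-based update: keep as-is
    if gate_class == "dontcare":
        dialog_state[slot] = "dontcare"
    elif gate_class == "span":                 # copy from the user utterance
        start, end = span
        dialog_state[slot] = " ".join(user_tokens[start:end + 1])
    elif gate_class == "inform":               # copy from system inform memory
        dialog_state[slot] = inform_memory[slot]
    elif gate_class == "refer":                # copy from another slot in the DS
        dialog_state[slot] = dialog_state[refer_slot]
    return dialog_state

ds = {"restaurant-area": "centre"}
fill_slot("restaurant-food", "span", ds,
          "i want cheap british food".split(), (3, 3), {}, None)
fill_slot("hotel-area", "refer", ds, [], None, {}, "restaurant-area")
# ds now maps hotel-area to "centre", copied over from restaurant-area
```

The `refer` branch is what lets a hotel slot inherit the restaurant's area in the Figure 1 example without any span in the current utterance.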
We therefore generate our own span labels by matching the ground truth value labels to their respective utterances.

Evaluation

We compute the joint goal accuracy (JGA) on all test sets for straightforward comparison with other approaches. The joint goal accuracy over a dataset is the ratio of dialog turns in that dataset for which all slots have been filled with the correct value according to the ground truth. Note that none needs to be predicted if a slot value is not present in a turn. In addition to JGA, we compute the accuracy of the slot gates (joint and per class) and various other metrics for a more detailed analysis of model design decisions. We run each test three times with different seeds and report the average numbers for more reliable results.

MultiWOZ 2.1 is in parts labeled inconsistently. For a fair evaluation, we consider a value prediction correct if it matches any of its valid labels (for instance, "centre" and "center" for the slot-value pair hotel-area=centre). We semi-automatically analyzed value label inconsistencies in the training portion of the dataset in order to identify all label variants for any given value. During testing, these mappings are applied as is.

Training

We use the pre-trained BERT-base-uncased transformer (Vaswani et al., 2017) as the context encoder front-end. This model has 12 hidden layers with 768 units and 12 self-attention heads each. The maximum input sequence length is set to 180 tokens after WordPiece tokenization (Wu et al., 2016), except for MultiWOZ 2.1, where we set this parameter to 512. We compute the joint loss as

    L = 0.8 · L_gate + 0.1 · L_span + 0.1 · L_refer.    (6)

The function for all losses is joint cross entropy. As there is no coreferencing in the evaluated single-domain datasets, the refer loss is not computed in those cases and the loss function is

    L = 0.8 · L_gate + 0.2 · L_span    (7)

instead.
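The weighted joint loss of Eq. (6) can be sketched with toy per-head distributions. The probabilities and gold labels below are made up for illustration; in training these come from the softmax heads and the span/gate annotations:

```python
import numpy as np

def nll(p, label):
    # cross entropy for one example: negative log-likelihood of the gold class
    return -np.log(p[label])

# toy output distributions for one slot at one turn
p_gate  = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # gold gate class: span (index 2)
p_start = np.array([0.2, 0.7, 0.1])            # gold start position: 1
p_end   = np.array([0.1, 0.2, 0.7])            # gold end position: 2
p_refer = np.array([0.9, 0.1])                 # gold referral: none (index 0)

L_gate  = nll(p_gate, 2)
L_span  = nll(p_start, 1) + nll(p_end, 2)
L_refer = nll(p_refer, 0)

# Eq. (6): the gate loss dominates with weight 0.8
L = 0.8 * L_gate + 0.1 * L_span + 0.1 * L_refer
```

Dropping the refer term and moving its weight to the span term yields Eq. (7) for the single-domain datasets.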
Span predictors are presented only with spans from the user utterances to learn from (including the user utterances in the history portion H_t of the input). During training, we set the span prediction loss to zero for all slots that are not labeled as span. Likewise, the coreference prediction losses are set to zero for slots that are not labeled as refer.

For optimization we use the Adam optimizer (Kingma and Ba, 2014) and backpropagate through the entire network including BERT, which constitutes a fine-tuning of the latter. The initial learning rate is set to 2e-5. We conduct training with a warmup proportion of 10% and let the learning rate decay linearly after the warmup phase. Early stopping is employed based on the JGA of the development set. During training we use dropout (Srivastava et al., 2014) on the BERT output with a rate of 30%. We do not use slot value dropout (Xu and Sarikaya, 2014), except for one dataset (sim-M), where performance was greatly affected by this measure (see Section 5.1).

Experimental Results

Tables 1, 2 and 3 show the performance of our model in comparison to various baselines. TripPy achieves state-of-the-art performance on all four evaluated datasets, with varying distance to the runner-up. Most notably, we were able to push the performance on MultiWOZ 2.1, the most complex task, by another 2.0% absolute compared to the previous top-scoring method, achieving 55.3% JGA. The improvements on the much smaller datasets WOZ 2.0, sim-M and sim-R demonstrate that the model benefits from its design on single-domain tasks as well. The following analysis serves a better understanding of our model's strengths.

Analysis

We analyze the performance of TripPy in ablation experiments on MultiWOZ 2.1 (see Table 4). Our baseline model is best compared to BERT-DST (Chao and Lane, 2019); we only take single turns as input and only use span prediction to extract values from the turn.
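The joint goal accuracy used throughout these comparisons can be sketched as follows (the dict-per-turn representation of dialog states is an illustrative assumption; slots absent from a dict implicitly take the value none):

```python
def joint_goal_accuracy(predictions, references):
    """Fraction of turns whose predicted state matches the reference exactly.

    predictions, references: one dict (slot -> value) per turn; a slot
    missing from a dict implicitly takes the value none.
    """
    assert len(predictions) == len(references)
    correct = sum(pred == ref for pred, ref in zip(predictions, references))
    return correct / len(references)

preds = [{"hotel-area": "centre"}, {"hotel-area": "centre", "hotel-stars": "4"}]
refs  = [{"hotel-area": "centre"}, {"hotel-area": "centre", "hotel-stars": "3"}]
print(joint_goal_accuracy(preds, refs))  # 0.5: one wrong slot ruins turn two
```

Because a single wrong slot invalidates the whole turn, JGA is a strict metric; this is why small per-slot gains can translate into large JGA differences.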
The resulting performance is comparable to other span-based methods such as DST-reader and DST-span, and confirms that the dialogs in MultiWOZ are too complex to be handled by this information extraction mechanism alone.

Impact of the triple copy mechanism

Using our proposed triple copy mechanism pushes the performance close to 50%, surpassing TRADE and closing in on the leading hybrid approaches. Especially the performance of the slot gates benefits from this change (see Figure 3). When looking at the F1 score for the individual classes, one can see that the span class benefits from the distinction. It is important to point out that none of the coreferences that our model handles can be resolved by span prediction alone. This means that otherwise guaranteed misses can now be avoided, as coreferences are resolved by copying values between slots. What's more, using the dialog state memory to resolve coreferences helps value detection across multiple turns, as a value that is referred to in the current turn might have been assigned to another slot multiple turns before.

Impact of the dialog history

We found that using the dialog history as additional context information is critical to good performance, as it reduces contextual ambiguity. This is clearly reflected in the improved performance of the slot gates (see Figure 3), which has two positive effects. First, the presence and type of values is recognized correctly more often. Especially the special value dontcare and Boolean slots (taking values true and false) benefit from the additional context. This is only logical, since they are predicted by the slot gate using the representation vector of the [CLS] token. Second, values are assigned to the correct slot more often than without the additional contextual information. With the additional dialog history, we outperform DS-DST and match SOM-DST, which set the previous state-of-the-art.
Impact of the auxiliary features

SOM-DST uses single turns as input, but preserves additional contextual information throughout the dialog by using the dialog state as auxiliary input. By adding our memory-based auxiliary features in a late fusion approach, we surpass SOM-DST, and ultimately DS-picklist, which performs slot filling with knowledge of the full ontology. Even though our features carry less information, namely only the identities of the informed slots (tracked by the system inform memory) and the identities of the previously seen slots (tracked by the DS memory), we see substantial improvement when using them. Evidently, more information about the progress of the dialog helps the slot gates and the referral gates in their classification tasks.

Impact of partial masking

We found that masking the informed values in past system utterances does not give a clear benefit, but it also does not harm the performance of the slot gates. While the inform cases are detected more accurately, some other cases suffer from the loss of information in the input. Overall, a minor improvement is observable. We report the numbers for MultiWOZ in Table 4 and Figure 3, but would like to note that we have seen the same trend on all other datasets as well.

Impact of the context width

Our best model utilizes the full width of BERT (512 tokens). This is a clear advantage for longer dialogs. Maximal context width is not a decisive factor for the single-domain datasets, since their dialogs tend to be shorter; as expected, we have not seen any change in performance on these. For MultiWOZ, we gain 1% absolute by maximizing the history length to preserve as much of the dialog history as possible, achieving 55.3% JGA.

Generalization Study

It is important that a DST model generalizes well to previously unseen values. We looked at the performance of our model on slots with exceptionally high out-of-vocabulary rates, of which we identified 8 across the evaluated datasets.
Figure 4 plots performance measures for these slots and compares them to the average performance over all slots in the respective datasets. Generally, the slots that expect named entities as values show the lowest accuracy. However, the below-average performance of these slots does not seem to be caused by a particularly high OOV rate: even at a 100% OOV rate, the movie slot of sim-M still performs comparably well, and other slots with relatively high OOV rates still perform close to or better than the average.

Figure 5 plots the recall of values depending on the number of samples seen during training. To our surprise, it does not seem to matter whether a particular value has ever been seen during training in order to be detected correctly: OOV values are detected just as well as generally less common values. Our observations do indicate, however, that the model benefits tremendously from seeing a certain minimal amount of training samples for each value, which is somewhere around 50. In other words, if such amounts of data are available, the model is able to utilize them effectively. In the same figure we compare TripPy to the span prediction baseline. The latter clearly struggles with OOVs and rare values and generally seems to require more training samples to achieve good recall. Its higher recall on OOV values is likely caused by the fact that many unseen values are of the category time of day, which mostly follows a strict format and is therefore easier to spot. Overall, TripPy clearly generalizes better over sample counts.

To test the limits of our model's generalization capacities, we manually replaced most of the values in the MultiWOZ test set with (fictional but still meaningful) OOV values. Of the over 1000 unique slot-value pairs appearing in the modified test set, about 84% are OOV after the replacement. Figure 6 compares the per-slot accuracy of our model on the original test set and the OOV test set. Underlined slot names indicate slots with at least one OOV value; their average OOV rate is 90%. Surprisingly, most of these slots maintain their high accuracy and only a few suffer from the high OOV count. Mainly one particular domain, train, suffers above-average performance drops, while the remaining slots maintain their performance.
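The recall-by-sample-count analysis behind Figure 5 can be sketched like this (the helper name `recall_by_seen_count` and the toy data are illustrative assumptions; the paper's analysis runs over the full test sets):

```python
from collections import Counter

def recall_by_seen_count(train_values, test_pairs):
    """Group test recall by how often each value was seen in training.

    train_values: list of gold values occurring in the training data.
    test_pairs: list of (gold_value, predicted_value) from the test set.
    Returns dict: seen-count -> recall over test values with that count
    (count 0 collects the out-of-vocabulary values).
    """
    seen = Counter(train_values)
    hits, totals = Counter(), Counter()
    for gold, pred in test_pairs:
        count = seen[gold]        # 0 for OOV values
        totals[count] += 1
        hits[count] += int(pred == gold)
    return {c: hits[c] / totals[c] for c in totals}

stats = recall_by_seen_count(
    ["centre", "centre", "north"],
    [("centre", "centre"), ("north", "south"), ("east", "east")],
)
# stats[0] is the recall on OOV values ("east" in this toy example)
```

Bucketing counts this way is what makes the "around 50 training samples" threshold visible as a knee in the recall curve.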
This demonstrates that our model is well equipped to handle OOV values, regardless of their type (e.g., named entity, time of day).

Conclusion

We have demonstrated that our approach can handle challenging DST scenarios. Having to detect unseen values does not considerably impair our model's general performance. The information extraction capabilities of our proposed model are rooted in the memory-based copy mechanisms and perform well even in extreme cases such as those discussed in Section 5.2. The copy mechanisms are not limited by a pre-defined vocabulary, since the memories themselves are value agnostic.

To further improve the DST capabilities of TripPy, we hope to introduce slot independence, as at present its tracking abilities are limited to slots that are pre-defined in the ontology. For that, we would like to expand our approach towards the schema-guided paradigm for dialog modeling. We also would like to employ a more sophisticated update strategy, for example by adding the option to partially forget. There already exists an intriguing set of works focusing on these issues and we hope to incorporate and expand upon them in the future.

Figure 1: Example dialog in MultiWOZ.
U: i'm looking for an expensive restaurant in the center of town.
S: there are 33 restaurants [...]. would you like to narrow your search by type of food?
U: i do not care about food. surprise me.
S: fitzbillies restaurant serves british food, [...]. would you like to hear about any others? [...]
U: that sounds fine. can you book it please and get me the reference number?
S: sure, what day and time would you like and how many people?
U: i would like a table for 5 at 11:30 on tuesday [...]
S: okay, the booking was successful. [...] is there anything else i can help you with?
U: i'm also looking for a place to stay. it needs [...] free wifi and [be] in the same area as the restaurant.

Figure 2: Architecture of our proposed model. TripPy takes the turn and dialog history as input and outputs a DS.
Figure 3: Per-class performance of the slot gates (F1 score) for different versions of our model (ablation study).

Figure 5: Recall of values depending on the amount of samples seen during training. 0 seen samples means the value is OOV during test time.

Table 2: DST results on WOZ 2.0.

Table 3: DST results on sim-M and sim-R. (* should be considered as oracle, because the value candidates are ground truth labels.)

Model      sim-M        sim-R
SMD-DST    96.8%*       94.4%*
LU-DST     50.4%        87.1%
BERT-DST   80.1%        89.6%
TripPy     83.5±1.2%    90.0±0.2%

Table 4: Ablation experiments for our model.

Figure 6: Per-slot accuracy of TripPy on the original test set and the OOV test set.
Underlined slot names indicate slots with at least one OOV value.

Our code will be released upon publication of this work.

References

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ — a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278.

Guan-Lin Chao and Ian Lane. 2019. BERT-DST: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer. arXiv preprint arXiv:1907.03040.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyag Gao, and Dilek Hakkani-Tür. 2019. MultiWOZ 2.1: Multi-domain dialogue state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669.

Shuyang Gao, Abhishek Sethi, Sanchit Aggarwal, Tagyoung Chung, and Dilek Hakkani-Tür. 2019. Dialog state tracking: A neural reading comprehension approach. arXiv preprint arXiv:1908.01946.

Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263-272.

Sungdong Kim, Sohee Yang, Gyuwan Kim, and Sang-Woo Lee. 2019. Efficient dialogue state tracking by selectively overwriting memory. arXiv preprint arXiv:1911.03906.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, and Dilek Hakkani-Tür. 2020. MA-DST: Multi-attention based scalable dialog state tracking. arXiv preprint arXiv:2002.08898.

Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. arXiv preprint arXiv:1907.07421.

Bing Liu and Ian Lane. 2017. An end-to-end trainable neural network model with belief tracking for task-oriented dialog. arXiv preprint arXiv:1708.05956.

Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2016. Neural belief tracker: Data-driven dialogue state tracking. arXiv preprint arXiv:1606.03777.

Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. arXiv preprint arXiv:1812.00899.

Osman Ramadan, Paweł Budzianowski, and Milica Gašić. 2018. Large-scale multi-domain belief tracking with knowledge sharing. arXiv preprint arXiv:1807.06517.
Abhinav Rastogi, Dilek Hakkani-Tür, and Larry Heck. 2017. Scalable multi-domain dialogue state tracking. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 561-568. IEEE.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.

Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. arXiv preprint arXiv:1810.09587.

Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562.

Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint arXiv:1905.08743.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.

Puyang Xu and Qi Hu. 2018.
An end-to-end approach for handling unknown slot values in dialogue state tracking. arXiv preprint arXiv:1805.01555.

Puyang Xu and Ruhi Sarikaya. 2014. Targeted feature dropout for robust slot filling in natural language understanding. In Fifteenth Annual Conference of the International Speech Communication Association.

Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150-174.

Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S. Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? Dual strategy for slot-value predictions on multi-domain dialog state tracking. arXiv preprint arXiv:1910.03544.

Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. arXiv preprint arXiv:1805.09655.
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis

Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, Feng Wu
Baidu Inc, Beijing, China; University of Science and Technology of China

In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, July 5-10, 2020. DOI: 10.18653/v1/2020.acl-main.374. arXiv:2005.05635. https://www.aclweb.org/anthology/2020.acl-main.374.pdf

Abstract

Recently, sentiment analysis has seen remarkable advance with the help of pre-training approaches. However, sentiment knowledge, such as sentiment words and aspect-sentiment pairs, is ignored in the process of pre-training, despite the fact that it is widely used in traditional sentiment analysis approaches. In this paper, we introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks. With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity and aspect level into pre-trained sentiment representation. In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair. Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms a strong pre-training baseline, and achieves new state-of-the-art results on most of the test datasets. We release our code at https://github.com/baidu/Senta.
1 Introduction

Sentiment analysis refers to the identification of sentiment and opinion contained in input texts, which are often user-generated comments. In practice, sentiment analysis involves a wide range of specific tasks (Liu, 2012), such as sentence-level sentiment classification, aspect-level sentiment classification, opinion extraction and so on. Traditional methods often study these tasks separately and design specific models for each task, based on manually-designed features (Liu, 2012) or deep learning (Zhang et al., 2018). Recently, pre-training methods (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) have shown their power in learning general semantic representations, and have remarkably improved most natural language processing (NLP) tasks like sentiment analysis. These methods build unsupervised objectives at the word level, such as a masking strategy (Devlin et al., 2019), next-word prediction (Radford et al., 2018) or permutation. Such word-prediction-based objectives have shown great ability to capture dependency between words and syntactic structures (Jawahar et al., 2019). However, as the sentiment information of a text is seldom explicitly studied, it is hard to expect such pre-trained general representations to deliver optimal results for sentiment analysis (Tang et al., 2014).

Sentiment analysis differs from other NLP tasks in that it deals mainly with user reviews rather than news texts. There are many specific sentiment tasks, and these tasks usually depend on different types of sentiment knowledge, including sentiment words, word polarity and aspect-sentiment pairs. The importance of this knowledge has been verified by tasks at different levels, for instance, sentence-level sentiment classification (Taboada et al., 2011; Shin et al., 2017; Lei et al., 2018), aspect-level sentiment classification (Vo and Zhang, 2015; Zeng et al., 2019), opinion extraction (Li and Lam, 2017; Gui et al., 2017; Fan et al., 2019) and so on.
Therefore, we assume that, by integrating this knowledge into the pre-training process, the learned representation would be more sentiment-specific and appropriate for sentiment analysis. In order to learn a unified sentiment representation for multiple sentiment analysis tasks, we propose Sentiment Knowledge Enhanced Pre-training (SKEP), where sentiment knowledge about words, polarity, and aspect-sentiment pairs is included to guide the process of pre-training. The sentiment knowledge is first automatically mined from unlabeled data (Section 3.1). With the knowledge mined, sentiment masking (Section 3.2) removes sentiment information from input texts. Then, the pre-training model is trained to recover the sentiment information with three sentiment objectives (Section 3.3). SKEP integrates different types of sentiment knowledge together and provides a unified sentiment representation for various sentiment analysis tasks.

Figure 1: Sentiment Knowledge Enhanced Pre-training (SKEP). SKEP contains two parts: (1) Sentiment masking recognizes the sentiment information of an input sequence based on automatically-mined sentiment knowledge, and produces a corrupted version by removing this information. (2) Sentiment pre-training objectives require the transformer to recover the removed information from the corrupted version. The three prediction objectives on top are jointly optimized: Sentiment Word (SW) prediction (on x_9), Word Polarity (WP) prediction (on x_6 and x_9), Aspect-Sentiment pair (AP) prediction (on x_1). Here, the smiley denotes positive polarity. Notably, on x_6, only WP is calculated without SW, as its original word has been predicted in the pair prediction on x_1.
This is quite different from traditional sentiment analysis approaches, where different types of sentiment knowledge are often studied separately for specific sentiment tasks. To the best of our knowledge, this is the first work that has tackled sentiment-specific representation during pre-training. Overall, our contributions are as follows:

• We propose sentiment knowledge enhanced pre-training for sentiment analysis, which provides a unified sentiment representation for multiple sentiment analysis tasks.

• Three sentiment knowledge prediction objectives are jointly optimized during pre-training so as to embed sentiment words, polarity, and aspect-sentiment pairs into the representation. In particular, the pair prediction is converted into multi-label classification to capture the dependency between aspect and sentiment.

• SKEP significantly outperforms the strong pre-training method RoBERTa on three typical sentiment tasks, and achieves new state-of-the-art results on most of the test datasets.

2 Background: BERT and RoBERTa

BERT (Devlin et al., 2019) is a self-supervised representation learning approach for pre-training a deep transformer encoder (Vaswani et al., 2017). BERT constructs a self-supervised objective called masked language modeling (MLM) to pre-train the transformer encoder, and relies only on large-size unlabeled data. With the help of the pre-trained transformer, downstream tasks can be substantially improved by fine-tuning on task-specific labeled data. We follow the method of BERT to construct masking objectives for pre-training.

BERT learns a transformer encoder that can produce a contextual representation for each token of input sequences. In practice, the first token of an input sequence is a special classification token [CLS]. In the fine-tuning step, the final hidden state of [CLS] is often used as the overall semantic representation of the input sequence. In order to train the transformer encoder, MLM is proposed.
Similar to doing a cloze test, MLM predicts the masked tokens in a sequence from their placeholders. Specifically, parts of the input tokens are randomly sampled and substituted. BERT uniformly selects 15% of input tokens. Of these sampled tokens, 80% are replaced with a special masked token [MASK], 10% are replaced with a random token, and 10% are left unchanged. After the construction of this noisy version, the MLM aims to predict the original tokens in the masked positions using the corresponding final states. Most recently, RoBERTa significantly outperformed BERT through robust optimization without changing the neural structure, and became one of the best pre-training models. RoBERTa also removes the next sentence prediction objective from standard BERT. To verify the effectiveness of our approach, this paper uses RoBERTa as a strong baseline.

3 SKEP: Sentiment Knowledge Enhanced Pre-training

We propose SKEP, Sentiment Knowledge Enhanced Pre-training, which incorporates sentiment knowledge by self-supervised training. As shown in Figure 1, SKEP contains sentiment masking and sentiment pre-training objectives. Sentiment masking (Section 3.2) recognizes the sentiment information of an input sequence based on automatically-mined sentiment knowledge (Section 3.1), and produces a corrupted version by removing this information. Three sentiment pre-training objectives (Section 3.3) require the transformer to recover the sentiment information for the corrupted version.

Formally, sentiment masking constructs a corrupted version \tilde{X} for an input sequence X, guided by sentiment knowledge G. \tilde{x}_i and x_i denote the i-th token of \tilde{X} and X respectively. After masking, parallel data (\tilde{X}, X) is obtained. Thus, the transformer encoder can be trained with sentiment pre-training objectives that are supervised by recovering sentiment information using the final states of the encoder \tilde{x}_1, ..., \tilde{x}_n.

3.1 Unsupervised Sentiment Knowledge Mining

SKEP mines the sentiment knowledge from unlabeled data.
As sentiment knowledge has been the central subject of extensive research, SKEP finds a way to integrate existing knowledge-mining techniques with pre-training. This paper uses a simple and effective mining method based on Pointwise Mutual Information (PMI) (Turney, 2002). The PMI method depends only on a small number of sentiment seed words, where the word polarity WP(s) of each seed word s is given. It first builds a collection of candidate word pairs in which each pair contains a seed word and meets pre-defined part-of-speech patterns, as in Turney (2002). Then, the co-occurrence of a word pair is calculated by PMI as follows:

PMI(w_1, w_2) = \log \frac{p(w_1, w_2)}{p(w_1)\, p(w_2)}    (1)

Here, p(.) denotes a probability estimated by counting. Finally, the polarity of a word is determined by the difference between its PMI scores with all positive seeds and with all negative seeds:

WP(w) = \sum_{WP(s)=+} PMI(w, s) - \sum_{WP(s)=-} PMI(w, s)    (2)

If WP(w) of a candidate word w is larger than 0, then w is a positive word; otherwise it is negative.

After mining sentiment words, aspect-sentiment pairs are extracted by simple constraints. An aspect-sentiment pair refers to the mention of an aspect and its corresponding sentiment word. Thus, a sentiment word together with its nearest noun is considered an aspect-sentiment pair. The maximum distance between the aspect word and the sentiment word of a pair is empirically limited to no more than 3 tokens. Consequently, the mined sentiment knowledge G contains a collection of sentiment words with their polarity, along with a set of aspect-sentiment pairs.

Our research focuses for now on the necessity of integrating sentiment knowledge in pre-training by virtue of a relatively common mining method. We believe that a more fine-grained method would further improve the quality of the knowledge, and this is something we will explore in the near future.
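Equations (1)-(2) can be prototyped in a few lines. The sketch below is our own toy illustration, not the authors' implementation: it estimates the probabilities from document-level co-occurrence counts and returns the polarity score WP(w), with zero co-occurrence clipped to a PMI of 0 for simplicity.

```python
import math
from collections import Counter
from itertools import combinations

def mine_polarity(texts, pos_seeds, neg_seeds):
    """Toy PMI-based polarity mining in the spirit of Turney (2002).
    Co-occurrence here means: two words appear in the same text."""
    word_count, pair_count, n = Counter(), Counter(), 0
    for words in texts:
        n += 1
        uniq = set(words)
        word_count.update(uniq)
        pair_count.update(frozenset(p) for p in combinations(uniq, 2))

    def p(w):
        return word_count[w] / n

    def pmi(a, b):  # Eq. (1); pairs that never co-occur get PMI 0 here
        joint = pair_count[frozenset((a, b))] / n
        return math.log(joint / (p(a) * p(b))) if joint > 0 else 0.0

    def wp(w):  # Eq. (2): PMI mass with positive seeds minus negative seeds
        return (sum(pmi(w, s) for s in pos_seeds)
                - sum(pmi(w, s) for s in neg_seeds))

    return wp
```

On a toy corpus where "excellent" co-occurs with the positive seed "good" and "awful" with the negative seed "bad", `wp("excellent")` comes out positive and `wp("awful")` negative, mirroring the decision rule under Eq. (2).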
3.2 Sentiment Masking

Sentiment masking aims to construct a corrupted version of each input sequence in which the sentiment information is masked. Our sentiment masking is directed by sentiment knowledge, which is quite different from previous random word masking. This process consists of sentiment detection and hybrid sentiment masking, as follows.

Sentiment Detection with Knowledge. Sentiment detection recognizes both sentiment words and aspect-sentiment pairs by matching input sequences with the mined sentiment knowledge G.

1. Sentiment Word Detection. The word detection is straightforward: if a word of an input sequence also occurs in the knowledge base G, then this word is seen as a sentiment word.

2. Aspect-Sentiment Pair Detection. The detection of an aspect-sentiment pair is similar to its mining described before. A detected sentiment word and its nearby noun are considered an aspect-sentiment pair candidate, and the maximum distance between the two words is limited to 3. If such a candidate is also found in the mined knowledge G, then it is considered an aspect-sentiment pair.

Hybrid Sentiment Masking. Sentiment detection results in three types of tokens for an input sequence: aspect-sentiment pairs, sentiment words and common tokens. The process of masking a sequence runs in the following steps:

1. Aspect-Sentiment Pair Masking. At most 2 aspect-sentiment pairs are randomly selected to mask. All tokens of a pair are replaced by [MASK] simultaneously. This masking provides a way to capture the combination of an aspect word and a sentiment word.

2. Sentiment Word Masking. For the unmasked sentiment words, some are randomly selected and all their tokens are substituted with [MASK] at the same time. The total number of tokens masked in this step is limited to less than 10%.

3. Common Token Masking. If the number of tokens masked in step 2 is insufficient, say less than 10%, the budget is filled in this step with randomly-selected common tokens.
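The three masking steps can be sketched as follows. This is a toy implementation under our own assumptions (the pair and sentiment-word indices are taken as given by the detection step, and the roughly 10% token budget is shared by steps 2 and 3); it is not the authors' code.

```python
import random

def sentiment_mask(tokens, pairs, senti_idx, mask="[MASK]", budget=0.10):
    """Hybrid sentiment masking, a minimal sketch.
    pairs:     index tuples of detected aspect-sentiment pairs
    senti_idx: indices of detected sentiment words"""
    out, masked = list(tokens), set()
    # Step 1: mask at most 2 aspect-sentiment pairs, all tokens of a pair together.
    for pair in random.sample(pairs, min(2, len(pairs))):
        for i in pair:
            out[i] = mask
            masked.add(i)
    # Steps 2 and 3 share a ~10% token budget.
    limit = max(1, int(budget * len(tokens)))
    n_masked_23 = 0
    # Step 2: randomly mask remaining detected sentiment words.
    remaining = [i for i in senti_idx if i not in masked]
    random.shuffle(remaining)
    for i in remaining:
        if n_masked_23 >= limit:
            break
        out[i] = mask
        masked.add(i)
        n_masked_23 += 1
    # Step 3: fill any leftover budget with random common tokens.
    common = [i for i in range(len(tokens)) if i not in masked]
    random.shuffle(common)
    for i in common:
        if n_masked_23 >= limit:
            break
        out[i] = mask
        masked.add(i)
        n_masked_23 += 1
    return out, masked
```

For a 10-token review with one detected pair ("camera", "great") and sentiment words at positions 4 and 9, the pair tokens are always masked together in step 1, and the 10% budget then covers one more sentiment word.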
The random token masking in step 3 is the same as in RoBERTa.¹

3.3 Sentiment Pre-training Objectives

Sentiment masking produces corrupted token sequences \tilde{X} in which sentiment information is substituted with masked tokens. Three sentiment objectives are defined to tell the transformer encoder to recover the replaced sentiment information. The three objectives, Sentiment Word (SW) prediction L_{sw}, Word Polarity (WP) prediction L_{wp} and Aspect-Sentiment Pair (AP) prediction L_{ap}, are jointly optimized. Thus, the overall pre-training objective L is:

L = L_{sw} + L_{wp} + L_{ap}    (3)

¹ For each sentence, we would always in total mask 10% of its tokens at steps 2 and 3. Among these masked tokens, 79.9% are sentiment words (during step 2) and 20.1% are common words (during step 3) in our experiment.

Sentiment Word Prediction. Sentiment word prediction recovers the masked tokens of sentiment words using the output vector \tilde{x}_i from the transformer encoder. \tilde{x}_i is fed into an output softmax layer, which produces a normalized probability vector \hat{y}_i over the entire vocabulary. The sentiment word prediction objective L_{sw} thus maximizes the probability of the original sentiment word x_i as follows:

\hat{y}_i = softmax(\tilde{x}_i W + b)    (4)

L_{sw} = -\sum_{i=1}^{n} m_i \, y_i \log \hat{y}_i    (5)

Here, W and b are the parameters of the output layer. m_i = 1 if the i-th position of a sequence is a masked sentiment word², otherwise it equals 0. y_i is the one-hot representation of the original token x_i.

Despite a certain similarity to the MLM of BERT, our sentiment word prediction has a different purpose. Instead of predicting randomly masked tokens, this sentiment objective selects sentiment words for self-supervision. As sentiment words play a key role in sentiment analysis, the representation learned here is expected to be more suitable for sentiment analysis.

Word Polarity Prediction. Word polarity is crucial for sentiment analysis.
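Equations (4)-(5) amount to a cross-entropy loss computed only at masked sentiment positions (m_i = 1). A minimal pure-Python sketch follows; `logit_rows[i]` stands in for the pre-softmax scores \tilde{x}_i W + b, and all names are ours, not the authors' code.

```python
import math

def softmax(logits):
    # numerically stable softmax over one row of logits
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sentiment_word_loss(logit_rows, target_ids, mask_flags):
    """Eq. (5): negative log-likelihood of the original token, summed only
    over positions where mask_flags[i] == 1 (masked sentiment words)."""
    loss = 0.0
    for logits, tgt, m in zip(logit_rows, target_ids, mask_flags):
        if m:
            loss -= math.log(softmax(logits)[tgt])  # Eq. (4) then pick y_i
    return loss
```

Positions with m_i = 0 contribute nothing, so randomly masked common tokens would be handled by a separate term, matching the selective nature of the SW objective.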
For example, the traditional lexicon-based model (Turney, 2002) directly utilizes word polarity to classify the sentiment of texts. To incorporate this knowledge into the encoder, an objective called word polarity prediction L_{wp} is introduced. L_{wp} is similar to L_{sw}. For each masked sentiment token x_i, L_{wp} calculates its polarity (positive or negative) using the final state \tilde{x}_i. The target polarity corresponds to the polarity of the original sentiment word, which can be found in the mined knowledge.

Aspect-Sentiment Pair Prediction. Aspect-sentiment pairs reveal more information than sentiment words do. Therefore, in order to capture the dependency between aspect and sentiment, an aspect-sentiment pair objective is proposed. Notably, the words in a pair are not mutually exclusive. This is quite different from BERT, which assumes that masked tokens can be independently predicted. We thus conduct aspect-sentiment pair prediction as multi-label classification.

We use the final state of the classification token [CLS], which denotes the representation of the entire sequence, to predict the pairs. The sigmoid activation function is utilized, which allows multiple tokens to occur in the output at the same time. The aspect-sentiment pair objective L_{ap} is as follows:

\hat{y}_a = sigmoid(\tilde{x}_1 W_{ap} + b_{ap})    (6)

L_{ap} = -\sum_{a=1}^{A} y_a \log \hat{y}_a    (7)

Here, \tilde{x}_1 denotes the output vector of [CLS]. A is the number of masked aspect-sentiment pairs in a corrupted sequence. \hat{y}_a is the word probability normalized by sigmoid. y_a is the sparse representation of a target aspect-sentiment pair. Each element of y_a corresponds to one token of the vocabulary, and equals 1 if the target aspect-sentiment pair contains the corresponding token.³

³ As there are multiple elements of y_a equal to 1, the prediction here is multi-label classification.
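Equations (6)-(7) reduce to a sigmoid-per-vocabulary-entry loss on the [CLS] logits. Note that Eq. (7), as printed, sums only over the positive labels; a full binary cross-entropy would add a (1 − y) log(1 − ŷ) term. The sketch below follows the printed form and is our own illustration, not the authors' code.

```python
import math

def pair_loss(cls_logits, pair_token_ids):
    """Eq. (6)-(7), a sketch: multi-label prediction of all vocabulary tokens
    belonging to the masked aspect-sentiment pair(s), from [CLS] logits.
    Only the positive labels (y_a = 1) contribute, as in Eq. (7) as printed."""
    y_hat = [1.0 / (1.0 + math.exp(-v)) for v in cls_logits]  # Eq. (6): sigmoid
    return -sum(math.log(y_hat[t]) for t in set(pair_token_ids))
```

Because the sigmoid scores are independent per token, both the aspect token and the sentiment token of a pair can receive high probability simultaneously, which is exactly what the multi-label formulation is meant to allow.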
4 Fine-tuning for Sentiment Analysis

We verify the effectiveness of SKEP on three typical sentiment analysis tasks: sentence-level sentiment classification, aspect-level sentiment classification, and opinion role labeling. On top of the pre-trained transformer encoder, an output layer is added to perform task-specific prediction. The neural network is then fine-tuned on task-specific labeled data.

Sentence-level Sentiment Classification. This task is to classify the sentiment polarity of an input sentence. The final state vector of the classification token [CLS] is used as the overall representation of the input sentence. On top of the transformer encoder, a classification layer is added to calculate the sentiment probability based on this overall representation.

Aspect-level Sentiment Classification. This task aims to analyze the fine-grained sentiment for an aspect when given a contextual text. Thus, there are two parts in the input: the aspect description and the contextual text.

Opinion Role Labeling. This task is to detect fine-grained opinion elements, such as the holder and the target, from input texts. Following SRL4ORL (Marasović and Frank, 2018), this task is converted into sequence labeling, which uses the BIOS scheme for labeling, and a CRF layer is added to predict the labels.

5 Experiment

Dataset and Evaluation. A variety of English sentiment analysis datasets are used in this paper. [...] (2) For aspect-level sentiment classification, SemEval 2014 Task4 (Pontiki et al., 2014) is used. This task contains both a restaurant domain and a laptop domain, whose accuracies are evaluated separately. (3) For opinion role labeling, the MPQA 2.0 dataset (Wiebe et al., 2005; Wilson, 2008) is used. MPQA aims to extract the targets or the holders of the opinions. Here we follow the evaluation method of SRL4ORL (Marasović and Frank, 2018), which is released and available online. 4-fold cross-validation is performed, and the F1 scores of both holder and target are reported.
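The sentence-level head described above is just a linear layer plus softmax over the final [CLS] state. A minimal pure-Python sketch (shapes, weights and function names are illustrative assumptions, not the paper's code):

```python
import math

def classify_from_cls(cls_state, W, b):
    """Fine-tuning head for sentence-level classification, a sketch:
    logits[j] = cls_state . W[:, j] + b[j], followed by a softmax over classes.
    W has shape len(cls_state) x num_classes; b has length num_classes."""
    num_classes = len(b)
    logits = [sum(x * W[i][j] for i, x in enumerate(cls_state)) + b[j]
              for j in range(num_classes)]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # return the argmax class and the full probability distribution
    return max(range(num_classes), key=lambda j: probs[j]), probs
```

During fine-tuning, only the cross-entropy between `probs` and the gold polarity label would be back-propagated through the whole encoder; the aspect-level task reuses the same head with the aspect description concatenated into the input.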
To perform sentiment pre-training of SKEP, the training part of Amazon-2 is used, which is the largest dataset among those listed in Table 1. Notably, the pre-training only uses raw texts without any sentiment annotation. To reduce the dependency on manually-constructed knowledge and provide SKEP with the least supervision, we use only 46 sentiment seed words. Please refer to the appendix for more details about the seed words.

Experiment Setting. We use RoBERTa as our baseline, which is one of the best pre-training models. Both the base and large versions of RoBERTa are used: RoBERTa_base and RoBERTa_large contain 12 and 24 transformer layers, respectively. As the pre-training method is quite costly in terms of GPU resources, most of the experiments are done on RoBERTa_base, and only the main results report the performance on RoBERTa_large. For SKEP, the transformer encoder is first initialized with RoBERTa and then pre-trained on sentiment unlabeled data. An input sequence is truncated to 512 tokens. The learning rate is kept at 5e-5, and the batch size is 8192. The number of epochs is set to 3. For the fine-tuning of each dataset, we run 3 times with random seeds for each combination of parameters (Table 2), and choose the medium checkpoint for testing according to the performance on the development set.

Main Results. We compare our SKEP method with the strong pre-training baseline RoBERTa and the previous SOTA. The results are shown in Table 3. Compared with RoBERTa, SKEP significantly and consistently improves the performance in both the base and large settings. Even on RoBERTa_large, SKEP achieves an improvement of up to 2.4 points. Across task types, SKEP achieves larger improvements on the fine-grained tasks, aspect-level classification and opinion role labeling, which are supposed to be more difficult than sentence-level classification. We attribute this to the aspect-sentiment knowledge, which is more effective for these tasks. Interestingly, "RoBERTa_base + SKEP" always outperforms RoBERTa_large, except on Amazon-2. As the large version of RoBERTa is computationally expensive, the base version of SKEP provides an efficient model for application. Compared with the previous SOTA, SKEP achieves new state-of-the-art results on almost all datasets, with a less satisfactory result only on SST-2. Overall, through comparisons on various sentiment tasks, the results strongly verify the necessity of incorporating sentiment knowledge into pre-training methods, and the effectiveness of our proposed sentiment pre-training method.

Table 5 (examples; underline and color markup not reproduced):
SST-2: "altogether, this is successful as a film, while at the same time being a most touching reconsideration of the familiar masterpiece." RoBERTa: positive. SKEP: positive.
Sem-L: "I got this at an amazing price from Amazon and it arrived just in time." RoBERTa: negative. SKEP: positive.

Detailed Analysis

Effect of Sentiment Knowledge. SKEP uses additional sentiment data for further pre-training and utilizes three objectives to incorporate three types of knowledge. Table 4 compares the contributions of these factors. With further pre-training using random sub-word masking on Amazon, RoBERTa_base obtains some improvement. This proves the value of large-scale task-specific unlabeled data. However, the improvement is less evident than that of sentiment word masking, which indicates the importance of sentiment word knowledge. Further improvements are obtained when the word polarity and aspect-sentiment pair objectives are added, confirming the contribution of both types of knowledge.
Comparing "+SW+WP+AP" with "+Random Token", the improvements are consistently significant on all evaluated datasets, up to about 1.5 points. Overall, from the comparison of objectives, we conclude that sentiment knowledge is helpful, and that more diverse knowledge results in better performance. This also encourages us to use more types of knowledge and better mining methods in the future.

Effect of Multi-label Optimization. Multi-label classification is proposed to deal with the dependency within an aspect-sentiment pair. To confirm the necessity of capturing the dependency between the words in an aspect-sentiment pair, we also compare it with a method in which each token is predicted independently, denoted AP-I. AP-I uses softmax for normalization and independently predicts each word of a pair, as in sentiment word prediction. According to the last line of Table 4, which contains AP-I, predicting the words of a pair independently does not hurt the performance of sentence-level classification. This is reasonable, as the sentence-level task mainly relies on sentiment words. In contrast, on aspect-level classification and opinion role labeling, multi-label classification is effective and yields improvements of up to 0.6 points. This indicates that multi-label classification does capture a better dependency between aspect and sentiment, and confirms the necessity of modeling such dependency.

Comparison of Vectors for Aspect-Sentiment Pair Prediction. SKEP utilizes the sentence representation, which is the final state of the classification token [CLS], for aspect-sentiment pair prediction. We call this the Sent-Vector method. Another way is to use the concatenation of the final vectors of the two words in a pair, which we call Pair-Vector. As shown in Table 6, the performances of these two choices are very close. We suppose this is due to the robustness of the pre-training approach. As using a single vector for prediction is more efficient, we use the final state of the token [CLS] in SKEP.
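A small numerical illustration of why sigmoid (multi-label) scoring differs from the softmax used in AP-I: with a softmax over the vocabulary, the two target tokens of a pair must compete for probability mass, while with sigmoid both can receive high scores simultaneously. The toy vocabulary and logits below are made up for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 3-token vocabulary; tokens 0 and 1 form the target pair and get
# high logits, token 2 does not belong to the pair.
logits = np.array([4.0, 4.0, -4.0])

p_independent = softmax(logits)  # AP-I style: tokens compete for mass
p_multilabel = sigmoid(logits)   # SKEP: tokens are scored independently
```

Under softmax the two target tokens each end up near 0.5 because the distribution must sum to 1, whereas under sigmoid both exceed 0.9 at the same time, which is what makes joint prediction of a whole aspect-sentiment pair possible.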
Attention Visualization. Table 5 shows the attention distribution of the final layer for the [CLS] token when we adopt our SKEP model to classify the input sentences. On the SST-2 example, although RoBERTa gives a correct prediction, its attention to sentiment is inaccurate. On the Sem-L case, RoBERTa fails to attend to the word "amazing" and produces a wrong prediction. In contrast, SKEP produces correct predictions and appropriate attention to sentiment information in both cases. This indicates that SKEP has better interpretability.

Related Work

Sentiment Analysis with Knowledge. Various types of sentiment knowledge, including sentiment words, word polarity, and aspect-sentiment pairs, have proven useful for a wide range of sentiment analysis tasks.

Sentiment words with their polarity are widely used for sentiment analysis, including sentence-level sentiment classification (Taboada et al., 2011; Shin et al., 2017; Lei et al., 2018; Barnes et al., 2019), aspect-level sentiment classification (Vo and Zhang, 2015), opinion extraction (Li and Lam, 2017), emotion analysis (Gui et al., 2017; Fan et al., 2019), and so on. Lexicon-based methods (Turney, 2002; Taboada et al., 2011) directly utilize the polarity of sentiment words for classification. Traditional feature-based approaches encode sentiment word information in manually-designed features to improve supervised models (Pang et al., 2008; Agarwal et al., 2011). In contrast, deep learning approaches enhance the embedding representation with the help of sentiment words (Shin et al., 2017), or absorb sentiment knowledge through linguistic regularization (Qian et al., 2017; Fan et al., 2019).

Aspect-sentiment pair knowledge is also useful for aspect-level classification and opinion extraction. Previous works often provide weak supervision through this type of knowledge, either for aspect-level classification (Zeng et al., 2019) or for opinion extraction (Yang et al., 2017; Ding et al., 2017).
Although studies exploiting sentiment knowledge have been conducted over the years, most of them build a specific mechanism for each sentiment analysis task, so different knowledge is adopted to support different tasks. In contrast, our method incorporates diverse knowledge in pre-training to provide a unified sentiment representation for sentiment analysis tasks.

Pre-training Approaches. Pre-training methods have remarkably improved natural language processing, using self-supervised training with large-scale unlabeled data. This line of research has advanced dramatically very recently, and various types of methods have been proposed, including ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and so on. Among them, BERT pre-trains a bidirectional transformer by randomly masked word prediction and has shown strong performance gains. RoBERTa (Liu et al., 2019) further improves BERT by robust optimization and has become one of the best pre-training methods.

Inspired by BERT, some works propose fine-grained objectives beyond random word masking. SpanBERT (Joshi et al., 2019) masks spans of words at the same time. ERNIE (Sun et al., 2019) proposes to mask entity words. On the other hand, pre-training for specific tasks has also been studied. GlossBERT (Huang et al., 2019) exploits gloss knowledge to improve word sense disambiguation. SenseBERT (Levine et al., 2019) uses WordNet super-senses to improve word-in-context tasks. A different ERNIE (Zhang et al., 2019) exploits entity knowledge for entity linking and relation classification.

Conclusion

In this paper, we propose Sentiment Knowledge Enhanced Pre-training (SKEP) for sentiment analysis. Sentiment masking and three sentiment pre-training objectives are designed to incorporate various types of knowledge into the pre-training model. Though conceptually simple, SKEP is empirically highly effective. SKEP significantly outperforms the strong pre-training baseline RoBERTa, and achieves new state-of-the-art results on most datasets of three typical sentiment analysis tasks.
Our work verifies the necessity of utilizing sentiment knowledge for pre-training models, and provides a unified sentiment representation for a wide range of sentiment analysis tasks. In the future, we hope to apply SKEP to more sentiment analysis tasks to further examine its generalization, and we are also interested in exploiting more types of sentiment knowledge and more fine-grained sentiment mining methods.

Table 1: summarizes the … (remainder of caption missing).

Table 2: Hyper-parameters for fine-tuning on each dataset. Batch and Epoch indicate batch size and maximum number of epochs, respectively.

Table 3: Comparison with RoBERTa and previous SOTA. For MPQA, both binary-F1 and prop-F1 are reported (remainder of caption missing).

Table 4: Effectiveness of objectives. SW, WP, and AP refer to the pre-training objectives: Sentiment Word prediction, Word Polarity prediction, and Aspect-sentiment Pair prediction. "Random Token" denotes the random token masking used in RoBERTa. AP-I denotes predicting the words in an Aspect-sentiment Pair Independently.

Table 5: Visualization of chosen samples. Words with a wavy underline are sentiment words, and words with a double underline are aspects. Color depth denotes importance for classification: the deeper the color, the more important the word. Color depth is calculated from the attention weights with the classification token [CLS].

Table 6: Comparison of the vector used for aspect-sentiment pair prediction. Sent-Vector uses the sentence representation (the output vector of [CLS]) for prediction, while Pair-Vector uses the concatenation of the output vectors of the two words in a pair.

Model        SST-2 (dev)  Sem-L  Sem-R
Sent-Vector  96.87        81.32  87.92
Pair-Vector  96.91        81.38  87.95

In sentiment masking, we add common tokens to make up for the deficiency of masked sentiment-word tokens.
L_sw is also computed over these common tokens, while L_wp does not include them.
³ This means that the dimension of y_a equals the vocabulary size of the pre-training method, which is 50,265 in our experiment.
⁴ It is possible to predict masked pairs with a CRF layer; however, this is more than 10 times slower than multi-label classification, and thus could not be used in pre-training.
All the pre-training models, including our SKEP and the baselines, use a CRF layer here, so their performances are comparable.

Acknowledgments

We thank Qinfei Li for her valuable comments. We also thank the anonymous reviewers for their insightful comments. This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900).

A Appendix

For sentiment knowledge mining, we construct 46 sentiment seed words as follows. We first count the 9,750 items of Qian et al. (2017) on the training data of Amazon-2 and obtain the 50 most frequent sentiment words. Then, we manually filter out inappropriate words from these 50 words in a few minutes, and finally obtain 46 sentiment words with polarities (Table 7). The filtered-out words are need, fun, plot, and fine, which are all negative words.

References

Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sentiment analysis of twitter data. In Proceedings of the Workshop on Language in Social Media (LSM 2011).
Jeremy Barnes, Samia Touileb, Lilja Øvrelid, and Erik Velldal. 2019. Lexicon information in neural sentiment analysis: a multi-task learning approach. In Proceedings of the 22nd Nordic Conference on Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL 2019.
Ying Ding, Jianfei Yu, and Jing Jiang. 2017. Recurrent neural networks with auxiliary labels for cross-domain opinion target extraction. In AAAI 2017.
Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Lidong Bing, Min Yang, Ruifeng Xu, and Ruibin Mao. 2019. A knowledge regularized hierarchical approach for emotion cause analysis. In EMNLP 2019.
Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering approach for emotion cause extraction. In EMNLP 2017.
Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In EMNLP 2019.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In ACL 2019.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. SpanBERT: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529.
Zhen-Zhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. ArXiv, abs/1909.11942.
Zeyang Lei, Yujiu Yang, Min Yang, and Yi Liu. 2018. A multi-sentiment-resource enhanced attention network for sentiment classification. In ACL 2018.
Yoav Levine, Barak Lenz, Or Dagan, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2019. SenseBERT: Driving some sense into BERT.
Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In EMNLP 2017.
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1-167.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Ana Marasović and Anette Frank. 2018. SRL4ORL: Improving opinion role labeling using multi-task learning with semantic role labeling. In NAACL 2018.
Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In SemEval 2014.
Qiao Qian, Minlie Huang, Jinhao Lei, and Xiaoyan Zhu. 2017. Linguistically regularized LSTM for sentiment classification. In ACL 2017.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.
Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2019. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. ArXiv, abs/1908.11860.
Bonggun Shin, Timothy Lee, and Jinho D. Choi. 2017. Lexicon integrated CNN models with attention for sentiment analysis. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP 2013.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.
Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In ACL 2014.
Peter D. Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In ACL 2002.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017.
Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In IJCAI 2015.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation.
Theresa Ann Wilson. 2008. Fine-grained subjectivity and sentiment analysis: Recognizing the intensity, polarity, and attitudes of private states.
Qizhe Xie, Zihang Dai, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation. CoRR, abs/1904.12848.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding.
Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. ArXiv, abs/1703.06345.
Ziqian Zeng, Wenxuan Zhou, Xin Liu, and Yangqiu Song. 2019. A variational approach to weakly supervised document-level multi-aspect sentiment classification. In NAACL 2019.
Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS 2015.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In ACL 2019.
Pinlong Zhao, Linlin Hou, and Ou Wu. 2019. Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification. CoRR, abs/1906.04501.
Code: https://github.com/baidu/Senta
Unpaired Motion Style Transfer from Video to Animation

DOI: 10.1145/3386569.3392469
arXiv: 2005.05751

Abstract. Transferring the motion style from one animation clip to another, while preserving the motion content of the latter, has been a long-standing problem in character animation. Most existing data-driven approaches are supervised and rely on paired data, where motions with the same content are performed in different styles. In addition, these approaches are limited to transfer of styles that were seen during training.

In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training. Furthermore, our framework is able to extract motion styles directly from videos, bypassing 3D reconstruction, and apply them to the 3D input motion.

Our style transfer network encodes motions into two latent codes, for content and for style, each of which plays a different role in the decoding (synthesis) process. While the content code is decoded into the output motion by several temporal convolutional layers, the style code modifies deep features via temporally invariant adaptive instance normalization (AdaIN). Moreover, while the content code is encoded from 3D joint rotations, we learn a common embedding for style from either 3D or 2D joint positions, enabling style extraction from videos.

Our results are comparable to the state-of-the-art, despite not requiring paired training data, and outperform other methods when transferring previously unseen styles. To our knowledge, we are the first to demonstrate style transfer directly from videos to 3D animations, an ability which enables one to extend the set of style examples far beyond motions captured by MoCap systems.
Unpaired Motion Style Transfer from Video to Animation

KFIR ABERMAN, AICFVE, Beijing Film Academy & Tel-Aviv University
YIJIA WENG*, CFCS, Peking University & AICFVE, Beijing Film Academy
DANI LISCHINSKI, The Hebrew University of Jerusalem & AICFVE, Beijing Film Academy
DANIEL COHEN-OR, Tel-Aviv University & AICFVE, Beijing Film Academy
BAOQUAN CHEN†, CFCS, Peking University & AICFVE, Beijing Film Academy

DOI: 10.1145/3386569.3392469

CCS Concepts: • Computing methodologies → Motion processing; Neural networks

Additional Key Words and Phrases: motion analysis, style transfer
INTRODUCTION

The style of human motion may be thought of as the collection of motion attributes that convey the mood and the personality of a character. Human observers are extremely perceptive to subtle style variations; we can, for example, often tell whether a person is happy or sad from the way they walk. Consequently, for games and movies that pursue realistic and expressive character animation, there is a long-standing interest in generating diverse stylized motions. However, capturing all desired motions in a variety of styles is practically infeasible. A much more promising option is to perform motion style transfer: modify the style of an existing motion into one taken from another.

* equal contribution. † corresponding author.
Authors' addresses: Kfir Aberman, [email protected]; Yijia Weng; Dani Lischinski; [email protected].

Fig. 1. Style transfer from video to animation. Our network, which is trained with unpaired motion sequences, learns to disentangle content and style. Our trained generator is able to produce a motion sequence that combines the content of a 3D sequence with the style extracted directly from a video.
Furthermore, it is particularly attractive to use video clips to specify target motion styles. Since motion style eludes a precise definition, hand-crafted representations are not well-suited to cope with style transfer, and most recent works attempt to infer style from examples. However, despite years of progress in data-driven motion style transfer, two main obstacles remain in practice: (i) avoiding the need for paired and registered data, and (ii) extracting style from only a few examples. Both of these hurdles arise from the difficulty of collecting and processing sufficient motion data. In order to capture paired and registered data, the same actor must, for example, perform a walking sequence in different styles with identical steps and turns, which is tedious and, more importantly, does not scale to the huge training sets required by today's deep learning models. As for the extraction of styles, a large number of style examples might better characterize the style and facilitate the transfer; in reality, however, one can often obtain only a few examples for each (uncommon) style. Additionally, we see great potential in videos as a massive source of motion styles. As the primary format for recording human activities, available videos contain a much wider range of motions and styles than 3D motion capture data. And if the desired style needs to be captured on the spot, shooting a video is a much easier and cheaper alternative to performing motion capture. As an important research topic, the problem of motion style transfer has been approached in several ways, all of which we find limited, especially with regard to the challenges above. Some works model style using hand-crafted representations, such as physical parameters or spectral characteristics, which may fail to fully capture complex and subtle properties. Other works adopt a data-driven approach.
Their reliance on paired training data or large numbers of target style examples, however, hinders their applicability in real-world settings. Recently, Mason et al. [2018] proposed a few-shot learning scheme to cope with the shortage of style references. However, they employ a specialized network that targets only locomotion. Also, like other previous works, their model can only extract style from 3D MoCap data. In this work, we circumvent the need for paired training data by learning without supervision, characterize unseen styles even from a single reference clip, and provide the ability to extract style from a video. To achieve these goals, we adopt a generative scheme, using temporal convolutional neural networks as the backbone of our model. Our network encodes content and style inputs into corresponding latent codes, which are then recombined and decoded to obtain a re-stylized result (see Figure 1). We argue that during training, our network learns a universal style extractor, which is then applicable to new styles, specified via a few examples at test time. Furthermore, our style extractor is applicable to both 3D motions and 2D motions observed in ordinary video examples, without requiring 3D reconstruction. Our main technical contribution lies in the architecture of the deep neural network outlined above, in which the content and style codes affect the generated motions via two different mechanisms. The content code is decoded into a motion by applying a sequence of temporal convolutional layers, each resulting in a set of temporal signals that represent joint rotations in a high-dimensional feature space. The style code is used to modify the second-order statistics of these generated deep features via temporally-invariant adaptive instance normalization (AdaIN). These temporally-invariant affine transformations amplify or attenuate the temporal signals, while preserving their shape. Consequently, the motion content is also preserved.
AdaIN has been used with great effect in image generation and style transfer (see Section 2.1), but to our knowledge we are the first to apply it in the context of motion. Our network is trained by optimizing a content consistency loss, which ensures that the content input to our network is reconstructed whenever the style input has the same style label as the content one. This loss forces the network to extract only those attributes that are shared among samples of the same style class. Simply copying the content input to the output is prevented by instance normalization during the encoding and by restricting the dimensionality of the latent content code. In order to extract style from videos we learn a joint embedding space for style codes extracted from either 3D or 2D joint positions. During training, we require that 3D motions and their 2D projections are both mapped by a pair of corresponding encoders into the same style code. In addition to enabling style extraction from videos, this joint embedding supports style interpolation, as well as measuring "style distance" between videos and/or 3D motions. In summary, our contributions consist of a novel data-driven approach for motion style transfer that: (i) does not require paired training data; (ii) is able to transfer styles unseen during training, extracted from as little as a single example clip; and (iii) supports style extraction directly from ordinary videos. We discuss various insights related to the mechanism of the network and the analogy to some related works. Our results show that by leveraging the power of our new framework, we can match state-of-the-art motion style transfer results, by training only on unpaired motion clips with style labels. Furthermore, we outperform other methods for previously unseen styles, as well as styles extracted from ordinary videos.
RELATED WORK

2.1 Image Style Transfer

Our work is inspired by the impressive progress in image style transfer, achieved through the use of deep learning machinery. The pioneering work of Gatys et al. [2016] showed that style and content can be represented by statistics of deep features extracted from a pre-trained classification network. While their original approach required optimization to transfer style between each pair of images, Johnson et al. [2016] later converted this approach to a feed-forward one by training a network using a perceptual loss. Later on, Ulyanov et al. [2016] showed that the style of an image can be manipulated by modifying the second-order statistics (mean and variance) of channels of intermediate layers, and proposed an instance normalization layer which makes it possible to train a network that modifies the style of arbitrary content images into a single specific target style. This idea was further extended by Huang et al. [2017], who demonstrated that various target styles may be applied simply by using the Adaptive Instance Normalization (AdaIN) layer to inject different style statistics into the same network. The AdaIN mechanism has proved effective for various tasks on images, such as image-to-image translation [Huang et al. 2018] and image generation. Recently, Park et al. [2019] proposed a spatially adaptive normalization layer for multi-modal generation of images based on semantic segmentation maps, while Liu et al. [2019] introduced FUNIT, a few-shot unpaired image-to-image translation method, where only a few examples of the target class are required. Inspired by these recent achievements in image style transfer, our work makes use of temporally invariant AdaIN parameters, thus enabling the manipulation of a given motion sequence to perform in arbitrary styles. To our knowledge, we are the first to employ such a mechanism in the context of motion processing.

Motion Style Transfer

Motion style transfer is a long-standing problem in computer animation.
Previous works relied on handcrafted features, in the frequency domain [Unuma et al. 1995] or in time [Amaya et al. 1996], to represent and manipulate the style or emotion [Aristidou et al. 2017] of given 3D motions, or used physics-based optimizations [Liu et al. 2005] to achieve a similar goal. Yumer and Mitra [2016] showed that the difference in spectral intensities of two motion signals with similar content but different styles enables the transfer between these two styles on arbitrary heterogeneous actions. Since style is an elusive attribute that defies a precise mathematical definition, data-driven approaches that infer style features from examples might have an advantage over attempting to hand-craft features to characterize style and content. Indeed, several works used machine learning tools to perform motion style transfer [Brand and Hertzmann 2000; Hsu et al. 2005; Ikemoto et al. 2009; Ma et al. 2010; Wang et al. 2007; Xia et al. 2015]. However, these methods either assume that there are explicit motion pairs in the training data that exhibit the same motion with different styles, or are limited to the set of styles given in the dataset. For example, Hsu et al. [2005] learned a linear translation model that can map a motion to a target style based on pairwise dense correspondence (per frame) between motions with similar content but different styles. Xia et al. [2015] used a KNN search over a database of motions to construct a mixture of regression models for transferring style between motion clips, and Smith et al. [2019] improved the processing using a neural network that is trained on paired and registered examples. Both of the latter methods represent style by a vector of fixed length, which is limited to the set of styles in the dataset, and cannot be applied to unseen styles. In contrast, our approach only assumes that each motion clip is labeled by its style, without any requirement for content labels or correspondences between motion clips.
In addition, our method enables extraction of unseen styles from 3D motion examples, and even from video, and is not limited to the styles in the dataset. Besides transferring motion style, other methods exploited machine learning approaches to cope with various closely related tasks. These include generating motion with constraints using inverse kinematics (IK) when the style is taken from a dataset [Grochow et al. 2004], performing independent component analysis to separate motion into different components and perform a transfer [Shapiro et al. 2006], or using restricted Boltzmann machines, conditioned on a style label, to model human motion and capture style [Taylor and Hinton 2009]. With the recent rapid progress in deep learning methods for character animation [Holden et al. 2015, 2016, 2017b], the flourishing image style transfer techniques were quickly adopted into the character animation domain. Holden et al. [2016] proposed a general learning framework for motion editing that enables style transfer. In analogy to Gatys et al. [2016], they optimize a motion sequence that satisfies two conditions: the activations of the hidden units should be similar to those of the content motion, while their Gram matrices should match those of the style input motion. Differently from Gatys et al. [2016], here the features are extracted using a pretrained autoencoder for motion, rather than a pretrained image classification network. Later on, Holden et al. [2017a] and Du et al. [2019] proposed to improve performance by replacing optimization with a feed-forward network that is trained to satisfy the same constraints. In both of these works, the pretrained network is not explicitly designed for style transfer. The features extracted by the pretrained autoencoder contain information on both content and style, which leads to a strong dependency between the two properties, as demonstrated in Section 5. Recently, Mason et al.
[2018] proposed a method to transfer the style of character locomotion in real time, given a few shots of another stylized animation. In contrast to this approach, which is limited to locomotion, our approach can extract and transfer styles regardless of the motion content.

Motion from Videos

Motion reconstruction and pose estimation from monocular videos are long-standing fundamental tasks in computer vision, and are beyond the scope of this paper. Over the years, various methods have dealt with the extraction of different motion properties directly from video, such as 2D poses [Cao et al. 2018], 3D poses [Pavllo et al. 2019b], and 3D motion reconstruction [Mehta et al. 2017]. Recently, Aberman et al. [2019b] extracted character-agnostic motion, view angle, and skeleton, as three disentangled latent codes, directly from videos, bypassing 3D reconstruction. Existing methods that extract motion from videos [Kanazawa et al. 2019; Mehta et al. 2017] are typically not concerned with style, while our work is the first to extract the style, rather than the motion, from video-captured human motion samples, bypassing 3D reconstruction. Another stream of work is focused on motion transfer in video using deep learning techniques. The existing approaches provide different perspectives on how to extract motion in 2D [Aberman et al. 2019a; Chan et al. 2019] or 3D from one video, and apply to it the appearance of a target actor from another video.

MOTION STYLE TRANSFER FRAMEWORK

Our framework aims at translating a motion clip of an animated character with a given content to another motion that exhibits the same content, but performed using a different style. The desired target style may be inferred from a few (or even a single) 3D motion clips, or from a video example. Importantly, the target style might not be one of those seen during training.
We only assume that each of the motion clips in the training set is assigned a style label; there is no pairing requirement, i.e., no need for explicit pairs of motions that feature the same content performed using two different styles. We treat style transfer as a conditional translation model, and propose a neural network that learns to decompose motion into two disentangled latent codes: a temporal latent code that encodes motion content, and a temporally-invariant latent code that encodes motion style. These two codes affect the generated motions via two different mechanisms. The content code is decoded into a motion by applying a sequence of temporal convolutional layers, each yielding a set of temporal signals that determine the joint rotations in a high-dimensional feature space. The style code is used to modify only the means and variances of the generated temporal deep features via temporally-invariant adaptive instance normalization (AdaIN), thereby preserving the content of the original motion. Figure 2 visualizes three channels of deep features generated by our network's decoder, showing the effect of different styles applied to the same motion content.

Fig. 2. Each plot visualizes a specific channel of a deep feature as eight different styles are applied to the same motion content. It can be seen that the signals differ only by a temporally-invariant affine transform and that the signal shape (per channel) is preserved. Thus, the motion content is preserved as well.

In addition, in order to support style extraction from videos, we learn a joint embedding space for style by extracting style codes from either 3D or 2D joint positions, using two different encoders, which are encouraged to map 3D motions and their 2D projections into the same code. Figure 3 describes the high-level architecture of our framework, whose various components are described in more detail below.
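The observation illustrated in Figure 2, that two stylized versions of the same content differ per channel only by a temporally-invariant affine transform, which instance normalization removes, can be sketched with synthetic signals (a toy numpy illustration, not the actual network features):

```python
import numpy as np

def instance_norm(feat, eps=1e-8):
    """Normalize each channel of a [C, T] feature map across time."""
    mean = feat.mean(axis=1, keepdims=True)
    std = feat.std(axis=1, keepdims=True)
    return (feat - mean) / (std + eps)

rng = np.random.default_rng(0)
content = rng.normal(size=(3, 32))   # shared "content" signal, 3 channels over 32 frames
style_a = 2.0 * content + 1.0        # same content, rendered in style A
style_b = 0.5 * content - 3.0        # same content, rendered in style B

# After IN, the style-specific affine transform is gone and the two agree:
print(np.allclose(instance_norm(style_a), instance_norm(style_b), atol=1e-5))  # True
```

Because IN cancels any per-channel scale and shift, the normalized features are agnostic to these style statistics, which is what allows AdaIN to re-inject a different style later in the decoder.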
Architecture

Our framework, depicted in Figure 3, consists of a conditional motion translator that takes as input two motion clips: the content motion m_s, with a source style s ∈ S, as well as a style motion n_t, with a target style t ∈ S. The output motion m̃_t is supposed to consist of the content of m_s, performed using style t. We next describe the design and role of the different components in our framework.

Motion Representation. In our setting, a motion clip m ∈ R^{T×d} is a temporal sequence of T poses, where each pose is represented by d channels. We choose to represent the content input m_s and the style input n_t differently from each other. Since a motion is well defined by the joint rotations (as opposed to joint positions), and since the content input is strongly correlated with the output motion, we represent m_s using rotations (unit quaternions), i.e., m_s ∈ R^{T×4J}, where J is the number of joints. In contrast, since style can be inferred from the relative motion of joint positions, and to facilitate learning a joint embedding space for 3D and 2D motions, we represent the style input using joint positions (n_t ∈ R^{T×3J}). The output motion m̃_t is represented using joint rotations, which enables the extraction of standard character animation files without any further post-processing. Note that the global root positions are discarded from the representation of the network's input/output, and are treated separately at test time, as explained later in this section.

Motion Translator. The motion translator consists of a content encoder E_C, a style encoder E_S, and a decoder F, where E_C and E_S encode the input into two latent codes, z_c and z_s, respectively. E_C consists of several temporal 1D convolutional layers [Holden et al. 2015], followed by several residual blocks that map the content motion to a temporal content latent code z_c.
During encoding, the intermediate temporal feature maps undergo instance normalization (IN), which effectively ensures that the resulting content code is "stripped" of style. The encoding of style is performed by one of the two encoders E_S^2D and E_S^3D, depending on whether the joint coordinates are in 2D (extracted from a video) or in 3D. We assume that the style does not change in mid-motion, and use a sequence of 1D convolutional layers to map n_t into a fixed-size latent code z_s (independent of the temporal length of the clip). We would like 2D and 3D motions performed with the same style to be mapped to the same latent vector z_s. Thus, at each iteration during training we use a 3D motion clip, along with a 2D perspective projection of that same clip, and feed both clips into the corresponding encoders. The latent code z_s is then obtained by averaging the outputs of the two encoders. At test time, the style code is extracted by only one of the two encoders, depending on the type of style input. The decoder F consists of several residual blocks with adaptive instance normalization (AdaIN) [Huang and Belongie 2017], followed by convolutional layers with stride that upsample the temporal resolution of the clip. The AdaIN layer is a normalization layer that applies an affine transform to the feature activations (per channel). For each AdaIN layer with c channels, the network learns a mapping (a multilayer perceptron, or MLP) of the style code z_s into 2c parameters that modify the per-channel mean and variance. Note that the affine transformation is temporally invariant and hence only affects non-temporal attributes of the motion. Thus, while the content encoder effectively removes the source style s by normalizing the non-temporal attributes with IN, the decoder injects the target style t by using AdaIN to scale and shift the feature channels to target values inferred from the style code.
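A minimal sketch of the temporally-invariant AdaIN step described above (toy numpy code; in the network the target statistics come from an MLP applied to z_s, whereas here they are hypothetical hand-picked values):

```python
import numpy as np

def adain(content_feat, gamma, beta, eps=1e-5):
    """Temporally-invariant AdaIN over a [C, T] feature map:
    normalize each channel across time, then scale/shift it by
    style-derived parameters (one gamma/beta pair per channel)."""
    mean = content_feat.mean(axis=1, keepdims=True)
    std = content_feat.std(axis=1, keepdims=True)
    normalized = (content_feat - mean) / (std + eps)
    return gamma[:, None] * normalized + beta[:, None]

# Toy deep feature: C=4 channels over T=32 frames.
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 32))

# Hypothetical target statistics (in the paper: MLP(z_s) -> 2c parameters).
gamma = np.array([2.0, 0.5, 1.0, 3.0])   # target per-channel std
beta = np.array([1.0, -1.0, 0.0, 0.5])   # target per-channel mean

out = adain(feat, gamma, beta)
# The affine transform is constant in time, so each channel's signal
# shape (the content) is preserved while its statistics (the style) change.
print(out.mean(axis=1))  # ≈ beta
print(out.std(axis=1))   # ≈ gamma
```

Since only per-channel means and variances are touched, the temporal structure of the decoded features, and hence the motion content, is left intact.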
In summary, using the notation introduced above, our conditional motion translator G may be formally expressed as:

m̃_t = G(m_s | n_t) = F( E_C(m_s) | E_S(n_t) ).    (1)

Multi-Style Discriminator. Our discriminator D follows the multi-class discriminator baseline proposed by Liu et al. [2019]. D is a single component that is trained to cope with |S| adversarial tasks simultaneously, where each task aims to determine whether an input motion is a real motion of a specific style i ∈ S, or a fake output of G. When updating D for a real motion of source style i ∈ S, D is penalized if its i-th output is false. For a translation output yielding a fake motion of source style i, D is penalized if the i-th output is positive. Note that D is not penalized for not predicting false for motions of other styles. When updating G, it is penalized only if the i-th output of D is false. A detailed description of the architecture layers and parameters is given in the Appendix.

Fig. 3. Our motion-style transfer architecture consists of encoders for content (E_C) and style (E_S), which extract a latent content code (z_c) and a latent style code (z_s), respectively, where the style code can be extracted from either 3D motions (via E_S^3D) or 2D projections (via E_S^2D). During encoding, the content code is stripped of style by instance normalization (IN) layers. The content code is then used to reconstruct a motion by a decoder F, which contains AdaIN layers that modify temporally-invariant second-order statistics of the intermediate deep features decoded by F.

Global Velocity. At test time, we aim to extract the desired style even from a single example. However, since the content of the two input motions (style and content) may be different, the translation of global velocity, a property which is often correlated with style, is
The output motions are fed into a multi-style discriminator D that judges whether they belong to a certain style.

a challenging task. For example, in order to convert neutral walking to old walking, the global velocity should decrease. However, inferring such a property from a single clip of old kicking, where the root position is nearly static, is practically impossible, especially when the style is previously unseen. A principled solution to this problem is outside the scope of this work; below, we describe a heuristic solution that has worked well in our experiments. In this solution, the root positions of our output motion are directly taken from the content input sequence. However, since global velocity is correlated with style, we perform a dynamic time warping on the global velocity, based on the velocity ratio between the two input motions. More precisely, for each motion sequence, we measure the velocity factor as the temporal average of the maximal local joint velocity,

V = (1/T) Σ_{τ=1}^{T} max_{j∈J} v_j(τ),    (2)

where v_j(τ) is the local velocity of the j-th joint at time τ. Next, we warp the temporal axis by the factor V_sty / V_con, where V_sty and V_con are the velocity factors of the style and content inputs, respectively. We find that in most of our examples local joint velocity captures style information better than global velocity, and for that reason we decided to use the above definition for the velocity factor. Note that this global transformation is reversible, such that if we use the output motion as the content input, and the original content motion as the style input, we recover the global velocity of the original motion.

Foot Contact. As our network is built upon 1D temporal convolution layers, raw outputs tend to suffer from foot-skating artifacts.
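Returning to the global velocity heuristic: the velocity factor of Eq. (2) and the resulting warp factor can be sketched as follows (toy numpy code on synthetic joint trajectories, not the paper's implementation):

```python
import numpy as np

def velocity_factor(joint_positions, dt=1.0):
    """Eq. (2): temporal average of the maximal local joint speed.
    joint_positions: [T, J, 3] array of local joint positions.
    (Finite differences give T-1 velocity samples for T frames.)"""
    vel = np.linalg.norm(np.diff(joint_positions, axis=0), axis=2) / dt  # [T-1, J]
    return vel.max(axis=1).mean()

# Hypothetical toy clips: a content motion and a slower style motion
# whose displacements are scaled down by 0.3.
rng = np.random.default_rng(1)
content = rng.normal(size=(33, 5, 3))
style = 0.3 * content  # same trajectory shape, 30% of the joint displacements

v_con = velocity_factor(content)
v_sty = velocity_factor(style)
warp = v_sty / v_con  # factor applied to the output's temporal axis
print(warp)  # ≈ 0.3 for this toy example
```

Scaling all displacements by a constant scales the velocity factor by the same constant, so the warp factor recovers exactly that slowdown ratio here.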
In order to cope with this issue, we extract foot contact labels from the content input, use them to correct the feet positions, and apply IK to fix the corresponding output poses (before the global velocity warping). As a result, we get visually plausible outputs with no foot skating. While this fix works well in most cases, note that it assumes that the foot contact timing is part of the content, and not of the style. However, this is not always true: consider, for example, a zombie walking while dragging one of its feet. This aspect should be further addressed in future work.

Training and Loss

Our dataset is trimmed into short overlapping clips, which comprise our motion collection M. However, note that at test time the length of the input sequences can be arbitrary, since the networks are fully convolutional. Note that although the content input motion and the output motion are both represented using joint rotations, all of the losses are applied to joint positions as well. This is done by applying a forward kinematics layer [Pavllo et al. 2019a; Villegas et al. 2018] to the aforementioned components, which, for simplicity, is not explicitly mentioned in the following equations.

Content Consistency Loss. In case the content input m_s and the style input n_t share the same style (t = s), it is expected that the translator network will constitute an identity map, regardless of the content of n_t. Thus, in every iteration we randomly pick two motion sequences from our dataset M with the same style label, and apply the content consistency loss, which is given by

L_con = E_{m_s, n_s ∼ M} || F( E_C(m_s) | E_S(n_s) ) − m_s ||_1,    (3)

where ||·||_1 denotes the L1 norm. Note that when n_s = m_s, Equation (3) becomes a standard reconstruction loss.

Adversarial Loss.
Since our training is unpaired, the adversarial loss is the component which is responsible, in practice, for manipulating the style of m_s:

L_adv = E_{n_t ∼ M} || D_t(n_t) − 1 ||^2 + E_{m_s, n_t ∼ M×M} || D_t( F( E_C(m_s) | E_S(n_t) ) ) ||^2,    (4)

where D_t(·) denotes the discriminator output which corresponds to the style class t ∈ S. In order to stabilize the training of the generator in the multi-class setting, we apply a feature matching loss as regularization. This loss minimizes the distance between the last feature of the discriminator when fed a real input of a specific style (averaged over the set) and the same feature when fed a fake output of the same target style:

L_reg = E_{m_s, n_t ∼ M×M} || D_f(m̃_t) − (1/|M_t|) Σ_{i ∈ M_t} D_f(n_t^i) ||_1,    (5)

where M_t is the subset of motions with style t ∈ S and D_f is a sub-network of D that does not include the prediction (last) layer.

Joint 2D-3D Style Embedding

Intuitively, the style of motion can be identified by observing the character's 3D positions in space-time, as well as their 2D projections using a view with a reasonable elevation angle. While handcrafted representations are designed to treat a single form of input, we exploit the power of deep learning to learn a joint embedding space for extracting style from both 3D and 2D motion representations. The learned common latent space can be used for various tasks, such as video-based motion style retrieval, style interpolation, and more. In our context, we exploit the properties of this space to extract style directly from real videos at test time, bypassing the need for 3D reconstruction, an error-prone process which adds noise into the pipeline.

Joint Embedding Loss.
In order to construct a common latent space for style, our loss encourages pairs of 3D-2D motions to be mapped to the same feature vector:

L_joint = E_{n_t ∼ M} || E_S^3D(n_t) − E_S^2D( P(n_t; p) ) ||^2,    (6)

where P is a weak perspective projection operator that projects the input onto a camera plane with camera parameters p, which consist of a scale s and the Euler angles v = (v_pitch, v_yaw, v_roll). For each motion clip, we define the local Z-axis to be the temporal average of per-frame forward directions, which are computed based on the cross product of the Y-axis and the average of the vectors across the shoulders and the hips. During training, v_roll = v_pitch = 0 are fixed, while v_yaw ∈ [−90°, 90°] and s ∈ [0.8, 1.2] are randomly sampled five times in each iteration, to create five projections that are fed to E_S^2D.

Style Triplet Loss. In order to improve the clustering of the different styles in the latent space, we exploit the style labels and use the technique suggested by Aristidou et al. [2018] to explicitly encourage inputs with similar style to be mapped tightly together, by applying a triplet loss on the style latent space:

L_trip = E_{n_t, x_t, w_s ∼ M} [ || E_S(n_t) − E_S(x_t) || − || E_S(n_t) − E_S(w_s) || + δ ]_+,    (7)

where x_t and w_s are two motions with different styles s ≠ t, and δ = 5 is our margin. This loss encourages the distance between the feature vectors of two motion inputs that share the same style to be smaller, by at least δ, than the distance between two motions with different styles. Our final loss is a combination of the aforementioned loss terms:

L = L_con + α_adv L_adv + α_reg L_reg + α_joint L_joint + α_trip L_trip,    (8)

where in our experiments we use α_adv = 1, α_reg = 0.5, α_joint = 0.3 and α_trip = 0.3.

DISCUSSION

As discussed in Section 2, adjusting the mean and variance of deep feature channels in a neural network proved to be effective for manipulating the style of 2D images.
Although motion style is a conceptually and visually different notion, we have shown that a similar mechanism can be used to manipulate the style of motion sequences. Below we show that our technique may be seen as a generalization of the style transfer technique of Yumer and Mitra [2016]. Their technique is based on pairs, while ours is unpaired and uses learning to deal with unseen styles. However, we present a derivation that shows the commonality in the building blocks, the analysis of the motion, and how style is transferred. These commonalities contribute to the understanding of our method. In their work, Yumer and Mitra [2016] propose a method for motion style transfer in the frequency domain. They show that given two motions y_s and y_t with similar content and different styles s ≠ t, a new, arbitrary motion x_s with style s can be transferred to style t. The transfer result is given in the frequency domain by

x̃_t(ω) = |x̃_t(ω)| e^{i∡x̃_t(ω)},    (9)

where the magnitude of the output is given by |x̃_t(ω)| = |x_s(ω)| + |y_t(ω)| − |y_s(ω)| and the phase function is taken from the original input signal, ∡x̃_t(ω) = ∡x_s(ω). Applying the inverse Fourier transform to Equation (9), we get

x̃_t(τ) ≈ x_s(τ) + g_{s→t}(τ) * x_s(τ),    (10)

where g_{s→t}(τ) is a convolution kernel (in time, τ) whose weights depend on the source and target styles. Equation (10) implies that the approach of Yumer and Mitra [2016] may be implemented using a convolutional residual block to modify the style of a motion sequence, where the target style is defined by the weights of the kernel g_{s→t}, which depend on the source style s as well as the target style t. Similarly, our style translation framework also uses convolutional residual blocks as its core units, where the kernel weights effectively depend on the source and target styles. Although the kernels are fixed at test time, their weights are effectively modified by the IN and AdaIN layers, as a function of the styles s and t.
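This folding of per-channel affine IN/AdaIN parameters into the convolution weights can be verified numerically (a toy one-channel sketch with arbitrary values):

```python
import numpy as np

# A 1D temporal convolution followed by an affine (IN/AdaIN-style)
# transform, versus a single convolution with folded parameters.
rng = np.random.default_rng(2)
x = rng.normal(size=64)          # one feature channel over time
k = rng.normal(size=5)           # temporal kernel
b, beta, gamma = 0.7, 1.5, -0.3  # bias and affine parameters (toy values)

# beta * (x * k + b) + gamma ...
y1 = beta * (np.convolve(x, k, mode="valid") + b) + gamma
# ... equals x * (beta k) + (beta b + gamma)
y2 = np.convolve(x, beta * k, mode="valid") + (beta * b + gamma)

print(np.allclose(y1, y2))  # True
```

By linearity of convolution, the affine transform is absorbed into an effective kernel βk and bias βb + γ, so style-dependent normalization parameters effectively re-weight the convolution itself.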
Convolution in time (with kernel k(τ) and bias b) followed by an IN/AdaIN layer can be expressed as:

x̃(τ) = β [x(τ) * k(τ) + b] + γ = x(τ) * βk(τ) + (βb + γ),    (11)

where β and γ are the IN or AdaIN parameters. Equation (11) is, in fact, a single convolution with effective kernel weights k̃(τ) = βk(τ) and bias b̃ = βb + γ, which are modified as a function of some input. In the IN case, β and γ depend on the input signal x_s, since they are calculated such that the output has zero mean and unit variance. In the AdaIN case, the parameters are produced by an MLP layer as a mapping of the target style's latent code. Thus, the use of IN and AdaIN in our architecture effectively controls the convolutions by the source and target styles. Rather than directly transferring the motion style from s to t with a single convolution, as in Equation (10), our process may be viewed as consisting of two steps: first, the source style is removed by the encoder E_C, which depends only on s, and then the target style is applied by the decoder F, via the AdaIN layers, which depend only on t. Thus, our network performs style transfer in a modular way, making it easier to accommodate new styles. In contrast, using a single kernel makes it necessary to learn how to translate each style in S to every other style (all pairs).

EXPERIMENTS AND EVALUATION

In this section we evaluate our method and perform various experiments and comparisons that demonstrate some interesting insights and interpretations of our style transfer mechanism. First, several samples of our style transfer results are shown in Figure 4. The full motion clips are included in the accompanying video. Note that our network outputs joint rotations; hence, our results do not require any further processing such as IK, and can be directly converted to the commonly used motion representation files and visualized.
Although our joint rotations are represented by unit quaternions, which may lead to discontinuities within neural networks, our output quaternions tend to be smooth due to the temporal 1D convolutions performed by our network. Our results demonstrate that our system can transfer styles that are extracted from various sources, such as 3D animated characters, 2D projections of 3D motions, and real videos, within a unified framework. The examples demonstrating style transfer from a video use only a short (3-second) video clip as the sole (and previously unseen) style example. We are not aware of any other style transfer method with this capability.

Implementation Details. We used two different datasets to perform our experiments. The first dataset, supplied by Xia et al. [2015], contains motion sequences that are labeled with eight style labels. The second is our own newly captured dataset, which contains various motions performed by a single character in 16 distinct styles. For convenience, we refer to the datasets as A and B, respectively. The motion sequences within each of the datasets are trimmed into short overlapping clips of T = 32 frames with an overlap of T/4, resulting in about 1500 motion sequences for dataset A and 10500 for B. In addition, the motions in each dataset are split into two disjoint train and test sets, with the test set consisting of 10% of the samples. Our framework is implemented in PyTorch and optimized by the Adam optimizer. A training session takes about 8 hours for dataset A, and double that time for dataset B, using an NVIDIA GeForce GTX Titan Xp GPU (12 GB).

Latent Space Visualization

In this experiment we project the content codes and style parameters of some random motion samples from dataset A onto a 2D space using t-distributed stochastic neighbor embedding (t-SNE), and plot the results in order to gain a better understanding of how the network interprets content and style in practice.

Style Code.
Figure 5 shows the 2D projection of our style parameters (AdaIN), where each sample is marked with a color corresponding to its style label. It can be seen that our network learns to cluster the style parameters, which means that style inputs that share the same style will manipulate the motion content in a similar way. This result demonstrates that the extracted style parameters mostly depend on the style label. As previously discussed, our framework treats style as a set of properties, shared by motions in a group, which can be manipulated (added/removed) by an affine, temporally invariant transformation (AdaIN) applied to deep features. When such common properties exist within the group, the clusters are naturally formed even without the need for a triplet loss (Figure 5(a)). However, since a given style may be described by different nuances for different content motions (e.g., proud boxing has some hand gestures that do not exist in proud walking), a triplet loss encourages (but does not enforce) style codes of the same group to be closer to each other. This loss emphasizes commonalities within the group, making the clusters tighter, as can be observed in Figure 5(b), and leads to better content-style disentanglement. Figure 6 visualizes style code parameters (AdaIN) extracted from 3D motions together with ones extracted from video. It may be seen that the latter codes, for the most part, fall into the same clusters as the former ones.

Unseen Styles. Generally speaking, our network makes it possible to extract styles from arbitrary motion clips at test time. However, in practice, when the number of seen styles is small, the network may overfit to the existing styles, as one might suspect when observing the well-separated clusters in Figure 5. We retrained our model on dataset A, excluding the motions that carry the "old" style label, and then tested it using the motions within this group.
Although the network successfully clusters the samples (Figure 7(a)), our results show that the style properties of the output are adapted from visually similar styles among those that were seen during training. For example, the "old walking" style code is close to that of "depressed walking", and the style transfer result indeed resembles that of "depressed walking". We performed the same experiment with dataset B (which includes 16 styles), excluding the "heavy" style, and then tested the trained system with these motions. As can be seen in Figure 7(b), the network again learns to cluster the test samples by their style labels. However, in this case, the output motions successfully adapted style properties from the new unseen clip, which means that the overfitting is significantly reduced when training on dataset B. This demonstrates that for the same framework with fixed dimensions, style can be generalized better when there are more style classes, and that the network learns to identify properties that may be related to style, and to extract them even from unseen style inputs.

Fig. 5. The AdaIN parameters extracted from the style codes are projected onto 2D space using t-SNE and colored based on their style labels. The system is trained without triplet loss (a) and with triplet loss (b). It can be seen that our framework learns to cluster the AdaIN parameters as a function of style label in both cases, while the addition of the triplet loss results in tighter clusters.

However, when the number of styles is small, or the characteristic style properties are very different from those encountered during training, the network fails to generalize style. The output motion clips of two experiments with unseen settings are shown in our supplemental video, next to an output that demonstrates how the same pair of inputs in each experiment is translated once the network has seen that style during training.
As can be seen, although the outputs are different, in both cases the motion is plausible and the target style can be identified.

Content Code. Figure 8(a) visualizes 2D projections of our content codes (dataset A), colored by their style labels. It can be seen that there is no clear correlation between the spatial positions of the points and their labels, which suggests that the content code probably does not contain significant style information. Surprisingly, by observing the spatial distribution of the 2D points, it can be seen that a subset of the samples forms a circle. The circle becomes nearly perfect by filtering out all the non-walking motions (using content motion labels that exist in the original dataset) and scaling the 2D space (the unscaled projection is elliptical). For walking motion samples, the reduced space achieved by PCA captures 97.4% of the original variation, which means that our projected content code preserves the information well. The nearly perfect circle is achieved due to three main reasons: (i) walking motions in dataset A exhibit approximately the same velocity; (ii) the network discards global velocity and orientation; (iii) our motion samples are represented by a fixed-size temporal window. Thus, the content of these periodic motions can be parameterized with a single parameter: the phase. In order to confirm this interpretation, we calculate the period of each walking motion sample, extract the phase Θ of the middle frame, and color the corresponding point with sin(Θ) in Figure 8(b). The continuous variation of the color along the circle suggests that our network effectively strips the style and represents walking motions with a single phase parameter. The phase representation for locomotion is well-known in character animation and is used, for example, to dictate the state or mode of a phase-based neural network that generates animations of humans [Holden et al. 2017b].
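The phase-coloring procedure used for Figure 8(b), estimating the period of a periodic signal and then taking sin of the middle frame's phase, can be sketched as follows. The autocorrelation-based period estimator is our own assumption; the paper does not specify how the walking period is computed:

```python
import math

def estimate_period(x):
    # naive autocorrelation-based period estimate for a roughly periodic signal
    # (an assumption; the paper does not state how the period is obtained)
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    def autocorr(lag):
        return sum(xc[i] * xc[i + lag] for i in range(n - lag))
    return max(range(2, n // 2), key=autocorr)

def middle_frame_phase(x):
    # phase of the clip's middle frame, mapped through sin() as in Figure 8(b)
    period = estimate_period(x)
    theta = 2.0 * math.pi * ((len(x) // 2) % period) / period
    return math.sin(theta)
```

For a clean sinusoidal joint trajectory this recovers the underlying cycle length, so samples that differ only in phase receive smoothly varying colors along the circle.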
Comparison

In this section we compare our approach to the method of Holden et al. [2016], which performs style transfer by optimizing a motion sequence to satisfy two constraints, one for motion and one for style. Similarly to the seminal work of Gatys et al. [2016] on image style transfer, the content is described by a set of deep features, and the style is represented by the Gram matrix of those features. However, while in the image domain the features are extracted by a classification network, here they are extracted by a motion autoencoder. To perform the comparison, the approaches are qualitatively evaluated by a user study that measures a few aspects of style transfer approaches. The results are evaluated with styles extracted from 3D motions, as well as from videos. However, since Holden et al. [2016] extract styles only from 3D motions, we use a state-of-the-art 3D pose estimation algorithm [Pavllo et al. 2019b] to recover 3D poses when the provided style input is a video. For a fair comparison we use dataset A, which is part of the CMU dataset [CMU 2019], which Holden et al. [2016] used to train their model. A few results extracted from the full comparison given in our supplementary video are depicted in Figure 9.

User Study. We performed a user study to perceptually evaluate the realism, style expressiveness, and content preservation of our transfer results, where the style is extracted both from 3D motions and from videos. 22 subjects were asked to answer a questionnaire with three types of questions, which we describe below.

Realism. In this part, we evaluate the realism of different motions. Users were presented with a pair of motions, both depicting the same type of content and style (e.g., angry jump). The motions were taken from three different sources: (1) our original MoCap dataset, (2) results of Holden et al. [2016], and (3) our results. Note that (2) and (3) are generated with similar inputs.
Users were asked questions of the form: "Which of the above motions look like a more realistic old walk?", and had to choose one of four answers: Left, Right, Both, or None. 132 responses were collected for this question type. Table 1 reports the realism ratios for each motion source. It may be seen that 75% of our results were judged as realistic, which is a significantly higher ratio than the one measured for Holden et al. [2016], and not far below the realism ratio of real MoCap motions.

Content Preservation and Style Transfer. In this part, we compare our style transfer results to those of Holden et al. [2016] in terms of two aspects: the preservation of content and the transfer of style. Users were presented with a content input, a style input, and two transferred results, one by Holden et al. [2016] and the other by our method. They were asked to first select the motion whose content is closer to the content input ("Which of the motions on the right is more similar to the motion on the left in content?"), and then select the motion whose style is closer to the style input ("Which of the motions on the right is more similar to the motion on the left in style?"). 110 responses were collected for each of these two questions. The results are reported in Table 1. They indicate that our method was judged far more successful in both aspects (content preservation and style transfer), both when using a 3D motion as the style input and when using a video. The reasons to which we attribute the large gaps in the ratings are discussed in detail later in this section. The user study thus shows that our method yields results that are more faithful to the task of style transfer. In particular, it can be seen that the approach of Holden et al. [2016] struggles to transfer style when the content of the two input motions is different (for example, when the input content motion is "proud walking" and the input style is "depressed kicking").
The main reason is that both content and style representations are derived from the same deep features, which leads to a dependency between content and style. In order to get a better understanding of their style representation, we projected the styles extracted by both methods into 2D, using PCA. Figure 10 shows the resulting maps. It can be seen that while our samples are clustered by their style labels (right plot), this cannot be observed for Holden's representation, which results in a multitude of small clusters scattered over the 2D plane. Thus, while Gram matrices of features extracted by an autoencoder enable some degree of style transfer, they are clearly affected by other information present in the motion samples. Moreover, we use video style examples in which a person demonstrates different walking styles while walking on a treadmill. When poses are extracted from such a video, the root velocity is very small. In contrast, most of the content inputs have significant root velocity. This discrepancy poses no problem for our approach, but it adversely affects the method of Holden et al. [2016], which is limited to work with input pairs that share the same content. Our method explicitly attempts to extract a latent style code from an input style motion, which enables clustering of motions with different content but similar style in the latent style space, thereby disentangling style from content. In contrast, Holden et al. [2016] represent style using Gram matrices, similarly to the seminal work of Gatys et al. [2016] for image-domain style transfer.

Ablation Study and Insights

Effect of Adversarial Loss. In this experiment we discarded the adversarial loss L_adv from our training.
Surprisingly, our experiments show that the discriminator does not play the key role in transferring style. Furthermore, a single content consistency loss is sufficient to train the network to extract properties shared by labeled styles, and to cluster the style code samples by their style labels. However, we found that without the adversarial loss, the perceived realism of the output motions is degraded, and artifacts such as shaking can be observed. The comparison can be found in the supplementary video.

Style Code and Neutral Style. In order to gain a better understanding of the impact of the style code and the structure of its space, we neutralized the style branch by setting the AdaIN output to identity parameters (zero mean, unit variance). With these settings, the network outputs pure noise. The reason is that, since the network is trained in an end-to-end fashion, the scale and translation are also responsible for modifying the features such that the network outputs valid motions. In addition, in order to understand whether the neutral style is more centralized in the style latent space than other styles, for every style label we calculated the mean distance between its average style code and all the other average style codes. We found that in both of the datasets the neutral style is among the top three styles in terms of that mean distance, which might suggest that the network learns to branch from the neutral style into the other styles. However, we are not able to reach a definitive conclusion based on this experiment.

Style Interpolation

Our learned continuous style code space can be used to interpolate between styles. Style interpolation can be achieved by linearly interpolating between style codes, and then decoding the results through our decoder.
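Linear interpolation in the learned style space is straightforward. A minimal sketch, treating a style code as a flat vector of AdaIN parameters (the representation details and function names here are our own assumptions):

```python
def interpolate_styles(code_a, code_b, alpha):
    # linear blend of two style codes; alpha=0 gives code_a, alpha=1 gives code_b
    assert len(code_a) == len(code_b)
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(code_a, code_b)]

def interpolation_sequence(code_a, code_b, steps):
    # a sequence of intermediate style codes, e.g. from depressed to proud
    return [interpolate_styles(code_a, code_b, i / (steps - 1)) for i in range(steps)]
```

Each intermediate code would then be fed through the decoder F (via its AdaIN layers) together with the fixed content code to produce the interpolated motion.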
Our video demonstrates motions where the content input is fixed (neutral walking) and the style is interpolated between two different style codes (depressed to proud, and neutral to old). Figure 11 shows a key-frame from each interpolated motion sequence.

Fig. 9. Qualitative comparison of our method to the approach of Holden et al. [2016]. The content input is shared across all the examples (each column shows a different example), the input style is depicted in the first row, and the results of Holden et al. [2016] and ours are given in the second and last rows, respectively. We picked a fixed set of key frames of each motion to demonstrate the results. The full video sequences and more results can be found in the supplemental video.

Fig. 10. Style codes extracted by our method (right) compared to the style representation of Holden et al. [2016] (left). Style classes: angry, childlike, depressed, neutral, old, proud, sexy, strutting. While our style codes are clustered by style labels, the style representation of Holden et al. [2016] forms many small scattered clusters, which implies dependencies between style and content.

CONCLUSIONS AND FUTURE WORK

We have presented a neural network that transfers motion style from one sequence into another. The key novelty is that the network is trained without paired data, and without attempting to explicitly define either motion content or motion style. Nevertheless, the results show that the network succeeds in implicitly disentangling style and content, and in combining even previously unseen styles with a given content. We partly attribute the success of the approach to the asymmetric structure of the network, where the style is represented and controlled by instance normalization layers, and the content by deep convolutional layers. Instance normalization layers have the innate tendency to control mainly local statistics, or in other words, details, while the convolutional layers preserve the content. Our training protocol, with various motion styles, encourages this asymmetric network to disentangle the latent style from the motion's content. Although there is no universally accepted definition of motion style, it may be argued that our framework defines style as the set of properties, shared by motions in a group, which can be manipulated (added/removed) by an affine, temporally invariant transformation (AdaIN) applied to deep features. As a consequence, the complementary part of style that enables the reconstruction of local motion (without the root position) is defined as the content. In addition, the global positions, which are taken directly from the content input, are considered part of the content, while the global velocity, which is temporally warped based on the style input, is defined to be part of the style in our case.

Our mechanism aims at disentangling style and content of arbitrary motions based on style labels. However, if the two input motions (content input and style input) at test time are too different (lacking commonalities) and the target style departs too much from the ones used for training, the network will not be able to infer which style properties should be transferred. Moreover, due to the fact that the majority of the motions in our datasets depict locomotion (walking and running), the network tends to output motions of higher quality for such samples at test time. In turn, this motivates us to use our generative system to produce more data and new styles, possibly by mixing styles or amplifying (or attenuating) available styles or mixes of styles.
Another notable limitation is that testing the system with characters that have different body proportions from those seen during training may lead to implausible results. In order to cope with such cases, motion retargeting should be performed prior to the style transfer pass. Motion retargeting is a challenging problem in its own right, and is outside the scope of this work. In order to support style transfer of various unseen skeletons in an end-to-end fashion, a different solution would have to be proposed. We leave this issue to future work.

In our current implementation, a given pair of input content and style motions yields a deterministic output. We would like to consider extending the system by injecting noise to produce slight variations. This would allow a temporal prolongation of the input sequence without noticeable repetitions or discontinuous transitions. In the future, we would also consider segmenting the sequence temporally and transferring different styles to different segments.

We believe that the role of instance normalization in motion processing and animation is likely to increase, especially for generative models. The work we presented is only a first step in that direction.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their constructive comments. This work was supported in part by the National Key R&D Program of China (2018YFB1403900, 2019YFF0302902), and by the Israel Science Foundation (grant no. 2366/16).

Fig. 2. Visualization of deep features in our decoder.

Fig. 4. Samples of our style transfer results. The motion of the content input (top row) is transferred to a motion with similar content and different style (bottom row), while the style can be extracted from various sources (middle row), such as 3D animated characters (a), 2D projections of 3D motions (b), and video sequences (c).

Fig. 6. Joint embedding of style code parameters extracted from 3D motions as well as directly from 2D videos.

Fig. 7. Unseen styles. (a) Trained on dataset A excluding the "old" style. (b) Trained on dataset B excluding the "heavy" style. It can be seen that a larger number of style classes enables the network to better generalize styles in the latent space.

Fig. 8. The content codes of our test samples are projected onto 2D space using PCA. (a) The samples are labeled by the style label. No clustering based on style label may be observed, suggesting that style information has been removed. (b) When visualizing only walking motions, while labeling samples by the phase of walking, it may be seen that our content code effectively parameterizes the motions using a single parameter: the phase.

Fig. 11. Style interpolation. Our style code space enables motion style interpolation. A neutral walking is transferred to an interpolated style. (a) Depressed to proud. (b) Neutral to old.

Baoquan Chen, [email protected]. 0730-0301/2020/7-ART1 $15.00. https://doi.org/10.1145/3386569.3392469

Table 1. User study results. (a) Realism ratios. (b) Content preservation and style transfer ratings ([Holden et al. 2016] vs. Ours). The style inputs were either from 3D motion or from video.

(a) Realism ratios:
    MoCap: 79.17%    Holden et al. [2016]: 12.5%    Ours: 75%

(b)
                                  Holden et al. [2016]    Ours
    Content Preservation (3D)     38.89%                  61.11%
    Content Preservation (video)  25%                     75%
    Style Transfer (3D)           5.56%                   94.44%
    Style Transfer (video)        8.33%                   91.67%

REFERENCES

Kfir Aberman, Mingyi Shi, Jing Liao, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or. 2019a. Deep Video-Based Performance Cloning. In Computer Graphics Forum, Vol. 38. Wiley Online Library, 219-233.
Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or. 2019b. Learning Character-Agnostic Motion for Motion Retargeting in 2D. ACM Transactions on Graphics (TOG) 38, 4 (2019), 75.
Kenji Amaya, Armin Bruderlin, and Tom Calvert. 1996. Emotion from motion. In Graphics Interface, Vol. 96. Toronto, Canada, 222-229.
Andreas Aristidou, Daniel Cohen-Or, Jessica K. Hodgins, Yiorgos Chrysanthou, and Ariel Shamir. 2018. Deep Motifs and Motion Signatures. ACM Trans. Graph. 37, 6, Article 187 (Nov. 2018), 13 pages. https://doi.org/10.1145/3272127.3275038
Andreas Aristidou, Qiong Zeng, Efstathios Stavrakis, KangKang Yin, Daniel Cohen-Or, Yiorgos Chrysanthou, and Baoquan Chen. 2017. Emotion control of unstructured dance movements. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation. ACM, 9.
Matthew Brand and Aaron Hertzmann. 2000. Style machines. In Proc. SIGGRAPH 2000. ACM Press/Addison-Wesley Publishing Co., 183-192.
Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2018. OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. arXiv preprint arXiv:1812.08008 (2018).
Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A. Efros. 2019. Everybody dance now. In Proceedings of the IEEE International Conference on Computer Vision. 5933-5942.
CMU. 2019. CMU Graphics Lab Motion Capture Database. http://mocap.cs.cmu.edu/
Han Du, Erik Herrmann, Janis Sprenger, Noshaba Cheema, Klaus Fischer, Philipp Slusallek, et al. 2019. Stylistic Locomotion Modeling with Conditional Variational Autoencoder. In Proc. Eurographics. The Eurographics Association.
Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image style transfer using convolutional neural networks. In Proc. CVPR. 2414-2423.
Keith Grochow, Steven L. Martin, Aaron Hertzmann, and Zoran Popović. 2004. Style-based inverse kinematics. In ACM Transactions on Graphics (TOG), Vol. 23. ACM, 522-531.
Daniel Holden, Ikhsanul Habibie, Ikuo Kusajima, and Taku Komura. 2017a. Fast neural style transfer for motion data. IEEE Computer Graphics and Applications 37, 4 (2017), 42-49.
Daniel Holden, Taku Komura, and Jun Saito. 2017b. Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG) 36, 4 (2017), 42.
Daniel Holden, Jun Saito, and Taku Komura. 2016. A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics (TOG) 35, 4 (2016), 138.
Daniel Holden, Jun Saito, Taku Komura, and Thomas Joyce. 2015. Learning motion manifolds with convolutional autoencoders. In SIGGRAPH Asia 2015 Technical Briefs. ACM, 18.
Eugene Hsu, Kari Pulli, and Jovan Popović. 2005. Style translation for human motion. In ACM Transactions on Graphics (TOG), Vol. 24. ACM, 1082-1089.
Xun Huang and Serge Belongie. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In Proc. ICCV. 1501-1510.
Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. 2018. Multimodal unsupervised image-to-image translation. In Proc. ECCV. 172-189.
Leslie Ikemoto, Okan Arikan, and David Forsyth. 2009. Generalizing motion edits with Gaussian processes. ACM Transactions on Graphics (TOG) 28, 1 (2009), 1.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. 2016. Perceptual losses for real-time style transfer and super-resolution. In Proc. ECCV. Springer, 694-711.
Angjoo Kanazawa, Jason Y. Zhang, Panna Felsen, and Jitendra Malik. 2019. Learning 3D human dynamics from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5614-5623.
Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proc. CVPR. 4401-4410.
C. Karen Liu, Aaron Hertzmann, and Zoran Popović. 2005. Learning physics-based motion style with nonlinear inverse optimization. In ACM Transactions on Graphics (TOG), Vol. 24. ACM, 1071-1081.
Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, and Christian Theobalt. 2018. Neural Rendering and Reenactment of Human Actor Videos. arXiv preprint arXiv:1809.03658 (2018).
Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. 2019. Few-shot unsupervised image-to-image translation. arXiv preprint arXiv:1905.01723 (2019).
Wanli Ma, Shihong Xia, Jessica K. Hodgins, Xiao Yang, Chunpeng Li, and Zhaoqi Wang. 2010. Modeling style and variation in human motion. In Proc. 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Eurographics Association, 21-30.
Ian Mason, Sebastian Starke, He Zhang, Hakan Bilen, and Taku Komura. 2018. Few-shot Learning of Homogeneous Human Locomotion Styles. In Computer Graphics Forum, Vol. 37. Wiley Online Library, 143-153.
Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas, and Christian Theobalt. 2017. VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera. ACM Trans. Graph. 36, 4, Article 44 (July 2017), 44:1-44:14 pages. https://doi.org/10.1145/3072959.3073596
Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. 2019. Semantic image synthesis with spatially-adaptive normalization. In Proc. CVPR. 2337-2346.
Dario Pavllo, Christoph Feichtenhofer, Michael Auli, and David Grangier. 2019a. Modeling Human Motion with Quaternion-based Neural Networks. arXiv preprint arXiv:1901.07677 (2019).
Dario Pavllo, Christoph Feichtenhofer, David Grangier, and Michael Auli. 2019b. 3D human pose estimation in video with temporal convolutions and semi-supervised training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7753-7762.
Ari Shapiro, Yong Cao, and Petros Faloutsos. 2006. Style components. In Proc. Graphics Interface 2006. Canadian Information Processing Society, 33-39.
Harrison Jesse Smith, Chen Cao, Michael Neff, and Yingying Wang. 2019. Efficient Neural Networks for Real-time Motion Style Transfer. PACMCGIT 2, 2 (2019), 13:1-13:17.
Graham W. Taylor and Geoffrey E. Hinton. 2009. Factored conditional restricted Boltzmann machines for modeling motion style. In Proc. ICML. ACM, 1025-1032.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022 (2016).
Munetoshi Unuma, Ken Anjyo, and Ryozo Takeuchi. 1995. Fourier principles for emotion-based human figure animation. In Proc. SIGGRAPH '95. ACM, 91-96.
Ruben Villegas, Jimei Yang, Duygu Ceylan, and Honglak Lee. 2018. Neural kinematic networks for unsupervised motion retargetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8639-8648.
Jack M. Wang, David J. Fleet, and Aaron Hertzmann. 2007. Multifactor Gaussian process models for style-content separation. In Proc. ICML. ACM, 975-982.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2018. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8798-8807.
Realtime style transfer for unlabeled heterogeneous human motion. Shihong Xia, Congyi Wang, Jinxiang Chai, Jessica Hodgins, ACM Transactions on Graphics (TOG). 34119Shihong Xia, Congyi Wang, Jinxiang Chai, and Jessica Hodgins. 2015. Realtime style transfer for unlabeled heterogeneous human motion. ACM Transactions on Graphics (TOG) 34, 4 (2015), 119. Spectral style transfer for human motion between independent actions. M Ersin Yumer, J Niloy, Mitra, ACM Transactions on Graphics (TOG). 35137M Ersin Yumer and Niloy J Mitra. 2016. Spectral style transfer for human motion between independent actions. ACM Transactions on Graphics (TOG) 35, 4 (2016), 137. On the continuity of rotation representations in neural networks. Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, Hao Li, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionYi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2019. On the continuity of rotation representations in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5745-5753.
[]
[ "NILE : Natural Language Inference with Faithful Natural Language Explanations", "NILE : Natural Language Inference with Faithful Natural Language Explanations" ]
[ "Sawan Kumar [email protected] \nIndian Institute of Science\nBangalore\n", "Partha Talukdar \nIndian Institute of Science\nBangalore\n" ]
[ "Indian Institute of Science\nBangalore", "Indian Institute of Science\nBangalore" ]
[ "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics" ]
The recent growth in the popularity and success of deep learning models on NLP classification tasks has accompanied the need for generating some form of natural language explanation of the predicted labels. Such generated natural language (NL) explanations are expected to be faithful, i.e., they should correlate well with the model's internal decision making. In this work, we focus on the task of natural language inference (NLI) and address the following question: can we build NLI systems which produce labels with high accuracy, while also generating faithful explanations of its decisions? We propose Natural-language Inference over Label-specific Explanations (NILE), a novel NLI method which utilizes auto-generated label-specific NL explanations to produce labels along with its faithful explanation. We demonstrate NILE's effectiveness over previously reported methods through automated and human evaluation of the produced labels and explanations. Our evaluation of NILE also supports the claim that accurate systems capable of providing testable explanations of their decisions can be designed. We discuss the faithfulness of NILE's explanations in terms of sensitivity of the decisions to the corresponding explanations. We argue that explicit evaluation of faithfulness, in addition to label and explanation accuracy, is an important step in evaluating a model's explanations. Further, we demonstrate that task-specific probes are necessary to establish such sensitivity.
10.18653/v1/2020.acl-main.771
[ "https://www.aclweb.org/anthology/2020.acl-main.771.pdf" ]
218,869,840
2005.12116
cc424a4d2ac0a06f1745533e0bdd796817f1cfab
NILE: Natural Language Inference with Faithful Natural Language Explanations

Sawan Kumar (Indian Institute of Science, Bangalore) and Partha Talukdar (Indian Institute of Science, Bangalore)

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, July 5-10, 2020.

Abstract

The recent growth in the popularity and success of deep learning models on NLP classification tasks has accompanied the need for generating some form of natural language explanation of the predicted labels. Such generated natural language (NL) explanations are expected to be faithful, i.e., they should correlate well with the model's internal decision making. In this work, we focus on the task of natural language inference (NLI) and address the following question: can we build NLI systems which produce labels with high accuracy, while also generating faithful explanations of its decisions? We propose Natural-language Inference over Label-specific Explanations (NILE), a novel NLI method which utilizes auto-generated label-specific NL explanations to produce labels along with its faithful explanation. We demonstrate NILE's effectiveness over previously reported methods through automated and human evaluation of the produced labels and explanations. Our evaluation of NILE also supports the claim that accurate systems capable of providing testable explanations of their decisions can be designed. We discuss the faithfulness of NILE's explanations in terms of sensitivity of the decisions to the corresponding explanations.
We argue that explicit evaluation of faithfulness, in addition to label and explanation accuracy, is an important step in evaluating a model's explanations. Further, we demonstrate that task-specific probes are necessary to establish such sensitivity.

1 Introduction

Deep learning methods have been employed to improve performance on several benchmark classification tasks in NLP (Wang et al., 2018, 2019). Typically, these models aim at improving label accuracy, while it is often desirable to also produce explanations for these decisions (Lipton, 2016; Chakraborty et al., 2017). In this work, we focus on producing natural language explanations for Natural Language Inference (NLI), without sacrificing much on label accuracy.

There has been growing interest in producing natural language explanations for deep learning systems (Huk Park et al., 2018; Kim et al., 2018; Ling et al., 2017), including NLI (Camburu et al., 2018). In general, the explanations from these methods can typically be categorized as post-hoc explanations (Lipton, 2016). Camburu et al. (2018) propose an NLI system which first produces an explanation and then processes the explanation to produce the final label. We argue that these explanations also resemble post-hoc explanations (Section 4.2). Further, existing methods don't provide a natural way to test the faithfulness of the generated explanations, i.e., how well the provided explanations correlate with the model's decision making. We therefore propose Natural-language Inference over Label-specific Explanations (NILE), which we train and evaluate on English language examples. Through NILE, we aim to answer the following question: Can we build NLI systems which produce faithful natural language explanations of predicted labels, while maintaining high accuracy? Briefly, in NILE, we first generate natural language explanations for each possible decision, and subsequently process these explanations to produce the final decision.
We argue that such a system provides a natural way of explaining its decisions. The key advantage is the testability of these explanations, in themselves, as well as in terms of the sensitivity of the system's prediction to these explanations. We choose NLI due to its importance as an NLP task, and the availability of e-SNLI, a large dataset annotated both with entailment relation labels and natural language human explanations of those labels (Camburu et al., 2018; Bowman et al., 2015).

Figure 1: Overview of NILE: A Premise and Hypothesis pair is input to label-specific Candidate Explanation Generators G, which generate natural language explanations supporting the corresponding label. The generated explanations are then fed to the Explanation Processor S, which generates label scores using the evidence present in these explanations (see Figure 3 for the architectures used in this work). In addition to the explanations, NILE also utilizes the premise and hypothesis pair (see Section 4.4.2 for a discussion on the challenges in building such a system). Please see Section 4 for details.

In summary, we make the following contributions in this work.
1. We propose NILE, an NLI system which generates and processes label-specific explanations to infer the task label, naturally providing explanations for its decisions.
2. We demonstrate the effectiveness of NILE compared to existing systems, in terms of label and explanation accuracy.
3. Through NILE, we provide a framework for generating falsifiable explanations. We propose ways to evaluate and improve the faithfulness of the system's predictions to the generated explanations. We claim that task-specific probes of sensitivity are crucial for such evaluation.
We have released the source code of NILE to aid reproducibility of the results.
2 Related Work

Explainability of a model's predictions has been studied from different perspectives, including feature importance based explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017; Chen et al., 2018), or post-hoc natural language explanations (Huk Park et al., 2018; Kim et al., 2018; Ling et al., 2017). One line of work produces counterfactual natural language explanations for image classification given an image and a counter-class label. Camburu et al. (2018) propose a model for NLI to first generate a free-form natural language explanation and then infer the label from the explanation. However, as noted by Oana-Maria et al. (2019a), the system tends to generate inconsistent explanations. We reason that requiring a model to generate an explanation of the correct output requires it to first infer the output, and the system thus resembles post-hoc explanation generation methods.

Given the diversity of desiderata and techniques for interpretability, the need for understanding interpretation methods and evaluating them has grown. Difficulty in building interpretation models and the lack of robustness of the same are some of the major issues in existing deep neural network systems (Feng et al., 2018; Ghorbani et al., 2019; Oana-Maria et al., 2019b). Given these observations, measuring faithfulness, i.e., how well the provided explanations correlate with the model's decision making, is crucial. DeYoung et al. (2019) propose metrics to evaluate such faithfulness of rationales (supporting evidence) for NLP tasks. Through NILE, we propose a framework for generating faithful natural language explanations by requiring the model to condition on generated natural language explanations.

The idea of using natural language strings as a latent space has been explored to capture compositional task structure (Andreas et al., 2018). Related work explores improving visual question answering by learning to generate question-relevant captions.
Other work aims to improve commonsense question answering by first generating commonsense explanations for multiple-choice questions, where the question and the choices are provided as the prompt. Similar to Camburu et al. (2018), they learn by trying to generate human-provided explanations and subsequently conditioning on the generated explanation. In NILE, we instead aim to produce an explanation for each possible label and subsequently condition on the generated label-specific explanations to produce the final decision.

3 Background

In this section, we discuss the datasets (Section 3.1) and pre-trained models (Section 3.2) used to build NILE.

3.1 Data

SNLI: The Stanford NLI dataset (Bowman et al., 2015) contains samples of premise and hypothesis pairs with human annotations, collected using Amazon Mechanical Turk. The premises were obtained from a pre-existing crowdsourced corpus of image captions. The hypotheses were obtained by presenting workers with a premise and asking for a hypothesis for each label (entailment, neutral and contradiction), resulting in a balanced set of ∼570K pairs.

e-SNLI: Camburu et al. (2018) extend the SNLI dataset with natural language explanations of the ground truth labels. The explanations were crowdsourced using Amazon Mechanical Turk. Annotators were first asked to highlight words in the premise and hypothesis pairs which could explain the labels. Next, they were asked to write a natural language explanation using the highlighted words. Similar to Camburu et al. (2018), for all our experiments, we filter out non-informative examples where the explanations contain the entire text of the premise or hypothesis. In particular, we drop any training example where the uncased premise or hypothesis text appears entirely in the uncased explanation. This leads to a training data size of ∼532K examples.
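As an illustration, the filtering rule described above (dropping any example whose uncased premise or hypothesis appears entirely in the uncased explanation) can be sketched in a few lines of Python; the function names and tuple layout are ours, not from the released NILE code:

```python
# Sketch of the e-SNLI filtering rule described above. The rule itself
# (an uncased substring test) follows the text; names and data layout
# are illustrative assumptions.
def is_informative(premise: str, hypothesis: str, explanation: str) -> bool:
    """Keep an example only if the explanation does not contain the
    entire (uncased) premise or hypothesis text."""
    expl = explanation.lower()
    return premise.lower() not in expl and hypothesis.lower() not in expl

def filter_examples(examples):
    """examples: iterable of (premise, hypothesis, explanation, label)."""
    return [ex for ex in examples if is_informative(ex[0], ex[1], ex[2])]
```

Applied to the ∼570K SNLI training pairs, a filter of this kind yields the ∼532K examples reported above.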
3.2 Pretrained Language Models

Transformer architectures (Vaswani et al., 2017) pre-trained on large corpora with self-supervision have shown significant improvements on various NLP benchmarks (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Lan et al., 2019). Improvements have been demonstrated for text classification as well as text generation tasks (Raffel et al., 2019). In this work, we leverage the implementation of transformer architectures and pre-trained models provided by Wolf et al. (2019).

GPT-2: We use the GPT-2 architecture (Radford et al., 2019), which is trained using a causal language modeling loss (CLM), and includes a left-to-right decoder suitable for text generation. In particular, we use the gpt2-medium model. This model has 24 layers, 16 attention heads and a hidden size of 1024 (∼345M parameters). For text generation, the model can be fine-tuned using CLM on desired text sequences.

RoBERTa: For classification modules, we leverage RoBERTa, which is trained using a masked language modeling loss (MLM). In particular, we use the roberta-base model. This model has 12 layers, 12 attention heads and a hidden size of 768 (∼125M parameters). For downstream classification tasks, a classification layer is added over the hidden-state of the first token in the last layer.

4 Natural-language Inference over Label-specific Explanations (NILE)

The overall architecture employed in NILE is shown in Figure 1. We introduce the notation used in this paper in Section 4.1. We then discuss the motivation for the major design choices in Section 4.2. NILE performs the following steps to produce labels and explanations:
1. Candidate Explanation Generators: Label-specific Candidate Explanation Generators first generate explanations supporting the respective labels (Section 4.3).
2. Explanation Processor: The Explanation Processor takes the explanations and also the premise and hypothesis pairs as input to produce the task label (Section 4.4).
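The classification setup described in Section 3.2 (a linear layer over the hidden state of the first token in the last layer) can be illustrated with a small pure-Python stand-in; in the real system the hidden states come from the pretrained roberta-base encoder, and the weights below are illustrative:

```python
# Minimal stand-in for a classification head over the first-token
# hidden state, as described for roberta-base above. Real hidden
# states come from the pretrained encoder; everything here is a
# pure-Python illustration.
def classify_from_first_token(hidden_states, W, b):
    """hidden_states: list of per-token hidden vectors (lists of floats).
    Returns one linear score per label from the first token's vector."""
    h0 = hidden_states[0]  # first ([CLS]-style) token
    return [sum(w_i * x_i for w_i, x_i in zip(row, h0)) + b_k
            for row, b_k in zip(W, b)]

def predict_label(hidden_states, W, b,
                  labels=("entail", "contradict", "neutral")):
    """Pick the label with the maximum score, as done at test time."""
    scores = classify_from_first_token(hidden_states, W, b)
    return labels[max(range(len(scores)), key=scores.__getitem__)]
```

During training, these scores would be passed through a softmax and trained with a cross-entropy loss, as described in Section 4.4.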
We also build NILE-PH, where the Explanation Processor has access only to the generated explanations (Section 4.4.1). We note that NILE-PH more naturally fits the desiderata described in Section 1, while we design and evaluate NILE for the more general case where the Explanation Processor also accesses the premise and hypothesis pair. In Section 4.5, we describe comparable baseline architectures.

4.1 Notation

We denote each data point by (p, h), where p is the premise and h the hypothesis sentence. G denotes a model trained to generate natural language explanations. Specifically, G_x denotes a model which generates natural language explanations t_x of type x, where x ∈ {entail, contradict, neutral}. We denote the human-provided gold explanation for the correct prediction as t_g. S denotes a module which predicts label scores. The true label for an example is denoted by y, a model prediction by y', and label scores by l_x.

Figure 2: Alternative existing architectures. A. Post-hoc generation: Given an input instance, first the label is predicted and then an explanation generated conditioned on the label and the input text. B. Explain-Then-Predict (Camburu et al., 2018): Given the input instance, first the desired explanation is generated, and then the label is predicted using only the generated explanation. We argue that neither architecture provides a natural way to test the sensitivity of the model's predictions to the generated explanation. Please see Section 4.2 for details.

4.2 Why do it this way?

In this section, we describe the motivation for adopting a two-step pipelined approach.

Label-specific explanations: Consider two alternative existing architectures in Figure 2. In Figure 2A, a model S_pre is trained directly on the example sentences (p and h) to produce a label (y'), which together with the example sentences is used to produce an explanation t'_g using G_post.
It can be argued that while the target explanations may regularize the system, there is no reason for t'_g to be aligned with the reason why the model chose a particular label. Figure 2B corresponds to a model which has also been trained on e-SNLI (Camburu et al., 2018). G_pre is first trained to produce natural language explanations t'_g using human-provided explanations (t_g) as targets, using only the example sentences as inputs. A model S_post then chooses the label corresponding to the generated explanation t'_g. While at first it appears that this system may provide faithful explanations of its decisions, i.e., that the generated explanations are the reason for the label prediction, we argue that it may not be so. In Figure 2B, G_pre is required to generate the explanation of the correct label for an example. It must first infer that label and then produce the corresponding explanation. Further, analysis of the free-form human-provided explanations has revealed clear differences in the form of explanations, through alignment to label-specific templates (Camburu et al., 2018; Oana-Maria et al., 2019a). The Explanation Processor S_post then only needs to infer the form of t'_g. G_pre then resembles post-hoc generation methods, with the label (as the form of t'_g) and the explanation t'_g being produced jointly. The claim is supported by inconsistencies found in the generated explanations (Oana-Maria et al., 2019a). Neither architecture allows a natural way to test the sensitivity of the model's predictions to its explanations.

In NILE, we first allow explanations for each label, and then require the Explanation Processor to select the correct explanation. This allows us to naturally test whether the model's predictions are indeed due to the selected explanation. This can be done, for example, by perturbing the input to the Explanation Processor.

A pipelined approach: We use a pipelined approach in NILE (Figure 1).
The Candidate Explanation Generators are first trained using human-provided explanations. The Explanation Processor takes as input the generated label-specific explanations. This prevents the system from producing degenerate explanations to aid task performance. It also allows perturbing the generated explanations to probe the system in a more natural way, compared to an unintelligible intermediate state of a learnt model. We believe that systems can be designed to work in this setting without compromising task performance.

4.3 Candidate Explanation Generators

We train label-specific explanation generators, G_x, x ∈ {entail, contradict, neutral}, using human-provided explanations of examples with the corresponding label. For example, to train G_entail, we use only the training examples labeled as entailment; each generator is fine-tuned on sequences pairing an example with its gold explanation, delimited by markers such as [EXP], which are special tokens added to the vocabulary. During fine-tuning, the language modeling loss function is used only over the explanation tokens. Next, we create prompts of the form "Premise: p Hypothesis: h [EXP]" and require each trained language model to independently complete the sequence. In this way we obtain label-specific explanations t_x, t_x = G_x(p, h), for x ∈ {entail, contradict, neutral}.

4.4 Explanation Processor

The Explanation Processor in NILE takes as input the generated label-specific explanations, as well as the premise and hypothesis pair, to generate label scores l_x, x ∈ {entail, contradict, neutral}. During training, these scores are passed through a softmax layer and a cross-entropy loss is used to generate the training signal. During testing, the label with the maximum score is selected. We leverage a pre-trained roberta-base model for all our experiments, and fine-tune it as specified in the following subsections. In each case, any intermediate scores are generated through transformations of the first token ([CLS]) embedding from the last layer.
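The prompting step described above can be sketched as follows; `generators` stands in for the three fine-tuned GPT-2 models, and any callable that completes a prompt suffices for the illustration:

```python
# Sketch of the label-specific generation step of Section 4.3. The
# prompt format follows the text; `generators` maps each label to a
# callable that completes a prompt (a fine-tuned GPT-2 in the paper,
# stubbed out here for illustration).
LABELS = ("entail", "contradict", "neutral")

def make_prompt(premise: str, hypothesis: str) -> str:
    """Build the generation prompt ending in the [EXP] special token."""
    return f"Premise: {premise} Hypothesis: {hypothesis} [EXP]"

def generate_candidates(premise, hypothesis, generators):
    """Return {label: t_x}, one candidate explanation per label."""
    prompt = make_prompt(premise, hypothesis)
    return {x: generators[x](prompt) for x in LABELS}
```

The three candidates t_entail, t_contradict, t_neutral are then passed on to the Explanation Processor.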
We define:

F_model(inp) = tanh(W · CLS_embed(inp))

where inp is a pair of sequences in NILE, a single sequence in NILE-PH, and W are the learnable parameters for the model. For simplicity, and to elucidate the desired behavior, we first describe how explanations are processed in NILE-PH (Section 4.4.1). We then discuss the construction of NILE, a potential issue, and a fix for the same (Section 4.4.2).

4.4.1 Processing Explanations

In this section, we describe how explanations are processed in NILE-PH, which is generalized in NILE (Section 4.4.2). We experiment with three architectures, described below (also see Figure 3).

A. Independent: In the Independent model, explanations are fed to F_Ind, which generates a score for each explanation independently:

l_x = W_Ind F_Ind(t_x)    (1)

where x ∈ {entail, contradict, neutral}. We expect this score to represent the truthfulness of the input explanation.

B. Aggregate: The Independent model would need all three explanations to be available to reliably produce label scores. We believe a system should be able to handle one or more missing or ambiguous explanations. For example, the entailment explanation "t_entail: A dog is a cat" would provide evidence for contradiction. To capture this notion, we require the Explanation Processor to produce two intermediate scores V_1 and V_2, where we expect V_1 to collect evidence supporting an input claim and V_2 to collect evidence against an input claim:

V_i(x) = W_Agg,i F_Agg(t_x), where i ∈ {1, 2}    (2)

The intermediate scores are then aggregated into the final label scores:

l_entail = Cmb(V_1(t_entail), V_2(t_contradict))
l_contradict = Cmb(V_1(t_contradict), V_2(t_entail))
l_neutral = V_1(t_neutral)    (3)

where Cmb is the LogSumExp function.
The reason for this choice of aggregation is that while evidence against entailment might point to contradiction and vice versa, evidence against neutral doesn't necessarily provide any information about entailment or contradiction relations.

C. Append: Finally, to allow the model to reason arbitrarily between the three generated explanations, we create a single sequence, concat_ecn: "entailment: t_entail contradiction: t_contradict neutral: t_neutral", and generate the scores as follows:

l_x = W_Apn,x F_Apn(concat_ecn)    (4)

where x ∈ {entail, contradict, neutral}.

4.4.2 Processing Premise and Hypothesis

In NILE, to process premise p and hypothesis h, we first concatenate p and h into concat_ph: "Premise: p Hypothesis: h". The label scores are then obtained as in Section 4.4.1, by modifying Equations 1, 2 and 4 as follows: replace F_z(x) by F_z(concat_ph, x), where z ∈ {Ind, Agg, Apn}. We note that appending the example sentences to the generated explanations (as in Append) would result in having no control over whether the explanations are used for the final prediction. The case for Independent and Aggregate is not immediately clear. We now discuss a potential issue with these architectures when processing premise and hypothesis text, and suggest a fix for the same.
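The Aggregate combination of Equation 3 can be sketched directly; the V_1/V_2 values would come from the learned scorers of Equation 2, and only the Cmb = LogSumExp aggregation below follows the text:

```python
# Sketch of the Aggregate combination (Equation 3). The V1/V2 scores
# are stand-ins for the learned evidence-for / evidence-against
# scorers of Equation 2; only the aggregation follows the paper.
import math

def logsumexp(a: float, b: float) -> float:
    """Numerically stable LogSumExp of two scalars (the Cmb function)."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def aggregate_scores(v1, v2):
    """v1, v2: dicts mapping each explanation label to its V1/V2 score.
    Returns the final label scores l_x of Equation 3."""
    return {
        "entail": logsumexp(v1["entail"], v2["contradict"]),
        "contradict": logsumexp(v1["contradict"], v2["entail"]),
        "neutral": v1["neutral"],
    }
```

Note how evidence against contradiction (V_2(t_contradict)) flows into the entailment score and vice versa, while the neutral score uses only supporting evidence.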
If the form of explanations for different labels is discriminative, an unconstrained learning algorithm could learn to first infer the type of explanation and use it to infer the task. For example, given the input (concat_ph, t_x), where x ∈ {entail, contradict, neutral}, if a model could learn whether t_x is an entailment explanation, it then only has to output whether concat_ph corresponds to an entailment relation. Essentially, high label accuracy can be achieved by first inferring what task to do using only the form of t_x.

The fix: To prevent NILE from exploiting the form of an explanation as described above, we create additional training examples, where we require NILE to score valid instance-explanation pairs higher. In particular, we sample negative explanations for an instance, of the same form as the correct label. For example, an instance labeled as entailment would have an additional training signal: score (concat_ph, t_entail) higher than (concat_ph, t'_entail) and (concat_ph, t''_entail), where t'_entail and t''_entail are randomly sampled entailment-form explanations. We note that the fix leaves room for other kinds of biases to be learnt. However, the key advantage with NILE is that it is easy to design probes to test for such biases and subsequently fix them (see Section 5.3).

4.5 Baselines

We now describe baselines which use the same underlying blocks as NILE, for generating explanations and classification.

NILE:post-hoc: To understand the drop in performance which could be associated with constraining models as we have done, we train a model with full access to input examples (see Figure 2A):

l_x = W_x F_pre(p, h)

where x ∈ {entail, contradict, neutral}. Further, we provide a strong baseline for post-hoc generators using this model, where, using the model's predictions, we simply pick the corresponding label-specific generated explanation.
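The negative-sampling fix described above can be sketched as follows; the data layout (a pool of explanations indexed by label and instance id) is an illustrative assumption:

```python
# Sketch of the negative-sampling fix of Section 4.4.2: for a training
# instance, draw explanations of the *same form* (same label) from
# other instances, which the model must score lower than the
# instance's own explanation. Data layout is illustrative.
import random

def sample_negative_explanations(instance_id, label, pool_by_label,
                                 k=2, rng=random):
    """pool_by_label: {label: {instance_id: explanation}}.
    Returns up to k same-form explanations from other instances."""
    candidates = [expl for iid, expl in pool_by_label[label].items()
                  if iid != instance_id]
    return rng.sample(candidates, min(k, len(candidates)))
```

Each sampled (concat_ph, t'_x) pair then supplies the extra training signal that valid instance-explanation pairs must score higher.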
t'_g = G_post(l_x) = t_x

We note that the model's predictions have no sensitivity to the generated explanations in NILE:post-hoc.

ExplainThenPredictAttention (ETPA): Following Camburu et al. (2018) (see Figure 2B), we train a pipelined system, where we first learn to generate the gold explanation t_g, and then classify the generated explanation t'_g to predict the label:

t'_g = G_pre(concat_ph)
l_x = W_x F_post(t'_g)

where x ∈ {entail, contradict, neutral}.

Table 1: We report results from (Camburu et al., 2018) as well as the results with our reproduction of ETPA. ETPA (reproduced) is directly comparable with NILE (Section 4.5). NILE-PH competes with or outperforms ETPA baselines on label accuracy, while NILE-NS and NILE provide significant gains in label accuracy. NILE and NILE-NS are competitive with the best reported results in terms of label accuracies. We report the number of correct explanations, averaged across annotators (B), as well as when all annotators agree on correctness (C). All NILE variants are able to provide more correct explanations than the ETPA baseline. We also report the percentage of correct explanations in the subset of correct label predictions (B/A, C/A). On this metric, NILE variants are comparable with the ETPA baseline. However, the real value of NILE lies in being able to probe the faithfulness of its decisions (Section 5.3). Further, NILE explanations generalize significantly better on out-of-domain examples (see Table 2). Please see Section 5.1 for details.

5 Experiments

In this section, we evaluate NILE on in-domain label and explanation accuracy, transfer to out-of-domain NLI, and faithfulness of the generated explanations. We provide training details in Appendix A, and examples of generated label-specific explanations in Appendix B.

5.1 In-domain Results

We report the label accuracies of the baselines and proposed architectures on the SNLI Dev and Test sets in Table 1. We also report explanation accuracies, obtained through human evaluation of the generated explanations on the first 100 test examples.
Binary scores on correctness were sought from five annotators (non-experts in NLP) on the generated explanations. For both label and explanation accuracies, we report results using a model selected by SNLI Dev set label accuracy across 5 runs with 5 different seeds of random initialization. Please see the Appendix for more details on the 5 runs.

Table 2: We report the number of correct explanations, averaged across annotators (B), as well as when all annotators agree on correctness (C). All NILE variants provide more correct explanations than the ETPA baseline (B, C). Further, the percentage of correct explanations in the subset of correct label predictions (B/A, C/A) is significantly better for all NILE variants. The results demonstrate that NILE provides a more generalizable framework for producing natural language explanations. Please see Section 5.2 for details.

First, through NILE:post-hoc, we provide a strong baseline for obtaining high label and explanation accuracy. Our aim in this work is to learn explanations that serve as the reason for the model's predictions. Nevertheless, we are able to compete with or outperform this baseline in terms of explanation accuracy, while incurring only a small drop in label accuracy. All variants of NILE, including NILE-PH and NILE-NS (which is not trained using negative samples of explanations as described in Section 4.4.2), produce more correct explanations than the ETPA baseline. NILE-PH:Append, NILE and NILE-NS provide gains in label accuracy compared to the ETPA baseline. Additionally, NILE and its variants provide natural ways to probe the sensitivity of the system's predictions to the explanations, as demonstrated in the subsequent sections. Finally, the explanations generated by all NILE variants generalize significantly better on out-of-distribution examples when compared to the ETPA baseline (see Section 5.2).
Transfer to Out-of-domain NLI

To test the generalization capability of NILE, we perform training and model selection on the SNLI dataset (Section 5.1), and evaluate on the out-of-domain MNLI (Williams et al., 2018) development sets. Transfer without fine-tuning to out-of-domain NLI is known to be challenging, with generating explanations for MNLI being particularly difficult (Camburu et al., 2018). We report label accuracies on the Dev (matched) and Dev-mm (mismatched) sets, and explanation evaluation on the first 100 Dev samples, in Table 2. Explanation evaluation was done by three annotators (who also annotated the SNLI explanations). While the label accuracies follow a similar pattern to the in-domain SNLI Test set, all variants of NILE provide gains in the quality of generated explanations. All variants of NILE produce more correct explanations (B, C) as well as a higher percentage of correct generated explanations among correct predictions (B/A, C/A). This demonstrates that NILE, through intermediate label-specific natural language explanations, provides a more general way of building systems which can produce natural language explanations for their decisions.

Evaluating Faithfulness using Sensitivity Analysis

(Table 4 caption) Probing the sensitivity of the system's predictions by shuffling instance-explanation pairs. Each instance is attached to a randomly selected explanation of the same form as the original pair. The results demonstrate a much weaker link between NILE-NS's predictions and the associated explanations. On the other hand, NILE behaves more expectedly. Note that the baselines don't allow a similar mechanism to test their faithfulness, and such testability is a key advantage of NILE. Please see Section 5.3 for details.

NILE and its variants allow a natural way to probe the sensitivity of their predictions to the generated explanations, which is by perturbing the explanations themselves. In this way, NILE resembles
explanation systems which provide input text fragments as reasons for their decisions. DeYoung et al. (2019) propose metrics to evaluate the faithfulness of such explanations. Following their work, we first attempt to measure the explanations generated by the methods proposed in this paper for comprehensiveness (what happens when we remove the explanation from the input) and sufficiency (what happens if we keep only the explanations). In Table 3, we show these measures for NILE and NILE-NS. The results seem to indicate that explanations for both NILE and NILE-NS are comprehensive, with higher sufficiency in the case of NILE-NS. We first note that the comprehensiveness of these systems is ensured by design, as the input is indistinguishable without an explanation. Second, we argue that sufficiency may indicate correlations which don't necessarily exist in the system otherwise. We study the sensitivity of the explanations through a probe motivated by an understanding of the task and the training examples (see Section 4.4.2). We perturb the instance-explanation inputs such that, for each test instance, the explanation is replaced by a randomly selected explanation of the same label. The results (Table 4) indicate that NILE-NS is more robust to random perturbations of input explanations, and presumably uses the form of the explanation to infer the task (see Section 4.4.2 for a discussion). It is true that NILE behaves expectedly because we have specifically designed NILE to prevent the associated bias, and that the system could potentially learn other such biases. However, a key advantage of the proposed architecture is the ability to identify and fix such biases. We leave it as interesting and challenging future work to find and fix more such biases.

Conclusion

In this paper, we propose NILE, a system for Natural Language Inference (NLI) capable of generating labels along with natural language explanations for the predicted labels.
Through extensive experiments, we demonstrate the effectiveness of this approach in terms of both label and explanation accuracy. NILE supports the hypothesis that accurate systems can produce testable natural language explanations of their decisions. In the paper, we also argue for the importance of explicitly evaluating the faithfulness of the generated explanations, i.e., how correlated the explanations are with the model's decision making. We evaluate the faithfulness of NILE's explanations using sensitivity analysis. Finally, we demonstrate that task-specific probes are necessary to measure such sensitivity.

A Experimental Setup

For fine-tuning gpt2-medium language models for explanation generation, as well as roberta-base models, we leverage code and pre-trained models from the "transformers" library available at https://github.com/huggingface. In each case we train on the train split for three epochs. Apart from batch size, sequence length and the seed for random initialization, we keep the other hyperparameters fixed throughout the experiments. We don't do any fine-tuning on seeds of random initialization.

For roberta-base models, we report results through model selection on models trained using 5 seeds of random initialization: 42, 219, 291, 67 and 741. Model selection is done using label accuracies on the SNLI Dev set. In Table 5, we report the mean and standard deviation of the label accuracies across the 5 runs. We ran our experiments on GeForce GTX 1080 Ti GPUs. We adjust the batch size to be the largest multiple of 16 to fit in the GPU memory (∼12GB). We now list all the hyper-parameters used.

GPT2: The hyper-parameters used for fine-tuning gpt2-medium include a maximum sequence length of 128, batch size of 2, learning rate of 5e-5, Adam epsilon of 1e-8, max gradient norm of 1.0 and a seed of 42. For generating text, we used greedy decoding.
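Greedy decoding, as used here for generating explanations, picks the argmax token at every step. The following model-agnostic toy sketch illustrates the loop; `toy_model` and its tiny vocabulary table are invented stand-ins for the fine-tuned gpt2-medium.

```python
# Toy sketch of greedy decoding (argmax at each step), independent of any
# particular model. `next_token_logits` is a hypothetical stand-in for a
# fine-tuned language model; here it deterministically walks a tiny table.

def greedy_decode(next_token_logits, prompt, eos="[EOS]", max_len=20):
    tokens = list(prompt)
    for _ in range(max_len):
        logits = next_token_logits(tokens)           # scores over vocabulary
        token = max(logits, key=logits.get)          # greedy: argmax token
        tokens.append(token)
        if token == eos:
            break
    return tokens

# A tiny deterministic "model" for demonstration (invented values).
TABLE = {
    "[EXP]": {"Sisters": 2.0, "The": 1.0},
    "Sisters": {"are": 3.0, "hug": 0.5},
    "are": {"women": 2.5, "people": 2.4},
    "women": {"[EOS]": 5.0},
}

def toy_model(tokens):
    return TABLE.get(tokens[-1], {"[EOS]": 1.0})
```

Calling `greedy_decode(toy_model, ["[EXP]"])` walks the table to `["[EXP]", "Sisters", "are", "women", "[EOS]"]`; a real decoder would apply the same argmax loop to the language model's logits.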
Table 6: Hyper-parameters (batch size and maximum sequence length) used for fine-tuning roberta-base.

RoBERTa: The fixed parameters for fine-tuning roberta-base included a learning rate of 2e-5, Adam epsilon of 1e-8 and max gradient norm of 1.0. Other parameters are captured in Table 6.

B Generated Explanations

In this section, we list the label-specific explanations generated for five Dev set examples, along with the premise, hypothesis and gold labels, for both the SNLI (Section B.1) and MNLI (Section B.2) datasets.

B.1 SNLI

holding to go packages. Hypothesis: The sisters are hugging goodbye while holding to go packages after just eating lunch. entailment: Sisters are women. contradiction: The women cannot be embracing if they are hugging goodbye. neutral: Two women embracing does not imply they are sisters hugging goodbye.

entailment: Two young children are two kids, and blue jerseys implies numbered jerseys. contradiction: The kids are either standing on wooden steps or wash their hands. neutral: The kids could be washing their hands in a sink for any reason, not just because they wash their hands.

entailment: A propeller is a type of ball. contradiction: A child cannot touch the propeller of a plane while playing with a ball. neutral: Just because the child reaches up to touch the propeller of a plane does not mean the child is playing with a ball.

Figure 2: Existing alternative architectures.

Figure 3: Explanation Processor architectures. A. Independent (Ind) collects evidence for a label symmetrically from the corresponding explanation. B. Aggregate (Agg) allows handling missing explanations by looking for contradictory evidence. C. Append (Apn) allows arbitrary evidence collection for each label. Please see Section 4.4.1 for details. Premise and hypothesis sentences are processed by additionally providing them to each block F_z where z ∈ {Ind, Agg, Apn}. Please see Section 4.4.2 for details.
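The fine-tuning sequences of Section 4, of the form "Premise: p Hypothesis: h [EXP] t_g [EOS]", can be assembled with simple string formatting. In this sketch, the exact spacing and tokenization are illustrative assumptions, and the example premise, hypothesis, and explanation are invented.

```python
# Sketch of the fine-tuning input format quoted in Section 4,
# "Premise: p Hypothesis: h [EXP] t_g [EOS]", where [EXP] marks the start
# of the explanation and [EOS] the end of the sequence. Spacing is an
# illustrative assumption.

def make_training_sequence(premise, hypothesis, explanation):
    return f"Premise: {premise} Hypothesis: {hypothesis} [EXP] {explanation} [EOS]"

# Invented example triplet for illustration.
seq = make_training_sequence(
    "Two women are embracing.",
    "The sisters are hugging goodbye.",
    "Two women embracing does not imply they are sisters.",
)
```

One such corpus would be built per label, so that each label-specific generator G_x is fine-tuned only on triplets annotated with its label.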
We collect all triplets (p, h, t_g) annotated as entailment. We create text sequences of the form "Premise: p Hypothesis: h [EXP] t_g [EOS]" to fine-tune a pre-trained language model, where [EXP] and [EOS] are special tokens.

(Figure 3 panels: A. Independent, B. Aggregate, C. Append; blocks F_Ind, F_Agg, F_Apn map the explanations t_entail, t_contradict, t_neutral to label scores l_entail, l_contradict, l_neutral, e.g. 0.8/0.0/0.2.)

Table 1: Comparison of label and explanation accuracy on the in-domain SNLI evaluation sets. Models are selected using the Dev set label accuracy over 5 runs with different seeds of random initialization. Mean (and standard deviation) over the 5 runs are reported in the Appendix. # indicates the best reported result at https://nlp.stanford.edu/projects/snli/ at the time of writing. Note that SemBERT does not provide natural language explanations and is reported here only for reference. Bold numbers indicate highest among methods that produce explanations. Explanations are evaluated on the first 100 SNLI Test examples. We present reported numbers of ETPA.

Model | SNLI Dev Label Acc. | SNLI Test Label Acc. | A: Correct Labels | B: Correct Expl. (avg. over annotators) | B/A | C: Correct Expl. (annotators in agreement) | C/A
SemBERT# (Zhang et al., 2019) | 92.2 | 91.9 | - | - | - | - | -
ETPA (Camburu et al., 2018), Reported | - | 81.71 | - | 64.27 | - | - | -
ETPA, Reproduced | 86.98 | 86.22 | 77 | 71.2 | 92.47 | 59 | 76.62
NILE:post-hoc | 91.86 | 91.49 | 90 | 81.4 | 90.44 | 68 | 75.56
NILE-PH Independent | 84.69 | 84.13 | 78 | 72.0 | 92.31 | 61 | 78.21
NILE-PH Aggregate | 85.71 | 85.29 | 80 | 73.4 | 91.75 | 62 | 77.50
NILE-PH Append | 88.49 | 88.11 | 85 | 78.0 | 91.76 | 66 | 77.65
NILE-NS Independent | 91.56 | 90.91 | 88 | 80.8 | 91.82 | 69 | 78.41
NILE-NS Aggregate | 91.55 | 91.08 | 89 | 80.6 | 90.56 | 68 | 76.40
NILE-NS Append | 91.74 | 91.12 | 89 | 80.4 | 90.34 | 67 | 75.28
NILE Independent | 91.29 | 90.73 | 91 | 82.4 | 90.55 | 69 | 75.82
NILE Aggregate | 91.19 | 90.91 | 90 | 81.4 | 90.44 | 68 | 75.56
We would also like to thank HuggingFace for providing a state-of-the-art Transformers library for natural language understanding. Finally, we want to thank the annotators who annotated generated explanations for correctness. Learning with latent language. Jacob Andreas, Dan Klein, Sergey Levine, 10.18653/v1/N18-1197Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLong Papers; New Orleans, LouisianaAssociation for Computational Linguistics1Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with latent language. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 2166-2179, New Orleans, Louisiana. Association for Computational Linguistics. A large annotated corpus for learning natural language inference. R Samuel, Gabor Bowman, Christopher Angeli, Christopher D Potts, Manning, 10.18653/v1/D15-1075Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsSamuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics. e-SNLI: Natural Language Inference with Natural Language Explanations. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom, Advances in Neural Information Processing Systems. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. 
e-SNLI: Nat- ural Language Inference with Natural Language Ex- planations. In Advances in Neural Information Pro- cessing Systems, pages 9539-9549. Interpretability of Deep Learning Models: A Survey of Results. Supriyo Chakraborty, Richard Tomsett, Ramya Raghavendra, Daniel Harborne, Moustafa Alzantot, Federico Cerutti, Mani Srivastava, Alun Preece, Simon Julier, M Raghuveer, Rao, 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation. IEEESupriyo Chakraborty, Richard Tomsett, Ramya Raghavendra, Daniel Harborne, Moustafa Alzantot, Federico Cerutti, Mani Srivastava, Alun Preece, Simon Julier, Raghuveer M Rao, et al. 2017. In- terpretability of Deep Learning Models: A Survey of Results. In 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communica- tions, Cloud & Big Data Computing, Internet of People and Smart City Innovation (Smart- World/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pages 1-6. IEEE. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation. Jianbo Chen, Le Song, Martin Wainwright, Michael Jordan, International Conference on Machine Learning. Jianbo Chen, Le Song, Martin Wainwright, and Michael Jordan. 2018. Learning to Explain: An Information-Theoretic Perspective on Model Inter- pretation. In International Conference on Machine Learning, pages 882-891. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Jay Deyoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, Byron C Wallace, arXiv:1911.03429ERASER: A Benchmark to Evaluate Rationalized NLP Models. arXiv preprintJay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. ERASER: A Benchmark to Evaluate Rationalized NLP Models. arXiv preprint arXiv:1911.03429. Pathologies of Neural Models Make Interpretations Difficult. Eric Shi Feng, Alvin Wallace, I I Grissom, Mohit Iyyer, Pedro Rodriguez, Jordan Boyd-Graber, 10.18653/v1/D18-1407Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsShi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of Neural Models Make Interpretations Difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719-3728, Brussels, Belgium. Association for Computational Linguistics. Interpretation of Neural Networks is Fragile. Amirata Ghorbani, Abubakar Abid, James Zou, Proceedings of the AAAI Conference on Artificial Intelligence. 
the AAAI Conference on Artificial Intelligence33Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of Neural Networks is Fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681-3688. Generating counterfactual explanations with natural language. Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata, ICML Workshop on Human Interpretability in Machine Learning. Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. 2018. Generating counterfactual explanations with natural language. In ICML Work- shop on Human Interpretability in Machine Learn- ing, pages 95-98. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Marcus Rohrbach, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionDong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 8779- 8788. Textual Explanations for Self-Driving Vehicles. Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, Zeynep Akata, Proceedings of the European conference on computer vision (ECCV). the European conference on computer vision (ECCV)Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. 2018. Textual Expla- nations for Self-Driving Vehicles. In Proceedings of the European conference on computer vision (ECCV), pages 563-578. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, International Conference on Learning Representations. 
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In Interna- tional Conference on Learning Representations. BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation. Mike Lewis, Yinhan Liu, Naman Goyal ; Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer, arXiv:1910.13461Translation, and Comprehension. arXiv preprintMike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pre- training for Natural Language Generation, Trans- lation, and Comprehension. arXiv preprint arXiv:1910.13461. Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems. Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom, 10.18653/v1/P17-1015Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics1Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun- som. 2017. Program Induction by Rationale Gen- eration: Learning to Solve and Explain Algebraic Word Problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 158-167, Vancouver, Canada. Association for Computational Linguistics. Zachary C Lipton, arXiv:1606.03490The Mythos of Model Interpretability. arXiv preprintZachary C Lipton. 2016. The Mythos of Model Inter- pretability. arXiv preprint arXiv:1606.03490. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1907.11692RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. arXiv preprint arXiv:1907.11692. A Unified Approach to Interpreting Model Predictions. M Scott, Su-In Lundberg, Lee, Advances in Neural Information Processing Systems. Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Ad- vances in Neural Information Processing Systems, pages 4765-4774. Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations. Camburu Oana-Maria, Shillingford Brendan, Minervini Pasquale, Lukasiewicz Thomas, Blunsom Phil, arXiv:1910.03065arXiv preprintCamburu Oana-Maria, Shillingford Brendan, Min- ervini Pasquale, Lukasiewicz Thomas, and Blunsom Phil. 2019a. Make Up Your Mind! Adversarial Gen- eration of Inconsistent Natural Language Explana- tions. arXiv preprint arXiv:1910.03065. Camburu Oana-Maria, Giunchiglia Eleonora, Foerster Jakob, Lukasiewicz Thomas, Blunsom Phil, arXiv:1910.02065Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods. arXiv preprintCamburu Oana-Maria, Giunchiglia Eleonora, Foerster Jakob, Lukasiewicz Thomas, and Blunsom Phil. 2019b. Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods. arXiv preprint arXiv:1910.02065. Language Models are Unsupervised Multitask Learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Ope-nAI Blog. 81Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Ope- nAI Blog, 1(8). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, arXiv:1910.10683arXiv preprintColin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the Lim- its of Transfer Learning with a Unified Text-to-Text Transformer. arXiv preprint arXiv:1910.10683. Explain yourself! leveraging language models for commonsense reasoning. Bryan Nazneen Fatema Rajani, Caiming Mccann, Richard Xiong, Socher, 10.18653/v1/P19-1487Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsNazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense rea- soning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics. Why Should I Trust You?": Explaining the Predictions of Any Classifier. Sameer Marco Tulio Ribeiro, Carlos Singh, Guestrin, Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningACMMarco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Ex- plaining the Predictions of Any Classifier. In Pro- ceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144. ACM. Attention is All you Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 
2017. Attention is All you Need. In Advances in neural information pro- cessing systems, pages 5998-6008. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman, Advances in Neural Information Processing Systems. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. In Advances in Neural In- formation Processing Systems, pages 3261-3275. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman, 10.18653/v1/W18-5446Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP. the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLPBrussels, BelgiumAssociation for Computational LinguisticsAlex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Plat- form for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. Adina Williams, Nikita Nangia, Samuel Bowman, 10.18653/v1/N18-1101Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaLong Papers1Association for Computational LinguisticsAdina Williams, Nikita Nangia, and Samuel Bowman. 
2018. A Broad-Coverage Challenge Corpus for Sen- tence Understanding through Inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R&apos;emi Louf, abs/1910.03771Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace's Trans- formers: State-of-the-art Natural Language Process- ing. ArXiv, abs/1910.03771. Generating question relevant captions to aid visual question answering. Jialin Wu, Zeyuan Hu, Raymond Mooney, 10.18653/v1/P19-1348Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsJialin Wu, Zeyuan Hu, and Raymond Mooney. 2019. Generating question relevant captions to aid visual question answering. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3585-3594, Florence, Italy. Asso- ciation for Computational Linguistics. XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, R Russ, Quoc V Salakhutdinov, Le, Advances in neural information processing systems. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in neural in- formation processing systems, pages 5754-5764. 
Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou, arXiv:1909.02209Semantics-aware BERT for Language Understanding. arXiv preprintZhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2019. Semantics-aware BERT for Language Understand- ing. arXiv preprint arXiv:1909.02209.
5G Network Slicing with QKD and Quantum-Safe Security

Paul Wright, Catherine White, Ryan C. Parker, Marco Menchetti, Anastasia Moroz, and Andrew Lord (BT Labs, Adastral Park, Ipswich, U.K.); Jean-Sébastien Pegon (ID Quantique SA, Geneva, Switzerland); Joseph Pearse, Arash Bahrami, and Timothy P. Spiller (York Centre for Quantum Technologies, Department of Physics, University of York, York, U.K.); Adrian Wonfor and Richard V. Penty (Department of Engineering, University of Cambridge, Cambridge, U.K.)

arXiv: 2007.03377; doi: 10.1364/jocn.413918

Abstract: We demonstrate how the 5G network slicing model can be extended to address data security requirements. In this work we demonstrate two different slice configurations, with different encryption requirements, representing two diverse use-cases for 5G networking: an enterprise application hosted at a metro network site, and a content delivery network. We create a modified software-defined networking (SDN) orchestrator which calculates and provisions network slices according to the requirements, including encryption backed by quantum key distribution (QKD) or other methods. Slices are automatically provisioned by SDN orchestration of network resources, allowing selection of encrypted links as appropriate, including those which use standard Diffie-Hellman key exchange, QKD and quantum-resistant algorithms (QRAs), as well as no encryption at all. We show that the set-up and tear-down times of the network slices are of the order of 1-2 minutes, an order of magnitude improvement over manually provisioning a link today.
Introduction The recent introduction of 5G networks for commercial use promises to deliver increased bandwidth to customers, enabling faster speed connections, as well as lower-latency communications, the ability to meet Quality of Service demands, and many other service improvements. This opens up the possibility for far greater connectivity of devices than ever before. The benefits brought by 5G are a result of the converged architecture which is the core of 5G networks; resources are placed as close to the edge of the network as possible (i.e. as far away from the core network as can be), thus offering lower-latency services via so-called edge-computing [1]. Taking advantage of the edge-located resources, and of the fact that these resources are used more efficiently (with some sharing of compute resource, for example), are use-cases such as content delivery networks (CDNs) and edge-compute, automated vehicles and remote operations, as well as the monitoring and control of large-scale Internet of Things (IoT) networks, such as smart meters and distributed power generation. Because a wide variety of new use-cases are enabled by 5G technology, the network has had to be designed such that it can cope with a range of heterogeneous requirements, such as latency, reliability, security, and more [2]. Consequently, network slicing is utilised, and plays a key role in making 5G networks suitably flexible [3]. Network slices are made by multiplexing separate virtualised networks over common physical infrastructure, and each slice can be provisioned with different resources. For example, a network slice providing communications for an automated vehicle will require very low latency but fairly low bandwidth, compared to high-definition video streaming, which is more reliant on large bandwidth and less on latency [4].
Both of these use-cases can be delivered on the same physical infrastructure by separating them into separate virtualised networks through network slicing. Network slicing is reliant on software-defined networking (SDN) and network functions virtualisation (NFV). NFV allows network slices to be made via virtual machines (VMs), which are then connected together across the network via SDN orchestration [5]; SDN is used to flexibly configure network slices, as well as reserving resources for the wide range of possible use-cases via orchestration carried out by a network slice controller (as illustrated in Fig. 1). This SDN orchestration is vital within this work, as it is used to dynamically control the type of encryption deployed for each network slice. In general, however, 5G networking does not intrinsically provide encryption of data traffic, instead relying on over-the-top encrypted sessions (such as TLS), often placing a responsibility on the end user to maintain security updates [5]. End-to-end security will always require encryption at the user equipment, of course, but 5G networks involve critical links within the tiered resources over which large concentrations of secure application traffic may flow, such as between the aggregation and metro nodes. These critical links could be very attractive targets for eavesdroppers, and so we suggest that network operators consider providing encryption for these links. A vital prerequisite for strong encryption is secure key exchange. Today's standard key exchange algorithms (such as Diffie-Hellman and RSA) are thought to be vulnerable to attacks by large-scale quantum computers. As such, there are two possible routes for avoiding this future threat: quantum-resistant algorithms (QRAs), such as those being developed under the NIST program [6], and quantum key distribution (QKD).
Whereas QRAs for key exchange would be reliant on strong mathematical proofs to safeguard against the increased compute power of a large-scale quantum computer, QKD is based upon the fundamental laws of quantum physics, and if implemented properly is secure against any future computational threat. QKD utilises quantum states encoded on photons to agree a key between users with information theoretic security (ITS). ITS implies that we are able to calculate the statistical likelihood that an eavesdropper holds any information on the key, and show that this has been reduced to an infinitesimally small probability. We emphasise that QKD is secure against any future computational threat, be that classical or quantum, whereas QRAs may be insecure against a future quantum hacking algorithm which is yet to be discovered. QKD requires an initial authentication step, which is straightforward where pre-shared key exists, but if this is not the case then QRAs may be needed for this first-time authentication. Moreover, if a QRA is used for the initial authentication step, once QKD has been performed it does not then matter if the QRA is subsequently broken, because the QKD key material has no algorithmic link to the QRA material that was used to authenticate the QKD exchange. To protect data for which there is a need for privacy or intellectual property retention over a timescale of years, we anticipate that network application designers will select QRAs. However, for the most valuable and/or sensitive data, further long-term key security can be provided by QKD, in conjunction with QRAs for encryption and authentication. 5G networks have the capability to dynamically control the type of encryption used for separate data channels.
Sections of a single network slice may have different security requirements, for example where data is time sensitive and cached within the network, such as in CDNs, or where data from multiple devices is aggregated; the level of security is another parameter of the connection which it would be useful to be able to control as part of a network slice. Using network slicing to control encryption is relatively novel, but nevertheless has already been considered theoretically in [7] and [8] by utilising QKD in tandem with a QRA (specifically, a QRA version of Elliptic-Curve Cryptography), and has also been demonstrated experimentally over the Bristol City 5G UK Test Network in the works of [9][10][11], by applying QKD to 5G networking. Moreover, in [12], proof-of-transit of the 5G data traffic is demonstrated, using cryptographic techniques with QKD over the Madrid Quantum Network [13]; this network has also been used to demonstrate securing the management of the SDN control plane through QKD in [14,15]. However, what differentiates our work is that we dynamically control the type of encryption (Diffie-Hellman-AES, QRA-AES, QKD-AES, or no encryption at all) to address the realistic scenario in which different data packets in a 5G network will have varying security requirements. We note here that the symmetric encryption algorithm used in this work is the Advanced Encryption Standard (AES) with 256 bit keys, from QKD, Diffie-Hellman or a QRA. AES is currently thought to be "quantum-safe", in that even a large-scale quantum computer will be unable to crack this method of encryption with an exponential speed-up, unlike the Diffie-Hellman or RSA asymmetric algorithms used to establish shared secret keys, which are susceptible to this type of cryptanalysis.
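Since the four options above differ only in how the 256-bit AES key is established, a slice's security requirement can be modelled as a simple ordered metric. The sketch below is illustrative only: the names and, in particular, the assumed ordering of DH-AES below QRA-AES are our own encoding, not a data model taken from the paper (the paper does state that QKD-AES is viewed as the highest level).

```python
# Key-exchange options for a link, ordered weakest -> strongest.
# The ordering is an illustrative assumption; all options feed
# 256-bit keys into the same AES data-plane encryption.
LEVELS = ["none", "dh-aes", "qra-aes", "qkd-aes"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def satisfies(offered: str, required: str) -> bool:
    """True if a link offering `offered` meets a slice's `required` level."""
    return RANK[offered] >= RANK[required]

assert satisfies("qkd-aes", "dh-aes")      # a QKD link can carry DH-grade traffic
assert not satisfies("none", "qra-aes")    # an unencrypted link cannot
```

Encoding security as an ordered rank is what allows it to be treated as just another link metric alongside bandwidth and latency during path computation.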
Within this work we experimentally demonstrate 5G network slicing to dynamically control the type of encryption (and therefore the level of data security) over existing commercial telecommunications infrastructure, to represent the possibility of supporting the variety of potential new use-cases born through 5G networks, which will inevitably have diverse security requirements. More specifically, we experimentally simulate two potential use-cases: an enterprise application hosted at a metro site in the network, and a CDN use-case. This paper is organised as follows: in Section 2 we describe our 5G network topology and design, and the methodology behind our proof of concept demonstration, before discussing the results in Section 3. Section 3 is divided into subsections in which we first address the two network slice configurations separately (Subsections 3.1 and 3.2), before moving to present results regarding the timing (namely the provision and deprovision times) of each network slice in Subsection 3.3. Methodology Within this section we discuss the methodology behind the test-bed configuration of our 5G network slicing prototype, with dynamically-controlled encryption. Fig. 2 schematically describes the architecture of the representative network test-bed used within this work. There are four node types in this network: cell, aggregation, metro, and core. Traffic flows from the cell sites to the core site, via use of Ethernet switches and optical switches. In reality such an exemplar network would likely be located as per Fig. 5, in which the two cell sites could be Felixstowe and Woodbridge, with the aggregation site in Ipswich, the metro site in Cambridge, and the core node in London.
However, in this work we use the UKQNtel infrastructure, which is a section of the UK Quantum Network containing intermediate trusted nodes for QKD link handover and classical amplification (for further detail, see [16]), as this has QKD-capable networking over a 121 km link from BT Research Labs in Ipswich (Adastral Park) to Cambridge. Available for interconnections over this infrastructure are 5×100G channels on a coherent dense wave-division multiplexing (DWDM) system looped back over the 121 km optical fibre link (242 km in total; see Fig. 2). Each of the five 100G channels within this link provides 10×10G client Ethernet ports, and all interconnections between 5G network sites are 10G. There is no segregation of encryption between 10G clients on the same 100G channel (one encryption key per 100G channel, refreshed at 3 s intervals). In other implementations it might be preferred to have a separate encryption key per client port, but this would not affect the Network Slicing Orchestrator approach demonstrated here. Three channels are configured to provide: no encryption; standard Diffie-Hellman with Advanced Encryption Standard (DH-AES); and a prototype QRA, specifically an NTRU implementation provided by the OpenQuantumSafe library [17], with AES (QRA-AES). The remaining two channels are in the default configuration for the UKQNtel link (256 bit AES, with keys provided via QKD, referred to herein as QKD-AES). Two exemplary network circuit schematics are shown in Fig. 3 to illustrate the specific connectivity between Adastral Park and Cambridge with the various encryption schemes utilised in this work. The ADVA 10-TCE encryption cards that were used for data transmission have two available models: one which supports encryption (10-TCE-AES, see Fig. 3), and one which does not (10-TCE). The resource limitations on encrypted links are therefore dependent on the hardware available.
Similarly, adding QKD to an encrypted link is limited by the available installed hardware; however, it may be possible to route traffic which does not strictly require encryption over free encrypted links. The delay introduced by the 10-TCE-AES is 15 µs (4 µs in the card, and 11 µs in the CFP module which applies Forward Error Correction); this figure is the same for both the 10-TCE and 10-TCE-AES (encrypted) cards. To demonstrate the ability of our orchestrator to create very diverse network slice requirements we added a further illustrative variation, namely between DH-AES and QRA-AES. However, in practice a network operator would likely select a network policy which always applies one, or both, of these techniques in addition to available QKD hardware. We view the QKD-AES encrypted links as offering the highest level of security, and note that in some implementations, since the main extra cost is for the QKD hardware, these may be implemented as QKD plus another method of key exchange in a single link. Central to this experiment is the use of SDN control and orchestration technologies. All of the network devices utilised within this demo have a YANG device model, and their configuration can be changed by issuing requests via a NETCONF interface. Network devices are registered with a Cisco Network Services Orchestrator (NSO) SDN Controller, and the orchestrator communicates with the SDN controller via a REST API. Each slice is broken down into three connections: cell site to core site (for control plane traffic), cell site to compute site, and compute site to core site. To achieve the required network flexibility, Layer 2 (L2) switches are used at each site. The optical switch at the metro site provides the necessary flexibility for allocation of the links with different security levels to different tasks. This approach allows a network operator to specify the properties of the new network slice required, through a portal or application programmable interface (API).
The entity providing this is a custom Network Slicing Orchestrator (see Fig. 5), which we have created and modified to include security requirements. The Network Slicing Orchestrator has the full end-to-end view of the network and understands the requirements for network slices, as well as performing the routing and resource allocation. For each connection in a slice, the required security level (non-encrypted, DH-AES, QKD-AES or QRA-AES) is specified, along with more traditional slice parameters such as bandwidth, latency and compute requirements. The portal interface and the slice requirement input screen showing the new security level options used in the experiment is shown in Fig. 4b, and the site selection interface is shown in Fig. 4a. Once the properties for a slice are submitted, the Network Slicing Orchestrator determines a suitable route through the network and checks whether sufficient network and compute resources are available, whilst also ensuring that the links selected meet the security requirements specified in the initial slice request. The NSO achieves this by allocating a security metric to each link, which is then used as part of the path computation element. The network operator can then submit their request for the slice to be activated, and the orchestrator then issues the configuration commands to the network devices. Results & Discussion We trialled two use-cases for 5G network slicing encryption. Two slice configurations, based on these use-cases, are shown in Fig. 6, and in the following subsections we discuss the network topology of each use-case separately before moving on to present further results. Use-Case 1: Enterprise App Use-Case 1 is an enterprise app hosted at the metro site. The enterprise app processes data coming from user equipment (UE) which is connected to the cell sites. The link from each cell site to the metro site is secured with post-quantum security via use of a QRA, a solution which scales well.
Premium QKD-AES encryption is selected for the link which passes aggregated data from the metro site to the mobile core node; this could be a prime target for a malicious eavesdropper, and therefore would benefit the most from the highest level of data security. Standard software-based key-exchange algorithms (Diffie-Hellman) are chosen as sufficient to protect the control plane, operating from the cell site to the core site, which is considered to require only short-lived security of encryption. Use-Case 2: CDN Use-Case 2 is a CDN, in which the delivery sites are placed close to the network edge, at aggregation sites, in order to reduce the load within the core of the network. The scenario is that sensitive data (such as pre-released video content or software packages) is delivered securely to the CDN, and an eavesdropper would place high value on retrieving this data ahead of the official release. The delivery of the content to the CDN is via an encrypted link based on QKD, while no encryption is provisioned between the aggregation node and the cell site, since after the data has been released it no longer needs to be protected. Again, we deploy standard DH-AES key exchange and encryption for the control plane traffic, from the cell site to the core site, as we did for Use-Case 1. Fig. 7 shows histograms to quantify the time taken to set up (provision) and tear down (deprovision) the network slices, in both use-cases. Fig. 7 shows that the distribution of times to set up and tear down each of the two slices is, in each use-case, between 1 and 2 minutes. This is a significant improvement, as it is orders of magnitude shorter than the time it takes to provision a link manually today, which is a benefit to telecommunications operators. In Use-Case 2 the slice takes longer to provision/deprovision as it has an additional network element to provision (namely, a metro node Ethernet switch), which is not needed in Use-Case 1.
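The orchestrator's security-aware routing described in Section 2 (a security metric on each link, consulted during path computation) can be sketched as a constrained shortest-path search. The topology, costs and numeric security ranks below are illustrative assumptions, not the NSO's actual data model:

```python
from heapq import heappush, heappop

# Illustrative links: (node_a, node_b, cost, security_rank)
# ranks: 0 = none, 1 = DH-AES, 2 = QRA-AES, 3 = QKD-AES (assumed encoding).
LINKS = [
    ("cell1", "agg", 1, 3),   # QKD-protected access link (illustrative)
    ("agg", "metro", 2, 3),   # QKD-AES protected 100G channel
    ("agg", "metro", 1, 0),   # parallel unencrypted channel, cheaper
    ("metro", "core", 2, 2),  # QRA-AES protected link
]

def route(src, dst, min_security):
    """Dijkstra over only those links whose rank meets the slice requirement."""
    adj = {}
    for a, b, cost, sec in LINKS:
        if sec >= min_security:
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heappush(heap, (cost + c, nxt, path + [nxt]))
    return None  # no path satisfies the security requirement

# Unencrypted traffic may take the cheaper parallel channel...
assert route("cell1", "metro", 0)[0] == 2
# ...while a slice requiring QKD-AES is steered onto the encrypted one.
assert route("cell1", "metro", 3)[0] == 3
```

Filtering links before the search, rather than penalising them, mirrors the paper's behaviour of rejecting a slice outright when no links of the required security level are available.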
Timing Each network configuration step is made in sequence (see the Wireshark trace, Fig. 8), allowing for efficient roll-back if there is a problem. This sequential build-up of the slice increases the time taken to set it up (there is no parallel allocation or configuration of resources), but since the network configuration is locked by the orchestrator, which only allows one change at a time, this approach would reduce race conditions and conflicts if this system were to be extended to support multiple simultaneous slice requests. Conclusion As highlighted throughout this work, there are use-cases within network slicing and 5G networks that would greatly benefit from flexible selection of network encryption. Two such use-cases we demonstrate in this work are metro-site-hosted enterprise apps and content delivery networks; however, there are many potential applications, such as CAV (connected and automated vehicle) communications, smart factories, connecting distributed research facilities with high-value intellectual property, and more. Moreover, the dynamic nature of this work also lends itself to applications with time-variable demand, such as setting up highly secure links for daily, or more frequent, back-up of data. For future-proof security, the secure link options will need to include quantum-safe methods such as NTRU (i.e. quantum-resistant algorithms) and QKD as demonstrated here, such that the customer, or network operator, is able to select the encryption level accordingly, based on the type of traffic. The security requirements of a 5G application can be included in the resource selection criteria of a 5G Network Slicing Orchestrator. This approach could help operators make maximum utilisation of premium security resources such as high speed, encrypted links and QKD.
Acknowledgements We gratefully acknowledge that the UKQNtel network was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) (EP/N015207/1, EP/M013472/1, EP/N509802/1). We also thank the UK Quantum Communications Hub and ADVA for invaluable support.

Figure 1: A generic schematic to illustrate network slicing, orchestrated by a network slice controller within an exemplar 5G network.
Figure 2: The network test-bed configuration for the implementation of 5G network slicing, with varying levels of security provided.
Figure 3: Exemplary network diagrams to show the connectivity from the Adastral Park and Cambridge network nodes, illustrating the use of various encryption methods: a) QKD-AES encryption, b) DH-AES or QRA-AES encryption. The wavelengths of the various channels are denoted as follows: λ_MGMT = management wavelength, λ_DISC = QKD discussion channel wavelength, λ_Q = quantum key transmission wavelength. OTN = optical transport network.
Figure 4: Two stages in the user journey for creating slices, highlighting the security specification: no encryption, DH-AES, QKD-AES or Post-Quantum (referred to in-text as "QRA-AES"). a) Selecting the sites, b) Configuring the security requirements for interconnections.
Figure 5: A representative view of how such a 5G network would be distributed: Cell Sites = Felixstowe and Woodbridge, Aggregation Node = Ipswich, Metro Node = Cambridge, Core Node = London.
Figure 6: Network configurations of a) Use-Case 1, representing an enterprise app hosted at a network metro site, and b) Use-Case 2, representing a content delivery network (CDN), hosted at the network aggregation site.
Figure 7: Histograms showing slice set-up times over 100 runs. a) Use-Case 1 provision times, and b) deprovision times, c) Use-Case 2 provision times, and d) deprovision times.
Figure 8: A Wireshark trace showing the complete provision of a network slice.

References
[1] A. Ksentini and P. A. Frangoudis, "Toward Slicing-Enabled Multi-Access Edge Computing in 5G", IEEE Netw., 34, 99-105 (2020).
[2] X. Foukas, G. Patounas, A. Elmokashfi and M. K. Marina, "Network Slicing in 5G: Survey and Challenges", IEEE Commun. Mag., 55, 94-100 (2017).
[3] P. Rost, C. Mannweiler, D. S. Michalopoulos, C. Sartori, V. Sciancalepore, N. Sastry, O. Holland, S. Tayade and B. Han, "Network Slicing to Enable Scalability and Flexibility in 5G Mobile Networks", IEEE Commun. Mag., 55, 72-79 (2017).
[4] NGMN Alliance, "NGMN 5G Initiative White Paper" (2015).
[5] F. Z. Yousaf, M. Bredel, S. Schaller and F. Schneider, "NFV and SDN - Key Technology Enablers for 5G Networks", IEEE J. on Sel. Areas in Commun., 35, 2468-2478 (2017).
[6] L. Chen, S. Jordan, Y.-K. Liu, D. Moody, R. Peralta, R. Perlner and D. Smith-Tone, "Report on post-quantum cryptography", Department of Commerce, National Institute of Standards and Technology (2016).
[7] A. K. Kumari, G. S. Sadasivam, S. S. Gowri, S. A. Akash and E. G. Radhika, "An approach for End-to-End (E2E) security of 5G applications", in IEEE 4th International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC) and IEEE International Conference on Intelligent Data and Security (Institute of Electrical and Electronics Engineers, 2018), pp. 133-138.
[8] S. Khan, J. Abdullah, N. Khan, A. A. Julahi and S. Tarmizi, "Quantum-Elliptic curve Cryptography for Multihop Communication in 5G Networks", International Journal of Computer Science and Network Security, 17, 357-365 (2017).
[9] R. Nejabati, R. Wang, A. Bravalheri, A. Muqaddas, N. Uniyal, T. Diallo, R. Tessinari, R. S. Guimaraes, S. Moazzeni, E. Hugues-Salas, G. T. Kanellos and D. Simeonidou, "First Demonstration of Quantum-Secured Inter-Domain 5G Service Orchestration and On-Demand NFC Chaining over Flexi-WDM Optical Networks", in Optical Fiber Communication Conference Post-deadline Papers (Optical Society of America, 2019), pp. Th4C-6.
[10] R. Wang, R. S. Tessinari, E. Hugues-Salas, A. Bravalheri, N. Uniyal, A. S. Muqaddas, R. S. Guimaraes, T. Diallo, S. Moazenni, Q. Wang, G. T. Kanellos, R. Nejabati and D. Simeonidou, "End-to-End Quantum Secured Inter-Domain 5G Service Orchestration Over Dynamically Switched Flex-Grid Optical Networks Enabled by a q-ROADM", J. of Lightw. Tech., 38, 139-149 (2019).
[11] R. S. Tessinari, A. Bravalheri, E. Hugues-Salas, R. Collins, D. Aktas, R. S. Guimaraes, O. Alia, J. Rarity, G. T. Kanellos, R. Nejabati and D. Simeonidou, "Field Trial of Dynamic DV-QKD Networking in the SDN Controlled Fully-Meshed Optical Metro Network of the Bristol City 5GUK Test Network", in European Conference on Optical Communication (Institute of Electrical and Electronics Engineers, 2019), pp. PD.3.6.
[12] A. Aguado, D. R. Lopez, V. Lopez, F. de la Iglesia, A. Pastor, M. Peev, W. Amaya, F. M., C. Abellan and V. Martin, "Quantum Technologies in Support for 5G services: Ordered Proof-of-Transit", in European Conference on Optical Communication (Institute of Electrical and Electronics Engineers, 2019), pp. P41.
[13] V. Martin, A. Aguado, P. Salas, A. L. Sanz, J. P. Brito, D. R. Lopez, V. Lopez, A. Pastor, J. Folgueira, H. H. Brunner, S. Bettelli, F. Fung, L. C. Comandar, D. Wang, A. Poppe and M. Peev, "The Madrid Quantum Network: A Quantum-Classical Integrated Infrastructure", in Photonics Networks and Devices (Optical Society of America, 2019), pp. QtW3E-5.
[14] V. Martin, A. Aguado, A. L. Sanz, J. P. Brito, P. Salas, D. R. Lopez, V. Lopez, A. Pastor-Perales, A. Poppe and M. Peev, "Quantum Aware SDN Nodes in the Madrid Quantum Network", in International Conference on Transparent Optical Networks (Institute of Electrical and Electronics Engineers, 2019), pp. 1-4.
[15] A. Aguado, V. Lopez, J. Pedro Brito, A. Pastor, D. R. Lopez and V. Martin, "Enabling Quantum Key Distribution Networks via Software-Defined Networking", in Optical Network Design and Modelling (Institute of Electrical and Electronics Engineers, 2020).
[16] C. White, A. Wonfor, A. Bahrami, J. Pearse, G. Duan, T. Edwards, A. Straw, T. Spiller, R. Penty and A. Lord, "Field Trial of Multi-Node, Coherent-One-Way Quantum Key Distribution with Encrypted 5x100G DWDM System", in European Conference on Optical Communications (Institute of Electrical and Electronics Engineers, 2019), pp. Th.1.A.1.
[17] M. Mosca and D. Stebila, "Post-quantum key exchange for the internet and the open quantum safe project", in International Conference on Selected Areas in Cryptography (Springer, Cham, 2016), pp. 14-37.
On anharmonicities of giant dipole excitations

D. T. de Paula (Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ, Brazil), T. Aumann (Gesellschaft für Schwerionenforschung (GSI), Planckstr. 1, D-64291 Darmstadt, Germany), L. F. Canto (Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ, Brazil), B. V. Carlson (Departamento de Física, Instituto Tecnológico de Aeronáutica - CTA, 12228-900 São José dos Campos, SP, Brazil), H. Emling (Gesellschaft für Schwerionenforschung (GSI), Planckstr. 1, D-64291 Darmstadt, Germany), M. S. Hussein (Instituto de Física, Universidade de São Paulo, C.P. 66318, 05389-970 São Paulo, SP, Brazil)
The role of anharmonic effects on the excitation of the double giant dipole resonance is investigated in a simple macroscopic model. Perturbation theory is used to find energies and wavefunctions of the anharmonic oscillator. The cross sections for the electromagnetic excitation of the one-and two-phonon giant dipole resonances in energetic heavy ion collisions are then evaluated through a semiclassical coupled-channel calculation. It is argued that the variations of the strength of the anharmonic potential should be combined with appropriate changes in the oscillator frequency, in order to keep the giant dipole resonance energy consistent with the experimental value. When this is taken into account, the effects of anharmonicities on the double giant dipole resonance excitation probabilities are small and cannot account for the well known discrepancy between theory and experiment.
10.1103/physrevc.64.064605
[ "https://export.arxiv.org/pdf/nucl-th/0202041v1.pdf" ]
119,412,871
nucl-th/0202041
d38c6937542e64c98381fb05b7058ca3c039c2ea
On anharmonicities of giant dipole excitations (arXiv:nucl-th/0202041v1, 13 Feb 2002)

The double giant dipole resonance (DGDR) has attracted considerable interest in the last decade. Several experiments to measure the DGDR cross section using relativistic heavy ion beams have been performed [1][2][3][4][5][6].
Comparison with the predictions of the harmonic oscillator model has clearly demonstrated a systematic discrepancy. The experimental values for the DGDR cross sections exceed the theoretical predictions by a considerable amount. One of the attempts to explain these differences was made by Bortignon and Dasso [7], using a macroscopic anharmonic oscillator model. These authors found that with a small anharmonic perturbation of the r^4-type one can reproduce both the experimentally observed DGDR excitation energy (which only marginally differs from that obtained in the harmonic approximation) and the DGDR cross section for the 208 Pb + 208 Pb collision at 640 A·MeV. They reached a similar conclusion for the 136 Xe + 208 Pb collision at 700 A·MeV, where a much greater discrepancy from the harmonic model appears [2]. The purpose of this paper is to point out that this model does not lead to the enhancement found in Ref. [7], if proper renormalization of the oscillator frequency is performed in order to guarantee that the theoretical giant dipole resonance (GDR) excitation energy is kept at the experimental value. The model of Refs. [7,?] is based on the following Hamiltonian H = H_0 + F(x, y, z; t) , (1) where H_0 is the anharmonic oscillator describing the intrinsic motion of the projectile, H_0 = (1/2D)(p_x^2 + p_y^2 + p_z^2) + (C/2)(x^2 + y^2 + z^2) + (B/4)(x^2 + y^2 + z^2)^2 , (2) where D is the mass parameter, C is the oscillator strength and B is the strength of the anharmonicity. Here, we take the mass parameter to be the reduced mass for the motion of the protons against the neutrons, D = (NZ/A) m_0, where m_0 is the average nucleon mass.
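The mass parameter defined above is easy to evaluate. The short sketch below is my own illustration (the function name and the numerical nucleon mass are assumptions, not part of the paper); it computes D = (NZ/A) m_0 for 208 Pb.

```python
# Mass parameter D = (N*Z/A) * m0 for the dipole oscillation of protons
# against neutrons, evaluated for 208Pb. Illustrative sketch only.

M0_C2 = 938.92  # MeV, average nucleon rest energy (assumed value)

def mass_parameter_c2(n_neutrons, n_protons):
    """Return D*c^2 in MeV for a nucleus with N neutrons and Z protons."""
    a = n_neutrons + n_protons
    return n_neutrons * n_protons / a * M0_C2

d_pb = mass_parameter_c2(126, 82)  # 208Pb: N = 126, Z = 82
print(f"D*c^2 for 208Pb = {d_pb:.0f} MeV")
```

For 208 Pb this gives D*c^2 of roughly 4.7 x 10^4 MeV, i.e. about fifty nucleon masses, reflecting the collective character of the dipole mode.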
The beam is assumed to be parallel to the x-axis and the coupling interaction F is derived from the Lienard-Wiechert potential [10] in the projectile frame φ(x, y, z, t) = Z_T e γ / [γ^2 (x − vt)^2 + (y − b)^2 + z^2]^{1/2} , (3) where Z_T e is the charge of the target, b is the impact parameter, and γ is the Lorentz factor, γ = 1/√(1 − (v/c)^2). To be specific, we study the 208 Pb + 208 Pb collision at 640 A·MeV. We first solve the Schrödinger equation for the intrinsic motion, described by H_0. For this purpose it is convenient to recast the intrinsic Hamiltonian into the following equivalent form H_0 = hω [ (π^2 + ρ^2)/2 + β ρ^4 ] . (4) In the above, the commonly used variable transformations ρ_i = √(Dω/h) r_i ; π_i = p_i / √(Dhω) (5) have been made, where r_i and p_i stand for the components of the position and momentum operators respectively. The oscillator frequency is given by hω = h √(C/D) , (6) and the dimensionless strength β is related to B as B = [4 (hω)^3 D^2 / h^4] β . (7) In Fig. 1, we show the ratios E^{l=0}_DGDR / (2 E_GDR) and E^{l=2}_DGDR / (2 E_GDR) as a function of B, in the same range as chosen in Ref. [7]. In this range, the anharmonicity can be treated using first order perturbation theory to great accuracy (∼ 2%). The GDR and DGDR energies, to first order in β, are given by E_GDR(β) = hω (1 + 5β) , (8) E^{l=0}_DGDR(β) = 2hω (1 + 7.5β) , (9) E^{l=2}_DGDR(β) = 2hω (1 + 6β) . (10) Fig. 1 is equivalent to that shown in Ref. [7] and our results are essentially identical to theirs. The reduced transition matrix elements can also be easily calculated to first order in the parameter β. We find ⟨GDR| E1 |GS⟩ = e (S_1/hω)^{1/2} (1 − 2.5β) , ⟨DGDR, l = 0| E1 |GDR⟩ = e (S_1/hω)^{1/2} √(2/3) (1 − 5β) , ⟨DGDR, l = 2| E1 |GDR⟩ = e (S_1/hω)^{1/2} √(10/3) (1 − 3.5β) , where e is the absolute value of the electron charge and S_1 is given by the energy-weighted sum rule, S_1 = (9/4π) (h^2/2m_0) (NZ/A).
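The first-order expressions above are simple enough to check numerically. The sketch below is an illustrative aid of my own (function names are assumptions, and this is not the paper's RELEX calculation); it evaluates the perturbative energies and confirms that the ratios E^l_DGDR / (2 E_GDR) do not depend on the oscillator frequency, only on β.

```python
# First-order perturbative energies of the GDR and DGDR in the
# anharmonic oscillator model (the Eqs. (8)-(10) quoted in the text).
# hbar_omega in MeV, beta dimensionless. Illustrative sketch only.

def e_gdr(hbar_omega, beta):
    return hbar_omega * (1.0 + 5.0 * beta)

def e_dgdr(hbar_omega, beta, l):
    if l == 0:
        return 2.0 * hbar_omega * (1.0 + 7.5 * beta)
    if l == 2:
        return 2.0 * hbar_omega * (1.0 + 6.0 * beta)
    raise ValueError("only l = 0 and l = 2 two-phonon states exist")

beta = -0.007  # roughly B ~ -100 MeV/fm^4 for 208Pb (see text)
for hw in (12.0, 13.4, 15.0):  # the ratio is independent of hbar*omega
    r0 = e_dgdr(hw, beta, 0) / (2.0 * e_gdr(hw, beta))
    r2 = e_dgdr(hw, beta, 2) / (2.0 * e_gdr(hw, beta))
    print(f"hw = {hw:5.1f} MeV: E(l=0)/2E_GDR = {r0:.4f}, E(l=2)/2E_GDR = {r2:.4f}")
```

The ratios reduce to (1 + 7.5β)/(1 + 5β) and (1 + 6β)/(1 + 5β), so for negative β they dip below one, which is the anharmonic shift plotted as a function of B in Fig. 1.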
The energy-weighted sum rules for transitions from the ground state and from the GDR are satisfied to first order in the parameter β, using the above energies and reduced matrix elements. In order to maintain E_GDR(β) at the experimental value, namely E_GDR(β) = E^exp_GDR (13.4 MeV in the present case), the oscillator frequency must be renormalized as β is changed. The resulting renormalized frequency, from Eq. (8), is hω(β) = E^exp_GDR / (1 + 5β) . (11) Note that in the B-range of Fig. 1, the dimensionless parameter varies in the range −0.014 < β < 0.014, which yields 1.08 E^exp_GDR > hω(β) > 0.93 E^exp_GDR. Whereas our oscillator frequency is a function of the anharmonicity parameter, in Ref. [7] it is kept constant at the harmonic value, hω(β = 0) = E^exp_GDR. This difference does not affect the ratio E^l_DGDR / (2 E_GDR) shown in Fig. 1, since the oscillator frequency cancels out in this case (see Eqs. (8) to (10)). When the renormalized frequency is used in both the GDR and DGDR energies and matrix elements, the sum rules for transitions from the ground state and from the GDR are still satisfied. However, use of the renormalized frequency substantially changes the excitation probability of the DGDR, as will be shown below. The calculation of electromagnetic excitation probabilities and cross sections is performed with the code RELEX [9], based on the Winther and Alder theory [10]. With this code, we perform a full coupled-channels calculation of the electromagnetic excitation of the GDR and DGDR. Similar results (about 10% larger) would be obtained when perturbation theory is used for the collision dynamics [11]. In Fig. 2, we show the enhancement of the DGDR excitation probability relative to its harmonic value as a function of B for the impact parameter b = 30 fm. We find that for B ∼ −100 MeV/fm^4 (which in this case corresponds to β ∼ −0.7 × 10^−2) the overall enhancement is 6%.
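The effect of the renormalization in Eq. (11) can be made concrete with a few lines of arithmetic. The sketch below is my own illustration, not the coupled-channels RELEX calculation; the variable names are assumptions. It compares the GDR energy and the ground-state E1 strength when hω is kept fixed versus renormalized.

```python
# Fixed vs. renormalized oscillator frequency, Eq. (11) of the text:
# hbar*omega(beta) = E_GDR^exp / (1 + 5*beta), keeping E_GDR at 13.4 MeV.
# Illustrative sketch only.

E_GDR_EXP = 13.4  # MeV, experimental GDR energy in 208Pb (see text)

def hw_renormalized(beta):
    return E_GDR_EXP / (1.0 + 5.0 * beta)  # Eq. (11)

def e_gdr(hw, beta):
    return hw * (1.0 + 5.0 * beta)  # Eq. (8)

def me_gs_to_gdr_sq(hw, beta):
    # |<GDR|E1|GS>|^2 in units of e^2 * S_1: (1 - 2.5*beta)^2 / hw
    return (1.0 - 2.5 * beta) ** 2 / hw

beta = -0.007  # roughly B ~ -100 MeV/fm^4
hw_fix = E_GDR_EXP            # Ref. [7]: frequency kept at the harmonic value
hw_ren = hw_renormalized(beta)

print(f"fixed hw : E_GDR = {e_gdr(hw_fix, beta):.2f} MeV (drifts off 13.4)")
print(f"renorm hw: E_GDR = {e_gdr(hw_ren, beta):.2f} MeV (stays at 13.4)")
ratio = me_gs_to_gdr_sq(hw_ren, beta) / me_gs_to_gdr_sq(hw_fix, beta)
print(f"B(E1, GS->GDR) renormalized/fixed = {ratio:.3f}")
```

With the frequency fixed, a negative β lowers the theoretical GDR energy below the measured 13.4 MeV; renormalizing hω restores it exactly, at the cost of rescaling the transition strength by the factor hw_fix/hw_ren = (1 + 5β).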
For purposes of comparison, we have also performed calculations using a constant frequency (hω = 13.4 MeV in this case). We then obtain an enhancement of 35%, as shown by the dashed line in Fig. 2, in agreement with Ref. [7] (see their Fig. 1). In Fig. 3a, we show the enhancement in the impact-parameter integrated DGDR cross section (solid line) vs. B, for the same system. In the cross section calculations, impact parameters up to 200 fm are taken into account and a lower cut-off at 15 fm is used to eliminate nuclear effects. The full line in Fig. 3a represents the result of the present work, in which an enhancement of only 4% is obtained for B = −100 MeV/fm^4. The dashed line, obtained using a fixed value of the oscillator frequency, yields an enhancement of the DGDR cross section of 22% for the same value of B. The GDR cross section ratio σ_GDR(B)/σ_GDR(B = 0) obtained with fixed GDR energy, shown as a solid line in Fig. 3b, is close to one over the entire range of B values but is slightly less than one for large, negative anharmonicities (about −0.5% at B = −100 MeV/fm^4). This small deviation is due to the increase in the population of the DGDR at these values of B and the corresponding depopulation of the GDR. The GDR cross section ratio obtained with fixed oscillator frequency is shown as a dashed line in Fig. 3b. In this case, we find the GDR cross section to be enhanced by about 10% at B = −100 MeV/fm^4. The enhancement of 10% in the GDR cross section of Fig. 3b is clearly responsible for the large enhancement of 22% in the DGDR cross section of Fig. 3a at B = −100 MeV/fm^4.

The above conclusions do not change noticeably when the calculations are extended to other systems, such as 136 Xe + 208 Pb at 700 A·MeV. The microscopic study of Ref. [12] established that the anharmonicity parameter scales as A^−1 with the mass number. Thus, if B = −100 MeV/fm^4 represents a reasonable value for 208 Pb, then for 136 Xe a corresponding value would be B = −150 MeV/fm^4. In Fig. 4, we display the results of calculations for this system as a function of the anharmonicity parameter B. The solid line in the figure again shows the results of calculations in which the oscillator frequency is varied to maintain the GDR energy constant, while the dashed line represents the results of calculations in which the oscillator frequency is maintained fixed. Similar to the previous case, we find the enhancement of the DGDR cross section to be greatly reduced when the GDR resonance energy is maintained at a fixed value. As can be seen in Fig. 4, at B = −150 MeV/fm^4, the DGDR cross section is enhanced by 62% when the oscillator frequency is maintained constant, but is enhanced by less than 10% when the GDR energy is maintained at its physical value.

Before ending we comment briefly on the connection between the Bortignon-Dasso model used in this paper and microscopic models [12][13][14][15] that aim to assess the importance of the anharmonic effects both on the spectrum and on the transition operator. Ref. [14] finds, within the Lipkin model, small effects on the spectrum (which scale roughly as 1/A). Hamamoto finds, within nuclear field theory, that the nonlinear effects in the 1-phonon to 2-phonon transition operator are also quite small and scale as 1/A [16]. As mentioned above, Ref. [12], through detailed microscopic calculations, finds that the anharmonic effects are indeed small and scale as 1/A. The values of the parameter B in both the Bortignon-Dasso and present calculations are taken to be small enough to be in line with the microscopic findings but also with the experimentally observed DGDR excitation energies (although the enhancement of the DGDR cross section could be increased through an artificially large B, there is no choice for this parameter that would simultaneously explain the observed cross section enhancement and the only very small deviations of the DGDR excitation energy from the harmonic limit).

Another interesting point to mention is that the GDR has a width, which is considered neither by Bortignon and Dasso nor in the present calculation. The effect of the width of the GDR on the excitation of the DGDR has been recently studied within a harmonic picture [17]. The overall effect of the width, at the energies considered here, is to produce a slight increase in the DGDR cross section, although not enough to explain all the available data. It would certainly be of interest to extend the present calculation within the anharmonic model by coupling the oscillator to other degrees of freedom (which would generate the damping width).

In conclusion, we have investigated the effect of anharmonicities in the excitation of the DGDR in relativistic heavy ion collisions, with the same macroscopic model used by Bortignon and Dasso [7]. We point out that variations of the anharmonicity strength must be accompanied by a renormalization of the oscillator frequency, in order to maintain the GDR energy at a value consistent with the experimental one. We have found that this condition strongly reduces the enhancement in the DGDR excitation probabilities and corresponding cross sections, so that they remain much below the experimental results.

This work was supported in part by DAAD/CAPES cooperative agreement no. 415-bra-probral/bu, CNPq and the MCT/FINEP/CNPq (PRONEX) under contract no. 41.96.0886.00. D.T.P and L.F.C. acknowledge partial support from the Fundação Universitária José Bonifácio, and M.S.H. and B.V.C. acknowledge support from the FAPESP.

[17] C.A. Bertulani and V.Yu. Ponomarev, Phys. Rep. 321, 139 (1999).

Figure Captions

• Figure 1: The ratio E^l_DGDR / (2 E_GDR) vs the anharmonicity parameter B, for 208 Pb. The solid line is for l = 2 and the dashed line for l = 0.
The reduced mass for the oscillation of protons against neutrons is used for the mass parameter D.

• Figure 2: The enhancement in the excitation of the DGDR in the collision of 208 Pb + 208 Pb at 640 A·MeV for the impact parameter b = 30 fm. The solid line represents the results of the present calculation while the dashed line corresponds to a constant oscillator frequency.

• Figure 3: Enhancement factor of the (a) DGDR and (b) GDR cross sections in the collision of 208 Pb + 208 Pb at 640 A·MeV. The dashed lines correspond to the results obtained with fixed oscillator frequency, while the full lines correspond to a fixed E_GDR.

• Figure 4: Enhancement factor of the DGDR cross section in the collision of 136 Xe + 208 Pb at 700 A·MeV. The dashed line corresponds to the results obtained with fixed oscillator frequency, while the full line corresponds to a fixed E_GDR.

[1] See e.g. H. Emling, Prog. Part. Nucl. Phys. 33, 729 (1994); Ph. Chomaz and N. Frascaria, Phys. Rep. 252, 275 (1995); T. Aumann, P.F. Bortignon, and H. Emling, Ann. Rev. Nucl. Part. Sci. 48, 351 (1998).
[2] R. Schmidt et al., Phys. Rev. Lett. 70, 1767 (1993).
[3] T. Aumann et al., Phys. Rev. C47, 1728 (1993).
[4] J.L. Ritman et al., Phys. Rev. Lett. 70, 533 (1993); 70, 2659(E) (1993).
[5] J.R. Beene, Nucl. Phys. A569, 163c (1993).
[6] K. Boretzky et al., Phys. Lett. B 384, 30 (1996).
[7] P.F. Bortignon and C.H. Dasso, Phys. Rev. C56, 574 (1997).
[8] C. Volpe, F. Catara, Ph. Chomaz, M.V. Andrés, and E.G. Lanza, Nucl. Phys. A589, 521 (1995).
[9] C.A. Bertulani, Comp. Phys. Comm. 116, 345 (1999).
[10] A. Winther and K. Alder, Nucl. Phys. A319, 518 (1979).
[11] C.A. Bertulani, L.F. Canto, M.S. Hussein and A.F.R. de Toledo Piza, Phys. Rev. C53, 334 (1996).
[12] V.Yu. Ponomarev, P.F. Bortignon, R.A. Broglia, and V.V. Voronov, Phys. Rev. Lett. 85, 1400 (2000).
[13] F. Catara, Ph. Chomaz and N. Van Giai, Phys. Lett. B 233, 6 (1989).
[14] G.F. Bertsch and H. Feldmeier, Phys. Rev. C56, 839 (1997).
[15] G.F. Bertsch, P.F. Bortignon and K. Hagino, Nucl. Phys. A657, 59 (1999).
[16] I. Hamamoto, Phys. Rev. C60, 054320 (1999).
[]
[ "Enacting Musical Worlds: Common Approaches to using NIMEs within Performance and Person-Centred Arts Practices", "Enacting Musical Worlds: Common Approaches to using NIMEs within Performance and Person-Centred Arts Practices" ]
[ "Lauren Hayes [email protected] \nArts, Media + Engineering\nArizona State University Tempe\n85287AZ\n" ]
[ "Arts, Media + Engineering\nArizona State University Tempe\n85287AZ" ]
[]
Live music making can be understood as an enactive process, whereby musical experiences are created through human action. This suggests that musical worlds coevolve with their agents through repeated sensorimotor interactions with the environment (where the music is being created), and at the same time cannot be separated from their sociocultural contexts. This paper investigates this claim by exploring ways in which technology, physiology, and context are bound up within two different musical scenarios: live electronic musical performance; and person-centred arts applications of NIMEs.In this paper I outline an ethnographic and phenomenological enquiry into my experiences as both a performer of live electronic and electro-instrumental music, as well as my extensive background in working with new technologies in various therapeutic and person-centred artistic situations. This is in order to explore the sociocultural and technological contexts in which these activities take place. I propose that by understanding creative musical participation as a highly contextualised practice, we may discover that the greatest impact of rapidly developing technological resources is their ability to afford richly diverse, personalised, and embodied forms of music making. I argue that this is applicable over a wide range of musical communities.
10.5281/zenodo.1179082
[ "https://arxiv.org/pdf/2012.00927v1.pdf" ]
5,906,131
2012.00927
c2d57b116383dcb1fbc31aeaaf3c8332907689db
Enacting Musical Worlds: Common Approaches to using NIMEs within Performance and Person-Centred Arts Practices Lauren Hayes [email protected] Arts, Media + Engineering Arizona State University Tempe 85287AZ Enacting Musical Worlds: Common Approaches to using NIMEs within Performance and Person-Centred Arts Practices Author Keywords Enactionperson-centred arts practiceperformance prac- ticesociocultural contexts ACM Classification H55 [Information Interfaces and Presentation] Sound and Music Computing-Methodologies and techniquesJ5 [Arts and Humanities] Music Live music making can be understood as an enactive process, whereby musical experiences are created through human action. This suggests that musical worlds coevolve with their agents through repeated sensorimotor interactions with the environment (where the music is being created), and at the same time cannot be separated from their sociocultural contexts. This paper investigates this claim by exploring ways in which technology, physiology, and context are bound up within two different musical scenarios: live electronic musical performance; and person-centred arts applications of NIMEs.In this paper I outline an ethnographic and phenomenological enquiry into my experiences as both a performer of live electronic and electro-instrumental music, as well as my extensive background in working with new technologies in various therapeutic and person-centred artistic situations. This is in order to explore the sociocultural and technological contexts in which these activities take place. I propose that by understanding creative musical participation as a highly contextualised practice, we may discover that the greatest impact of rapidly developing technological resources is their ability to afford richly diverse, personalised, and embodied forms of music making. I argue that this is applicable over a wide range of musical communities. 
NIME'15, May 31-June 3, 2015, Louisiana State Univ., Baton Rouge, LA. Copyright remains with the author(s). INTRODUCTION Christopher Small's concept of musicking firmly places participation at the centre of what it is to music. To take part in a musical activity - which includes sweeping the stage before a concert, selling tickets, in addition to accepted musical practices such as composing or performing - entails the forging of various relationships. Small argues that it is through these relationships, which may exist between people, sounds, and spaces, that meaning is constructed. In what follows I provide two contrasting accounts of NIME development. Through these, I explore how the relationships that Small outlines are forged over time through the lens of practice-led and ethnographic research. I offer examples which lie within two often unrelated areas: the Western experimental and electronic music communities; and the world of person-centred arts practices for people with complex disabilities. This is in order to illustrate how such an understanding of NIME development within one context may inform work in another. This may also illuminate parallels between what could be perceived as unrelated musical practices. By viewing musical engagement as an evolving and embodied process, which supports Small's definition of music as human action, it can be demonstrated that the relevance of technological developments in the field of live electronic and digital musical practice lies not necessarily within the material aspects per se. 
But rather, an important consequence is the potential for individualised practices to emerge, where each performer enacts a unique musical environment in coordination with their physiological, cultural, social, and musical histories. I will suggest that by viewing NIME-related practices in this way, we are afforded the opportunity to view musical activity in general as a "medium of social relation" [3] in various contexts. Perceptually Guided Action The idea of performance as perceptually guided action [8] suggests the importance of a multimodal approach to developing digital musical instruments (DMIs) or NIMEs. Moreover, this can inform our understanding of musical participation in general. Not only as listeners, but also as performers we are continuously making use of multiple streams of sensory feedback as we make our way through a performance: auditory, haptic, kinaesthetic, and visual. This draws on Francisco Varela, Evan Thompson and Eleanor Rosch's theory of enaction [11] as a way of understanding the importance of the role of the body within - specifically - live electronic musical performance. An enactive understanding focuses on the idea of structural coupling between agent and environment through repeated sensorimotor interactions [11]. Both the perceptive capacities of various organisms as well as the environment itself emerge through reciprocal coordination and coevolution. In biological terms, for example, this phenomenon is "responsible for both the ultraviolet vision of bees and the ultraviolet reflectance patterns of flowers" [11]. Through phenomenological enquiry we start to see how musical worlds may evolve in a similar manner. The concept of enaction extends Maurice Merleau-Ponty's work on phenomenology, which posits the body as both the perceiving object, and at the same time, the subject of perception [9]. 
Merleau-Ponty illustrates the body's capacity for this duplicity of sensation through an example of the hands, which oscillate between touching and being touched. The enactive approach emphasises the mutuality between agent and environment. Similarly, musical works can emerge out of the relationships that develop over time between a specific combination of people, instruments, and space. This does not apply only to the immediacy of musical performance. This framework can also be used to understand the durational development of NIMEs, where an instrument may be iterated through a series of incremental adjustments informed by the experience of their use within different scenarios. Small suggests that when we rehearse and perform, we are exploring not only the sonic relationships that articulate how we ideally believe sounds should be organised, but also the relationships between sound and instruments; the relationship between the performers and the audience; the relationship between those taking part and the physical setting; and so on [1]. Ethnography and Creative Practice There has been a growing number of calls from the NIME community to acknowledge the importance of both ethnographic (see [2]) and practice-led (see [5]) research. These methodologies allow for a discussion of the complex relationships between the sociocultural contexts in which technical developments in various NIME-related fields are being made. Both creative practice and ethnographic approaches provide space for exploring how NIME-related research unfolds over time in the real world. The two case studies that I describe each offer accounts of highly personalised NIME development. In exploring these situations, it will become apparent that any attempt to optimise the instruments discussed for the wider community would be largely redundant as they are evolved through the physiologies and aesthetic choices of the specific musicians involved. 
Evaluation of these practices through objective testing would be fruitless. The methodologies employed allow the experiences of the users to be shared through reflective observation and discussion. Nevertheless, by offering this insight, we can start to find implications for engagement with DMIs in general. TWO CASE STUDIES In this section I discuss two case studies where I have developed NIMEs in what initially appears to be unrelated contexts: my own live electronic performance practice; and person-centred collaborative arts practice. In each case I examine the role of the physiology of the musician, their musical aesthetics and histories, the contexts in which the musical engagement takes place, and how this is bound up with the various technologies employed. Personal Performance Practice Background Over the last eight years, I have explored an approach to personal DMI design that focuses around the relationships between sound and touch. This explores the double aspects of Merleau-Ponty's notion of embodiment through, on one hand, themes of resistance and haptic technology [7], but also the perception of sound as vibration, through vibrotactile technology [6]. While the benefits of using haptic technology for improving certain aspects of instrumental skill acquisition are well documented [10], research in this area tends to be focused around technical development. My own research has attempted to provide an in-depth, practicebased perspective in this field. The Physiology of the Performer It is perhaps not surprising that my training as a classical pianist, which began formally at the age of four, has led to an exploration of musical HCI that is largely focused around the expressive capacities of the fingers. 
While I may have been drawn to the piano simply due to its ubiquity as a traditional Western instrument, through repeated engagement with the instrument from this young age, by way of lessons, exercises, and the sort of experimentation that I much later learned was called improvisation, I enacted my musical environment based around a very specific type of tactility. I learned to make use of both the vibrotactile feedback of the resonating body of the piano, as well as the particular resistances that it offered me as a physical instrument. When much later I started to perform with computers, the disconnect between sound and touch left me unfulfilled as a musician. Performance gestures contained none of the effort, struggle, or physicality that I was accustomed to making use of. This led me to question what it was that I was missing in my experience as a performer now engaging with digital technology, in order to adequately communicate a musical idea. How could I translate an intention into an expressive and articulated sonic result? It has been through my own personal history of musical performance that I have been prompted to examine the relationship between sound and touch. Experiencing what Simon Emmerson describes as "increasingly alienated from purely physical sound production" [4], urged me to explore more deeply the links between action and perception, specifically for the performer. This research has been extensively documented elsewhere, and has included incorporating haptic technologies [7,8], and vibrotactile feedback [6] into my instrument design. Rather than reiterate the technicalities of this work, it is important to note that my evolved means of musical expression has been closely tied to my physiology over a long period of time. As I approached NIME development, this relationship between body and instrument was key to informing my design choices in that they were deeply rooted within an exploration of physicality and touch. 
Sociocultural Context My performance practice using NIMEs has been largely situated under the umbrella of Western art music. Being based within universities has provided me with access to expertise in both software and hardware development, situated me within a community of potential mentors and collaborators, and offered me many opportunities to present and discuss my work. Working in and between genres such as contemporary classical, free improvisation, experimental beat-based music, and noise, has allowed my various performance environments to be explored within a broad range of scenarios. For example, at an academic conference, NIMEs used for performance must be extremely reliable and stable, with the ability to be set up quickly as there is often little time for this between performances. Within improvisation scenarios, the NIME must be flexible and adaptable. It must be able to give space to co-performers, yet possess a voice of its own. At a noise gig, my digital/laptop-based instruments must be able to hold their own in terms of sonic depth against analogue counterparts from myself or collaborators. Objectives for NIME Development While a large part of my practice has been based around the hybrid piano, formed around haptically and digitally augmented acoustic pianos [8], I have also performed extensively using a variety of hybrid (analogue and digital) electronic systems. These are assemblages of various components, including analogue synthesisers, hardware drum machines, various MIDI and game controllers, foot pedals, and bespoke software built using Max/MSP. Figure 1: Networked interactions. In working with hardware that does not offer the same rich physicality of the hybrid piano, I had to develop ways of introducing this tactile engagement to my performance environments. 
By interacting with all elements of a particular set up through a single game controller, I was able to simultaneously touch and engage with different parts of the instrument, bringing a sense of immediacy into my hands. For example, in one configuration, the game controller would trigger very short segments of sounds, which were in turn analysed by the software. This would send both MIDI information out via the sound card to trigger drum machine synthesis, as well as sending multiple control voltages out to an analogue synthesizer. At the same time, the audio output from these two external devices was sent back into the laptop. This would then be sampled and processed in Max/MSP, where several parameters were affected by my own interaction with the game controller. In this way I was able to access several parts of my performance system at once, bypassing some of the given-and to me, undesirable-control interfaces, such as the knobs of my Korg MS20 analogue synthesizer, or the buttons of the Elektron MachineDrum. While this approach involves initial one-to-many mapping choices, the overall result is a network of interdependent processes, which feed into each other. The resistances in my performance environments often lie within the extreme potential for activity through interconnections within the audio signal path, which must be negotiated by the performer, often through holding a static position for extended periods of time. The game controller, for example, is so easy to manipulate, that the musicality comes from resisting this by holding both thumbs fixed on the joysticks, which requires a great deal of pressure from the hands, and creates a tension in the body: a movement of even one millimetre can drastically alter the sound. Person-Centered Arts Practice Background Since 2006 I have worked in various music therapy-related and person-centred arts practice roles in the UK. 
These have included performing classical piano concerts in day care centres for adults with learning difficulties, as well as running several series of workshops for people with complex disabilities. In these workshops I employ a variety of traditional instruments along with numerous NIMEs, and I focus on the tangible experience of playing and perceiving sound. In 2012 I was asked to be involved with the Artlink 1 Ideas Team, established by the Edinburgh-based arts charity in 2010. This team, consisting of various local and international artists and members of Artlink staff, works with an individual together with psychologists, care workers and family members. The goal is to establish new ways of thinking about and making artworks by exploring creative responses to the daily experiences of people with profound learning disabilities. I was asked to work alongside artist Steve Hollingsworth to develop an instrument for a young autistic man (M), who was intensely drawn to piano performance 2 . The Physiology of the Performer "The nature from which man has selected his musical styles is not only external to him; it includes his own nature - his psychophysical capacities and the ways in which these have been structured by his experiences of interaction with people and things, which are part of the adaptive process of maturation in culture" [1]. Before any part of the design or build commenced, it was crucial to spend time with M at the piano to observe his engagement with the instrument. I visited him weekly to hear him play acoustic pianos as well as my digitally-augmented instrument, the hybrid piano [8]. Several important observations were made over the weeks that I visited him. Aside from a piano-based instrument being selected due to M's enthusiasm for it, his often overpowering strength meant that keyboards or MIDI instruments were ruled out as they were neither sturdy nor durable enough. 
There was a marked difference in M's playing when sitting at the piano alone, compared to when we played together. When playing alone, M would hammer the keys for several minutes at a time with much force. When playing together, he would pause to make eye contact, to listen, and to respond. He would mimic patterns that my fingers made on the keys, often loosely, and sometimes with accuracy. This posed the first question as to how I could achieve this sense of engagement in M's new instrument without me, or another musician, being physically present. Another important factor was that M could become so enthralled in playing the piano that he would often become over-excited, and begin to sweat and hyperventilate. This posed a further problem as to how we could create something that could be enjoyable without being over-stimulating.

Sociocultural Context

M's fondness for the piano did not stem from a particular interest in classical or romantic music, or from a childhood that involved taking piano lessons. The piano for M was a direct means of expression: tangible and immediate. This unique connection between player and instrument allowed an approach to developing the NIME that could view M's aesthetic choices as based around sounds that were enjoyed by him in his everyday life. Sound libraries were created which contained samples of his mother's voice, the sound of his dog barking, as well as car sounds, and sounds from car racing television shows which he enjoyed. Additionally, I was able to observe during our time together in workshops which types of sounds from the hybrid piano seemed to engage M, and which he seemed to dislike. As M had been improvising with me over several months, I sampled some of my own piano playing as source material.

1 http://www.artlinkedinburgh.co.uk/
2 http://issuu.com/artlinkedinburgh/docs/artlink201112

Objectives for NIME Development

There were several practical design choices that we had to consider from the outset.
As mentioned above, the instrument had to be stable. Furthermore, M was attracted to wires, and would grab at any that were visible, so everything had to be hidden and enclosed. A button interface was proposed, where buttons would be secured onto the front of the piano. These were combined with LED lights and vibration motors which offered direct sensory feedback to M to confirm that he had pressed the buttons. The buttons would change the sample sets, and stop and start the sounds. One of the first goals, raised by M's mother in a meeting, was the idea for M to put a "square peg in a square hole" 3. That is to say, the instrument should be able to demonstrate some degree of agency and intentionality from M. Furthermore, we envisaged that this would develop over time as M grew familiar with the instrument. As such, I had to ensure that the software could be updated when necessary so that it would remain challenging yet engaging for M. The buttons began as simple start/stop switches, but could be modified to react differently if M started to exhibit choice in his actions. The project is ongoing and the Ideas Team continues to learn from M.

CONCLUSION

I have described two examples of NIME development in differing contexts. Each situation uses musical performance as a means of enabling creativity, expression, communication, and also personal development. The commonalities between the two situations are clear. In each case, the development of NIMEs has evolved through close attention to the users' physiologies and sensorimotor capacities; their musical histories; and the sociocultural contexts in which the musical engagement takes place.

ACKNOWLEDGMENTS

Thanks to Alison Stirling and Kara Christine, along with Steve Hollingsworth, for giving me the opportunity to work with them on the Artlink Ideas Team.

Figure 2: Detail of M's piano.
3 Personal email correspondence with Alison Stirling

It is evident that the type of physical, tactile engagement that I seek within live electronic musical performance has arisen out of my perceptive capacities and experience with the touch-based expressivity of the acoustic piano. However, there is now a generation of musicians who may not experience this loss of touch, since their initial engagement with music may come through digital devices such as iPads rather than acoustic instruments. The model of structural coupling between musician and instrument that I describe suggests a need for individualised systems that arise out of the specific interactions of each person in the world over time. For those with complex disabilities, an acknowledgement of the uniqueness of their experiences and responses is required before progress can be made. If we view music as an enactive process, where new technologies lead to more engaging, embodied relationships between people and instruments, both the social as well as the sonic relationships that we wish to explore, affirm and celebrate [1] can be realised. The two examples in this paper serve to highlight the need for more in-depth collaborative research that will combine creative practice and ethnography with DMI design in order to provide a better understanding of the experiences and benefits of using new technologies within musical contexts.

REFERENCES

[1] J. Blacking. How Musical Is Man? University of Washington Press, Seattle and London, 1973.
[2] G. Booth and M. Gurevich. Collaborative composition and socially constructed instruments: Ensemble laptop performance through the lens of ethnography. In Proceedings of the 2012 Conference on New Interfaces for Musical Expression (NIME), Ann Arbor, Michigan, 2012.
[3] T. DeNora. Music in Everyday Life. Cambridge University Press, Cambridge, 2003.
[4] S. Emmerson. 'Losing Touch?': The Human Performer and Electronics. In S. Emmerson, editor, Music, Electronic Media and Culture, pages 194-216. Ashgate, Aldershot, 2000.
[5] O. Green. NIME, musicality and practice-led methods. In Proceedings of the 2014 Conference on New Interfaces for Musical Expression (NIME), London, 2014.
[6] L. Hayes. Vibrotactile feedback-assisted performance. In Proceedings of the 2011 Conference on New Interfaces for Musical Expression (NIME), Oslo, 2011.
[7] L. Hayes. Performing articulation and expression through a haptic interface. In Proceedings of the 2012 International Computer Music Conference, Ljubljana, 2012.
[8] L. Hayes. Haptic augmentation of the hybrid piano. Contemporary Music Review, 32:5. Taylor and Francis, 2013.
[9] M. Merleau-Ponty. Phenomenology of Perception (C. Smith, Trans.). Routledge and Kegan Paul, 1962.
[10] S. O'Modhrain. Playing by Feel: Incorporating Haptic Feedback into Computer-Based Musical Instruments. PhD thesis, Stanford University, CA, 2001.
[11] F. J. Varela, E. Thompson, and E. Rosch. The Embodied Mind. MIT Press, Cambridge, 1991.
[]
[ "CHEMICAL POTENTIAL OF A HADRONIC FIREBALL IN THE FREEZE-OUT STAGE", "CHEMICAL POTENTIAL OF A HADRONIC FIREBALL IN THE FREEZE-OUT STAGE" ]
[ "Yaroslav D Krivenko-Emetov [email protected] \nNational Technical University of Ukraine\n03056Kiev\n", "Andriy I Smetana \nNational Technical University of Ukraine\n03056Kiev\n" ]
[ "National Technical University of Ukraine\n03056Kiev", "National Technical University of Ukraine\n03056Kiev" ]
[]
This article explores the van der Waals gas model proposed to describe the hadronic stages of nuclear fireball evolution during the cooling stage. Two different models were proposed for the early and late stages of hadronization. At the initial stage, a two-component meson model consisting of π 0 and π + mesons was suggested, and at the later stage, a two-component nucleon model consisting of protons and neutrons was proposed. The interaction potential for both models was represented by a rectangular well, and the statistical sum was calculated using the saddle-point method. The analytic expressions for pressure and chemical potentials obtained from the model were compared with the corresponding numerical results of other authors obtained earlier using quantum chromodynamics (QCD) methods. The possibility of applying and using the effective chemical potential is also analyzed.
null
[ "https://export.arxiv.org/pdf/2305.09976v1.pdf" ]
258,741,000
2305.09976
ebe484076b6bcbee289649285b77723780926ba0
CHEMICAL POTENTIAL OF A HADRONIC FIREBALL IN THE FREEZE-OUT STAGE

17 May 2023

Yaroslav D. Krivenko-Emetov ([email protected]), National Technical University of Ukraine, 03056 Kiev
Andriy I. Smetana, National Technical University of Ukraine, 03056 Kiev

This article explores the van der Waals gas model proposed to describe the hadronic stages of nuclear fireball evolution during the cooling stage. Two different models were proposed for the early and late stages of hadronization. At the initial stage, a two-component meson model consisting of π0 and π+ mesons was suggested, and at the later stage, a two-component nucleon model consisting of protons and neutrons was proposed. The interaction potential for both models was represented by a rectangular well, and the statistical sum was calculated using the saddle-point method. The analytic expressions for pressure and chemical potentials obtained from the model were compared with the corresponding numerical results of other authors obtained earlier using quantum chromodynamics (QCD) methods. The possibility of applying and using the effective chemical potential is also analyzed.

Introduction

Experiments that observe an elliptical flow in non-central collisions of heavy nuclei at high energies indicate that a state of quark-gluon plasma appears and thermalization occurs. This is due to the fact that particles collide with each other more than once. The substance in this state can be characterized by thermodynamic quantities such as temperature, viscosity, and density. After the quark-gluon plasma cools, a hadron gas is formed, which can be described in terms of statistical models of the hadron gas, such as the van der Waals (vdW) model [1]-[10]. The vdW model is especially useful as it takes into account the repulsion effect that prevents high density at high temperatures.
The Grand Canonical Ensemble (GCE) is a suitable mathematical formalism for the phenomena observed in heavy-ion collisions, as the number of particles is not fixed. The vdW model proposed in [11] introduces the phenomenological hard-core radii R_ii and R_ij, which significantly change the yields N_i of particles of different types. Various authors have proposed developments of the vdW model to describe more subtle effects in the dependence of the hadronic gas pressure on density (e.g. [12], [13]). In a multicomponent gas, the attraction parameter a transforms into parameters a_ij, and the repulsion parameter b transforms into parameters b_ij. The effective potential parameters depend on the effective radii of repulsion R^0_i and attraction R_i. However, the vdW model cannot be properly developed when considering a finite nuclear system. For nuclear collisions, a nuclear fireball with dimensions <r> ~ 7-100 fm is observed. In this case, the GCE formalism leads to a double sum, which can be transformed into a double integral and evaluated by the saddle-point method; for a two-component system the problem has been solved in the same way, with the double sum transformed into a multidimensional integral. A model presented in [14] was believed to be applicable for collisions of heavy nuclei at CERN, with the assumption that the characteristic temperatures do not exceed the temperatures at which new particles can be generated. The temperatures of the nuclear fireball are around T < 130 MeV, and the model should have a transparent nonrelativistic limit while respecting the law of conservation of the total number of nucleons without the generation of new particles. In Fig.
1, the different stages of a nuclear fireball's evolution are presented: the initial state of two touching ultrarelativistic nuclei, followed by a hot and superdense nuclear system, the quark-gluon phase, hadronization and chemical freeze-out, and finally kinetic freeze-out. A more detailed and comprehensive explanation of the mathematical model proposed in [14] is provided in the article, which includes an evaluation of some finer effects, such as additional corrections for pressure, density, and root-mean-square (RMS) fluctuations.

Figure 1: The successive stages of the evolution of a nuclear fireball [15]

Figure 2: Schematic representation of the chemical potential and the forms of the equations of state in the crossover, critical region and first-order region, respectively, from left to right

For temperatures above the production threshold, a new two-component meson model [16], [17] is proposed, where the number of mesons is not conserved when T > 135 MeV. In conclusion, the study of the quark-gluon plasma is an active field of research that involves many theoretical and experimental efforts. The understanding of its properties is essential for understanding the early universe, the behavior of matter under extreme conditions, and the dynamics of heavy-ion collisions. In the work [18], based on quantum chromodynamics (QCD) calculations, it was obtained that if one approaches the critical point along the first-order phase boundary, the corresponding shape of the potential looks like that of a second-order phase transition around a non-zero order parameter, as shown in Fig. 2.

I. ONE-COMPONENT VDW GAS

Based on Fig. 1, the lifetime of a nuclear fireball (t > 10^-21 s) is estimated to be much longer than the characteristic nuclear interaction time of t ~ 10^-23 - 10^-24 s. The relaxation time of the system is estimated to be of the order of τ ~ 10^-20 - 10^-22 s for small local fireball volumes.
As a result, it can be assumed that a local statistical equilibrium is established in the subsystem at each moment of time exceeding the relaxation time. This implies that the local fireball region is quasi-stationary, and statistical physics methods can be used to describe it. Since all thermodynamic potentials, entropy, and volume are extensive, the potentials of the entire system can be defined as the sum of the corresponding thermodynamic potentials of quasiclosed subsystems. This approach is based on the assumption that at each moment of time, a standard representation of the partition function of a rarefied quasi-ideal van der Waals gas in the canonical ensemble can be provided for such quasi-closed subsystems. In the approximation of pair interaction, and when the condition B(T)N/V ≪ 1 is satisfied, this quantity has the form [19]:

Z(V, T, N) = (1/N!) φ(T, m)^N (V − B(T)N)^N,   (1)

where N and m are, respectively, the number and mass of particles, and V and T are the volume and temperature of the gas. Formula (1) uses the notation [11] (ħ = c = k_B = 1):

φ(m, T) = (1/2π²) ∫₀^∞ p² exp(−√(m² + p²)/T) dp = (m²T/2π²) K₂(m/T),   (2)

where K₂(z) is the modified Bessel function, and the second virial coefficient in (1) has the form

B(T) = (1/2) ∫₀^∞ (1 − exp(−U/T)) dV,   (3)

which includes the pairwise interaction of particles, U = U_ij (i ≠ j). In the non-relativistic limit m ≫ T one easily obtains, given the asymptotics of the Bessel function,

φ(m, T) ∼ (mT/2π)^{3/2} exp(−m/T).

This formula further leads to the effect of exponential suppression of the yields of particles with large mass, which is important in the study of quark-gluon plasma. The pressure follows as

P(V, T, N) = T ∂/∂V ln[Z(V, T, N)] = TN/(V − B(T)N).   (4)

Note that if the Stirling formula N! ≈ √(2πN)(N/e)^N is used for the factorial in the partition function, the final pressure formula (4) will not change.
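The equality of the two forms of φ(m, T) in Eq. (2) can be checked numerically. Below is a minimal sketch (not from the paper) in natural units ħ = c = k_B = 1, with illustrative nucleon-scale numbers m = 938 MeV and T = 50 MeV; K₂ is evaluated from the standard integral representation K₂(z) = ∫₀^∞ e^{−z cosh t} cosh(2t) dt, and the vdW pressure of Eq. (4) is included for completeness:

```python
import math

def trapz(f, a, b, n):
    """Composite trapezoidal rule for a smooth integrand on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def phi_momentum(m, T):
    """phi(m, T) via the momentum integral in Eq. (2)."""
    f = lambda p: p * p * math.exp(-math.hypot(m, p) / T)
    # the integrand is negligible far beyond p ~ sqrt(2*m*T)
    return trapz(f, 0.0, 30.0 * math.sqrt(m * T), 40000) / (2.0 * math.pi ** 2)

def K2(z):
    """Modified Bessel function K_2 from its integral representation."""
    f = lambda t: math.exp(-z * math.cosh(t)) * math.cosh(2.0 * t)
    return trapz(f, 0.0, 8.0, 40000)

def phi_bessel(m, T):
    """phi(m, T) via the closed form (m^2 T / 2 pi^2) K_2(m/T) in Eq. (2)."""
    return m * m * T / (2.0 * math.pi ** 2) * K2(m / T)

def pressure_vdw(T, N, V, B):
    """Eq. (4): P = T N / (V - B(T) N)."""
    return T * N / (V - B * N)

m, T = 938.0, 50.0          # MeV (illustrative)
p1, p2 = phi_momentum(m, T), phi_bessel(m, T)
print(p1, p2)               # the two forms of Eq. (2) should agree closely
# the m >> T asymptote (mT/2pi)^{3/2} exp(-m/T) holds up to O(T/m) corrections
```

For these numbers m/T ≈ 19, so the non-relativistic asymptote already captures φ up to corrections of order T/m (roughly ten percent here), illustrating the exponential mass suppression mentioned above.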
The model [16], [17] used for calculations of subsystems involves applying methods of statistical physics, assuming local statistical equilibrium and the fulfillment of the statistical boundary condition N → N_A, where N_A is the Avogadro constant. From the free energy F(V, T, N) = −T ln[Z(V, T, N)] one obtains the chemical potential

µ = ∂F(V, T, N)/∂N = T [ln(N/V) − ln(φ(T, m)) + 2B(T)N/V]   (5)

and the derivative of the chemical potential, which in the statistical limit has the form

(∂µ/∂N) = −(∂P/∂V)(V/N)² = lim_{N→N_A} [T/N + 2B(T)T/V] → 2B(T)T/V.   (6)

Then we obtain the Grand Partition Function (GPF) Z(V, T, µ) from the partition function Z(V, T, N), taking into account the above physical considerations (see, e.g., [20], [21]):

Z(V, T, µ) = Σ_N exp(µN/T) Z(V, T, N).   (7)

At high temperatures (which, for example, are realized during collisions of heavy ions in the GCE), one can turn from the sum to an integral using the Euler-Maclaurin formula. In this case the first, integral term remains, and the logarithm of the partition function is brought into the exponent; let us denote the exponent by Φ(N):

Z(V, T, µ) = ∫₀^∞ dN exp(µN/T + ln[Z(V, T, N)]) = ∫₀^∞ dN exp(Φ(N)).   (8)

The saddle-point method (see Fig. 3) is utilized for further integration, as the integrand has a clear maximum at high temperatures. The maximum point N* of the integrand is obtained in [16], [17] through the extremum condition imposed at the saddle point:

µ*(N*) = −T (∂/∂N) ln[Z(V, T, N)]|_{N=N*} − N* (∂µ/∂N)|_{N=N*} ≈   (9)
≈ T [ln(N*/V) − ln(φ(T, m))],   (10)

where µ* is the chemical potential at the saddle point. As a result, one obtains [16], [17]:

P(T, µ*) ≈ Tξ [1 − B(T)ξ − ln((∂²Φ*/∂N²)|_{N=N*})/(2Vξ)] ≈ Tξ [1 − B(T)ξ − ln[B(T)ξ]/(2Vξ)],   (11)

where the saddle point ξ = N*(V, T, µ*)/V is defined, according to (10) and (5), as ξ = φ(m, T) exp(µ*(ξ)/T). The parameter ξ can be eliminated from Eq.
(11) using the definition of density, which in the thermodynamic limit turns into the well-known formula [19]:

n = ∂P(T, µ)/∂µ = ξ[1 − 2B(T)ξ] − 1/(2V) → ξ[1 − 2B(T)ξ].   (12)

In the thermodynamic limit (N → N_A, V → ∞) the chemical potential of the saddle point, µ* from Eq. (10) with N* = N/(1 − 2B(T)N/V), turns into the chemical potential µ (µ* → µ), which is determined by the well-known thermodynamic equation Eq. (5). Both equations (Eq. (11) and Eq. (12)) in parametric form (the saddle point ξ acts as a parameter) determine the relationship between the pressure P, temperature T, and density n. In [16], [17] the equation of state in the GCE was obtained by explicitly eliminating this parameter from the system of Eq. (11) and Eq. (12):

P(T, n) ≈ Tn[1 + B(T)n] + dP.   (13)

Of course, the resulting equation of state is implicitly a parametric equation, since the saddle point ξ (and hence n) determines the chemical potential µ according to Eq. (5) and Eq. (10) as

n = φ(T, m) exp(µ/T − 2B(T)n).   (14)

The derived formula takes into consideration the pressure contribution from the finite volume of the system, denoted as V_s. The author admits that the nature of this contribution is not fully understood. It is possible that it is a non-physical outcome that can be reduced by considering additional terms of the expansion in the saddle-point method. However, until further analysis confirms this, the author treats this contribution as real and proceeds to evaluate it quantitatively. It is worth noting that this contribution disappears in the thermodynamic limit, where there is no distinction between the CE and the GCE. As we consider large but finite volumes, only the last term of dP remains in (13), similar to what was observed in the meson and nuclear fireball models discussed in Sec. 3 and Sec.
4:

dP = lim_{V→V_s} (T/2V)(1 + B(T)n − ln[B(T)n]) → dP = −T ln[B(T)n]/(2V_s).   (15)

If we disregard the correction dP from the finite volume and assume that B(T)n ≪ 1, then, making the substitution (1 + B(T)n) ≈ exp(B(T)n) on the right-hand side of Eq. (13) and taking into account Eq. (14), it becomes

P(T, µ) ≈ T φ(T, m) exp(µ/T − B(T)n) = T φ(T, m) exp(µ_int/T) = P_id(T, µ_int),   (16)

where, according to (10), ξ is expressed in terms of µ as

n ≈ φ(T, m) exp(µ/T)[1 − 2B(T) φ(T, m) exp(µ/T)].   (17)

The RMS fluctuations of pressure and density, calculated by known formulas (see, e.g., [19], [24]), give estimates of the corrections found for the corresponding quantities:

⟨ΔP⟩ ∼ (Tn/V)[1 + B(T)n],   (18)
⟨Δn⟩ ∼ (1/√(nV³))[1 − B(T)n].   (19)

II. MULTICOMPONENT VDW GAS

It is easy to extend the results obtained in Sec. I to the case of a two-component vdW gas (i = 1, 2) [16]: µ*_i → µ_i = ∂F(V, T, N_i, N_j)/∂N_i, where F(V, T, N_1, N_2) = −T ln[Z(V, T, N_1, N_2)] is the free energy of the two-component vdW gas, with the densities of the components

n_i = ∂P(T, µ_i, µ_j)/∂µ_i ∼ ξ_i[1 − (2ξ_i B_ii + ξ_j(B_ij + B̃_ji))].   (20)

The virial expansion can be rewritten, taking into account Eq. (20), as a two-component vdW equation in the approximation b_ij N_i/V ≪ 1 and a_ij/(T b_ij) ≪ 1:

P(T, µ_1, µ_2) = T n_1/(1 − b_11 n_1 − b̃_21 n_2) + T n_2/(1 − b_22 n_2 − b̃_12 n_1) − n_1(a_11 n_1 + ã_21 n_2) − n_2(a_22 n_2 + ã_12 n_1) + dP,   (21)

where dP takes into account the "finite size of the fireball": dP ≅ −T n ln[C(T, n_11, n_22)]/(2V), with C(n, T) = |n_11 B_11 n_22 B_22 − n_12 B̃_12 n_21 B̃_21|.
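Formula (21) admits a simple numerical consistency check (an illustrative sketch, not the paper's code; the parameter values are arbitrary): for two identical species with symmetric cross terms (so that ã = a and b̃ = b) and n₁ = n₂ = n/2, the two-component pressure must collapse to the one-component vdW form P = Tn/(1 − bn) − an², with the finite-size term dP set to zero here:

```python
def pressure_two_component(T, n1, n2, a11, a22, a12t, a21t,
                           b11, b22, b12t, b21t, dP=0.0):
    """Two-component vdW pressure, Eq. (21); the 't' arguments are the
    tilde cross-term parameters, dP the finite-size correction."""
    p = T * n1 / (1.0 - b11 * n1 - b21t * n2)
    p += T * n2 / (1.0 - b22 * n2 - b12t * n1)
    p -= n1 * (a11 * n1 + a21t * n2)
    p -= n2 * (a22 * n2 + a12t * n1)
    return p + dP

def pressure_one_component(T, n, a, b):
    """One-component vdW limit: P = T n / (1 - b n) - a n^2."""
    return T * n / (1.0 - b * n) - a * n * n

# identical species with symmetric cross terms; illustrative values
T, n, a, b = 120.0, 0.08, 330.0, 3.4
p2c = pressure_two_component(T, n / 2, n / 2, a, a, a, a, b, b, b, b)
p1c = pressure_one_component(T, n, a, b)
print(p2c, p1c)   # the two expressions coincide in this limit
```

The reduction works because each repulsive denominator becomes 1 − b(n/2) − b(n/2) = 1 − bn, and the attraction terms sum to an².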
When formula (21) was derived, the expression B̃_ij ≈ b̃_ij − ã_ij/T was used (see, e.g., [12]), and for each type of particle the corresponding parameters of attraction and repulsion were introduced:

a_ij, ã_ij ≈ 2γ a_ii a_jj/(a_ii + a_jj),   b_ij, b̃_ij = 2 b_ii b_jj/(b_ii + b_jj).

For the K-component gas the pressure takes the form

P(T, µ_1, ..., µ_K) = Σ_{p=1}^{K} T n_p / [1 − Σ_{(p≠q)=1}^{K} (b_pp n_p + b̃_pq n_q)] − Σ_{(p≠q)=1}^{K} n_p (a_pp n_p + ã_qp n_q) + dP(T, µ_1, ..., µ_K).   (22)

In publications [14]-[17], a two-stage model was proposed to describe the nuclear fireball. Only two types of particles are considered at the meson stage, namely the π0 and π+ mesons, with the average internucleon energy not exceeding the production threshold of the heavy mesons. The production of π+ mesons is twice as likely as that of π0 mesons (also, the lifetime of π+ mesons is greater than the lifetime of π0 mesons); thus, the densities are assumed to satisfy k n_+ = n_0, where n_0 and n_+ are the densities of π0 and π+ mesons, respectively, and k < 1. The potential energy of the meson interaction is effective, and its scalar part is shown in Fig. 4. It is assumed that the π0 meson hard-core radius is much smaller than the π+ meson hard-core radius. The effective radius of the π+ meson is 0.46 fm, that of the π0 meson is 0.01 fm, and the average volume of the meson fireball is estimated to be 600-1000 fm³.

In Eq. (16) it was shown that the van der Waals gas pressure formula can be approximately rewritten as the pressure of an ideal gas with an effective potential µ_int = µ − T B(T)n that takes the interactions into account. However, by construction this effective potential differs by the term T B(T)n from the chemical potential obtained from the known formula (5). The question arises: how justified is this approximation of the effective chemical potential?
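The question can be probed numerically. The sketch below uses illustrative values (φ is treated as a given constant in fm⁻³ rather than evaluated from the Bessel form): it solves the transcendental relation (14) by fixed-point iteration and then compares the ideal-gas pressure at the shifted potential, Eq. (16), against the virial form Tn(1 + B(T)n) of Eq. (13) with dP dropped; for B(T)n ≪ 1 the two agree to O((B(T)n)²):

```python
import math

def solve_density(phi, mu, T, B, tol=1e-14):
    """Fixed-point solution of n = phi * exp(mu/T - 2*B*n), Eq. (14)."""
    n = phi * math.exp(mu / T)          # ideal-gas starting guess
    for _ in range(200):
        n_new = phi * math.exp(mu / T - 2.0 * B * n)
        if abs(n_new - n) < tol:
            break
        n = n_new
    return n

# illustrative numbers only (not the paper's parameters); B*n ~ 0.05 here
phi, mu, T, B = 0.04, 20.0, 100.0, 1.0
n = solve_density(phi, mu, T, B)

mu_int = mu - T * B * n                            # effective potential
P_ideal_shifted = T * phi * math.exp(mu_int / T)   # Eq. (16)
P_virial = T * n * (1.0 + B * n)                   # Eq. (13), dP dropped
print(n, P_ideal_shifted, P_virial)
```

At these values the two pressures differ only at the level of (B n)²/2, which is the precision to which the replacement of (1 + Bn) by exp(Bn) is valid.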
The obtained formulas for the chemical potentials, pressure, and density, derived using the saddle-point method, are used to analyze numerical data obtained earlier by other authors using lattice QCD calculations and critical parameters in nucleus-nucleus collisions at high energy. The influence of interaction on the two-component gas is analyzed for (i) π0 and π+ mesons and (ii) baryons. The particles interact with a hard-core potential at short distances and an attractive potential at long distances (effective radii of attraction). As an example of the use of the obtained formulas at temperatures above the particle production threshold (T > 135 MeV), a generalized van der Waals model was proposed for the asymmetric case of the two-component model (π0 and π+ mesons) with realistic parameters of the hard core and attraction. At lower temperatures (T < 90 MeV), a baryonic model was used, which takes into account the conservation law of baryonic charge. It was found that despite the roughness of the one-component effective models for the equations of state and chemical potentials, they qualitatively, and sometimes quantitatively, agree with the results obtained using lattice QCD calculations. The use of an effective chemical potential, which can take the interaction into account in the ideal-gas equation of state, is also analyzed. It was found that under the conditions of applicability of the baryonic model, the effective chemical potential differs little from the potential obtained using the saddle-point method. For the mesonic model, the use of an effective mesonic chemical potential is much less justified.

The research was carried out within the framework of the initiative scientific topic 0122U200549 (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute").

In the boundary condition N → N_A of Sec. I, N_A denotes the Avogadro constant. This assumption is justified at the initial stages of evolution, as the number of particles generated in a fireball is around 3-5 thousand during high-energy nucleus-nucleus interactions.
However, at later stages of evolution the assumption becomes doubtful, as the number of nucleons in the nonrelativistic limit is limited by the baryon number conservation law and is of order N ∼ 200. Nonetheless, the practical application of the van der Waals equation often goes beyond the conditions under which the virial approximation has been obtained. Therefore, the approximation is believed to be sufficiently justified, especially as it is always possible to restrict calculations to the first stage. Although the saddle-point method is used when B(T) < 0, the final formulas can be extended to a region where the second virial coefficient B(T) is not necessarily negative. From the partition function Z(V, T, N) one can also obtain the free energy F(V, T, N) = −T ln[Z(V, T, N)] and the chemical potential (5).

Figure 3: Graphic explanation of the saddle-point method

Thus, the equation of state with interaction can be obtained by making the substitution µ → µ_int = µ − T B(T)n in the equation of state of the ideal gas. These equations are density functionals which, according to (5), at a fixed chemical potential are found from the solution of the transcendental equation n = φ(T, m) exp(µ/T − 2B(T)n). Assuming B(T)n ≪ 1, this formula can be replaced with (17). The quantity γ entering the mixing rules is a phenomenological parameter reflecting the complexity of the problem.

The above analysis can be extended to a multi-component vdW gas consisting of any number of different particle species. By integrating over the particles' momenta and making changes analogous to those made in the one-component case, an expression for the partition function of the multi-component (K-component) vdW gas can be obtained.
Then, by integrating the obtained formula over the numbers of particles and making the corresponding changes, as was done in the case of the single-component gas, the corresponding expression for the pressure, Eq. (22), is obtained in the mathematical formalism of the Grand Canonical Ensemble [17], where µ_p = ∂F(V, T, N_1, ..., N_K)/∂N_p are the chemical potentials (p = 1, ..., K). The particle densities n_p = ∂P(T, µ_1, ..., µ_K)/∂µ_p, along with the pressure, are obtained as the solutions of a system of coupled equations depending on the saddle-point parameters ξ_p (p = 1, ..., K).

Due to inelastic reactions between hadrons in the hadron-gas (HG) model in the grand canonical ensemble formulation, there are no fixed numbers N_1, ..., N_K. However, the conserved charges of baryonic number B, strangeness S, and electric charge Q have fixed values: B corresponds to the number of participating nucleons in the reaction, S is zero, and Q = eZ is 0.5eA for intermediate nuclei and 0.4eA for heavy nuclei. Using the grand canonical formulation is more advantageous at high temperatures, where the system properties are determined by the pressure function (22). The chemical potentials µ_i (i = 1, ..., K) are defined as a combination of the baryonic µ_B, strange µ_S, and electric µ_Q chemical potentials, with expansion coefficients (γ_B)_i, (γ_S)_i, and (γ_Q)_i, respectively.

Figure 4: Scalar part of the effective meson-meson potential

At the first stage, since the considered nuclear-nuclear collisions (A + A) have very high energies (more than 1 GeV per nucleon), the number of produced mesons is much larger than the number of nucleons, and it is suggested that the main contribution to the process is made mainly by mesons (the meson stage of the fireball dynamics). Later, at the second stage, since the lifetime of mesons is much shorter than the lifetime of nucleons, it is proposed that the main contribution at the final stage is made by nucleons (the nucleon stage of the fireball).
The meson model of the fireball is based on the following assumptions. The collision between nuclei (A + A) generates high energies of over 1 GeV per nucleon. During the initial freeze-out stage, mesons dominate. To describe these interactions above the production threshold of new particles (T > 135 MeV), the vdW model is generalized to a medium-sized meson fireball. The mean semiaxes of the ellipsoid and the mass number of the nuclei left in the fireball after the collision, <a>, <b>, and <A>, respectively, are used to estimate the average potential energy. The fireball is composed mostly of mesons, considering that the number of nucleons is significantly less than the number of mesons.

The nucleon model of the fireball is based on the following assumptions. Mesons, with a short average lifetime (τ ∼ 10^-8 - 10^-16 s), dominate in the initial stages of freeze-out, leading to their quick decay. Baryons such as protons and neutrons become dominant during the final stages of freeze-out. The finite-volume effects become evident at low density values, which corresponds to the last stages of fireball evolution. Although there is some uncertainty about the fireball's existence during these late stages, a generalization of the vdW model to the nucleon fireball was proposed in [14] to describe nucleus-nucleus interactions during the last stage of freeze-out, when new particles are not produced (T < 135 MeV). To simplify the model, the average energies of internucleon collisions are restricted to not exceed the production threshold of other hadrons, and only two varieties (protons and neutrons) are considered. The density of protons and neutrons is assumed to follow from the conservation of baryon number, and the nucleon composition of the colliding nuclei is assumed to be known.

Figure 5: The result of our calculations using formula (5) for the meson and nucleon stages of the evolution of the hadron fireball
The effective potentials of the interactions between protons and neutrons, protons and protons, and neutrons and neutrons can be represented using the same model as in Fig. 4. The hard-core radius of the proton is assumed to be known, while the radius of the neutron is much smaller than that of the proton. Thanks to the known relationship between the numbers of protons and neutrons in heavy nuclei, n_p = k n_n with k < 1, the two-component nucleon model essentially reduces to a one-component model. Similarly, since the lifetime of neutral pi-mesons is much shorter than that of charged mesons, the same can be said for the two-component meson model. Interestingly, despite the crudeness of such a one-component approximation for the real multi-component vdW gas of the hadron fireball, as shown in Fig. 5, a good qualitative and quantitative agreement with the results of calculations by other authors is obtained for the chemical potential (see, for example, Fig. 2 and Fig. 6).

Figure 6: Sketch of the QCD matter phase diagram in the plane of temperature T and baryo-chemical potential µ_B (this figure is taken from [26]). The parton-hadron phase transition line from lattice QCD [27]-[30] ends in a critical point E. A cross-over transition occurs at smaller µ_B. Also shown are the points of hadro-chemical freeze-out from the grand canonical statistical model.

III. APPROXIMATION OF AN IDEAL GAS WITH AN EFFECTIVE CHEMICAL POTENTIAL

Figures 7-9 show a comparison of the effective chemical potential and the real chemical potential at different parameter values for the nucleon and meson models. It can be seen that while for the nucleon model of the fireball, at realistic radii of the solid nucleon core, the results are practically indistinguishable when the real chemical potential is replaced by the effective one, for the meson model of the fireball, at realistic parameter values, the effective chemical potential yields an explicitly unphysical result.
Figure 7: Chemical potential calculated in the nucleon model of the fireball at a hard-core radius of R_0 = 0.5 Fm.

Figure 8: Chemical potential calculated in the nucleon model of the fireball at a hard-core radius of R_0 = 0.9 Fm.

IV. SUMMARY

The paper provides a detailed description of obtaining the equation of state with corrections that take into account the finite size of the hadronic fireball, as well as the mean-square fluctuations of pressure and density. The pressure correction disappears in the thermodynamic limit, when, according to statistical physics, there is no difference between different statistical ensembles. It is shown how, in the process of deriving the equation of state using the integration method near the saddle point, expressions for the chemical potentials are naturally obtained, which essentially serve as the extremum condition of the integrand.

Figure 9: Chemical potential calculated in the meson model of the fireball at a hard-core radius of R_0 = 0.69 Fm.

References

... calculations and critical parameters in nucleus-nucleus collisions at high energy.
J. Stachel, U. Heidelberg. Tests of thermalization in relativistic nucleus nucleus collisions. Nucl. Phys. A 610 (1996) 509C.
P. Braun-Munzinger, J. Stachel. Dynamics of ultra-relativistic nuclear collisions with heavy beams: An experimental overview. Nucl. Phys. A 638 (1998) 3C.
J. Cleymans, H. Satz. Thermal Hadron Production in High Energy Heavy Ion Collisions. Z. Phys. C 57 (1993) 135.
J. Cleymans et al. The hadronisation of a quark-gluon plasma. Z. Phys. C 58 (1993) 347.
K. Redlich et al. Hadronisation of quark-gluon plasma. Nucl. Phys. A 566 (1994) 391.
P. Braun-Munzinger et al. Thermal equilibration and expansion in nucleus-nucleus collisions at the AGS. Phys. Lett. B 344 (1995) 43.
P. Braun-Munzinger et al. Thermal and hadrochemical equilibration in nucleus-nucleus collisions at the SPS. Phys. Lett. B 365 (1996) 1.
R.A. Ritchie, M.I. Gorenstein, H.G. Miller. The excluded volume hadron gas model and pion production at the SPS. Z. Phys. C 75 (1997) 535.
G.D. Yen et al. Excluded volume hadron gas model for particle number ratios in collisions. Phys. Rev. C 56 (1997) 2210.
G.D. Yen et al. Chemical freezeout in relativistic collisions: is it close to the quark-gluon plasma? J. Phys. G 24 (1998) 1777.
M.I. Gorenstein, A.P. Kostyuk, Ya.D. Krivenko. Van der Waals excluded-volume model of multicomponent hadron gas. J. Phys. G 25 (1999) 75-83.
Ya.D. Krivenko-Emetov. Attractive inter-particle force in van der Waals model of multicomponent hadron gas in the grand canonical ensemble.
arXiv:1909.08441v1 [hep-ph] (2019); Ya.D. Krivenko-Emetov. Interparticle attractive forces account of the multicomponent hadron gas in the grand canonical ensemble. Book of abstracts of the 24th Annual Scientific Conf. of Inst. for Nucl. Research, Kyiv, Ukraine, April 10-13, 2017 (Kyiv, 2017) p. 36.
V. Vovchenko et al. Multicomponent van der Waals equation of state: Applications in nuclear and hadronic physics. Phys. Rev. C 96 (2017) 045202.
Ya.D. Krivenko-Emetov. Finite volume effects in the two-component van der Waals model in relativistic nucleus-nucleus collisions of heavy ions. Book of abstracts of the 28th Annual Scientific Conf. of Inst. for Nucl. Research, Kyiv, Ukraine, Sept. 27 - Oct. 01, 2021 (Kyiv, 2021) p. 27.
Quark-Gluon Plasma (QGP) Physics with ALICE at the CERN LHC. URL: https://indico.cern.ch/event/1013634/contributions/4255256/attachments/2227069/3772748/IoP-April2021.pdf.
Ya.D. Krivenko-Emetov. Two-component statistical model of a nuclear fireball in the cooling stage (freezeout). Management of Development of Complex Systems, 51 (2022) 130-140. doi: 10.32347/2412-9933.2022.51 (Ukr).
D. Sokolyuk, Ya. Krivenko-Emetov. Two-component van der Waals model of a nuclear fireball in the cooling stage (freezeout). Mat. of XX All-Ukrainian science and practice conf. students, postgraduates and young scientists "Theoretical and applied problems of physics, mathematics and informatics", Kyiv, June 15, 2022 (Igor Sikorsky Kyiv Polytechnic Institute, 2022) pp. 88-93. (Ukr).
Ya.D. Krivenko-Emetov. Multicomponent van der Waals model of a nuclear fireball in the freeze-out stage. arXiv:2301.00742 (2023).
K. Fukushima and T. Hatsuda. The phase diagram of dense QCD. Rep. Prog. Phys. 74 (2011) 014001.
L.D. Landau, E.M. Lifshitz. Statistical Physics, Vol. 5 of Course of Theoretical Physics (2nd ed., Addison Wesley, 1969) 484 p.
R. Kubo. Statistical mechanics (Moskva: Mir, 1967) 452 p. (Rus).
R.P. Feynman. Statistical Mechanics: a set of lectures. Advanced Book Classics (2nd ed., Perseus Books, Reading, Mass., 1998) 354 p.
M.V. Fedoruk.
Saddle point method (Moskva, 1977) 368 p. (Rus).
D.H. Rischke, M.I. Gorenstein, H. Stöcker and W. Greiner, Z. Phys. C 51, 485 (1991).
A.M. Fedorchenko. Theoretical physics. T.2. Quantum mechanics, thermodynamics and statistical physics (Kyiv: Vyshcha shkola, 1993) 415 p. (Ukr).
S.N. Fedotkin, A.G. Magner, and U.V. Grygoriev. Quantum statistics effects near the critical point in systems with different interparticle interactions. Phys. Rev. C 105 (2022) 024621.
R. Stock (2020). Relativistic Nucleus-Nucleus Collisions and the QCD Matter Phase Diagram. In: Schopper, H. (ed.) Particle Physics Reference Library. Springer, Cham.
F. Karsch, Nucl. Phys. A590 (1995) 372; R. Stock, hep-ph/9901415.
Z. Fodor and S.D. Katz, JHEP 0203 (2002) 014; Ph. de Forcrand and O. Philipsen, Nucl. Phys. B642 (2002) 290.
C.R. Allton et al., Phys. Rev. D68 (2003) 014507.
F. Karsch and E. Laerman, in Quark-Gluon Plasma 3, eds. R.C. Hwa and X.N. Wang, World Scientific 2004, p. 1.
Unbounded Quantum Advantage in One-Way Strong Communication Complexity of a Distributed Clique Labelling Relation

Nitica Sakharwade, Some Sankar Bhattacharya, Ravishankar Ramanathan, and Paweł Horodecki

International Centre for Theory of Quantum Technologies (ICTQT), University of Gdańsk, Jana Bażynskiego 8, 80-309 Gdańsk, Poland
Department of Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong

17 May 2023 (arXiv:2305.10372)

We investigate the one-way zero-error classical and quantum communication complexities for a class of relations induced by a distributed clique labelling problem. We consider two variants: 1) the receiver outputs an answer satisfying the relation (the traditional communication complexity of relations, CCR) and 2) the receiver has non-zero probabilities of outputting every valid answer satisfying the relation (equivalently, the relation can be fully reconstructed), which we denote the strong communication complexity of the relation (S-CCR). We prove that for the specific class of relations considered here, when the players do not share any resources, there is no quantum advantage in the CCR task for any graph. On the other hand, we show that there exist classes of graphs for which the separation between one-way classical and quantum communication in the S-CCR task grows with the order of the graph; specifically, the quantum complexity is O(1) while the classical complexity is Ω(log m). Secondly, we prove a lower bound (linear in the number of cliques) on the amount of shared randomness necessary to overcome the separation in the scenario of fixed restricted communication, and connect this to the existence of Orthogonal Arrays. Finally, we highlight some applications of this task to semi-device-independent dimension witnessing as well as to the detection of Mutually Unbiased Bases.
I. INTRODUCTION

Quantum Shannon theory replaces the classical carrier of information with quantum systems in Shannon's model of communication [1]. This initiated a tide of attempts to understand the advantage of encoding classical information in a quantum system. Over the past few decades, numerous works have probed the advantage of quantum resources over their classical counterparts in various information-theoretic scenarios. Many of these works provide a deeper insight into quantum theory, and some of the quantum advantages have found practical applications in the fields of quantum cryptography [2,3], quantum communication [4-8] and quantum computing [9-11], to name a few. In the prepare-and-measure scenario, the major share of effort has been devoted to showing an advantage in quantum communication complexity [12,13], which involves computing the minimum communication required between distant parties in order to perform a distributed computation of functions [14]. Karchmer and Wigderson [15] initiated the study of the communication complexity of relations and established a connection between the communication complexity of certain types of relations and the complexity of Boolean circuits. In [16], Raz provided an example of an unbounded gap between the classical and quantum communication complexity of a relation. Another closely related line of study has been to explore the advantage of quantum communication in tasks based on orthogonality graphs.
In most cases, orthogonality graphs that lead to a quantum advantage are not Kochen-Specker colourable (KS-colourable) [17], thus connecting this set of tasks to the feature of quantum contextuality [18]. In this article, we introduce a new task based on the communication complexity of relations. For this task, we identify a class of relations based on graphs such that there is an exponential gap between one-way zero-error classical and quantum communication. Two important points in which this work significantly differs from others are that, firstly, unlike in [17,19], the quantum advantage in our proposed task is independent of the graphs being KS-colourable (or not). Secondly, the exponential advantage in [16] requires an infinite set of inputs, whereas the work presented in this article requires only a finite set of inputs to establish an unbounded gap. In particular, we consider the one-way zero-error communication complexity of a relation (CCR) induced by the rules of a distributed clique labelling problem (CLP) over a graph. For this CCR task, where any valid answer belonging to the relation is accepted, we show that there is no advantage in using quantum systems as carriers of information. However, another version of the CCR task, where Bob's outputs in different runs should span the relation, called the Strong Communication Complexity of Relations (S-CCR), entails a non-trivial quantum advantage. This new task of S-CCR is equivalent to the possibility of reconstructing the relation from the complete observed input-output statistics. Demanding reconstruction is a stronger form of communication complexity of relations, since a function (a special case of relations) can always be reconstructed from the observed statistics while in general for a relation this does not hold.
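The asymmetry noted above between functions and general relations can be seen in a tiny example (the relation and strategy below are invented for illustration, not taken from the text): any deterministic strategy produces exactly one output per input pair, so the support of its statistics is single-valued in b; it can therefore reconstruct a function, but never a relation that has several valid answers for some input.

```python
from itertools import product

X, Y, B = range(2), range(2), range(2)
# Toy relation: for y = 0 the only valid answer is b = x,
# for y = 1 every b is valid (two correct answers per input).
R = {(x, y, b) for x, y, b in product(X, Y, B) if (y == 1) or (b == x)}

def support(decode_table):
    """Observed support of a deterministic strategy in which Alice
    sends x in full and Bob outputs decode_table[(x, y)]."""
    return {(x, y, decode_table[(x, y)]) for x, y in product(X, Y)}

det = {(x, y): x for x, y in product(X, Y)}   # Bob always answers b = x
S = support(det)
# S is contained in R (zero error: the CCR task succeeds), but S != R:
# a deterministic strategy outputs one b per (x, y), so it can never
# span the two valid answers at y = 1, and R is not reconstructed.
```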
Our main results consider two distinct scenarios, depending on the availability of pre-shared correlations and direct communication resources between the two parties: (i) the spatially separated parties do not share any correlation; (ii) the communication channels can transmit systems of a fixed operational dimension. In the first scenario, we find that there exist communication tasks which entail an unbounded separation between the operational dimensions of the classical and quantum message systems. We also demonstrate a quantum advantage for a relation induced by Paley graphs, which are a class of vertex-transitive self-complementary graphs. In the second scenario, we show that there exist communication tasks for which classical channels must be assisted by unbounded amounts of pre-shared classical correlations, while the quantum channel does not require any pre-shared resources. Additionally, we show that there exist graphs for which the task with a classical channel requires shared randomness linear in the number of cliques, whereas with shared entanglement it can be performed by a 1-ebit-assisted classical channel.

A. Outline of the paper

The article is organised as follows: in Sec. II we discuss the preliminaries about the orthogonal representation of graphs and the binary/KS colouring of a graph that we require in our communication task. In Sec. III we introduce the setup for the task of communication complexity of relations induced by graphs and its subsequent variations. In Subsec. III B we introduce the clique labelling problem induced by a graph and show that the conventional one-way zero-error communication complexity of this relation does not lead to a quantum advantage. Next, in Sec. III C we consider a stronger communication complexity scenario which implies that relations can be reconstructed from the observed input-output statistics. Sec.
IV contains the bulk of our results, where we show that the gap between the classical and quantum communication required to compute and subsequently reconstruct the clique labelling relation is unbounded for a class of graphs. In Sec. V we list some applications of the proposed communication scenario, and finally in Sec. VI we discuss an interpretation of the payoff of the proposed task in terms of a property of the graph used to execute the task and also list the open questions.

II. PRELIMINARIES

In this section, we briefly go over known concepts relevant to the article, including the notions of orthogonal representation and binary colouring of graphs, widely used in the study of contextuality [20], and finally the notion of operational dimension, which helps compare classical and quantum (communication) resources.

A. Graphs, Orthogonal Representation, and Binary Colouring

A graph G = (V, E) consists of a set of vertices V := (v_1, v_2, ..., v_n) and a set of edges E := (e_1, e_2, ..., e_m) between the vertices. Additionally, the edges may also have a directional property and a weight, which gives rise to the further classifications of directed or undirected graphs and weighted or unweighted graphs. In this work, we consider simple undirected unweighted graphs. A subgraph of a graph G is a graph G′ = (V′, E′) where E′ ⊆ E such that ∀ e_i ∈ E′ the vertices connected by e_i belong to V′ ⊆ V. For any graph G, a clique is a fully connected subgraph of G. The size of the clique is given by the number of vertices in the subgraph.

Among the many different representations of an arbitrary graph, the orthogonal representation over complex fields has been shown to be useful in demonstrating the nonexistence of a non-contextual hidden variable model for quantum mechanics [20,21]. Here we make use of a general definition of the orthogonal representation of a graph over arbitrary fields, as follows [22]: Definition 1.
Given a graph G := (V, E), an orthogonal representation of G over a field F is described by a function φ : V → F^d such that (i) for any two adjacent vertices v_i and v_j, ⟨φ(v_i), φ(v_j)⟩ = 0, and (ii) φ(v_i) ≠ φ(v_j) for all i ≠ j, where d is the dimension of the vector space over the field F and ⟨·, ·⟩ denotes the scalar product (bilinear form) over F. This representation is faithful if ⟨φ(v_i), φ(v_j)⟩ = 0 implies that v_i and v_j are adjacent, and is orthonormal if |φ(v_i)| = 1 for all v_i ∈ V.

An important problem regarding this representation is to find the minimum d such that the definition holds. For such an optimal orthogonal representation, we denote the faithful orthogonal range of the graph G over the field F as d_F (for example, d_R, d_C, etc.). A lower and an upper bound on the faithful orthogonal range d_F over an arbitrary field F are given as follows:

ω ≤ d_F ≤ d_{F′} ≤ |V|,  (1)

where F′ ⊆ F, ω is the maximum clique size, and |V| is the number of vertices in the graph G, also known as the order of the graph. The lower bound follows from the fact that a faithful orthogonal representation must contain at least ω mutually orthogonal vectors, and the upper bound says that it is always trivially possible to provide an orthogonal representation with |V| vectors. Of course, saturating the upper bound implies that the orthogonal representation is not faithful whenever |V| > ω. Lovász et al. [23] provided a necessary and sufficient condition for finding the minimal d over the real field R for a class of orthogonal representations called general position, for which any set of d representing real vectors is linearly independent.

Proposition 1. [Lovász et al. '89 [23]] Any graph G := (V, E) has a general position faithful orthogonal representation in R^d if and only if at least (|V| − d) vertices must be removed to make the complementary graph Ḡ disconnected.

Later we will refer to this result to provide an upper bound on the faithful orthogonal range for a class of graphs. Given a graph G, the problems concerning the colouring of its vertices with one of two possible colours have been widely studied and share deep connections with quantum non-contextuality. We will refer to a graph along with a faithful orthogonal representation in minimum dimension as an orthogonality graph. In the following, we define the binary colouring of an orthogonality graph.

Definition 2. A binary colouring of a graph G := (V, E) is a binary function f : V → {0, 1} such that (i) for any two adjacent vertices v_i and v_j, f(v_i)f(v_j) = 0, and (ii) for any maximum clique C_k of the graph G, there is exactly one vertex v* ∈ C_k such that f(v*) = 1.

A point to note here is that not all graphs are binary colourable. A binary colouring of a graph G with n vertices, if it exists, can be thought of as a binary string of length n. On the other hand, the set of binary strings corresponding to all the different binary colourings uniquely describes the graph G. In the subsequent sections, we will use the term "colouring of a graph" to refer to the binary colouring of the graph. A uniquely binary colourable graph is a graph that has only one possible binary colouring up to the permutation of the colours. For example, all bipartite graphs are uniquely binary colourable.

B. Operational dimension

In any communication protocol, the carrier of the message, as well as the sources of private or public coins, are physical systems, which may be described as classical or quantum (or, more generally but outside the purview of this work, by a post-quantum theory). In order to compare resources, an important notion from the study of Generalised Probability Theories (GPT) [24] is the concept of operational dimension. Definition 3.
The operational dimension of a system is the largest cardinality of a subset of states that can be perfectly distinguished by a single measurement.

Importantly, the operational dimension of a theory is different from the dimension of the vector space V in which the state space Ω is embedded. For instance, for a qubit the state space, the set of density operators D(C^2) acting on C^2, is embedded in R^3. However, the operational dimension of this system is 2, as at most two qubit states can be perfectly distinguished by a single measurement. Thus, the operational dimension is equivalent to the Hilbert space dimension for a quantum system. We will refer to this notion when comparing communication resources between the quantum and classical scenarios.

III. COMMUNICATION COMPLEXITY OF RELATIONS

In this Section, we introduce the extension of the bipartite communication complexity of functions to relations. A relation over a bipartite prepare-and-measure scenario is defined as a subset R ⊆ X × Y × B, where X and Y are the sets of possible input values of Alice and Bob, respectively, and B is the set of possible output values that can be produced by Bob. A simple example is the relation R where X and Y are sets of parents, B is the set of children, and a tuple (x, y, b) is valid when b is a child of x and y. Clearly, there might be multiple correct answers if x and y have multiple children. There is also the possibility of no valid output for a given x and y if they have no children. We will consider relations that have a valid output b for every input (x, y). Let us now define what is meant by the Communication Complexity of a Relation (or CCR).

Definition 4 (CCR). The communication complexity of a relation R ⊆ X × Y × B is the minimum communication that Alice requires to make with Bob such that, for any input variables x ∈ X and y ∈ Y, Bob's output b gives a tuple (x, y, b) which belongs to R.
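A small concrete instance of Definition 4 (the relation, message alphabet, and strategy below are invented for the example, not taken from the text): a one-way protocol is an encoding of Alice's input into a message together with a decoding of (message, y) into an output, and it is zero-error if every tuple it produces lies in R.

```python
from itertools import product

# Toy relation: b is valid iff b and x + y have the same parity.
X, Y, B = range(4), range(4), range(4)
R = {(x, y, b) for x, y, b in product(X, Y, B) if b % 2 == (x + y) % 2}

def is_zero_error(encode, decode):
    """Check that the one-way protocol (encode, decode) only ever
    produces tuples belonging to R."""
    return all((x, y, decode(encode(x), y)) in R for x, y in product(X, Y))

# A 1-bit protocol suffices, even though sending x fully (the trivial
# protocol) would cost log|X| = 2 bits: Alice sends only the parity of x.
encode = lambda x: x % 2                 # message m in {0, 1}
decode = lambda m, y: (m + y) % 2        # b has the parity of x + y
```

Here the cost of the protocol is 1 bit, below the trivial log|X| upper bound, illustrating how the relation's structure can compress Alice's message.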
Note that Alice and Bob should know the relation R before the task commences. A protocol P to perform this task may involve one-way or two-way communication with single or multiple rounds. In this work, we will be interested in one-way communication protocols. The cost of a protocol P is defined as the maximum amount of communication required to perform the computation for some input (x, y). The communication complexity of the relation R is defined as the minimum cost over all protocols that can compute R. In a generalised setting, the computation may allow for some small errors to lower the cost. Throughout this article we consider only zero-error protocols, i.e. P(b|x, y) = 0 whenever (x, y, b) ∉ R, for all (x, y) ∈ X × Y. In most cases, rather than finding the optimal protocol, which is a difficult task, one is interested in providing a lower bound for the communication complexity. A trivial zero-error protocol has a cost of log |X| bits, which requires that Alice sends all information about her input to Bob, and this is also a trivial upper bound for the communication complexity. Depending on the encoding and decoding strategies, the protocols for the classical communication complexity of relations are of the following types:

1. Deterministic Protocol: a fixed pair of a deterministic encoding E and decoding D; the corresponding communication complexity of a relation R is denoted D(R).

2. Protocol with Private Coins: A classical one-way protocol with private coins is a tuple (P_E, P_D), where P_E and P_D are probability distributions over the spaces of deterministic encodings (E) and decodings (D), respectively. We denote the private-coin-assisted communication complexity of a relation R as R_priv(R).

3. Protocol with Public Coins: A classical one-way protocol with public coins is P_{E×D}, where P_{E×D} is a probability distribution over the space of the Cartesian product of deterministic encodings and decodings.
We denote the public-coin-assisted communication complexity of a relation R as R_pub(R). The communication complexities of a relation R satisfy the following ordering:

R_pub(R) ≤ R_priv(R) ≤ D(R).  (2)

In the communication complexity of functions, there is only a single correct answer that Bob may output. The task of communication complexity of relations differs from that of functions, since there may be more than one correct answer for Bob. This allows us to define a stronger variation of CCR that enforces that Bob outputs all correct answers over different rounds of the prepare-and-measure scenario, which we call the Strong Communication Complexity of Relations (S-CCR). Naturally then, when the relation is a function (a subclass of relations), S-CCR and CCR reduce to the same task.

Definition 5 (S-CCR). The strong communication complexity of a relation R ⊆ X × Y × B is the minimum communication that Alice requires to make with Bob such that, for any input variables x ∈ X and y ∈ Y, Bob's output b gives a tuple (x, y, b) which belongs to R and, over different rounds of the prepare-and-measure scenario, Bob's outputs span all valid b for each input (x, y).

As in CCR, Alice and Bob should know the relation R before the task commences. The aim of this task is to be able to decipher or reconstruct the relation R from the observed statistics {(x_i, y_i, b_i) | i = runs}. In the limit runs → ∞, the observed statistics can be used to obtain the conditional output probability distribution {P(b|x, y)}_{x,y,b}. Note that for S-CCR, the necessary information to guess or reconstruct R correctly is given by the non-zero values of the observed conditional probabilities when (x, y, b) ∈ R (and zero otherwise), rather than by the exact probabilities.
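The reconstruction criterion can be made concrete in a few lines (the toy relation and strategy here are invented for illustration): run a randomised strategy repeatedly, record the support of the observed statistics, and compare it with R.

```python
import random
from itertools import product

X, Y, B = range(2), range(2), range(2)
# Toy relation: b = x is always valid; when y = 1, b = 1 - x is also valid.
R = {(x, y, b) for x, y, b in product(X, Y, B) if b == x or y == 1}

def observed_support(rounds=4000, seed=0):
    """Alice sends x itself; Bob outputs a uniformly random valid b.
    Returns the set of observed (x, y, b) tuples."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(rounds):
        x, y = rng.choice(X), rng.choice(Y)
        valid = sorted(b for b in B if (x, y, b) in R)
        seen.add((x, y, rng.choice(valid)))
    return seen

# With enough rounds the support equals R: every valid b appears with
# non-zero probability, so the relation is reconstructed (S-CCR holds).
```

A deterministic Bob, by contrast, would contribute only one tuple per (x, y) to the support, so the same statistics could never recover the multi-valued rows of R.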
We can define a natural (but not convex) payoff for S-CCR as follows:

$$P_R = \min_{(x,y,b)\in R} P(b|x, y) \qquad (3)$$

One way to interpret this payoff P_R is to think of it as related to the probability of success of reconstructing the relation R (see Appendix A): the higher the value of P_R, the fewer runs one needs to reconstruct the relation. Note that for successful reconstruction we necessarily require P_R > 0. It is worth mentioning that in S-CCR we are interested in the minimum communication that accomplishes the task; however, the most optimal strategy using this amount of communication may not yield the maximum payoff. Also, two different communication and/or shared resources of the same dimension that individually perform the S-CCR task may yield unequal payoffs when optimised over all strategies. However, there is a theoretical bound on the payoff when optimising over any amount of communication resources and over all possible strategies. This bound is trivially achieved if Alice communicates her input to Bob, and Bob in turn uses this message and his input to give a randomly chosen output from the set of all correct answers in each run.

In this work, we consider some specific relations induced by orthogonality graphs. These relations are specified by a distributed clique labelling problem. Before we explain the setup, let us introduce clique labelling.

A. Binary colouring to clique labelling

To find the communication complexity, we require the minimum amount of communication between the parties. While working with orthogonal graphs, we have the binary colouring of each vertex of a graph, which can be compressed/encoded. Therefore, we require a mapping that takes one from a binary colouring of vertices to the label of a clique and vice versa. Consider an orthogonal graph G with vertex set V. The binary colour, denoted f(·), is defined over each vertex (Def. 2). Now additionally consider some indexing {1, ..., |V|} of the vertices of the graph.
Let us define the set of vertices belonging to a clique C_i as V_{C_i} ⊆ V. Observe that the binary colour takes value 1 for exactly one vertex of each maximum clique of size ω (we take ω to be the largest clique size and assume that each vertex belongs to at least one such clique), that is,

$$\forall\, v \in V_{C_i},\quad f(v) = \delta_{v,v'} \ \text{ for some } v' \in V_{C_i} \qquad (4)$$

We can now define clique labelling.

Definition 6. The clique labelling is a mapping from f(v), for each vertex v ∈ V_{C_i} of a clique, to an ω-valued label in Ω = {0, ..., ω − 1}. The vertices in V_{C_i} are ordered by increasing index, and the clique label is assigned from {0, ..., ω − 1} so that the label's position matches the position, in the index-ordered set V_{C_i}, of the vertex whose binary colouring is 1. More concretely, the lowest clique label 0 is assigned if the vertex with the lowest index in V_{C_i} has binary colouring 1, the second lowest clique label 1 is assigned if the vertex with the second lowest index in V_{C_i} has binary colouring 1, and so on.

For example, for ω = 3, if a clique has vertices V_{C_i} = {v_3, v_6, v_7}, then f(v_3) = 1 gives g(C_i) = 0, f(v_6) = 1 gives g(C_i) = 1, and f(v_7) = 1 gives g(C_i) = 2, where g(C_i) is the ω-valued clique label of clique C_i. Note that, given the index of the vertices, a clique and its clique label, one can always map back to the binary labelling of each vertex of the clique; that is, the mapping is invertible, which is necessary for decoding.

B. Clique Labelling Problem (CLP)

Now we present the class of relations for which we study CCR and S-CCR in this work. We are interested in relations based on the distributed clique labelling problem over a class of graphs. Here we consider graphs along with some faithful orthogonal representation in minimum dimension. We will refer to such a graph together with this orthogonal representation as an orthogonality graph.
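The label mapping of Definition 6 and its inverse can be sketched directly; this is a minimal illustration, and the function names are ours.

```python
def clique_label(clique_vertices, colouring):
    """Omega-valued label of a clique from a binary colouring f.

    colouring maps each vertex of the clique to 0/1, with exactly one
    vertex coloured 1; the label is that vertex's position in the
    index-ordered vertex list of the clique (Definition 6).
    """
    ordered = sorted(clique_vertices)
    ones = [v for v in ordered if colouring[v] == 1]
    assert len(ones) == 1, "a valid colouring marks exactly one vertex"
    return ordered.index(ones[0])

def binary_colouring(clique_vertices, label):
    """Inverse mapping: recover f restricted to the clique from its label."""
    ordered = sorted(clique_vertices)
    return {v: int(v == ordered[label]) for v in ordered}

# The omega = 3 example from the text: a clique on vertices {v3, v6, v7}.
C = [3, 6, 7]
assert clique_label(C, {3: 1, 6: 0, 7: 0}) == 0
assert clique_label(C, {3: 0, 6: 1, 7: 0}) == 1
assert clique_label(C, {3: 0, 6: 0, 7: 1}) == 2
assert binary_colouring(C, 1) == {3: 0, 6: 1, 7: 0}
```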
We consider an orthogonality graph G with n maximum cliques labelled C_i, i ∈ {1, ..., n}, and let the largest clique size of the graph be ω. We also assume that each vertex belongs to some ω-sized clique. We denote such graphs by G^(n,ω). Let us define the set of maximum cliques as C = {C_1, C_2, ..., C_n} and the set of (input and output) clique labels as Ω = {0, ..., ω − 1} for the graph G^(n,ω). Note that the clique labels are related to the binary colouring of vertices through the definition given in subsection III A. The setup (given in Fig. 1) for our Clique Labelling Problem (CLP) is a prepare-and-measure scenario involving a referee and two spatially separated players, Alice and Bob. The referee shares the orthogonality graph G^(n,ω), with some vertex indexing and a faithful orthogonal representation in minimum dimension, with Alice and Bob at the beginning. The referee gives Alice the pair (C_x, a) as input: a clique of size ω randomly chosen from G^(n,ω) and a random possible labelling of the same clique, i.e., (C_x, a) ∈ X = C × Ω. The referee gives Bob a clique of size ω randomly chosen from G^(n,ω) as input, C_y ∈ Y = C. We consider the inputs to be uniformly distributed, in the sense that C_x and C_y are both randomly chosen from C and a is uniformly chosen from Ω. Alice is allowed to send some communication (either classical or quantum, depending on the scenario) to Bob, which we will optimise to find the communication complexity. (We will also consider situations with classical and quantum public coins later.)

Figure 1: Setup. Given an orthogonality graph G^(n,ω), Alice's input is a maximum clique and a clique label, i.e. (C_x, a), and Bob's input is some maximum clique C_y. Bob must output a valid clique labelling b for his input clique such that (C_x, a, C_y, b) ∈ R_CLP(G^(n,ω)). Alice can send a physical system of operational dimension d to Bob.
Bob must output a valid labelling b ∈ B = Ω for C_y which satisfies the constraints below, coming from the rules of the binary colouring of the orthogonality graph G^(n,ω); these constraints define the relation. We call this Consistent Labelling of Pairwise Cliques:

1. If Alice and Bob receive the same clique, Bob's colouring should be identical to Alice's input colouring.
2. If Alice and Bob receive two different cliques sharing some vertices, Bob's binary colouring (0 or 1) of each shared vertex should be identical to Alice's colouring.
3. If Alice and Bob receive two different cliques sharing some edges, the vertices belonging to a shared edge should not both have the binary colour 1.
4. In all other cases, Bob can colour his clique independently of Alice's inputs.

The conditions for consistent labelling of pairwise cliques are defined w.r.t. binary colourings, which can then be translated into conditions on the input and output clique labels in {0, ..., ω − 1} = Ω (subsection III A). The relation R ⊆ X × Y × B for the prepare-and-measure distributed CLP, over the input and output sets X = C × Ω, Y = C and B = Ω, has the structure R_CLP ⊆ C × Ω × C × Ω. The game is successful if the Clique Labelling Problem (CLP) is satisfied, that is, the tuple (x, y, b) ≡ (C_x, a, C_y, b) ∈ R_CLP(G^(n,ω)), where R_CLP(G^(n,ω)) is the relation defined by the constraints of consistent labelling of pairwise cliques given above for some graph G^(n,ω). Note that having the relation is equivalent to having the graph itself. In Sec. IV, we show that, when considering CCR for R_CLP(G^(n,ω)), there exists a protocol in which ω-valued one-way communication from Alice to Bob wins the game both in the classical and the quantum case, so we do not have any quantum advantage. However, it is possible to realise an unbounded quantum advantage when we look at S-CCR for R_CLP(G^(n,ω)). We add one observation here that will become relevant for some of the results in Sec. IV.
For a graph G to have an orthogonal representation in dimension ω, any two distinct cliques of this graph can have at most ω − 2 vertices in common. Equivalently, every vertex v in C_i that is not in a clique C_j can be orthogonal to at most ω − 2 vertices in C_j.

C. Reconstruction of the Relation R_CLP

In the setup above, Bob's output must be a consistent labelling for CCR. Let us now consider the stronger version, S-CCR, where Bob must span all correct answers. This can be formulated as a reconstruction game: at the end of every round, the inputs and outputs of Alice and Bob are listed. After sufficient runs of the game, this list is shared with a randomly chosen Reconstructor (Fig. 2), who at the beginning does not have any information about the graph or the relation induced by it. After obtaining the list, the Reconstructor becomes aware of the cardinality of the input and/or output sets of Alice and Bob. The Reconstructor must reconstruct the relation R_CLP(G^(n,ω)) and thus the graph G^(n,ω). For reconstruction to be possible, Bob's outcomes b should be such that, after many runs of the game, the set of tuples {(C_x, a, C_y, b)} can be used by the Reconstructor to deduce all the (non-)orthogonality relations in the graph G^(n,ω), without any prior information about the relation R_CLP(G^(n,ω)) or the graph G^(n,ω).

Figure 2: The Reconstructor receives the list {(C_x^i, a_i, C_y^i, b_i) | i ∈ runs of the experiments} and must reconstruct R_CLP(G^(n,ω)).

After many runs of the game, the following payoff is calculated:

$$P_{R_{CLP}} = \min_{(C_x,a,C_y,b)\in R_{CLP}} P(b|C_x, C_y, a) \qquad (5)$$

Here the minimisation is over the set of events in R_CLP. The payoff P_{R_CLP} is necessarily non-zero if reconstruction is possible, and the payoff can be interpreted as a measure of the efficiency of relation reconstruction over some number of runs.

D.
Probability Table for CCR_CLP and S-CCR_CLP

One can analyse the task of CCR for consistent labelling with relation R_CLP, as well as the stronger task of the distributed relation reconstruction problem (S-CCR), through a table of conditional probabilities P(b|C_x, C_y, a) that Alice and Bob can write down before the game begins. The rows of the table are given by Alice's possible inputs (C_x, a), and the columns are denoted by the tuple of inputs and outputs of Bob, (C_y, b). This way of analysis will be important for understanding some of the proofs. The favourable conditions of CCR_CLP and S-CCR_CLP can be mapped to the following properties of such a probability table:

(T0): Consistent labelling. If a tuple does not belong to the relation, the corresponding conditional probability entry should be zero:

$$\forall\, (C_x, a, C_y, b) \notin R_{CLP}(G^{(n,\omega)}) \implies P(b|C_x, a, C_y) = 0 \qquad (6)$$

(T1): Reconstruction. Every tuple that belongs to the relation should have a non-zero conditional probability entry:

$$\forall\, (C_x, a, C_y, b) \in R_{CLP}(G^{(n,\omega)}) \implies P(b|C_x, a, C_y) > 0 \qquad (7)$$

(T2): Maximum payoff. One can provide an algebraic upper bound on the payoff P for a given graph G^(n,ω) from the probability table in the following way. First, fix an input (C_x, ã) for Alice and C_y for Bob. Now count the number of events (C_x, ã, C_y, b) ∈ R_CLP(G); call this number η(C_x, ã, C_y). Maximise η(C_x, ã, C_y) over Alice's and Bob's input sets and call this number η. The payoff then satisfies the inequality

$$P_{R(G)} \le \frac{1}{\eta} \qquad (8)$$

For example, for a graph whose maximum cliques, all of size ω, are pairwise disconnected, the upper bound on the payoff becomes P_{R(G)} ≤ 1/ω. It is worth highlighting that the payoff P_{R(G)} is a faithful quantifier of the distributed relation reconstruction problem (S-CCR_CLP); that is, P > 0 whenever relation reconstruction is possible, and P = 0 implies reconstruction is impossible. Moreover, winning the S-CCR_CLP game guarantees fulfilment of the condition in the CCR_CLP task.
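Conditions (T0)-(T2) can be illustrated by enumerating a relation and its bound parameter η directly from the pairwise-clique rules. The small graph below (two triangles sharing a vertex) and all function names are our own illustration; for simplicity, rule 3 is applied here to any edge between the two 1-coloured vertices, which subsumes the shared-edge case.

```python
from itertools import product

def consistent(Cx, a, Cy, b, edges):
    """Consistent labelling of pairwise cliques (rules 1-4 of the text).

    Cx, Cy: vertex tuples of two maximum cliques; a, b: their labels;
    edges: set of frozensets encoding orthogonality.
    """
    u, w = sorted(Cx)[a], sorted(Cy)[b]          # the vertices coloured 1
    fx = {v: int(v == u) for v in Cx}
    fy = {v: int(v == w) for v in Cy}
    # Rules 1 and 2: the colourings must agree on every shared vertex.
    if any(fx[v] != fy[v] for v in set(Cx) & set(Cy)):
        return False
    # Rule 3: the two 1-coloured vertices may not span an edge.
    if u != w and frozenset((u, w)) in edges:
        return False
    return True

def relation_and_eta(cliques, edges):
    """Enumerate R_CLP and the bound parameter eta of condition (T2)."""
    omega = len(cliques[0])
    R = {(x, a, y, b)
         for (x, Cx), (y, Cy) in product(enumerate(cliques), repeat=2)
         for a, b in product(range(omega), repeat=2)
         if consistent(Cx, a, Cy, b, edges)}
    eta = max(sum(1 for b in range(omega) if (x, a, y, b) in R)
              for (x, a, y, _) in R)
    return R, eta

# Two triangles sharing one vertex: C1 = {0,1,2}, C2 = {2,3,4}.
cliques = [(0, 1, 2), (2, 3, 4)]
edges = {frozenset(p) for C in cliques
         for p in product(C, C) if p[0] < p[1]}
R, eta = relation_and_eta(cliques, edges)
assert len(R) == 16 and eta == 2     # payoff bound: P <= 1/eta = 1/2
```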
Our objective is to optimise the payoff P_{R(G)} using only direct communication resources, such as classical bits or qubits, and also using shared resources, such as classical shared randomness or entanglement.

E. A concrete example

In this subsection, we provide an example on a particularly simple graph to help solidify the ideas of (S-)CCR and the conditional probability table provided in this section. Consider the graph G^(2,3) (see Fig. 3 for the vertex indexing), with n = 2 cliques of size ω = 3 that share a vertex.

Figure 3: In this example, the graph G^(2,3) consists of two cliques, C_1 (vertices v_0, v_1, v_2) and C_2 (vertices v_2, v_3, v_4), of size ω = 3.

The mapping of binary colourings to clique labellings for clique C_1 is given by:

f(v_0) = 1, f(v_1) = f(v_2) = 0 ⟹ g(C_1) = 0
f(v_1) = 1, f(v_0) = f(v_2) = 0 ⟹ g(C_1) = 1
f(v_2) = 1, f(v_0) = f(v_1) = 0 ⟹ g(C_1) = 2    (9)

Similarly, the mapping of binary colourings to clique labellings for clique C_2 is given by:

f(v_2) = 1, f(v_3) = f(v_4) = 0 ⟹ g(C_2) = 0
f(v_3) = 1, f(v_2) = f(v_4) = 0 ⟹ g(C_2) = 1
f(v_4) = 1, f(v_2) = f(v_3) = 0 ⟹ g(C_2) = 2    (10)

Then the relation R_CLP(G^(2,3)) induced by the clique labelling problem, with tuples (C_x, a, C_y, b), is given concretely by:

R_CLP(G^(2,3)) = {(C_1, 0, C_2, 1), (C_1, 0, C_2, 2), (C_1, 1, C_2, 1), (C_1, 1, C_2, 2), (C_1, 2, C_2, 0), (C_2, 1, C_1, 0), (C_2, 1, C_1, 1), (C_2, 2, C_1, 0), (C_2, 2, C_1, 1), (C_2, 0, C_1, 2), (C_1, 0, C_1, 0), (C_1, 1, C_1, 1), (C_1, 2, C_1, 2), (C_2, 0, C_2, 0), (C_2, 1, C_2, 1), (C_2, 2, C_2, 2)}    (11)

For this graph, the table of conditional probabilities P(b|C_x, C_y, a), for all compatible labellings a, b and cliques C_x, C_y, is the following. The entries marked with * are the free non-negative entries up to normalisation, and the entries with 0 or 1 are constrained by the consistency conditions for CCR.
This gives a table satisfying condition (T0). A table satisfying condition (T1) must in addition have positive numbers at all the entries marked with *. In this example, a table satisfying condition (T2) must have 0.5 at all the entries marked with *.

                    C_y = C_1           C_y = C_2
              b=0   b=1   b=2     b=0   b=1   b=2
  C_1  a=0     1     0     0       0     *     *
       a=1     0     1     0       0     *     *
       a=2     0     0     1       1     0     0
  C_2  a=0     0     0     1       1     0     0
       a=1     *     *     0       0     1     0
       a=2     *     *     0       0     0     1

In the next section, we present the bulk of our key results, first considering the scenario with only direct communication resources (Sec. IV), where the aim is to find the minimum dimension. Next, we consider the scenario where the direct communication resources are assisted by shared resources (i.e. public coins).

IV. ONE-WAY ZERO-ERROR CLASSICAL AND QUANTUM CCR AND S-CCR

In the setup described in the previous section, Alice and Bob have access to a noiseless one-way communication channel of limited capacity and arbitrary local sources of randomness (i.e. private coins), which are considered free resources here. In addition, they may have some pre-shared correlations, i.e. a public coin. We first consider the scenario where no pre-shared randomness is allowed between Alice and Bob. In Section IV A we calculate the classical resources necessary and sufficient to perfectly satisfy CCR for R_CLP(G^(n,ω)).

A. Classical and Quantum deterministic one-way Communication Complexity of R_CLP

Consider a graph G^(n,ω). In the scenario described in the previous section, if Bob's output has to be such that the tuple consisting of Alice's and Bob's inputs along with Bob's output belongs to the relation R_CLP(G^(n,ω)) (the consistency condition), then we show that there is no advantage of quantum resources over their classical counterpart: Alice can send an ω-level classical system, using which Bob can choose a deterministic output b conditioned on his input C_y and Alice's message.
We start by making the observation that both the classical and the quantum one-way communication complexity for R_CLP(G^(n,ω)) are bounded from below by the maximum clique size ω of the given graph. This follows from considering the scenario where both Alice and Bob are given the same clique: Bob must know Alice's input label (which takes as many values as the maximum clique size) to produce a consistent labelling. The task then reduces to showing that this amount of communication is sufficient for a classical protocol.

Theorem 1. Given a graph G^(n,ω), the classical deterministic one-way zero-error communication complexity of R_CLP(G^(n,ω)) is ⌈log_2 ω⌉ bits.

Proof. See Appendix B.

Thus, we observe no advantage in using quantum resources for communication over their classical analogue when considering CCR of R_CLP(G^(n,ω)). In Section IV B, we calculate the amount of classical communication necessary and sufficient for accomplishing S-CCR of R_CLP for some classes of graphs.

B. Classical deterministic one-way Strong Communication Complexity of R_CLP

Consider some orthogonality graph (OG) G^(n,ω). In this section, we consider graphs G^(n,ω) which satisfy the following conditions:

(G0): each vertex of the graph is part of at least one maximum clique of the graph;
(G1): for all v, v′ ∈ V belonging to two different maximum cliques, there exists u ∈ V such that u is adjacent to either v or v′ but not both.

Observation 1. Given a graph G^(n,ω) with maximum clique size ω satisfying conditions (G0)-(G1), there exists an induced subgraph consisting of at least two maximum cliques, say C_i and C_j, such that there is at least one label of C_i for which there are at least two different choices of labelling for the other clique C_j.

Given a clique C_i from the graph G^(n,ω), there are ω possible labellings of the maximum clique C_i, labelled {0, 1, ..., ω − 1}.
Each labelling of the clique assigns binary colour 1 to exactly one vertex of the maximum clique, while the rest of the vertices belonging to this clique are assigned binary colour 0. Given an orthogonality graph G^(n,ω) satisfying the properties listed above, we prove a tight lower bound on the classical resources required to win S-CCR for R_CLP(G^(n,ω)). This bound is calculated for the zero-error scenario, in which Bob should never output an outcome b such that the tuple consisting of Alice's and Bob's inputs, (C_x, a) and C_y respectively, and Bob's output does not belong to the relation R_CLP(G^(n,ω)), i.e., (C_x, a, C_y, b) ∉ R_CLP(G^(n,ω)).

Lemma 1. Given a graph G^(n,ω) satisfying (G0)-(G1), it is necessary and sufficient to communicate a |V|-level classical system, where |V| is the cardinality of the set of vertices of the graph, to perform the distributed relation reconstruction task, i.e. S-CCR for the relation R_CLP(G^(n,ω)).

Proof. Before the game begins, Alice and Bob construct the table M of conditional probabilities P(b|C_x, C_y, a), which has nω rows and nω columns. Upon receiving (C_x, a), if Alice communicates the relevant row to Bob, they can reconstruct the relation. Therefore, we have a trivial upper bound of nω on the dimension of the required classical system. The deterministic classical strategy employed for Theorem 1 cannot reconstruct the graph, since the conditional probability table must contain non-zero entries corresponding to all events (C_x, a, C_y, b) ∈ R_CLP(G^(n,ω)); therefore, we cannot use the compression technique used before. Nonetheless, observe that if two rows of the probability table can be made identical while still satisfying the consistency condition, Alice and Bob can pre-assign them the same communication message. In the table, there is redundancy whenever the same vertex shows up in different cliques.
For instance, if v_k is in both maximum cliques C_i and C_j, then the rows corresponding to (C_x = C_i, a) and (C_x = C_j, a′), where the labels a and a′ for the cliques C_i and C_j respectively colour the vertex v_k as 1, can be assigned the same entries and can therefore be encoded in the same message. For such a vertex v_k, we have (C_x = C_i, a, C_y = C_j, b = a′), (C_x = C_j, a′, C_y = C_i, b = a), (C_x = C_j, a′, C_y = C_j, b = a′), (C_x = C_i, a, C_y = C_i, b = a) ∈ R_CLP(G^(n,ω)). Also, for any other C_y (≠ C_i, C_j), (C_x = C_i, a, C_y, b) ∈ R_CLP(G^(n,ω)) ⟹ (C_x = C_j, a′, C_y, b) ∈ R_CLP(G^(n,ω)). Thus, the entries in the table M corresponding to the rows (C_x = C_i, a) and (C_x = C_j, a′) can be assigned identically (in particular the non-zero entries) while guaranteeing perfect relation reconstruction without violating the consistency condition. The entries that are necessarily zero in one of the rows are also zero in the other row. Therefore, Alice and Bob can remove all redundant rows in this manner and end up with an encoding based on the compressed table, which now has |V| distinct rows, where each row corresponds to one vertex. Sufficiency of communicating a |V|-level classical system follows trivially, since it allows Alice to send all information about her input. The relation |V| ≤ nω is saturated if all the maximum cliques of the given graph are disconnected.

Now we prove the necessity of a |V|-level classical system to achieve perfect S-CCR for R_CLP(G^(n,ω)). For every vertex v in a clique C_i, i ∈ {1, ..., n}, there is an input (C_i, a) for Alice corresponding to this vertex, where the label a assigns colour 1 to v and the rest of the vertices in the clique are assigned 0; this is due to condition (G0). For any clique C_i, each of Alice's inputs (C_i, a), a ∈ {0, ..., ω − 1}, must be encoded with a different message alphabet.
This is because Bob needs to guess Alice's input clique label exactly whenever his input is C_y = C_i. Now, for any two vertices v, v′ that belong to two different cliques, say v ∈ C_i and v′ ∈ C_j (with i ≠ j), there exists a clique C_k and a vertex u ∈ C_k (where k may be i, j or some other index) such that u is orthogonal to exactly one of these vertices (say v, w.l.o.g.); this is due to condition (G1). Let Alice's inputs corresponding to v and v′ be (C_x = C_i, a) and (C_x = C_j, a′) respectively, and let Bob's input be C_y = C_k. For these rounds, P(b|C_x = C_i, a, C_y = C_k) = 0 and P(b|C_x = C_j, a′, C_y = C_k) > 0, where Bob's output label b for the clique C_y = C_k assigns 1 exactly to the vertex u. Thus the inputs corresponding to every pair of vertices that do not belong to the same clique must be encoded with different message alphabets to obtain a non-zero payoff. Since there are |V| vertices, the classical message must be encoded in a system of dimension ≥ |V|. Thus communication of a |V|-level classical system is necessary for reconstruction of the relation R_CLP(G^(n,ω)).

Private coins do not improve the protocol: Note that randomising deterministic protocols with communication of less than a |V|-level classical system does not allow accomplishing both (T0) and (T1) simultaneously. To see this more clearly, consider a convex combination of deterministic encodings of Alice for protocols with communication of a (|V| − 1)-level classical system. In each such deterministic encoding, for any clique C_i, each of Alice's inputs (C_i, a), a ∈ {0, ..., ω − 1}, must be encoded with a different message alphabet. Also, such an encoding will encode some (C_x = C_i, a) and (C_x = C_j, a′) in the same message, where the vertex v ∈ C_i and the vertex v′ (≠ v) ∈ C_{j(≠i)}. Here the labels a and a′ assign 1 to v in C_i and to v′ in C_j respectively, and the rest of the vertices in these cliques are assigned 0.
Individually, each of these encodings will fail at relation reconstruction. Furthermore, since Alice and Bob do not have access to pre-shared randomness, Bob is not aware of Alice's choice of encoding in a given round. Thus, Bob cannot use a decoding that is correlated with Alice's encoding strategy in a given round. If Bob tries to satisfy the consistency conditions, then he will not be able to assign non-zero probability to all the events (C_x, a, C_y, b) ∈ R_CLP(G^(n,ω)).

As we showed in this section, the amount of classical communication required for accomplishing zero-error S-CCR for R_CLP(G^(n,ω)) scales linearly with the number of vertices of the graph. In the next section, we show a tight lower bound on the amount of quantum communication necessary under the same constraints, when no pre-shared randomness is allowed between Alice and Bob. We also show that there exists an unbounded gap between quantum and classical resources when no public coin is pre-shared between the two players. This separation is observed for a subclass of the graphs considered in this section.

C. Unbounded quantum advantage in one-way Strong Communication Complexity of R_CLP

Lemma 2. Given a graph G^(n,ω) satisfying (G0)-(G1) that has a faithful orthogonal range in dimension d_C, it is necessary and sufficient to communicate a d_C-level quantum system to perform the distributed relation reconstruction task perfectly.

Proof. The strategy showing that a d_C-level quantum system is sufficient to win CCR and S-CCR for the relation R_CLP(G^(n,ω)), for a graph G^(n,ω) which has an orthogonal representation over the complex field in dimension d_C, is straightforward compared to the classical strategy.
Alice has access to (C_x, a) for a graph that has a binary colouring and an orthonormal representation over C in dimension d_C, which provides a measurement associated with the clique C_x and an outcome associated with the label a given the clique C_x. Alice uses the measurement associated with the clique C_x to prepare a qudit in the state associated with the label a and sends the qudit to Bob. Bob performs the measurement associated with his clique C_y and obtains an outcome that he outputs as his label b. The quantum strategy guarantees consistent labelling of maximum cliques, stated equivalently as R_CLP(G^(n,ω)), due to the orthonormal representation.

A quantum system of dimension d_C is also necessary: a quantum system with dimension less than d_C will not be able to satisfy both conditions for S-CCR given the relation R_CLP(G^(n,ω)). This can be proven in the following way. Let Alice encode the classical input (C_x, a) in a quantum message system ρ_{C_x,a} ∈ D(C^{d′}), d′ ≤ d_C, and let Bob perform an ω-outcome POVM {M^j_{C_y}}_{j=0}^{ω−1} on the message qudit when his input is C_y. Whenever Bob's input is C_y = C_x, his output b, in order to satisfy the consistency condition, must be the same as a:

$$P(b|C_x, a, C_y = C_x) = \mathrm{Tr}[\rho_{C_x,a}\, M^b_{C_y}] = \delta_{a,b} \quad \forall\, C_x, a \qquad (12)$$

Thus, the encoding is ρ_{C_x,a} = |ψ_{C_x,a}⟩⟨ψ_{C_x,a}| with |ψ_{C_x,a}⟩ ∈ C^{d_C}, and M^j_{C_y} = |ψ_{C_y,j}⟩⟨ψ_{C_y,j}| with |ψ_{C_y,j}⟩ ∈ C^{d_C}. So the dimension of Alice's encoded quantum message must be d′ = d_C, and Bob should perform an ω-outcome projective measurement corresponding to his input.

Now consider a situation where a vertex v is shared between the cliques C_x and C_{x′}. Let the label a of clique C_x and the label a′ of C_{x′} be such that f_{C_x,a}(v) = 1 and f_{C_{x′},a′}(v) = 1. Now, if Alice's input is (C_x, a) and Bob's input is C_y = C_{x′}, then Bob's output b must equal a′ in order to satisfy the consistency condition:
$$P(b|C_x, a, C_y = C_{x′}) = \mathrm{Tr}[\rho_{C_x,a}\, M^b_{C_{x′}}] = \delta_{b,a′} \qquad (13)$$

Thus, ρ_{C_x,a} = |ψ_{C_x,a}⟩⟨ψ_{C_x,a}| = M^{a′}_{C_{x′}} = |ψ_{C_{x′},a′}⟩⟨ψ_{C_{x′},a′}|. Similar results hold if the cliques share more than one vertex. Now consider a situation where a vertex v in clique C_x is orthogonal to a vertex u in C_{x′}. Let the label a of clique C_x and the label a′ of C_{x′} be such that f_{C_x,a}(v) = 1 and f_{C_{x′},a′}(u) = 1. Now, if Alice's input is (C_x, a) and Bob's input is C_y = C_{x′}, then Bob's output b must never equal a′, in order to satisfy the consistency condition:

$$P(b = a′|C_x, a, C_y = C_{x′}) = \mathrm{Tr}[\rho_{C_x,a}\, M^{a′}_{C_{x′}}] = 0 \qquad (14)$$

Thus, ρ_{C_x,a} = |ψ_{C_x,a}⟩⟨ψ_{C_x,a}| is orthogonal to the support of the projector M^{a′}_{C_{x′}} = |ψ_{C_{x′},a′}⟩⟨ψ_{C_{x′},a′}|. Similar relations hold if more than one vertex of the clique is individually orthogonal to multiple vertices of another clique. For any other input of Alice and Bob that is not constrained by the consistency condition, the following should hold in order to obtain a non-zero payoff:

$$P(b|C_x, a, C_y) = \mathrm{Tr}[\rho_{C_x,a}\, M^b_{C_y}] \neq 0 \qquad (15)$$
$$\implies |\langle \psi_{C_x,a}|\psi_{C_y,b}\rangle|^2 \neq 0 \qquad (16)$$

Thus, any quantum strategy involving a d = d_C-level quantum system which satisfies the consistency condition and gives a non-zero payoff corresponds to a faithful orthogonal representation of the graph in dimension d_C. Alice's encoding ρ_{C_x,a} corresponds to the vector representation of the vertex v in clique C_x for which f_{C_x,a}(v) = 1. If his input is C_y, Bob's decoding strategy consists in performing the projective measurement {M^j_{C_y}}_{j=0}^{d_C−1}, where the projectors correspond to the vector representations of the vertices in clique C_y. This concludes the proof.

Theorem 2. There exists a class of graphs for which the separation between the one-way classical and quantum communication required for zero-error reconstruction in the S-CCR_CLP induced by these graphs can be unbounded.

Proof. Let us consider a graph G^(n,ω).
Lemma 1 tells us that a classical protocol that achieves zero-error reconstruction for this graph must communicate ⌈log_2 nω⌉ bits. On the other hand, Lemma 2 implies that protocols using quantum resources can achieve the same by communicating ⌈log_2 ω⌉ qubits, provided the graph G^(n,ω) has orthogonal range d_R = ω. According to Lovász's theorem (Proposition 1) [23], a faithful orthogonal representation of the graph G^(n,ω) exists with d_C = ω, since it is necessary to remove (nω − ω) vertices from the complementary graph Ḡ^(n,ω) to make it completely disconnected. It also follows from Eq. (1) that for the graph G^(n,ω) the faithful orthogonal range over the complex field is d_C = ω. Thus one can obtain such a faithful orthogonal representation of the graph G^(n,ω) with orthogonal range d_C = ω by considering n different orthonormal bases in C^ω. The separation between classical (⌈log_2 nω⌉ bits) and quantum (⌈log_2 ω⌉ qubits) communication can be made unbounded by taking n large.

Given any graph G^(n,ω) with orthogonal range d_C = ω and satisfying conditions (G0)-(G1), the maximum payoff P achievable by a direct quantum communication resource of operational dimension ω is connected to the optimal faithful orthogonal representation of the graph with orthogonal range d_C. To see this, one can consider two extreme cases. First, suppose the given graph G^(n,ω) has a faithful orthogonal representation in dimension d_C = ω; then the maximum payoff of the quantum strategy is given by the minimum overlap of the vectors corresponding to any two disconnected vertices of the graph (following the same protocol as in Lemma 2). Second, if the graph G^(n,ω) has a faithful orthogonal representation only in dimension d_C > ω, the maximum payoff achievable by communicating an ω-dimensional quantum system is zero (see Section V B for an implication of this observation).
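The correspondence between the Lemma 2 strategy and a faithful orthogonal representation can be checked numerically on the two-triangle graph G^(2,3) of Sec. III E. The representation below is one choice of ours (any faithful representation in dimension ω = 3 would do); it satisfies (T0) and (T1) with a single qutrit and attains payoff 1/2, the minimum overlap over non-adjacent vertices.

```python
import numpy as np

# One possible faithful orthogonal representation (our own choice) of the
# two-triangle graph G(2,3) in dimension 3: C1 maps to the standard basis
# and C2 shares the vector of the common vertex v2.
s = 1 / np.sqrt(2)
rep = {0: np.array([1.0, 0.0, 0.0]),   # v0
       1: np.array([0.0, 1.0, 0.0]),   # v1
       2: np.array([0.0, 0.0, 1.0]),   # v2 (shared)
       3: np.array([s, s, 0.0]),       # v3
       4: np.array([s, -s, 0.0])}      # v4
cliques = [(0, 1, 2), (2, 3, 4)]

def prob(x, a, y, b):
    """P(b | C_x, a, C_y) for the Lemma-2 protocol: Alice prepares the
    state of her 1-coloured vertex, Bob measures his clique's basis."""
    psi = rep[cliques[x][a]]
    return float(np.dot(rep[cliques[y][b]], psi) ** 2)

# (T0): forbidden tuples have probability zero, e.g. (C1, 0, C2, 0).
assert abs(prob(0, 0, 1, 0)) < 1e-12
# Tuples forced via the shared vertex v2 have probability one.
assert abs(prob(0, 2, 1, 0) - 1.0) < 1e-12
# (T1): the free entries all equal 1/2, so a single qutrit attains the
# algebraic bound P <= 1/eta = 1/2 for this graph.
assert abs(prob(0, 0, 1, 1) - 0.5) < 1e-12
assert abs(prob(0, 0, 1, 2) - 0.5) < 1e-12
```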
So, keeping in mind the correspondence between the quantum strategy and a faithful orthogonal representation of the graph G (satisfying (G0)-(G1)), one can rephrase the maximum payoff (Eq. (5)) with communication of a d = ω-dimensional quantum system as an optimisation over the faithful orthogonal representations of the graph G with orthogonal range ω over the complex field, i.e.

$$P^{max}_{Q_\omega}(G) = \min_{(C_x,a,C_y,b)\in R_{CLP}(G)} P(b|C_x, C_y, a) \qquad (17)$$
$$= \max_{FOR(\mathbb{C}^\omega)}\ \min_{(C_x,a,C_y,b)\in R_{CLP}(G)} \mathrm{Tr}[\Pi^{C_x}_a \Pi^{C_y}_b] \qquad (18)$$
$$= \max_{FOR(\mathbb{C}^\omega)}\ \min_{(i,j)\notin E} |\langle v(i), v(j)\rangle|^2 \qquad (19)$$

where FOR(C^ω) denotes the set of all faithful orthogonal representations with range ω over the complex field. This relation connects a property of the graph G (on the right) to an operational quantity (on the left).

D. Quantum advantage in one-way Strong Communication Complexity of R_CLP for other graphs

In this section we consider the class of orthogonality graphs (G^(n,ω), V, E) with the following properties:

(G0): each vertex of the graph is part of at least one maximum clique of the graph;
(G1): for all v, v′ ∈ V belonging to two different maximum cliques, there exists u ∈ V such that u is adjacent to either v or v′ but not both;
(G2): G^(n,ω) has orthogonal range d less than the order of the graph, d < |V|.

The class of graphs considered in Theorem 2 satisfies these properties. This can be seen because, for these graphs, any two distinct maximum cliques must each have at least two vertices that are not shared between them. However, there can be a wider class of graphs with the aforementioned properties. For the graphs G^(n,ω) considered in this section, the orthogonal range satisfies d ≤ |V|. Note that we already know that for graphs satisfying (G0)-(G1) the classical strong communication complexity is the order of the graph, i.e. |V| (Lemma 1).
Thus, graphs having orthogonal range d strictly less than the order of the graph entail an advantage of using quantum communication (following the same protocol described in the proof of Lemma 2) over classical communication when considering the one-way strong communication complexity of R CLP . For example, in the next subsection, we will discuss in detail the class of well-known Paley graphs, which have an orthogonal range of almost half the order of the graph (see Theorem 3).

Paley graphs

In the discussion above, we considered graphs with the properties (G0)-(G2). A class of graphs that satisfy these conditions are Paley graphs (see Observation 2 and Theorem 3). Paley graphs G Paley are simple undirected graphs whose vertices denote the elements of a finite field F q (of prime power order q = 4k + 1 for integer k), and whose edges denote that the corresponding elements differ by a quadratic residue. Paley graphs have the interesting property that they are vertex-transitive, self-complementary graphs, which means that by Lovász's original result, the value of θ(G Paley ) can be computed exactly to be θ(G Paley ) = |V(G Paley )| 1/2 = √ q. Some simple Paley graphs are shown in Fig. 4. Moreover, for every pair of different vertices v, v ′ there exists a third vertex u that is adjacent to exactly one of the vertices v or v ′ [25]. This implies that condition (G1) is satisfied by Paley graphs.

Quantum advantage in S-CCR for Paley graphs

In the following, we will show that there exists a FOR for Paley graphs with q vertices in dimension (q + 1)/2. We also show that the quantum protocol achieves the maximum payoff (2/( √ q + 1)) 2 , which can be obtained by using a (q + 1)/2-level quantum system.
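The structural facts about Paley graphs quoted above (regular of degree (q − 1)/2, undirected because −1 is a quadratic residue when q ≡ 1 mod 4, and self-complementary) can be verified numerically. The following sketch uses q = 13 as an illustrative choice and is not part of the paper:

```python
def quadratic_residues(q):
    """Nonzero quadratic residues modulo q."""
    return {(x * x) % q for x in range(1, q)}

def paley_graph(q):
    """Ordered edge set of the Paley graph on Z_q (q prime, q = 1 mod 4):
    i ~ j iff (i - j) is a nonzero quadratic residue modulo q."""
    res = quadratic_residues(q)
    return {(i, j) for i in range(q) for j in range(q)
            if i != j and (i - j) % q in res}

q = 13
res = quadratic_residues(q)
E = paley_graph(q)

# Every vertex has degree (q - 1)/2 = 6.
deg = {v: sum(1 for (i, j) in E if i == v) for v in range(q)}
assert all(d == (q - 1) // 2 for d in deg.values())

# q = 1 (mod 4) makes -1 a quadratic residue, so the edge relation is symmetric.
assert (q - 1) % q in res and all((j, i) in E for (i, j) in E)

# Self-complementarity: multiplying vertices by a non-residue maps E onto
# the edge set of the complement graph.
c = next(x for x in range(2, q) if x not in res)
complement = {(i, j) for i in range(q) for j in range(q)
              if i != j and (i, j) not in E}
assert {((c * i) % q, (c * j) % q) for (i, j) in E} == complement
```

The same checks pass for any prime q ≡ 1 (mod 4).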
We note that θ(G Paley ) can be computed using the semi-definite programming formulation

θ(G Paley ) = max M=(M i,j ) q i,j=1 ∑ q i,j=1 M i,j   s.t. M ⪰ 0, ∑ i M i,i = 1.   (20)

Let Γ Paley denote the automorphism group of G Paley , i.e., the set of all permutations σ that preserve the adjacency structure of the graph. Suppose M is an optimal solution point for the optimisation in (20); then M * = (1/|Γ Paley |) ∑ σ∈Γ Paley σ T Mσ also satisfies the constraints of positive semi-definiteness, trace one, and the sum over entries being equal to θ(G Paley ). Since G Paley is vertex-transitive, the sum over permutations in Γ Paley goes over transpositions between every pair of vertices, so that M * i,i = 1/q for all i ∈ [q]. M * is the Gram matrix of a set of vectors (each of norm 1/ √ q) forming an orthogonal representation of G Paley . Let us denote by S opt = {|u 1 ⟩, . . . , |u q ⟩} the corresponding set of normalised vectors forming the optimal solution to the Lovász-theta optimisation, and by M opt = qM * the corresponding Gram matrix. We see that

θ(G Paley ) = ∑ q i,j=1 (1/q) ⟨u i |u j ⟩.   (21)

In other words, we have ∑ q i,j=1 ⟨u i |u j ⟩ = q 3/2 . By symmetry and the fact that every vertex in G Paley has degree (q − 1)/2, it also follows that ⟨u i |u j ⟩ = (q 3/2 − q)/(q(q − 1)/2) = 2/(q 1/2 + 1) for i ≁ j. Let us now compute the dimensionality of the vectors |u i ⟩ in S opt that form the optimal representation giving rise to θ(G Paley ). This quantity is the dimension of the vectors giving rise to the faithful representation S opt , traditionally denoted ξ * (G Paley ). Theorem 3. The dimension of the optimal representation of G Paley that gives rise to θ(G Paley ) is (q + 1)/2. Proof. We are looking to compute the dimension of the faithful representation that gives the optimal solution to the Lovász-theta optimisation of G Paley , i.e., we want to find the minimum ξ * (G Paley ) such that |u i ⟩ ∈ R ξ * (G Paley ) for the vectors |u i ⟩ ∈ S opt .
This quantity is given by the rank of the Gram matrix M opt of the set of (normalised) vectors S opt . We have that

(M opt ) k,l = 1 if k = l;  0 if k ∼ l;  2/(q 1/2 + 1) if (k ≁ l) ∧ (k ≠ l).

In other words, M opt = I + (2/(q 1/2 + 1)) A(Ḡ Paley ), where A(Ḡ Paley ) denotes the adjacency matrix of the complement of G Paley (which is isomorphic to G Paley since the graph is self-complementary). To compute rank(M opt ), we calculate its spectrum and show that it has exactly (q + 1)/2 non-zero eigenvalues, so that rank(M opt ) = (q + 1)/2. To do this, we compute the spectrum of A(Ḡ Paley ), which equals that of A(G Paley ) by self-complementarity. Following [26], let us define a matrix K based on the quadratic characters χ(k − l),

χ(k − l) = 1 if (k − l) is a quadratic residue modulo q;  0 if k = l;  −1 otherwise,   (22)

by K k,l = χ(k − l). By the properties of the characters that χ(xy) = χ(x)χ(y) and ∑ q−1 x=0 χ(x) = 0, we have the following result: K 2 = qI − J, where J is the all-ones matrix. Proof. We want to prove that the diagonal entries of K 2 are equal to (q − 1) and the off-diagonal entries are equal to −1. The diagonal entries are given by the squared norms of the columns of K, which have one zero entry, (q − 1)/2 entries of value 1 (corresponding to the quadratic residues modulo q and the degree of each vertex in G Paley ) and (q − 1)/2 entries of value −1. Therefore, the squared norms of the columns, and hence the diagonal entries of K 2 , are equal to q − 1. The off-diagonal entries (K 2 ) k,l are given by ∑ q−1 j=0 χ(k − j)χ(l − j) = ∑ q−1 j ′ =0 χ(j ′ )χ((l − k) + j ′ ). Since χ(0) = 0, the term for j ′ = 0 vanishes and we have ∑ q−1 j ′ =1 χ(j ′ )χ((l − k) + j ′ ). Since χ(j ′ ) ∈ {±1} for j ′ ≠ 0, the sum reduces to ∑ q−1 j ′ =1 χ((l − k) + j ′ )/χ(j ′ ) = ∑ q−1 j ′ =1 χ((l − k)/j ′ + 1), where we used the property of the characters that χ(xy) = χ(x)χ(y).
We finally obtain ∑ q−1 j ′ =1 χ((l − k)/j ′ + 1) = ∑ q−1 j ′′ =0 χ(j ′′ ) − χ(1) = 0 − 1 = −1, where we used the property that as j ′ ranges over [q − 1], the argument (l − k)/j ′ + 1 ranges over the elements {0, . . . , q − 1} \ {1}. Therefore, we obtain the off-diagonal entries to be −1, thus showing that K 2 = qI − J. We also see by direct term-by-term comparison that the adjacency matrix of the Paley graph can be written as

A(G Paley ) = (1/2) (K + J − I).   (23)

We therefore obtain that

A(G Paley ) 2 = ((q − 1)/4) (J + I) − A(G Paley ).   (24)

Now observe that the all-ones vector |j⟩ is an eigenvector of A(G Paley ), and consider another eigenvector |e λ ⟩ corresponding to eigenvalue λ ≠ 0. Since |e λ ⟩ is orthogonal to |j⟩, we have that J|e λ ⟩ = 0, so that

A(G Paley ) 2 |e λ ⟩ = λ 2 |e λ ⟩ = ((q − 1)/4 − λ) |e λ ⟩,   (25)

or in other words

λ 2 + λ − (q − 1)/4 = 0, =⇒ λ = (1/2)(−1 ± q 1/2 ).   (26)

Thus, the spectrum of A(Ḡ Paley ) (equal to that of A(G Paley )) and the corresponding degeneracies are: eigenvalue (q − 1)/2 with multiplicity 1; eigenvalue (1/2)(−1 + q 1/2 ) with multiplicity (q − 1)/2; and eigenvalue (1/2)(−1 − q 1/2 ) with multiplicity (q − 1)/2. As we have seen, the Gram matrix M opt from the optimal representation giving rise to θ(G Paley ) is given by

M opt = I + (2/(q 1/2 + 1)) A(Ḡ Paley ).   (27)

Therefore, the spectrum of M opt consists of exactly (q + 1)/2 non-zero eigenvalues: eigenvalue √ q with multiplicity 1; eigenvalue 2 √ q/(1 + √ q) with multiplicity (q − 1)/2; and eigenvalue 0 with multiplicity (q − 1)/2. Therefore, we obtain that rank(M opt ) = ξ * (G Paley ) = (q + 1)/2. We can even go further and note that since the adjacency matrix of the Paley graph is a circulant matrix (the k-th row is a cyclic permutation of the 1-st row with offset k), the eigenvectors of the adjacency matrix A(G Paley ) (and therefore of the Gram matrix M opt ) are the Fourier vectors

|e λ ⟩ = (1/ √ q) (1, ω λ , ω 2λ , . . . , ω (q−1)λ ) T ,   (28)

with λ = 0, 1, . . . , q − 1, where ω = exp(2πi/q) is a primitive q-th root of unity. Note that |e 0 ⟩ = |j⟩ is the all-ones vector.
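The algebraic facts just derived can be cross-checked numerically for a small instance. The sketch below (q = 13, an illustrative choice, not from the paper) verifies K² = qI − J, the spectrum of the adjacency matrix, rank(M opt) = (q + 1)/2, and the entry sum giving θ(G Paley) = √q:

```python
import numpy as np

q = 13
residues = {(x * x) % q for x in range(1, q)}

def chi(d):
    """Quadratic character of Eq. (22)."""
    return 0 if d % q == 0 else (1 if d % q in residues else -1)

# Character matrix K and the adjacency matrix A of the Paley graph, Eq. (23).
K = np.array([[chi(k - l) for l in range(q)] for k in range(q)], dtype=float)
J, I = np.ones((q, q)), np.eye(q)
A = (K + J - I) / 2

# K^2 = qI - J.
assert np.allclose(K @ K, q * I - J)

# Spectrum: (q-1)/2 once, (-1 +/- sqrt(q))/2 with multiplicity (q-1)/2 each.
eigs = np.sort(np.linalg.eigvalsh(A))
expected = np.sort([(q - 1) / 2]
                   + [(-1 + q ** 0.5) / 2] * ((q - 1) // 2)
                   + [(-1 - q ** 0.5) / 2] * ((q - 1) // 2))
assert np.allclose(eigs, expected)

# Gram matrix of the optimal representation, Eq. (27): the complement's
# adjacency matrix is J - I - A (the complement is isomorphic to G itself).
A_comp = J - I - A
M_opt = I + 2 / (q ** 0.5 + 1) * A_comp
assert np.linalg.matrix_rank(M_opt, tol=1e-8) == (q + 1) // 2  # rank = 7
assert np.isclose(M_opt.sum() / q, q ** 0.5)  # theta(G_Paley) = sqrt(q)
```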
We can then explicitly calculate that

M opt |e λ ⟩ = [ ( √ q − 1)/( √ q + 1) + (1/( √ q + 1)) ∑ l: l≠1, (1−l) a quad. res. mod q ω (l−1)λ − (1/( √ q + 1)) ∑ l: l≠1, (1−l) not a quad. res. mod q ω (l−1)λ ] |e λ ⟩.   (29)

We can then explicitly compute for prime q not only the eigenvalues of M opt as above, but also see that the eigenvalue √ q corresponds to the eigenvector |j⟩ = |e 0 ⟩, the eigenvalues 2 √ q/(1 + √ q) correspond to the eigenvectors |e λ ⟩ for λ being the remaining quadratic residues modulo q, and the zero eigenvalues correspond to the eigenvectors |e λ ⟩ for λ being the quadratic non-residues modulo q. Consider the payoff function defined for a graph G as

P max Q d (G) = max FOR d (G) min (i,j)∉E(G) |⟨v i |v j ⟩| 2 ,   (30)

where FOR d (G) denotes the set of faithful orthogonal representations of G in dimension d. Let us compute this function for the class of Paley graphs. Firstly, we consider

P max Q d (G Paley ) ≤ max FOR(G Paley ) min (i,j)∉E(G Paley ) |⟨v(i)|v(j)⟩| 2 .   (31)

For (k, l) ∉ E(G Paley ), let S (k,l) denote a point in FOR(G Paley ) that achieves the maximum for the optimisation problem in (31) with the minimum being realised at (k, l) ∉ E(G). That is, S (k,l) = {|v (k,l) 1 ⟩, . . . , |v (k,l) q ⟩} with ⟨v (k,l) i |v (k,l) j ⟩ = 0 for (i, j) ∈ E(G Paley ) and |⟨v (k,l) k |v (k,l) l ⟩| 2 ≤ |⟨v (k,l) k ′ |v (k,l) l ′ ⟩| 2 for any (k ′ , l ′ ) ∉ E(G Paley ), (k ′ , l ′ ) ≠ (k, l). We claim that S (k,l) = S opt , that is, the set of vectors realizing the optimal value in the Lovász-theta optimisation. To this end, we claim that

|⟨v (k,l) k |v (k,l) l ⟩| ≤ 2/( √ q + 1).   (32)

For suppose that |⟨v (k,l) k |v (k,l) l ⟩| > 2/( √ q + 1). Then consider the Gram matrix M (k,l) formed by the set of normalised vectors in S (k,l) . We see that (1/q)M (k,l) also satisfies the constraints of positive semi-definiteness and trace one for the Lovász-theta optimisation in Eq. (20).
But if the minimum non-zero off-diagonal entry of (1/q)M (k,l) is larger than the minimum non-zero off-diagonal entry of the optimal matrix M * (with both matrices having all diagonal entries equal to 1/q), then we obtain that ∑ q i,j=1 (1/q) M (k,l) i,j > ∑ q i,j=1 (M * ) i,j = θ(G Paley ), which is a contradiction. Therefore, we must have that the optimal value of the payoff function is at most

|⟨v (k,l) k |v (k,l) l ⟩| 2 = (2/( √ q + 1)) 2 ,   (33)

with the optimum achieved by the set of vectors S opt in R (q+1)/2 that also, incidentally, achieve the optimum value of the Lovász theta of the graph G Paley .

E. Public coins

In the previous subsections, we considered the strong communication complexity of relations when a public coin or pre-shared randomness between Alice and Bob was not allowed. Here we consider correlations shared by the parties along with one-way direct communication resources. We find that there exist graphs for which a non-zero payoff under restricted classical communication implies the presence of shared correlation. In public coin-assisted communication complexity problems, usually the amount of communication necessary and/or sufficient is studied, and an unbounded amount of public coin is allowed to be shared between the players. Here, however, we allow for restricted direct communication, either quantum or classical, and compare the amount of shared randomness required to accomplish S-CCR when considering the relation R CLP (G (n,ω) ). We also compare the amount of quantum shared correlation with the classical shared randomness required when only a restricted amount of one-way classical communication is allowed in order to perform relation reconstruction for some specific graphs. In these cases, we show there is an unbounded gap between quantum and classical shared randomness.
Classical communication assisted by shared randomness

In Theorem 1 we showed that a ω-level classical message is necessary and sufficient for (T0), while in Lemma 1 we saw that a |V |-level classical message is necessary and sufficient for simultaneously satisfying (T0) and (T1) when considering the graph G (n,ω) . Here we consider the class of graphs G (n,ω) which satisfy the constraints (G0)-(G2) and have a faithful orthogonal representation in minimum dimension ω. We show that if we restrict classical communication to a ω-level classical message and allow shared randomness, then one can satisfy (T0) and (T1) and achieve the maximum payoff for G (n,ω) (see Observation 3). We then ask what the lower bound on the shared randomness would be for any graph to satisfy (T0), (T1) and achieve the maximum payoff. Observation 3. Given a graph G (n,ω) , the classical strategy with only classical communication for satisfying (T0) is based on Alice and Bob finding a suitable deterministic strategy, i.e. a table M, before the beginning of the game, expressed through a ω × nω table compressed from the nω × nω table of conditional probabilities p(b|C x , C y , a). In the shared randomness scenario, Alice and Bob prepare many such deterministic strategies (or tables), each of which satisfies consistent labelling of cliques (T0), before the game begins, and they index these tables. They use shared randomness to choose which table to use for a particular run of the game. Over multiple runs, they can satisfy (T1). Trivially, they could use shared randomness of the order of the total number of such deterministic strategies, each of which satisfies consistent labelling of the cliques (T0). For example, consider the graph shown in Fig. 3; we saw that one classical deterministic strategy was represented through Table VI in Appendix B. Similarly, Alice and Bob could prepare another such deterministic strategy, Table II, for the graph in Fig.
3. If Alice and Bob use a bit of unbiased shared randomness to choose between Table VI and Table II, they effectively are using the strategy given in Table III, which satisfies (T0) as well as (T1) and obtains the maximum payoff, since all the free entries * are filled with 0.5. The maximum payoff for this graph (Fig. 3) is P = 0.5.

Table II: Another deterministic strategy for the graph of Fig. 3.

            C 1              C 2
        b=0  b=1  b=2    b=0  b=1  b=2
C 1 a=0  1    0    0      0    1    0
    a=1  0    1    0      0    0    1
    a=2  0    0    1      1    0    0
C 2 a=0  0    0    1      1    0    0
    a=1  1    0    0      0    1    0
    a=2  0    1    0      0    0    1

Table III: The effective strategy obtained by mixing Table VI and Table II with one unbiased shared random bit.

            C 1              C 2
        b=0  b=1  b=2    b=0  b=1  b=2
C 1 a=0  1    0    0      0   0.5  0.5
    a=1  0    1    0      0   0.5  0.5
    a=2  0    0    1      1    0    0
C 2 a=0  0    0    1      1    0    0
    a=1 0.5  0.5   0      0    1    0
    a=2 0.5  0.5   0      0    0    1

Theorem 4. Given a graph G (n,ω) satisfying conditions (G0)-(G1), for a protocol using communication of a ω-level classical system, the lower bound on the amount of shared randomness required to perfectly accomplish distributed relation reconstruction is equal to the minimum amount of shared randomness required for the same task when considering another graph G (n,ω=2) with n disconnected maximum cliques.

Proof. The amount of shared randomness depends on the graph and can be loosely upper bounded by the total number of different classical deterministic strategies (or the total number of different tables of conditional probabilities that Alice and Bob can prepare while satisfying the constraints mentioned in Appendix B). We observe that among different graphs G with the same number n of maximum cliques of size ω, the graph in which all maximum cliques are disconnected requires the most shared randomness. On the other hand, graphs in which every clique shares the most vertices with other cliques require the least shared randomness, due to the least number of * entries in their conditional probability table (for example, in Table IV). We also know that the most vertices that any two cliques can share is ω − 2 in order to have an orthogonal representation in C ω (Proposition 1).
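The effect of one shared random bit described above can be checked directly. In the sketch below, `table_II` is Table II as given in the text, while `table_VI` is a stand-in reconstructed so that the uniform mixture reproduces Table III; the actual Table VI lives in the paper's Appendix B, so this particular matrix is an assumption:

```python
import numpy as np

# Rows: (C_x, a) for C_x in {C1, C2}, a in {0,1,2}; columns: (C_y, b) likewise.
# Table II from the text (a deterministic strategy satisfying (T0)):
table_II = np.array([
    [1, 0, 0,  0, 1, 0],
    [0, 1, 0,  0, 0, 1],
    [0, 0, 1,  1, 0, 0],
    [0, 0, 1,  1, 0, 0],
    [1, 0, 0,  0, 1, 0],
    [0, 1, 0,  0, 0, 1],
], dtype=float)

# Hypothetical stand-in for Table VI: another deterministic table with the
# same fixed entries, chosen so that the mixture reproduces Table III.
table_VI = np.array([
    [1, 0, 0,  0, 0, 1],
    [0, 1, 0,  0, 1, 0],
    [0, 0, 1,  1, 0, 0],
    [0, 0, 1,  1, 0, 0],
    [0, 1, 0,  0, 1, 0],
    [1, 0, 0,  0, 0, 1],
], dtype=float)

# One unbiased shared random bit chooses the table; the effective behaviour
# is the uniform mixture.
mixed = (table_II + table_VI) / 2

# Each block row remains a probability distribution ...
assert np.allclose(mixed[:, :3].sum(axis=1), 1)
assert np.allclose(mixed[:, 3:].sum(axis=1), 1)

# ... the graph-fixed entries stay deterministic, and every free entry is 0.5,
# so the minimum free entry (the payoff) reaches P = 0.5.
free = mixed[(mixed > 0) & (mixed < 1)]
assert free.size > 0 and np.allclose(free, 0.5)
```

The mixture matches Table III entry by entry.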
An example is provided in Fig. 5 for ω = 5 and n = 2.

Figure 5: Two graphs with (ω = 5, n = 2). The graph on the left, G 1 , has two cliques of size ω = 5 with ω − 2 = 3 vertices common between these cliques. The graph on the right, G 2 , consists of two disconnected cliques of size ω = 5.

To calculate the lower bound on shared randomness for a graph with n cliques of maximum clique size ω, we can calculate the shared randomness for a graph where any two maximum-size cliques share ω − 2 vertices, as such a graph saturates the lower bound. Such a graph has |V | = ω + 2(n − 1) vertices. We also observe that for such a graph, in the associated nω × nω conditional probability table with entries p(b|C x , C y , a), the number and structure of the free entries * is equivalent to that of a graph with n disconnected cliques of size ω = 2, and thus the number of classical deterministic strategies, and therefore the shared randomness required, is the same for these two graphs. For example, in the case of the graph shown in Fig. 3, we see that Table IV is the conditional probability table, which is also equivalent (in terms of *) to the conditional probability table for the graph G (n=2,ω=2) in Fig. 6.

Table IV: Conditional probability table for the graph of Fig. 3 (left), and the equivalent table for G (n=2,ω=2) of Fig. 6 (right).

            C 1              C 2
        b=0  b=1  b=2    b=0  b=1  b=2
C 1 a=0  1    0    0      0    *    *
    a=1  0    1    0      0    *    *
    a=2  0    0    1      1    0    0
C 2 a=0  0    0    1      1    0    0
    a=1  *    *    0      0    1    0
    a=2  *    *    0      0    0    1
                 ≡
            C 1       C 2
        b=0  b=1   b=0  b=1
C 1 a=0  1    0     *    *
    a=1  0    1     *    *
C 2 a=0  *    *     1    0
    a=1  *    *     0    1

Therefore we have shown that we can calculate the lower bound on the shared randomness required for a graph with n maximum cliques of size ω by calculating the shared randomness required for a graph with n maximum cliques of size ω = 2. We now provide the lower bounds on shared randomness required for the combined task (T0) and (T1).
Figure 6: A graph with n cliques of size ω sharing ω − 2 common vertices is equivalent (in terms of the number of classical deterministic strategies) to a graph with n disconnected cliques of size ω = 2.

Corollary 1. Given a graph G (n,ω) satisfying conditions (G0)-(G1) with faithful orthogonal range d R = ω, it is necessary (but may not be sufficient) to communicate a ω-level classical system assisted with maximal shared randomness with n inputs (i.e. (1/n) ∑ n i=1 |ii⟩⟨ii|) to accomplish S-CCR CLP (G (n,ω) ) perfectly.

Proof. By Theorem 4, to find the lower bound on the shared randomness required for a graph G (n,ω) to satisfy Graph Reconstruction (T0) and (T1), we calculate the shared randomness required for the same task when considering a graph G (n,ω=2) where any two maximum cliques are disconnected. For Graph Reconstruction of G (n,ω=2) , we require a convex combination of strategies such that for every (C x , a, C y≠x ) there is a table with P(b|C x , a, C y ) = 0 and a table with P(b|C x , a, C y ) = 1. Any classical strategy consists of filling the table of conditional probabilities such that every off-diagonal block matrix (C x , C y≠x ) of this table is either I 2 or σ x , where I 2 is the identity matrix and σ x is the Pauli-x operator (or the NOT operator). A set of n classical strategies that achieve reconstruction is the following: the i-th strategy (for i ∈ {2, · · · , n}) corresponds to the table where only the off-diagonal block matrix (C 1 , C i ) = σ x and the rest (C 1 , C j(≠i) ) = I 2 ; together with the all-identity table, this gives n strategies. Note that fixing the block matrices in the first row alone fixes the entire table if the amount of classical communication is restricted to 1 bit (see Appendix B). Note that taking each of the n deterministic classical strategies discussed earlier and their convex combinations yields a table of conditional probabilities P(b|C x , a, C y ), M, that leads to some non-zero payoff.
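The n-strategy construction in this proof can be simulated for small n: mixing the all-identity table uniformly with the n − 1 single-flip tables leaves no zero entry in any off-diagonal block, and the minimum entry, i.e. the payoff, is 1/n. An illustrative sketch (with n = 4, not the paper's code):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])  # sigma_x, the NOT operator

def strategy_table(n, flips):
    """Deterministic table for n disconnected 2-cliques: flips[x] says
    whether block (C_1, C_x) is I_2 (0) or sigma_x (1); every block
    (C_x, C_y) is then the XOR (C_1, C_x) + (C_1, C_y) mod 2."""
    return np.block([[X if flips[x] ^ flips[y] else I2 for y in range(n)]
                     for x in range(n)])

n = 4
# The all-identity table plus the n - 1 "flip only (C_1, C_i)" tables.
flip_vectors = [[0] * n] + [[int(x == i) for x in range(n)]
                            for i in range(1, n)]
mixture = sum(strategy_table(n, f) for f in flip_vectors) / n

# No off-diagonal block entry is zero (the relation is reconstructed),
# and the minimum such entry -- the payoff -- equals 1/n.
off_diag = np.concatenate([mixture[2 * x:2 * x + 2, 2 * y:2 * y + 2].ravel()
                           for x in range(n) for y in range(n) if x != y])
assert off_diag.min() > 0 and np.isclose(off_diag.min(), 1 / n)
```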
It is worth mentioning that the payoff for the above strategy is P = 1/n > 0, and since a non-zero payoff ensures relation reconstruction, we satisfy (T0) and (T1); however, this is not always the maximum achievable payoff for the graph under consideration. Before moving forward, we first introduce orthogonal arrays. Definition 7. An N × k array A with entries from a set S is called an orthogonal array OA(N, k, s, t) with s levels, strength t (∈ {0, 1, · · · , k}) and index λ if every N × t sub-array of A contains each t-tuple based on S exactly λ times as a row [27]. Orthogonal arrays have found interesting connections with absolutely maximally entangled states [28], multipartite entanglement [29,30], quantum error correcting codes [31], etc. Here, we will consider orthogonal arrays OA(N, k, s, t) with t = 2 and s = 2, for which the set of t-tuples is {00, 01, 10, 11}. Let T k be the minimum N for a fixed k such that OA(N = T k , k, s = 2, t = 2) is an orthogonal array with t-tuples {00, 01, 10, 11}. T n is related to the lower bound on the amount of shared randomness necessary and sufficient for accomplishing the S-CCR of R CLP (G (n,ω) ) with maximum payoff when log ω bits of classical communication are allowed from Alice to Bob. Corollary 2. Given a graph G (n,ω) satisfying (G0)-(G1) with FOR in minimum dimension ω, it is necessary (but may not be sufficient) to communicate a ω-level classical system assisted by shared randomness with 2 inputs (for n = 2) and log 2 T n−1 inputs (for n > 2) in order to satisfy S-CCR with maximum payoff. By Theorem 4, to find the lower bound on the shared randomness required for a graph G (n,ω) to achieve the maximum payoff (T2), we calculate the shared randomness required to achieve P = 0.5 for a graph G (n,ω=2) with n disconnected maximum-size cliques. Similar to the relation reconstruction problem, we will again fill every off-diagonal block (C x , C y ) with I 2 or σ x , where I 2 is the 2 × 2 identity matrix.
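Definition 7 is easy to check mechanically for small arrays. The sketch below (illustrative, not from the paper) verifies that the standard OA(4, 3, 2, 2) has strength 2, so four rows suffice for k = 3 columns (T 3 = 4), under the interpretation used below in which column j records whether the block (C 1 , C j+1 ) is I 2 (0) or σ x (1):

```python
from itertools import combinations, product

def is_OA_strength2(rows):
    """True iff every pair of columns of `rows` (entries 0/1) contains each
    of the 2-tuples 00, 01, 10, 11 equally often (strength-2 OA)."""
    n_rows, k = len(rows), len(rows[0])
    if n_rows % 4:           # each pair must appear lambda = N/4 times
        return False
    lam = n_rows // 4
    for c1, c2 in combinations(range(k), 2):
        counts = {p: 0 for p in product((0, 1), repeat=2)}
        for r in rows:
            counts[(r[c1], r[c2])] += 1
        if any(v != lam for v in counts.values()):
            return False
    return True

# OA(4, 3, 2, 2): four rows, three columns, every column pair balanced.
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert is_OA_strength2(oa)

# Two rows cannot balance all four 2-tuples, so they are not enough.
assert not is_OA_strength2([(0, 0, 0), (1, 1, 1)])
```

Mixing the four deterministic tables indexed by the rows of `oa` gives every free entry weight 0.5, as required below.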
If we consider any such deterministic strategy, then (C x , C y ) = (C 1 , C x ) ⊕ 2 (C 1 , C y ), where I 2 → 0 and σ x → 1. Unlike relation reconstruction for G (n,ω=2) , here we require a convex mixture of more deterministic classical strategies. This is because we want each free entry in every off-diagonal block (C x , C y ) of table M to be 0.5. This is possible if we have a uniform convex mixture of deterministic tables where half of them have (C x , C y ) = σ x and the rest have (C x , C y ) = I 2 , such that the effective weight for each free entry * is 0.5. For n = 2, a convex combination of two tables, one with (C 1 , C 2 ) = I 2 and the other with (C 1 , C 2 ) = σ x , gives payoff P = 0.5. For n = 3, we need four tables, i.e. (C 1 , C 2 ) = I 2 or σ x and (C 1 , C 3 ) = I 2 or σ x . Note that any collection with fewer than these four tables will contain an unequal number of I 2 and σ x corresponding to some off-diagonal block matrix (C x , C y ). For n ≥ 2, by a similar argument, we need a minimal collection of deterministic tables such that, corresponding to every pair of block matrices of the form (C 1 , C j≠1 ) and (C 1 , C j ′ ≠1 ), there is an equal number of tables where (C 1 , C j ) = I 2 and (C 1 , C j ′ ) = I 2 ; (C 1 , C j ) = I 2 and (C 1 , C j ′ ) = σ x ; (C 1 , C j ) = σ x and (C 1 , C j ′ ) = I 2 ; and (C 1 , C j ) = σ x and (C 1 , C j ′ ) = σ x . This is exactly the orthogonal array problem discussed above if we substitute I 2 → 0 and σ x → 1. Thus, for a graph G (n,ω=2) with n (> 2), the players Alice and Bob need shared randomness with T n−1 inputs to get the maximum payoff P = 0.5 when they are allowed to communicate a ω-level classical system. This completes the proof. Now we present a result which says that 1-bit classical communication, when assisted by a finite amount of shared randomness, can be arbitrarily powerful compared to quantum direct communication resources. Theorem 5.
There exist graphs G (n,ω) satisfying (G0)-(G2) and having a faithful orthogonal representation in minimum dimension ω, such that for a fixed dimension of a classical or quantum channel, the assistance of public coins is necessary to perform S-CCR CLP (G (n,ω) ) with optimal payoff.

Figure 7: A graph with n disconnected cliques of maximum size 2 (cliques C 1 , . . . , C n , each on a vertex pair {v 0 , v 1 }). For this graph, the maximum payoff achieved when 1 qubit is communicated from Alice to Bob is P ≤ 1/2. However, the maximum payoff can be achieved with a finite public coin and 1 bit of communication. For n = 4, one pre-shared maximally entangled two-qubit state (1 bit of quantum public coin) along with 1-bit classical communication can achieve the maximum payoff, while Alice and Bob would require more than 1 bit of classical public coin to obtain the optimal payoff.

Proof. W.l.o.g. we assume that Alice is allowed to communicate an ω-dimensional system to Bob. We prove the above-mentioned theorem by showing the existence of a graph that satisfies the claim. Let us consider the graph G (n=ω+2,ω) satisfying (G0)-(G2) and having a faithful orthogonal representation in minimum dimension ω, where any two of the maximum-size cliques are disconnected. For an example, see Fig. 7, where ω = 2. Note that for such a graph, the maximum payoff achievable by communicating log ω qubits, P max Q ω , is always less than the algebraic maximum 1/ω. This is because only ω + 1 mutually unbiased bases (MUBs) are possible in C ω , which can be used to encode and decode, in an unbiased way, a maximum of ω + 1 cliques of the considered graph. If Alice sends log ω bits, then the payoff obtained is zero (see Lemma 1). On the other hand, by using finite shared randomness, all the deterministic strategies using log ω bits which satisfy (T0) (which are finite in number) can be mixed to obtain the algebraic maximum of the payoff, 1/ω. Moreover, for the graph in Fig.
7, the necessary amount of shared randomness as in Corollary 2 is also sufficient to achieve P = 1/2 while communicating 1 c-bit.

Classical communication assisted by quantum entanglement

At this point, a natural question is whether quantum correlations (a quantum public coin) can enhance classical communication more than a classical public coin. In the following theorem, we mention an instance where this is the case. Theorem 6. For classical communication with assistance from public coins, there exist graphs G (n,ω) satisfying conditions (G0)-(G2), such that the separation between the classical and quantum public coins required for perfect S-CCR of the relation R CLP (G (n,ω) ) is unbounded. Proof. Let us consider the graph G (n,ω) given by n disjoint cliques of size 2. 1-bit classical communication assisted by (n − 1)-input shared randomness gives payoff P = 0 (see Corollary 1). When assisted by 1 bit of entanglement, Alice chooses n distinct orthogonal pairs of states from the equatorial circle of the Bloch sphere corresponding to the n possible input cliques. Now Alice and Bob perform the same protocol as remote state preparation [32,33], which allows perfect transmission of states from an equatorial circle of the Bloch sphere with 1 bit of shared entanglement and 1 bit of classical communication. After successful transmission of the state, Bob performs a qubit projective measurement based on his input C y along one of the bases chosen by Alice. This makes the payoff P > 0. For n = 4, with four equally spaced directions, this protocol can achieve P = cos 2 (π/8)/2 ≈ 0.42677.

V. APPLICATIONS

In this section, we discuss a number of applications of the S-CCR CLP task. The first application is the operational detection of MUBs from the observed statistics, for graphs G whose maximum clique size and orthogonal representation dimension are both ω.
If a quantum strategy using an ω-level quantum system can achieve the upper bound of the payoff for such a graph G, then Bob must have used measurements corresponding to MUBs for decoding. In the next application, we consider the problem of detecting non-classical resources in both the direct communication and the shared correlation (black-box) scenarios. Finally, we consider a larger class of graphs that do not have an orthogonal representation in dimension ω, where ω is the size of a maximum clique, and show that these graphs can be used to detect whether the dimension of the direct communication resource is greater than ω or otherwise. In the following, we discuss each of these applications in greater detail.

A. Detecting Mutually Unbiased Bases

A pair of projective measurements for a d-dimensional Hilbert space is mutually unbiased if the squared length of the projection of any basis element of the first onto any basis element of the second is exactly 1/d. Mutually unbiased bases (MUBs) are found to be optimal in several information-theoretic tasks and also in quantum cryptography [34][35][36][37][38][39]. Observation 4. Consider a graph G consisting of r maximum cliques of size ω that are completely disconnected from each other. This graph has a faithful orthogonal representation in dimension d R = d C = ω. If a quantum strategy with direct communication of an ω-level system can achieve the algebraic maximum of the payoff, i.e. P = 1/ω, then the measurements performed by Bob must be those corresponding to MUBs. For example, let us consider one such graph, which allows for the detection of qubit MUBs. The simplest graph, consisting of three maximum cliques of size ω = 2 that are disconnected from each other, allows for a quantum strategy where Alice prepares the pairs of eigenstates of three qubit MUBs corresponding to the three disjoint cliques of this graph and sends the qubit to Bob.
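The unbiasedness of the three qubit MUBs used in this strategy can be checked numerically: every overlap between vectors assigned to different (disconnected) cliques equals 1/2, the algebraic maximum 1/ω for ω = 2. An illustrative sketch, not from the paper:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Eigenbases of the Pauli Z, X and Y operators: three qubit MUBs.
mubs = [
    [np.array([1, 0]), np.array([0, 1])],             # Z basis
    [np.array([s, s]), np.array([s, -s])],            # X basis
    [np.array([s, 1j * s]), np.array([s, -1j * s])],  # Y basis
]

# Disconnected vertices of the graph correspond to vectors taken from
# different cliques, i.e. from different bases.
overlaps = [abs(np.vdot(u, v)) ** 2
            for i in range(3) for j in range(3) if i != j
            for u in mubs[i] for v in mubs[j]]

# Every such overlap is exactly 1/2: the unbiasedness condition, giving
# the algebraic maximum payoff P = 1/2.
assert np.allclose(overlaps, 0.5)
```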
Bob performs his measurement corresponding to one of the above three MUBs based on his input clique. Evidently, in this case the payoff turns out to be P = 1/2. From the other direction, one can see that in order to maximize the payoff it is required to make the prepare-and-measure probabilities corresponding to the disconnected pairs of vertices of the graph completely unbiased.

B. Semi-Device Independent Detection of Non-Classical Resources and Dimension Witness

In a prepare-and-measure setup, which underlies a number of information-theoretic tasks, two prime questions of practical interest are: (i) is the transmitted system (alternatively, are the prepare and measure devices) non-classical? and (ii) what is the operational dimension of the transmitted system? For quantum systems, the second question reduces to finding a lower bound on the Hilbert space dimension, i.e. to finding a dimension witness [40][41][42][43]. If these questions are answered based on the input-output probability distribution {P(b|x, y)}, where x ∈ X and y ∈ Y are inputs and b ∈ B is the output, without referring to any information about the encoding and decoding devices, the protocol is device independent. If partial information about the devices is available, the scenario is called semi-device independent. In the following, we show that the proposed S-CCR CLP task can be used as a semi-device independent witness of non-classicality as well as of dimension. While answering the first question, we will consider two scenarios: first, where no public coin is available, which allows us to determine the non-classicality of the transmitted system; second, where only a finite amount of public coins is available and a classical bit has been transmitted, which allows us to answer whether the public coin is non-classical or not. For both cases, let us consider the two distant parties executing the S-CCR CLP task with a class of graphs satisfying conditions (G0)-(G1).
Now, in the first case, let us also assume that it is known that the operational dimension of the transmitted system is strictly upper bounded by |V |, the number of vertices of the graph. If the distant parties can achieve a payoff P > 0 (calculated from P(b|x, y) according to the definition in Eq. (3)), it follows from Theorem 1 that the transmitted system is non-classical. In the second case, with a finite public coin and a classical bit of communication, let us assume that the given graph has a faithful orthogonal representation with d R = ω and the local dimension of the public coin is strictly upper bounded by n, the number of maximum cliques in the graph. There exist graphs (see the example in the proof of Theorem 6) such that P > 0 implies that the public coin is non-classical. To answer the question about the dimension witness, we first observe the following: it follows from Lemma 2 that, given a graph with n maximum cliques of size ω, a non-zero payoff (P > 0) can be achieved in the S-CCR CLP task when an ω-level quantum system is allowed to be communicated from Alice to Bob, provided the graph has faithful orthogonal range d R = ω. This lemma applies to graphs that satisfy conditions (G0)-(G1). Here, we consider graphs that are not uniquely KS colourable (with the binary colouring of vertices) and whose faithful orthogonal range is d R > ω. For example, consider graphs with 3 maximum cliques, each of size ω, sharing ω − 1 vertices with the adjacent cliques: the graphs in Fig. 8 have faithful orthogonal representations in dimensions d R > 2 and d R > 3, respectively. Now let us consider the S-CCR CLP task for this class of graphs (with d R > ω) when only one-way communication is allowed from Alice to Bob. In the absence of public coins, if Alice's encoding and Bob's decoding can achieve a non-zero payoff, i.e.
P > 0, this implies that the communicated quantum system must have a Hilbert-space dimension strictly greater than ω. Indeed, since the quantum protocol in the proof of Lemma 2 is both necessary and sufficient for reconstruction of the relation R CLP, a non-zero payoff achieved by communicating a quantum system of dimension equal to ω would imply that the graph under consideration has a representation in C^ω, which leads to a contradiction.

Figure 8: Example of graphs used in constructing the dimension witnesses for d > 2 (left) and d > 3 (right).

VI. SUMMARY & DISCUSSIONS

In the non-asymptotic prepare-and-measure scenario, the problem of efficiently encoding classical information in a quantum system has been a topic of interest in recent times [44-49]. Communication complexity, a prototype of distributed computing, measures the efficiency of such an encoding by the separation between the operational dimensions of the classical and quantum message systems. A large separation for some computational task demonstrates the advantage of quantum communication resources over classical ones. The present work proposes one such task, called the strong communication complexity of relations induced by the clique labelling problem. For this S-CCR problem, we show that there exists a class of graphs for which the separation between the dimensions of the quantum and classical systems required can be made unbounded in the absence of a public coin, or shared randomness, between the players. In the presence of public coins, however, this separation disappears. While the quantum communication requires no public coin, the amount of public-coin assistance that is necessary (but may not be sufficient) for classical communication to accomplish the task scales linearly with the number of cliques.
Additionally, we show that a 1-ebit-assisted classical 1-cbit channel performs a task that would otherwise require the assistance of a 1-cbit channel and an unbounded amount of classical public coin. The present work can be seen as an addition to earlier attempts to demonstrate the separation of classical and quantum communication complexity for relations [14, 16, 50, 51]. For example, Buhrman, Cleve, and Wigderson [14] considered an exponential gap between classical and quantum communication for one-way and interactive protocols for a promise problem with zero error probability in the absence of public coins. Later, Raz [16] showed that an exponential gap in communication exists for a relation in bounded-error interactive protocols. Bar-Yossef et al. [50] showed an exponential separation for one-way as well as simultaneous protocols with public coins for a relational problem called the Hidden Matching Problem. An important aspect of the present work is that the relations considered here are given by orthogonality graphs. A similar approach to demonstrating the advantage of quantum communication over classical was taken by Saha et al. [17], who considered a graph colouring task, called the vertex equality problem, executed by two spatially separated parties. They showed that a quantum advantage in one-way communication appears whenever a class of graphs called state-independent contextuality (SIC) graphs is considered. By contrast, the quantum advantage in the communication task proposed in this article can be observed independently of the usefulness of the graphs in demonstrating state-independent contextuality. Therefore, in our case, the quantum advantage in one-way communication cannot be attributed to contextuality.
Interestingly, the authors of [19] showed that the quantum separation for a communication task based on state-independent contextuality witnesses can be unbounded, whereas our task, independent of contextuality, obtains a similar separation.

This work leaves a number of questions open. For example, could there be a task for which the scaling of classical vs. quantum communication with binary-colourable graphs is exponential in the presence or absence of public coins (possibly for two-way communication complexity)? Could one obtain a linear scaling when the two parties compute a function instead of a relation? Besides these general questions, some particular points of the present study remain unresolved. First, does the unbounded separation between classical and quantum communication persist when one allows a degree of error in the computation? In Section IV E 1, the connection between a lower bound on the amount of classical public coin in the bounded-communication setting and orthogonal arrays shows that, for arbitrary graphs with a large number of cliques, finding such a lower bound is a hard problem. In Section IV E 2, the advantage of using entanglement instead of a classical public coin (shared randomness) to assist bounded classical communication was demonstrated by achieving a higher payoff; however, the optimal payoff for the entanglement-assisted case remains unknown. A payoff that decreases monotonically as the number of maximum cliques n increases might suggest a limit to this advantage. Moreover, the robustness of the scheme for detecting MUBs in the applications section (Section V A) is not known. Finally, one can look at the present protocol from different perspectives. Namely, it can be seen, in a way, as a qualitative simulation of quantum statistics on demand.
In fact, the relation-reconstruction condition for the strong communication complexity proposed in this article could bridge the gap between conventional communication complexity and sampling problems with communication [52, 53]. Precisely, in our protocol the spatially separated parties are given a set of favourable events, and it is required that these events be quantitatively simulated by classical communication, so that all of them occur with non-zero probability, as they do in the quantum case. Looking at the protocol from yet another angle, we can see it as the distribution of (conditional) randomness with the help of a restricted communication channel. This raises the question of a possible relation of the present scheme to discrete analogues of bosonic sampling [54]. The quantum advantage in the latter case relies on hypotheses about the computational hardness of certain classical tasks. It would be interesting to see whether additional graph structure and a modification of the present protocol could imply an exponential separation in sampling that does not rely on hypotheses of this type.

P(x, y, b) = ∑_{τ_x} P(x, y, b, τ_x)   (A1)
           = ∑_{τ_x} P(b|y, τ_x) P(τ_x|x) P(x) P(y),   (A2)

so that

P(x, y, b) = P(b|x, y) P(x) P(y)   (A3)

if P(τ_x|x) = 1 ∀x ∈ X (this is the situation when a pre-shared public coin is not allowed). Consider a strict ordering of the elements of R. Given this ordered list, define α(k) = (α_1, α_2, ..., α_Γ), where α_i ∈ ℕ is the frequency of occurrence of the i-th element (x_i, y_i, b_i) of the ordered list R after k rounds, so that ∑_{i=1}^{Γ} α_i = k for every α. The instances favourable for successful reconstruction of the relation correspond to the set of α(k) in which each element of R occurs with non-zero frequency. The probability of reconstruction of R after k rounds is thus given by the total probability of occurrence of the α(k) with the aforementioned property.
P_k(R) = ∑_{α(k)} P(α(k)|k) = ∑_α P({α_1, α_2, ..., α_Γ}|k) = ∑_α ∏_{i=1}^{Γ} P^{α_i}(x_i, y_i, b_i).   (A4)

Since α_i > 0 for every α and every i ∈ {1, 2, ..., Γ},

P_k(R) = ∏_{i=1}^{Γ} P(x_i, y_i, b_i) ∑_α ∏_{i=1}^{Γ} P^{α_i − 1}(x_i, y_i, b_i).   (A5)

Notice that if any of the terms P(x_i, y_i, b_i) = 0, then the probability of successful reconstruction after k rounds, P_k(R), is zero as well. Therefore,

P_k(R) ≠ 0 ⟹ P(b|x, y) ≠ 0 ∀(x, y, b) ∈ R.   (A6)

Remark: P(b|x, y, τ_x) = 1 for all x ∈ X, y ∈ Y such that there exists a unique b ∈ B satisfying (x, y, b) ∈ R. For the rest of the (x, y, b) ∈ R, P(b|x, y) ∈ (0, 1). Now, define B_{x,y} = {b ∈ B : (x, y, b) ∈ R}, the set of all acceptable outputs for Bob given inputs x and y for Alice and Bob, respectively. Then ∑_{b ∈ B_{x,y}} P(b|x, y, τ_x) = 1 for every B_{x,y}. We aim to maximise the success probability P_k(R) in the scenario where Alice and Bob are not aware of the total number of rounds, say k_max, a priori, and thus must fix the probabilities of the events in R independently of k_max. To this end we use a Lagrange multiplier: in order to maximise the success probability of reconstruction after k rounds, define

L = P_k(R) − ∑_{B_{x,y}} λ_{B_{x,y}} (1 − ∑_{b ∈ B_{x,y}} P(b|x, y)).   (A7)

For the j-th element (x_j, y_j, b_j) in the ordered list of R,

∂L/∂P(x_j, y_j, b_j) = 0
⟹ ∑_{α(k)} α_j P(x_j, y_j, b_j)^{−1} ∏_{i=1}^{Γ} P^{α_i}(x_i, y_i, b_i) − λ_{B_{x_j,y_j}} P(b_j|x_j, y_j) = 0   (A9)
⟹ λ_{B_{x_j,y_j}} = ∑_{α(k)} α_j ∏_{i=1}^{Γ} P^{α_i}(x_i, y_i, b_i) / P²(x_j, y_j, b_j).   (A10)

For a given k, the optimal probabilities P(x_i, y_i, b_i) = P(b_i|x_i, y_i) P(x_i, y_i) can be calculated to yield the maximum value of P_k(R). However, for arbitrary k the expression for λ_{B_{x_i,y_i}} is a function of k, since α(k) and the α_i depend on k. Since Alice and Bob have no prior information about k, they have to agree on values of the probabilities P(x_i, y_i, b_i) independent of k.
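In the uniform case considered in this appendix, where every element of R occurs with the same probability 1/Γ in each round, P_k(R) reduces to the classic coupon-collector probability, which can be cross-checked numerically. A minimal sketch (the function names are ours, and the assumption of exactly Γ equiprobable events is illustrative):

```python
import math
import random

def p_k_uniform(gamma: int, k: int) -> float:
    """Exact probability that each of `gamma` equally likely events
    occurs at least once in k i.i.d. rounds (inclusion-exclusion)."""
    return sum((-1) ** j * math.comb(gamma, j) * (1 - j / gamma) ** k
               for j in range(gamma + 1))

def p_k_monte_carlo(gamma: int, k: int, trials: int = 20000) -> float:
    """Monte Carlo estimate of the same reconstruction probability."""
    rng = random.Random(0)
    hits = 0
    for _ in range(trials):
        seen = {rng.randrange(gamma) for _ in range(k)}
        if len(seen) == gamma:
            hits += 1
    return hits / trials

gamma, k = 4, 12
exact = p_k_uniform(gamma, k)
approx = p_k_monte_carlo(gamma, k)
print(f"P_k(R) exact = {exact:.4f}, Monte Carlo = {approx:.4f}")
```

For instance, with Γ = 3 and k = 3 the exact value is 3!/3³ = 2/9, the probability that three rounds produce all three events.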
Thus, the obvious solution is P(b|x, y) = constant for all b ∈ B_{x,y} (recall that the inputs are sampled from a uniform distribution). This shows the necessity of our payoff function: maximising the payoff guarantees that P_k(R) is maximised at some local maximum.

Appendix B: Proof of Theorem 1

Before we delve into the proof, let us introduce a few notations that will be used frequently in the later sections. Prior to the game, Alice and Bob are given G^(n,ω) and construct a table M whose entries are the conditional probabilities p(b|C_x, C_y, a) of compatible labels a, b for all possible cliques C_x, C_y ∈ G^(n,ω). In this table, the probability p(b|C_x, C_y, a) ≡ ((C_x, a, C_y, b)) is the entry of M corresponding to the event (C_x, a, C_y, b), where (C_x, a) ∈ X, C_y ∈ Y and b ∈ B. The rows and columns of the table are indexed as (C_x, a)_r and (C_y, b)_c, respectively; the index runs over all a, b first and then updates C_x, C_y. The table has nω rows and nω columns and may be viewed as an n × n block matrix with blocks indexed (C_x, C_y). The diagonal blocks of the table are I_{ω×ω}, since Bob has to output the same label as Alice whenever they receive the same clique as input. The aforementioned game can be mapped to the following property (T0) of the table M; the communication game is equivalent to the table M with the constraint (T0).

(T0): Consistent labelling of cliques: (C_x, a, C_y, b) ∉ R_CLP(G^(n,ω)) ⟹ P(b|C_x, a, C_y) = 0.

Proof.
If Alice and Bob manage to compress the nω rows of the table M (i.e., the set of all possible inputs for Alice) into ω partitions such that no two rows in the same partition have different entries in any column (whether due to the constraints imposed by property (T0) or by the choice of probabilities for events outside C(G^(n,ω))), then there exists a protocol satisfying Theorem 1: Alice communicates to Bob the partition to which her input belongs, and Bob can then pick a label for his input clique C_y consistent with the probability-distribution table the players agreed upon at the start, thereby satisfying the consistency condition. However, notice that there cannot be fewer than ω partitions of the rows of M satisfying (T0) such that no two rows in the same partition have different entries in any column. Indeed, any two rows corresponding to a block-diagonal entry of M, i.e. (C_x = C_i, C_y = C_i) = I_{ω×ω}, are distinct; thus each of the ω rows corresponding to Alice's input clique C_x = C_i must belong to a different partition. This implies that every disjoint partition τ(i), i ∈ {0, 1, ..., ω − 1}, of the rows described above must contain exactly one row of the form (C_x, a) for each clique C_x, i.e., a row corresponding to exactly one of the possible labels a for every clique C_x. In the following we argue that there are ω such disjoint partitions of rows. Before proceeding, we list some properties of the table M that hold when such a partitioning is possible. If it is imposed that the rows of M can be partitioned into ω disjoint partitions τ(i) satisfying the constraints discussed above, there are additional restrictions on the structure of M that Alice and Bob can decide upon in order to win the CLP.
• If some row (say (C_x, a′)_r) of an off-diagonal block (C_x, C_y) has more than one non-zero entry (say ((C_x, a′, C_y, b̃)) ≠ 0 and ((C_x, a′, C_y, b̃′)) ≠ 0), then the corresponding row of M cannot belong to any partition containing a row with index (C_{x′(=y)}, a)_r, a ∈ {0, ..., ω − 1}, since there exists a column (C_y, b)_c in which these two rows have different entries. This is because the block (C_y, C_y) = I_{ω×ω}, so none of its rows has non-zero entries in two different columns. Thus this row must belong to a new partition, increasing the total number of partitions to ω + 1.

• If some column (say (C_y, b′)_c) of an off-diagonal block (C_x, C_y) has more than one non-zero entry, then the rows of M corresponding to these non-zero entries can only belong to the partition containing the row (C_{x̃(=y)}, ã(= b′)). However, as discussed above, exactly one of the possible labels a for every clique C_x can belong to a partition; hence Alice and Bob would be forced to create at least ω + 1 partitions. Therefore, if the number of partitions is restricted to ω, each row and column of every off-diagonal block (C_x, C_y) is some permutation Π_{C_x,C_y} of I_{ω×ω}.

• The table must satisfy M = M^T. If this does not hold, there exists an element for which ((C_x, a, C_y, b)) = 1 ≠ ((C_{x′(=y)}, a′(= b), C_{y′(=x)}, b′(= a))). The row (C_x, a)_r must belong to the same partition as (C_{x′(=y)}, ã(= b))_r, since ((C_x, a, C_y, b)) = 1 = ((C_{x′(=y)}, ã, C_y, b)) only for ã = b, while for any other allowed value of ã, ((C_{x′(=y)}, ã, C_y, b)) = 0. However, the rows (C_x, a)_r and (C_{x′(=y)}, ã(= b))_r then have different entries in the column (C_{y″(=x)}, b″(= a))_c, since ((C_x, a, C_{y″(=x)}, b″(= a))) = 1 ≠ ((C_{x′(=y)}, a′(= b), C_{y′(=x)}, b′(= a))).
Thus, the row (C_x, a)_r cannot belong to any partition that contains a row indexed (C_{x′(=y)}, ã), ã ∈ {0, ..., ω − 1}.

Now we construct a specific family of ω disjoint partitions τ(i), i ∈ {0, ..., ω − 1}, of the inputs received by Alice, for a probability table of the form discussed above.

• Step 1: ∀a ∈ {0, 1, ..., ω − 1}, (C_1, a)_r ∈ τ(a).

• Step 2: ∀j ∈ {2, ..., n}, if the block (C_1, C_j) is the permutation matrix Π_{1,C_j}, then (C_j, a′)_r ∈ τ(a), where a′ is the a-th element of Π_{1,C_j} · (0 1 ··· ω − 1)^T.

When Alice communicates the partition to which her input (C_x, a) belongs, Bob can pick the label for his clique C_y that obeys the consistency condition C(G^(n,ω)). It is important to note that each row associated with Alice's input clique C_x must belong to a distinct partition; otherwise Bob might not be able to assign a label obeying the consistency condition.

For example, consider the graph shown in Fig. 3. Alice and Bob adopt a deterministic strategy and fill the free entries marked with * in Table I with 0s and 1s, as shown in Table VI. For Table VI, we can form three partitions τ(0), τ(1) and τ(2) of the rows such that exactly one row of each clique belongs to each partition: τ(0) = {(C_1, a = 0)_r, (C_2, a = 2)_r}, τ(1) = {(C_1, a = 1)_r, (C_2, a = 1)_r} and τ(2) = {(C_1, a = 2)_r, (C_2, a = 0)_r}. Upon receiving C_x and a in each round, Alice sends the index i of the partition τ(i) containing her input. Knowing τ(i), Bob can always pick a label for his clique C_y that does not violate the consistency condition. Thus, a classical three-level system is sufficient for winning the game (T0) for the graph considered here.
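The partition construction above can be checked mechanically for the two-clique, ω = 3 example. We rebuild M with identity diagonal blocks and a reversal permutation as the off-diagonal block (our reconstruction of the Table VI pattern implied by the partitions listed above, so the exact entries are an assumption), group identical rows, and recover exactly ω partitions with one row per clique each:

```python
import numpy as np

omega = 3
I = np.eye(omega, dtype=int)
Pi = np.fliplr(I)                 # reversal permutation: a on C1 -> 2 - a on C2
M = np.block([[I, Pi], [Pi, I]])  # rows/columns indexed (clique, label); M = M^T

# Rows belong to the same partition tau(i) iff they are identical,
# i.e. there is no column in which their entries differ.
partitions = {}
for r in range(2 * omega):
    partitions.setdefault(tuple(M[r]), []).append((r // omega, r % omega))

assert len(partitions) == omega   # exactly omega partitions
for rows in partitions.values():
    # each partition holds exactly one row (one label) per clique
    assert sorted(clique for clique, _ in rows) == [0, 1]
print("partitions:", list(partitions.values()))
```

The three groups come out as {(C_1, 0), (C_2, 2)}, {(C_1, 1), (C_2, 1)} and {(C_1, 2), (C_2, 0)}, reproducing τ(0), τ(1), τ(2) above; Alice's message is simply the group index.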
Figure 2: Reconstruction of relation. After many runs of the game, the statistics {(C_x^i, a^i, C_y^i, b^i)} are sent to the Reconstructor, who attempts to recover R_CLP(G^(n,ω)).

Table I (caption, continued): ... for the graph in Fig. 3 satisfying (T0), the necessary condition for CCR_CLP, where * ∈ [0, 1] are free entries up to normalisation. For S-CCR_CLP, all elements marked by * lie in (0, 1) up to normalisation; for maximum payoff, all elements marked by * equal 0.5 ∈ (0, 1).

Figure 4: Example of the 5-Paley graph G_Paley^5 (left) and the 9-Paley graph G_Paley^9 (right).

Observation 2. In the class of Paley graphs, any two vertices in the graph have the same degree, i.e. in a graph with q

Lemma 3. K² = qI − J, where J denotes the all-ones matrix, and FOR(G_Paley) denotes the set of faithful orthogonal representations of G_Paley in any dimension.

Corollary 1. Given a graph G^(n,ω) satisfying (G0)-(G1) with faithful orthogonal range

VII. ACKNOWLEDGEMENTS

S.R. acknowledges Markus Grassl for discussions on orthogonal arrays. S.R., N.S., S.S.B. and P.H. acknowledge partial support by the Foundation for Polish Science (IRAP project, ICTQT, contract no. MAB/2018/5), carried out within the International Research Agendas Programme of the Foundation for Polish Science, co-financed by the European Union from the funds of the Smart Growth Operational Programme, axis IV: Increasing the research potential (Measure 4.3). R.R. acknowledges support from the Early Career Scheme (ECS) grant "Device-Independent Random Number Generation and Quantum Key Distribution with Weak Random Seeds" (Grant No. 27210620), the General Research Fund (GRF) grant "Semi-device-independent cryptographic applications of a single trusted quantum system" (Grant No. 17211122) and the Research Impact Fund (RIF) "Trustworthy quantum gadgets for secure online communication" (Grant No. R7035-21).
1. Deterministic Protocol: A classical one-way deterministic protocol is an encoding-decoding tuple (E, D), where E is a 'log|X|-bit to m-bit' deterministic function and D is an 'm log|Y|-bit to log|Z|-bit' deterministic function, i.e. E : {1, ..., |X|} → {0, ..., m − 1} and D : {0, ..., m − 1} × {1, ..., |Y|} → {1, ..., |Z|}. The communication cost of such a protocol is defined as the length in bits of the message sent by Alice on the worst choice of inputs x and y. The one-way deterministic zero-error communication complexity of a relation R, denoted D(R), is the cost of the best protocol (i.e. the protocol with minimum communication cost) that allows computation of the relation R without any error.

Table I: Example of a table of conditional probabilities p(b|C_x, C_y, a) corresponding to the graph in Fig. 3.

Table II: Another classical deterministic strategy for the graph in

Table III: Effective classical strategy with shared randomness for the graph in

Table IV: A table with n cliques of size ω sharing ω − 2 common vertices is equivalent (in terms of the number of classical deterministic strategies) to a table with n disconnected cliques of size ω = 2.

Table V: Resource comparison, classical vs. quantum, for communication tasks.

V. APPLICATIONS

Table VI: Example of a table of conditional probabilities p(b|C_x, C_y, a) for the graph in Fig. 3.

Appendix A: Success Probability for Reconstruction of Relations

Given a relation R ⊆ X × Y × B for the bipartite prepare-and-measure scenario, where X and Y are the sets of inputs for Alice and Bob and B is the set of outputs for Bob, we are interested in the success probability P_k(R) of relation reconstruction after k rounds, where k is large and Alice and Bob's protocol is agnostic to the number of rounds. Every tuple (x, y, b) ∈ R must occur at least once in these k rounds for correct reconstruction of the relation R.
The cardinality |R| = Γ is the total number of such events, which implies k ≥ Γ for reconstruction to be possible. Here we assume that the inputs are sampled from a uniform distribution. Alice encodes her input x ∈ X in the message τ_x in each round, and Bob outputs b ∈ B depending on his input y ∈ Y and Alice's message.

References

[1] M. M. Wilde, "From classical to quantum Shannon theory," arXiv:1106.1445, 2011.
[2] C. H. Bennett and G. Brassard, "Quantum cryptography: Public key distribution and coin tossing," Theoretical Computer Science, vol. 560, pp. 7-11, Dec. 2014.
[3] A. K. Ekert, J. G. Rarity, P. R. Tapster, and G. Massimo Palma, "Practical quantum cryptography based on two-photon interferometry," Phys. Rev. Lett., vol. 69, pp. 1293-1295, Aug. 1992.
[4] A. Ambainis, D. Leung, L. Mancinska, and M. Ozols, "Quantum random access codes with shared randomness," arXiv:0810.2937, 2008.
[5] H. Buhrman, R. Cleve, S. Massar, and R. de Wolf, "Nonlocality and communication complexity," Rev. Mod. Phys., vol. 82, pp. 665-698, Mar. 2010.
[6] H. Buhrman, Ł. Czekaj, A. Grudka, M. Horodecki, P. Horodecki, M. Markiewicz, F. Speelman, and S. Strelchuk, "Quantum communication complexity advantage implies violation of a Bell inequality," Proceedings of the National Academy of Sciences, vol. 113, pp. 3191-3196, Mar. 2016.
[7] H. Buhrman, G. Scarpa, and R. de Wolf, "Better non-local games from hidden matching," arXiv:1007.2359, 2010.
[8] A. Tavakoli, M. Żukowski, and Č. Brukner, "Does violation of a Bell inequality always imply quantum advantage in a communication complexity problem?," Quantum, vol. 4, p. 316, Sept. 2020.
[9] M. Howard, J. Wallman, V. Veitch, and J. Emerson, "Contextuality supplies the 'magic' for quantum computation," Nature, vol. 510, pp. 351-355, June 2014.
[10] L. K. Grover, "A fast quantum mechanical algorithm for database search," in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (STOC '96), ACM Press, 1996.
[11] D. Deutsch and R. Jozsa, "Rapid solution of problems by quantum computation," Proceedings of the Royal Society of London, Series A: Mathematical and Physical Sciences, vol. 439, no. 1907, pp. 553-558, 1992.
[12] A. C.-C. Yao, "Some complexity questions related to distributive computing (preliminary report)," in Proceedings of the Eleventh Annual ACM Symposium on Theory of Computing (STOC '79), pp. 209-213, Association for Computing Machinery, 1979.
[13] E. Kushilevitz and N. Nisan, Communication Complexity. Cambridge University Press, 1997.
[14] H. Buhrman, R. Cleve, and A. Wigderson, "Quantum vs. classical communication and computation," in Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing (STOC '98), pp. 63-68, Association for Computing Machinery, 1998.
[15] M. Karchmer and A. Wigderson, "Monotone circuits for connectivity require super-logarithmic depth," in Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing (STOC '88), pp. 539-550, Association for Computing Machinery, 1988.
[16] R. Raz, "Exponential separation of quantum and classical communication complexity," in Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing (STOC '99), pp. 358-367, Association for Computing Machinery, 1999.
[17] D. Saha, P. Horodecki, and M. Pawłowski, "State independent contextuality advances one-way communication," New Journal of Physics, vol. 21, p. 093057, Sept. 2019.
[18] S. Kochen and E. P. Specker, "The problem of hidden variables in quantum mechanics," in The Logico-Algebraic Approach to Quantum Mechanics, pp. 293-328, Springer Netherlands, 1975.
[19] S. Gupta, D. Saha, Z.-P. Xu, A. Cabello, and A. S. Majumdar, "Quantum contextuality provides communication complexity advantage," Phys. Rev. Lett., vol. 130, p. 080802, Feb. 2023.
[20] A. Cabello, S. Severini, and A. Winter, "(Non-)contextuality of physical theories as an axiom," arXiv:1010.2163, 2010.
[21] R. Rabelo, C. Duarte, A. J. López-Tarrida, M. T. Cunha, and A. Cabello, "Multigraph approach to quantum non-locality," J. Phys. A: Math. Theor., vol. 47, p. 424021, Oct. 2014.
[22] R. Peeters, "Orthogonal representations over finite fields and the chromatic number of graphs," Combinatorica, vol. 16, pp. 417-431, Sep. 1996.
[23] L. Lovász, M. Saks, and A. Schrijver, "Orthogonal representations and connectivity of graphs," Linear Algebra and its Applications, vol. 114, pp. 439-454, 1989.
[24] M. Plávala, "General probabilistic theories: An introduction," arXiv:2103.07469, Aug. 2021.
[25] A. N. Elsawy, "Paley graphs and their generalizations," arXiv:1203.1818, 2012.
[26] L. M. Lovász, "Eigenvalues of graphs," 2007.
[27] A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays. Springer New York, 1999.
[28] D. Goyeneche, D. Alsina, J. I. Latorre, A. Riera, and K. Życzkowski, "Absolutely maximally entangled states, combinatorial designs, and multiunitary matrices," Phys. Rev. A, vol. 92, p. 032316, Sep. 2015.
[29] D. Goyeneche, J. Bielawski, and K. Życzkowski, "Multipartite entanglement in heterogeneous systems," Phys. Rev. A, vol. 94, p. 012346, Jul. 2016.
[30] D. Goyeneche and K. Życzkowski, "Genuinely multipartite entangled states and orthogonal arrays," Phys. Rev. A, vol. 90, p. 022316, Aug. 2014.
[31] S. Pang, H. Xu, and M. Chen, "Construction of binary quantum error-correcting codes from orthogonal array," Entropy, vol. 24, p. 1000, July 2022.
[32] A. K. Pati, "Minimum classical bit for remote preparation and measurement of a qubit," Physical Review A, vol. 63, no. 1, p. 014302, 2000.
[33] C. H. Bennett, D. P. DiVincenzo, P. W. Shor, J. A. Smolin, B. M. Terhal, and W. K. Wootters, "Remote state preparation," Physical Review Letters, vol. 87, no. 7, p. 077902, 2001.
[34] I. D. Ivonovic, "Geometrical description of quantal state determination," Journal of Physics A: Mathematical and General, vol. 14, pp. 3241-3245, Dec. 1981.
[35] W. K. Wootters and B. D. Fields, "Optimal state-determination by mutually unbiased measurements," Annals of Physics, vol. 191, pp. 363-381, May 1989.
[36] M. A. Ballester and S. Wehner, "Entropic uncertainty relations and locking: Tight bounds for mutually unbiased bases," Phys. Rev. A, vol. 75, p. 022319, Feb. 2007.
[37] D. P. DiVincenzo, M. Horodecki, D. W. Leung, J. A. Smolin, and B. M. Terhal, "Locking classical correlations in quantum states," Phys. Rev. Lett., vol. 92, p. 067902, Feb. 2004.
[38] P. K. Aravind, "Solution to the king's problem in prime power dimensions," Zeitschrift für Naturforschung A, vol. 58, pp. 85-92, Mar. 2003.
[39] B.-G. Englert and Y. Aharonov, "The mean king's problem: prime degrees of freedom," Physics Letters A, vol. 284, pp. 1-5, May 2001.
[40] N. Brunner, S. Pironio, A. Acin, N. Gisin, A. A. Méthot, and V. Scarani, "Testing the dimension of Hilbert spaces," Phys. Rev. Lett., vol. 100, p. 210503, May 2008.
[41] N. Brunner, M. Navascués, and T. Vértesi, "Dimension witnesses and quantum state discrimination," Phys. Rev. Lett., vol. 110, p. 150501, Apr. 2013.
[42] J. Ahrens, P. Badziąg, M. Pawłowski, M. Żukowski, and M. Bourennane, "Experimental tests of classical and quantum dimensionality," Phys. Rev. Lett., vol. 112, p. 140401, Apr. 2014.
[43] Y. Cai, J.-D. Bancal, J. Romero, and V. Scarani, "A new device-independent dimension witness and its experimental implementation," Journal of Physics A: Mathematical and Theoretical, vol. 49, p. 305301, Jun. 2016.
[44] P. E. Frenkel and M. Weiner, "Classical information storage in an n-level quantum system," Communications in Mathematical Physics, vol. 340, pp. 563-574, Dec. 2015.
[45] T. Heinosaari, O. Kerppo, and L. Leppäjärvi, "Communication tasks in operational theories," Journal of Physics A: Mathematical and Theoretical, vol. 53, p. 435302, Oct. 2020.
[46] J. Pauwels, S. Pironio, E. Woodhead, and A. Tavakoli, "Almost qudits in the prepare-and-measure scenario," Phys. Rev. Lett., vol. 129, p. 250504, Dec. 2022.
[47] M. J. Renner, A. Tavakoli, and M. T. Quintino, "Classical cost of transmitting a qubit," Phys. Rev. Lett., vol. 130, p. 120801, Mar. 2023.
[48] R. K. Patra, S. G. Naik, E. P. Lobo, S. Sen, T. Guha, S. S. Bhattacharya, M. Alimuddin, and M. Banik, "Classical analogue of quantum superdense coding and communication advantage of a single quantum," Feb. 2022.
[49] S. Halder, A. Streltsov, and M. Banik, "Quantum vs classical: identifying the value of a random variable unambiguously," arXiv:2211.09194, 2022.
[50] Z. Bar-Yossef, T. S. Jayram, and I. Kerenidis, "Exponential separation of quantum and classical one-way communication complexity," SIAM Journal on Computing, vol. 38, no. 1, pp. 366-384, 2008.
[51] D. Gavinsky, J. Kempe, I. Kerenidis, R. Raz, and R. de Wolf, "Exponential separations for one-way quantum communication complexity, with applications to cryptography," in Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, STOC '07.
the Thirty-Ninth Annual ACM Symposium on Theory of Computing, STOC '07New York, NY, USAAssociation for Computing MachineryD. Gavinsky, J. Kempe, I. Kerenidis, R. Raz, and R. de Wolf, "Exponential separations for one-way quantum communication complexity, with applications to cryptography," in Proceedings of the Thirty-Ninth An- nual ACM Symposium on Theory of Computing, STOC '07, (New York, NY, USA), p. 516-525, Association for Com- puting Machinery, 2007. The quantum communication complexity of sampling. A Ambainis, L J Schulman, A Ta-Shma, U Vazirani, A Wigderson, SIAM Journal on Computing. 326A. Ambainis, L. J. Schulman, A. Ta-Shma, U. Vazirani, and A. Wigderson, "The quantum communication com- plexity of sampling," SIAM Journal on Computing, vol. 32, no. 6, pp. 1570-1585, 2003. Communication complexity with small advantage. T Watson, computational complexity. 292T. Watson, "Communication complexity with small ad- vantage," computational complexity, vol. 29, p. 2, Apr 2020. An Introduction to Boson-Sampling. B T Gard, K R Motes, J P Olson, P P Rohde, J P Dowling, 8B. T. Gard, K. R. Motes, J. P. Olson, P. P. Rohde, and J. P. Dowling, An Introduction to Boson-Sampling, ch. Chapter 8, pp. 167-192.
Geodesics and visual boundary of horospherical products
Tom Ferragut
arXiv:2009.04698
https://export.arxiv.org/pdf/2009.04698v2.pdf
Geodesics and visual boundary of horospherical products
Tom Ferragut
February 7, 2023

We study the geometry of horospherical products by providing a description of their distances, geodesics and visual boundary. These products contain both discrete and continuous examples, including Cayley graphs of lamplighter groups and solvable Lie groups of the form R ⋉ (N 1 × N 2 ), where N 1 and N 2 are two simply connected, nilpotent Lie groups.

Introduction

A horospherical product is a metric space constructed from two Gromov hyperbolic spaces X and Y ; it is included in their Cartesian product X × Y and can be seen as a diagonal in it. Let β X ∶ X → R and β Y ∶ Y → R be two Busemann functions. The horospherical product of X and Y , denoted by X ⋈ Y , is defined as the set of points in X × Y such that the two Busemann functions add up to zero, namely X ⋈ Y ∶= {(x, y) ∈ X × Y ∣ β X (x) + β Y (y) = 0}. The level-lines of the Busemann functions are called horospheres; one can see the horospherical product X ⋈ Y as X crossed with an upside-down copy of Y in parallel to these horospheres. We will call height function the opposite of the chosen Busemann function. Let N be a simply connected, nilpotent Lie group and let A be a derivation of Lie(N ) whose eigenvalues have positive real parts. Then R ⋉ A N is called a Heintze group and is Gromov hyperbolic; Heintze groups are the only examples of negatively curved Lie groups. Let X and Y be two Heintze groups; we can choose the Busemann functions to be such that for all (t, n) ∈ R ⋉ A N we have β(t, n) = −t. Then we obtain (R ⋉ A 1 N 1 ) ⋈ (R ⋉ A 2 N 2 ) = R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 ). When N = R, the corresponding Heintze group is a hyperbolic plane H 2 , and as their horospherical products we obtain the Sol geometries, one of Thurston's eight geometries.
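In the hyperbolic plane, this choice of Busemann function can be made explicit. The following Python sketch is our own illustration (not part of the paper): in the upper half-plane model with base point w = (0, 1) and a = ∞, it checks numerically that β(x, y) = −log y, so that the height of the point (x, y) is log y, independently of x. The standard closed-form distance of the half-plane model is assumed.

```python
import math

def d_H2(p, q):
    # hyperbolic distance in the upper half-plane model,
    # for p = (x1, y1), q = (x2, y2) with y1, y2 > 0
    (x1, y1), (x2, y2) = p, q
    u = 1 + ((x1 - x2) ** 2 + (y1 - y2) ** 2) / (2 * y1 * y2)
    return math.acosh(u)

def busemann(p, t=40.0):
    # beta_(a,w)(p) ~ d(p, V_w(t)) - t for large t, with base point
    # w = (0, 1), a = infinity, and vertical ray V_w(t) = (0, e^t)
    return d_H2(p, (0.0, math.exp(t))) - t

for x, y in [(0.0, 1.0), (3.0, 1.0), (-2.0, 5.0), (0.0, 0.1)]:
    # the height h = -beta equals log(y), whatever x is
    assert abs(-busemann((x, y)) - math.log(y)) < 1e-6
print("ok")
```

The vertical geodesics of this model are exactly the upward rays x = const, along which the height log y grows at unit speed.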
We can also build Diestel-Leader graphs and the Cayley 2-complexes of Baumslag-Solitar groups BS(1, n) as the horospherical products of trees or hyperbolic planes. In the second section of [29], the last three sets of examples are well detailed, and presented as horocyclic products of either regular trees or the hyperbolic plane H 2 . We choose the name horospherical product instead of horocyclic product since in higher dimension, level-sets according to a Busemann function are not horocycles but horospheres. As Woess suggested at the end of [29], we explore here a generalization of horospherical products. The horospherical product construction can be realized for more than two spaces, see [1] for a study of the Brownian motion on a multiple horospherical product of trees. However in this work we will stay in the setting of two Gromov hyperbolic spaces. To study the geometry of horospherical products we require that our components X and Y are two proper, geodesically complete, Gromov hyperbolic, Busemann spaces. A Busemann space is a metric space where the distance between any two geodesics is convex, and a metric space X is geodesically complete if and only if every geodesic segment α ∶ I → X can be prolonged into a geodesic line α̂ ∶ R → X. The Busemann hypothesis fits the definition of horospherical product since we require the two height functions to be exactly opposite. Furthermore, adding the assumption that X and Y are geodesically complete allows us to prove that the horospherical product X ⋈ Y is connected (see Property 3.11). In the next part of this introduction we present our main results, which hold when X and Y are two proper, geodesically complete, Gromov hyperbolic, Busemann spaces. This covers the case where X and Y are solvable Lie groups of the form R ⋉ A N . In [12] and [13], using the horospherical product structure of treebolic space, Farb and Mosher proved a rigidity result for quasi-isometries of BS(1, n).
In [10] and [11], Eskin, Fisher and Whyte obtained a similar rigidity result for the Diestel-Leader graphs and the Sol geometries, again using their horospherical product structure. Besides being results on their own, the tools we develop in this paper are used in [14] to study the quasi-isometry classification of the aforementioned horospherical products. In [14] we generalise the results obtained by Eskin, Fisher and Whyte in [10], and provide new quasi-isometric classifications for some families of solvable Lie groups. There are many possible choices for the distance on X ⋈ Y ; in this paper we work with a family of length path metrics induced by distances on X × Y (see Definition 3.2). We require that the distance on X ⋈ Y comes from an admissible norm N on R 2 (e.g. any ℓ p norm). Our first result describes these distances.

Theorem A. Let d ⋈ be an admissible distance on X ⋈ Y . Then there exists a constant M depending only on the metric space (X ⋈ Y, d ⋈ ) such that for all p = (p X , p Y ), q = (q X , q Y ) ∈ X ⋈ Y :
∣d ⋈ (p, q) − (d X (p X , q X ) + d Y (p Y , q Y ) − ∣h(p) − h(q)∣)∣ ≤ M.

Therefore, given two admissible distances d and d ′ , the horospherical products (X ⋈ Y, d) and (X ⋈ Y, d ′ ) are roughly isometric, which means that there exists a (1, c)-quasi-isometry between them, for a constant c ≥ 0.

Figure 1: Shape of geodesic segments when h(p) ≤ h(q) − κ in X ⋈ Y . The neighbourhoods' shapes are distorted since when going upward, distances are contracted in the "direction" X and expanded in the "direction" Y .

Le Donne, Pallier and Xie proved in [22] that for the solvable groups R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 ), changing the left-invariant Riemannian metric results in the identity map being a rough similarity. Theorem A is one of the tools we use in [14], where we prove a geometric rigidity property of quasi-isometries between families of horospherical products.
This property leads to quasi-isometric invariants in such spaces, and a first result in the quasi-isometry classification of some solvable Lie groups. Throughout this paper we provide a coarse description of geodesics and of the visual boundary of a broad family of horospherical products. Following the characterisation of the distances on horospherical products, we describe the shape of geodesic segments.

Theorem B. Let X and Y be two proper, geodesically complete, δ-hyperbolic, Busemann spaces and let d ⋈ be an admissible distance on X ⋈ Y . Let p = (p X , p Y ) and q = (q X , q Y ) be two points of X ⋈ Y and let α be a geodesic segment of (X ⋈ Y, d ⋈ ) linking p to q. There exists a constant M depending only on (X ⋈ Y, d ⋈ ), and there exist two vertical geodesics V 1 = (V X 1 , V Y 1 ) and V 2 = (V X 2 , V Y 2 ) such that:
1. If h(p) ≤ h(q) − M then α is in the M -neighbourhood of V 1 ∪ (V X 1 , V Y 2 ) ∪ V 2 ;
2. If h(p) ≥ h(q) + M then α is in the M -neighbourhood of V 1 ∪ (V X 2 , V Y 1 ) ∪ V 2 ;
3. If ∣h(p) − h(q)∣ ≤ M then at least one of the conclusions of 1. or 2. holds.
Specifically, V 1 and V 2 can be chosen such that p is close to V 1 and q is close to V 2 . An example is illustrated in Figure 1 for h(p) ≤ h(q) − κ. Coarsely speaking, Theorem B ensures that any geodesic segment is constructed as the concatenation of three vertical geodesics. This result is similar to the Gromov hyperbolic case, where a geodesic segment is in a constant neighbourhood of two vertical geodesics. This result leads us to the existence of unextendable geodesics, which are called dead-ends. The shape of geodesics was already well known in lamplighter groups. In the case of Sol, we recover, up to an additive constant, Troyanov's description of global geodesics (see [27]). The horospherical product of X and R is isometric to X; therefore, given any vertical geodesic V Y of Y , X ⋈ V Y is an embedded copy of X in X ⋈ Y .
A geodesic line of X ⋈ Y looks either like a geodesic of X or like a geodesic of Y .

Corollary C. Let X and Y be two proper, geodesically complete, δ-hyperbolic, Busemann spaces. Then there exists M ≥ 0 depending only on δ such that for every geodesic line α ∶ R → X ⋈ Y at least one of the two following statements holds.
1. α is included in the M -neighbourhood of a geodesic contained in an embedded copy of X;
2. α is included in the M -neighbourhood of a geodesic contained in an embedded copy of Y .
If a geodesic verifies both conclusions, it is in the M -neighbourhood of a vertical geodesic of X ⋈ Y .

Let o ∈ X ⋈ Y ; the visual boundary of X ⋈ Y with respect to the base point o, denoted by ∂ o (X ⋈ Y ), stands for the set of equivalence classes of geodesic rays starting at o. As a consequence of the description of geodesic segments, we obtain that for any geodesic ray k of X ⋈ Y there exists a vertical geodesic ray at finite distance from k. Therefore we classify all possible shapes of geodesic rays, then we give a description of the visual boundary of X ⋈ Y .

Theorem D. Let X and Y be two proper, geodesically complete, δ-hyperbolic, Busemann spaces. Let (w X , a X ) ∈ X × ∂X, (w Y , a Y ) ∈ Y × ∂Y and let X ⋈ Y be the horospherical product with respect to (w X , a X ) and (w Y , a Y ). Then the visual boundary of X ⋈ Y with respect to any point o = (o X , o Y ) can be decomposed as:
∂ o (X ⋈ Y ) = ((∂X ∖ {a X }) × {a Y }) ⋃ ({a X } × (∂Y ∖ {a Y })) = ((∂X × {a Y }) ⋃ ({a X } × ∂Y )) ∖ {(a X , a Y )}.

When X ∶= R ⋉ A 1 N 1 and Y ∶= R ⋉ A 2 N 2 we obtain that ∂ (R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 )) = N 1 × N 2 . In the case of Sol, this last result is similar to Proposition 6.4 of [27]. However, unlike Troyanov in his work, we are focusing on minimal geodesics and not on local ones. One can see that this visual boundary depends neither on the chosen admissible distance d nor on the base point o.

Framework

The paper is organized as follows.
• In Section 2 we present the context in which we will construct the horospherical products, namely Gromov hyperbolic, Busemann spaces.

Figure 3: Depiction of ∂ o (X ⋈ Y ); the two sheets of the boundary are (∂X ∖ {a X }) × {a Y } and {a X } × (∂Y ∖ {a Y }).

• Then in Section 3 we define horospherical products and give some examples.
• In Section 4 we present an estimate on the length of paths avoiding horoballs in hyperbolic spaces, namely Lemma 4.9, which will be central in our control of the distances on X ⋈ Y . Then we give an estimate of the distances on X ⋈ Y through Theorem 4.13.
• Last, in Section 5, we prove our main results: Theorem A follows from Corollary 4.13, and the description of geodesic lines of Theorem B follows from Theorem A and gives us the tools to prove Theorem D.

Context

Throughout this section, H denotes a proper, geodesic, δ-hyperbolic space. Let us fix a point a ∈ ∂H on the boundary. We call vertical geodesic ray, respectively vertical geodesic line, any geodesic ray in the equivalence class a, respectively any geodesic line with one of its rays in a. The study of these specific geodesic rays is central in this work.

Busemann spaces and Busemann functions

A geodesic space H is a Busemann space if for any two geodesics γ ∶ [a, b] → H and γ ′ ∶ [a ′ , b ′ ] → H, the following function is convex:
D γ,γ ′ ∶ [a, b] × [a ′ , b ′ ] → R
(t, t ′ ) ↦ d H (γ(t), γ ′ (t ′ )).
It is a weaker assumption than being CAT(0) (Theorem 1.3 of [15]); however, it implies that H is uniquely geodesic. See Chap. 8 and Chap. 12 of [23] for more details on Busemann spaces. This convexity assumption removes some technical difficulties in a significant number of proofs in this work. If H is a Busemann space in addition to being Gromov hyperbolic, for all x ∈ H there exists a unique vertical geodesic ray, denoted by V x , starting at x. In fact the distance between two vertical geodesic rays starting at x is a convex and bounded function, hence decreasing and therefore constantly equal to 0. The construction of the horospherical product of two Gromov hyperbolic spaces X and Y requires the so-called Busemann functions. Their definition is simplified by the Busemann assumption.
Let us consider ∂X, the Gromov boundary of X (which, in this setting, is the same as the visual boundary). Both the boundary ∂X and X ∪ ∂X, endowed with the natural Hausdorff topology, are compact. Then, given a ∈ ∂X a point on the boundary, and w ∈ X a base point, we define a Busemann function β (a,w) with respect to a and w. Let V w be the unique vertical geodesic ray starting from w.
∀ x ∈ X, β (a,w) (x) ∶= lim sup t→+∞ (d(x, V w (t)) − t).
This function computes the asymptotic delay a point x ∈ X has in a race towards a against the vertical geodesic ray starting at w. The horospheres of X with respect to (a, w) ∈ ∂X × X are the level-sets of β (a,w) . These horospheres depend on the previously chosen couple (a, w) of ∂X × X.

Height functions and vertical geodesics

Proposition 2.1. Let a ∈ ∂H and w ∈ H. The height function h (a,w) ∶= −β (a,w) satisfies:
1. lim x→a h (a,w) (x) = +∞
2. lim x→b h (a,w) (x) = −∞, ∀b ∈ ∂H ∖ {a}
3. ∀x, y, z ∈ H, ∣β a (x, y) + β a (y, z) − β a (x, z)∣ ≤ 200δ.
Furthermore, a geodesic ray is in a ∈ ∂H if and only if its height tends to +∞.

Proposition 2.2. For a geodesic ray α ∶ [0, +∞[→ H, the two following properties are equivalent:
1. lim t→+∞ h (a,w) (α(t)) = +∞
2. α([0, +∞[) ∈ a.

Proof. As for any geodesic ray α ∶ [0, +∞[→ H there exists b ∈ ∂H such that α([0, +∞[) ∈ b, this proposition is a particular case of Proposition 2.1.

An important property of the height function is to be Lipschitz.

Proposition 2.3. Let a ∈ ∂H and w ∈ H. The height function h a ∶= −β a (⋅, w) is Lipschitz: ∀x, y ∈ H, ∣h (a,w) (x) − h (a,w) (y)∣ ≤ d(x, y).

Proof. By using the triangle inequality we have for all x, y ∈ H:
−h (a,w) (x) = β a (x, w) = sup{lim sup t→+∞ (d(x, k(t)) − t) ∣ k vertical ray starting at w}
≤ d(x, y) + sup{lim sup t→+∞ (d(y, k(t)) − t) ∣ k vertical ray starting at w}
≤ d(x, y) + β a (y, w) ≤ d(x, y) − h (a,w) (y).
The result follows by exchanging the roles of x and y.

From now on, we fix a given a ∈ ∂H and a given w ∈ H. Therefore we simply denote the height function by h instead of h (a,w) .

Proposition 2.4. Let α be a vertical geodesic of H.
We have the following control on the height along α: ∀t 1 , t 2 ∈ R,
t 2 − t 1 − 200δ ≤ h(α(t 2 )) − h(α(t 1 )) ≤ t 2 − t 1 + 200δ.

Proof. Let t 1 , t 2 ∈ R, then:
h(α(t 2 )) − h(α(t 1 )) = β(α(t 1 ), w) − β(α(t 2 ), w) = β(α(t 1 ), α(t 2 )) − (β(α(t 1 ), α(t 2 )) + β(α(t 2 ), w) − β(α(t 1 ), w)).
The third point of Proposition 2.1 applied to the last bracket gives:
β(α(t 1 ), α(t 2 )) − 200δ ≤ h(α(t 2 )) − h(α(t 1 )) ≤ β(α(t 1 ), α(t 2 )) + 200δ. (1)
Since t ↦ α(t + t 2 ) is a vertical geodesic starting at α(t 2 ) we have:
β(α(t 1 ), α(t 2 )) = sup{lim sup t→+∞ (d(α(t 1 ), k(t)) − t) ∣ k vertical ray starting at α(t 2 )}
≥ lim sup t→+∞ (d(α(t 1 ), α(t + t 2 )) − t) ≥ lim sup t→+∞ (t + t 2 − t 1 − t) ≥ t 2 − t 1 ,
since d(α(t 1 ), α(t + t 2 )) = t + t 2 − t 1 for t large enough. Using this last inequality in inequality (1) we get t 2 − t 1 − 200δ ≤ h(α(t 2 )) − h(α(t 1 )). The result follows by exchanging the roles of t 1 and t 2 .

Using Proposition 2.4 with t 1 = 0 and t 2 = t, the next corollary holds.

Corollary 2.5. Let α be a vertical geodesic parametrised by arclength and such that h(α(0)) = 0. We have: ∀t ∈ R, ∣h(α(t)) − t∣ ≤ 200δ.

From now on, H will be a proper, geodesic, Gromov hyperbolic, Busemann space. Hence the opposite of the height function is convex along any geodesic.

Property 2.6 (Prop. 12.1.5, p. 263 of Papadopoulos [23]). Let δ ≥ 0 be a non-negative number. Let H be a proper δ-hyperbolic, Busemann space. For every geodesic α, the function t ↦ −h(α(t)) is convex.

The Busemann hypothesis implies that the height along a geodesic behaves nicely. This means that we can drop the constant 200δ from Corollary 2.5. It is one of the main reasons why we require our spaces to be Busemann spaces.

Proposition 2.7. Let H be a δ-hyperbolic and Busemann space and let V ∶ R → H be a path of H. Then V is a vertical geodesic if and only if there exists c ∈ R such that ∀t ∈ R, h(V (t)) = t + c.

Proof. Let V be a vertical geodesic in H. By Property 2.6 we have that t ↦ −h(V (t)) is convex. Furthermore, from Corollary 2.5, we get ∣h(V (t)) − t∣ ≤ 200δ.
Thereby the bounded convex function t ↦ t − h(V (t)) is constant. Then there exists a real number c such that ∀t ∈ R, h(V (t)) = t + c.
We now assume that there exists a real number c such that ∀t ∈ R, h(V (t)) = t + c. Therefore, for all real numbers t 1 and t 2 we have d(V (t 1 ), V (t 2 )) ≥ ∆h(V (t 1 ), V (t 2 )) = ∣t 1 − t 2 ∣. Since V is a connected path parametrised by arclength, we also have ∣t 1 − t 2 ∣ ≥ d(V (t 1 ), V (t 2 )), which implies with the previous inequality that ∣t 1 − t 2 ∣ = d(V (t 1 ), V (t 2 )), hence V is a geodesic. Furthermore lim t→+∞ h(V (t)) = +∞, which implies by definition that V is a vertical geodesic.

A metric space is called geodesically complete if all its geodesic segments can be prolonged into geodesic lines. If H is geodesically complete in addition to its other assumptions, then any point of H is included in a vertical geodesic line.

Property 2.8. Let H be a proper, geodesically complete, Gromov hyperbolic, Busemann space. Then for all x ∈ H there exists a vertical geodesic V x ∶ R → H such that V x contains x.

Proof. Let us consider in this proof w ∈ H and a ∈ ∂H, from which we constructed the height h of our space H. Then by definition we have h (a,w) = h. Proposition 12.2.4 of [23] ensures the existence of a geodesic ray R x ∈ a starting at x. Furthermore, as H is geodesically complete, R x can be prolonged into a geodesic V x ∶ R → H such that V x ([0, +∞[) ∈ a, hence V x is a vertical geodesic.

Horospherical products

In this part we generalise the definition of horospherical product, as seen in [10] for two trees or two hyperbolic planes, to any pair of proper, geodesically complete, Gromov hyperbolic, Busemann spaces. We recall that given a proper, δ-hyperbolic space H with distinguished a ∈ ∂H and w ∈ H, we defined the height function on H in Section 2.3 from the Busemann function with respect to a and w.

Definitions

Let X and Y be two δ-hyperbolic spaces. We fix the base points w X ∈ X, w Y ∈ Y and the directions in the boundaries a X ∈ ∂X, a Y ∈ ∂Y . We consider their height functions h X and h Y respectively on X and Y .

Definition 3.1 (Horospherical product).
The horospherical product of X and Y , denoted by X ⋈ Y , is
X ⋈ Y ∶= {(p X , p Y ) ∈ X × Y ∣ h X (p X ) + h Y (p Y ) = 0}.
From now on, with slight abuse, we omit the base points and fixed points on the boundary in the construction of the horospherical product. The metric space X ⋈ Y refers to a horospherical product of two Gromov hyperbolic Busemann spaces. We choose to denote by X and Y the two components in order to identify easily which objects are in which component. In order to define a horospherical product in a wider setting, one might only need a Busemann function on a metric space. One of our goals is to understand the shape of geodesics in X ⋈ Y according to a given distance on it. In a Cartesian product the chosen distance changes the behaviour of geodesics. However we show that in a horospherical product the shape of geodesics does not change, up to an additive constant, for a large family of distances. We will define the distances on X ⋈ Y as length path metrics induced by distances on X × Y . A lot of natural distances on the Cartesian product X × Y come from norms on the vector space R 2 . Let N be such a norm and let us denote d N ∶= N (d X , d Y ), which means that for all couples (p X , p Y ), (q X , q Y ) ∈ X × Y we have that d N ((p X , p Y ), (q X , q Y )) = N (d X (p X , q X ), d Y (p Y , q Y )). The length l N (γ) of a path γ = (γ X , γ Y ) in the metric space (X × Y, d N ) is defined by:
l N (γ) = sup θ∈Θ([t 1 ,t 2 ]) ∑_{i=1}^{n_θ −1} d N (γ(θ i ), γ(θ i+1 )),
where Θ([t 1 , t 2 ]) is the set of subdivisions of [t 1 , t 2 ]. Then the N -path metric on X ⋈ Y is defined as follows.

Definition 3.2 (The N -path metric on X ⋈ Y ). Let N be a norm on the vector space R 2 . The N -path metric on X ⋈ Y , denoted by d ⋈ , is the length path metric induced by the distance N (d X , d Y ) on X × Y . For all p and q in X ⋈ Y we have:
d ⋈ (p, q) = inf{l N (γ) ∣ γ path in X ⋈ Y linking p to q}. (2)

Any norm N on R 2 can be normalised so that N (1, 1) = 1. We call admissible any such normalised norm which satisfies the following additional condition.
We call admissible any such norm which satisfies an additional condition. N (a, b) ≥ a + b 2 .(3) Since all norms are equivalent in R 2 , there exists a constant C N ≥ 1 such that: N (a, b) ≤ C N a + b 2 .(4) As an example, any l p norm with p ≥ 1 is admissible. Property 3.4. Let N be an admissible norm on the vector space R 2 . Let γ ∶= (γ X , γ Y ) ⊂ X × Y be a connected path. Then we have: l X (γ X ) + l Y (γ Y ) 2 ≤ l N (γ) ≤ C N l X (γ X ) + l Y (γ Y ) 2 . Proof. Let γ ∶= (γ X , γ Y ) ∶ [t 1 , t 2 ] → X × Y be a connected path and θ a subdivision of [t 1 , t 2 ], then by the definition of the length: l N (γ) ≥ n θ −1 i=1 d N (γ(θ i ), γ(θ i+1 )) = n θ −1 i=1 N d X γ X (θ i ), γ X (θ i+1 ) , d Y γ Y (θ i ), γ Y (θ i+1 ) ≥ n θ −1 i=1 1 2 d X γ X (θ i ), γ X (θ i+1 ) + d Y γ Y (θ i ), γ Y (θ i+1 ) , since N is admissible. ≥ 1 2 n θ −1 i=1 d X γ X (θ i ), γ X (θ i+1 ) + n θ −1 i=1 d Y γ Y (θ i ), γ Y (θ i+1 ) . Any couple of subdivision θ 1 and θ 2 can be merge into a subdivision θ that contains θ 1 and θ 2 . Furthermore the last inequality holds for any subdivision θ, hence by taking the supremum on all the subdivisions we have: l N (γ) ≥ l X (γ X ) + l Y (γ Y ) 2 . Furthermore, we have that ∀a, b ∈ R, N (a, b) ≤ C N a+b 2 , hence: n θ −1 i=1 d N (γ(θ i ), γ(θ i+1 )) ≤ C N 2 n θ −1 i=1 d X (γ X (θ i ), γ(θ i+1 )) + n θ −1 i=1 d Y (γ Y (θ i ), γ Y (θ i+1 )) ≤ C N l X (γ X ) + l X (γ X ) 2 Since last inequality holds for any subdivision θ, we have that l N (γ) ≤ C N l X (γ X )+l X (γ X ) 2 . The definition of height on X and Y is used to construct a height function on X ⋈ Y . Definition 3.5 (Height on X ⋈ Y ). The height h(p) of a point p = (p X , p Y ) ∈ X ⋈ Y is defined as h(p) = h X (p X ) = −h Y (p Y ). On Gromov hyperbolic spaces we have that de distance between two points is greater than their height difference. The same occurs on horospherical products given with an admissible norm. 
Let p and q be two points of X ⋈ Y , and let us denote by ∆h(p, q) ∶= ∣h(p) − h(q)∣ their height difference.

Lemma 3.6. Let N be an admissible norm, and let d ⋈ be the distance on X ⋈ Y induced by N . Then the height function is 1-Lipschitz with respect to the distance d ⋈ , i.e.,
∀p, q ∈ X ⋈ Y, d ⋈ (p, q) ≥ ∆h(p, q). (5)

Proof. Since N is admissible we have:
d ⋈ (p, q) ≥ (d X (p X , q X ) + d Y (p Y , q Y ))/2 ≥ (∆h(p X , q X ) + ∆h(p Y , q Y ))/2 = ∆h(p X , q X ) = ∆h(p, q).

Following Proposition 2.7, we define a notion of vertical paths in a horospherical product.

Definition 3.7 (Vertical paths in X ⋈ Y ). Let V ∶ R → X ⋈ Y be a connected path. We say that V is vertical if and only if there exists a parametrisation by arclength of V such that h(V (t)) = t for all t.

Actually, a vertical path of a horospherical product is a geodesic.

Lemma 3.8. Let N be an admissible norm. Let V ∶ R → X ⋈ Y be a vertical path. Then V is a geodesic of (X ⋈ Y, d ⋈ ).

Proof. Let t 1 , t 2 ∈ R. The path V is vertical, therefore ∆h(V (t 1 ), V (t 2 )) = ∣t 1 − t 2 ∣. Since V is connected and parametrised by arclength, we have that:
∣t 1 − t 2 ∣ = l N (V ∣ [t 1 ,t 2 ] ) ≥ d ⋈ (V (t 1 ), V (t 2 )) ≥ ∆h(V (t 1 ), V (t 2 )) = ∣t 1 − t 2 ∣.
Then d ⋈ (V (t 1 ), V (t 2 )) = ∣t 1 − t 2 ∣, which ends the proof.

Such geodesics are called vertical geodesics. The next proposition tells us that vertical geodesics of X ⋈ Y are exactly couples of vertical geodesics of X and Y .

Proposition 3.9. Let V = (V X , V Y ) ∶ R → X ⋈ Y be a geodesic of (X ⋈ Y, d ⋈ ). The two following properties are equivalent:
1. V is a vertical geodesic of (X ⋈ Y, d ⋈ );
2. V X and V Y are respectively vertical geodesics of X and Y .

Proof. Let us first assume that V is a vertical geodesic. We have for all real t that h(V X (t)) = h(V (t)) = t, hence ∀t 1 , t 2 ∈ R:
d X (V X (t 1 ), V X (t 2 )) ≥ ∆h(V X (t 1 ), V X (t 2 )) = ∣t 1 − t 2 ∣. (6)
Similarly we have that d Y (V Y (t 1 ), V Y (t 2 )) ≥ ∣t 1 − t 2 ∣.
Using that N is admissible and that V is a geodesic we have:
d X (V X (t 1 ), V X (t 2 )) = 2 (d X (V X (t 1 ), V X (t 2 )) + d Y (V Y (t 1 ), V Y (t 2 )))/2 − d Y (V Y (t 1 ), V Y (t 2 ))
≤ 2 d ⋈ (V (t 1 ), V (t 2 )) − ∣t 1 − t 2 ∣ = ∣t 1 − t 2 ∣.
Combined with inequality (6), we have that d X (V X (t 1 ), V X (t 2 )) = ∣t 1 − t 2 ∣, hence V X is a vertical geodesic of X. Similarly, V Y is a vertical geodesic of Y .
Let us now assume that V X and V Y are vertical geodesics of X and Y . Let t 1 , t 2 ∈ R; we have:
d ⋈ (V (t 1 ), V (t 2 )) = sup θ∈Θ([t 1 ,t 2 ]) ∑_{i=1}^{n_θ −1} d N (V (θ i ), V (θ i+1 ))
= sup θ∈Θ([t 1 ,t 2 ]) ∑_{i=1}^{n_θ −1} N (d X (V X (θ i ), V X (θ i+1 )), d Y (V Y (θ i ), V Y (θ i+1 )))
= sup θ∈Θ([t 1 ,t 2 ]) ∑_{i=1}^{n_θ −1} N (∆h(V X (θ i ), V X (θ i+1 )), ∆h(V Y (θ i ), V Y (θ i+1 )))
= sup θ∈Θ([t 1 ,t 2 ]) N (1, 1) ∑_{i=1}^{n_θ −1} ∆h(V X (θ i ), V X (θ i+1 ))
= N (1, 1) ∆h(V X (t 1 ), V X (t 2 )) = ∣t 1 − t 2 ∣, since N (1, 1) = 1,
where Θ([t 1 , t 2 ]) is the set of subdivisions of [t 1 , t 2 ]. Hence the proposition is proved.

This previous result is the main reason why we are working with distances which come from admissible norms. A metric space is called geodesically complete if all its geodesic segments can be prolonged into geodesic lines. If X and Y are proper, hyperbolic, geodesically complete Busemann spaces, their horospherical product X ⋈ Y is connected.

Property 3.11. Let X and Y be two proper, geodesically complete, δ-hyperbolic, Busemann spaces. Let X ⋈ Y be their horospherical product. Then X ⋈ Y is connected; furthermore
(d X + d Y )/2 ≤ d X⋈Y ≤ 2C N (d X + d Y ).

Proof. Let p = (p X , p Y ) and q = (q X , q Y ) be two points of X ⋈ Y . From Property 2.8, there exists a vertical geodesic V p Y such that p Y is in the image of V p Y , and there exists a vertical geodesic V q X such that q X is in the image of V q X . Let q ′ Y be the point of V p Y at height h(q Y ). Let α X be a geodesic of X linking p X to q X and let α ′ Y be a geodesic of Y linking q ′ Y to q Y .
We will connect p to q with a path composed of pieces of α X , α ′ Y , V p Y and V q X . We first link (p X , p Y ) to (q X , q ′ Y ) with α X and V p Y . This is possible since V p Y is parametrised by its height. More precisely we construct the following path c 1 :
∀t ∈ [0, d(p X , q X )], c 1 (t) = (α X (t), V p Y (−h(α X (t)))).
Since V p Y is parametrised by its height, we have h(V p Y (−h(α X (t)))) = −h(α X (t)), which implies c 1 (t) ∈ X ⋈ Y . Furthermore, using the fact that the height is 1-Lipschitz, we have ∀t 1 , t 2 ∈ [0, d(p X , q X )]:
d Y (V p Y (−h(α X (t 1 ))), V p Y (−h(α X (t 2 )))) = ∣h(α X (t 1 )) − h(α X (t 2 ))∣ ≤ d X (α X (t 1 ), α X (t 2 )).
Hence c 1,Y ∶ t ↦ V p Y (−h(α X (t))) is a connected path such that l(c 1,Y ) ≤ l(α X ) = d X (p X , q X ), and c 1 is a connected path linking (p X , p Y ) to (q X , q ′ Y ). Using Property 3.4 on c 1 provides us with:
l N (c 1 ) ≤ C N (l(c 1,Y ) + l(α X ))/2 ≤ C N l(α X ) ≤ C N d X (p X , q X ).
We recall that by definition q ′ Y = V p Y (h(q Y )). We show similarly that c 2 ∶ t ↦ (V q X (−h(α ′ Y (t))), α ′ Y (t)) is a connected path linking (q X , q ′ Y ) to (q X , q Y ) such that:
l(c 2 ) ≤ C N d Y (q ′ Y , q Y ) ≤ C N (d Y (q ′ Y , p Y ) + d Y (p Y , q Y )) = C N (∆h(p Y , q Y ) + d Y (p Y , q Y )), since q ′ Y = V p Y (h(q Y )),
≤ 2C N d Y (p Y , q Y ).
Hence, there exists a connected path c = c 1 ∪ c 2 linking p to q such that:
l(c) ≤ C N d X (p X , q X ) + 2C N d Y (p Y , q Y ) ≤ 2C N (d X (p X , q X ) + d Y (p Y , q Y )). (7)

However, if the two components X and Y are not geodesically complete, X ⋈ Y may not be connected.

Example 3.12. Let X and Y be two graphs, each constructed from an infinite line (with vertices indexed by Z) with an additional vertex glued on the vertex 0 for X and on the vertex −2 for Y . Their constructions are illustrated in Figure 4. They are two 0-hyperbolic Busemann spaces which are not geodesically complete. Let w X ∈ X be the vertex indexed by 0 in X, and let w Y ∈ Y be the vertex indexed by −2 in Y .
We choose them to be the base points of X and Y. Since ∂X and ∂Y each contain two points, we fix in both cases the boundary point a X or a Y to be the one containing the geodesic ray indexed by N. In Figure 4, the height of each vertex is written inside it. Then the horospherical product X ⋈ Y, taken with the ℓ 1 path metric, is not connected. Since some vertices of X and Y are not contained in any vertical geodesic, one may not be able to adjust the height correctly while constructing a path joining (p X −1 , p Y (2,1) ) to (p X (0,−1) , p Y (2,1) ).

Figure 4: Example of a horospherical product which is not connected. The number in a vertex is the height of that vertex.

It is not clear that a horospherical product is still connected without the hypothesis that X and Y are Busemann spaces. In that case we would need a "coarse" definition of the horospherical product. Indeed, the height along geodesics would not be smooth as in Proposition 2.7, so the condition requiring two exactly opposite heights would not be suitable.

Examples

A Heintze group is a Lie group of the form R ⋉ A N defined by the action on R, t ↦ exp(tA), with N a simply connected nilpotent Lie group and with A a derivation of Lie(N) whose eigenvalues have positive real parts. Heintze proved in [20] that any simply connected, negatively curved Lie group is isomorphic to a Heintze group. Moreover, a Busemann metric space is simply connected, hence any Gromov hyperbolic, Busemann Lie group is isomorphic to a Heintze group.
Consequently, Heintze groups are natural candidates for the two components from which a horospherical product is constructed. In his paper [30], Xie classifies the subfamily of all negatively curved Lie groups R ⋉ R n up to quasi-isometry.
Let H 1 ∶= R ⋉ A 1 N 1 and H 2 ∶= R ⋉ A 2 N 2 be two Heintze groups, then H 1 ⋈ H 2 is isomorphic to R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 ), where Diag(A 1 , −A 2 ) is the block diagonal matrix containing A 1 and −A 2 on its diagonal. In fact, we have that H 1 × H 2 is the group R 2 ⋉ (A 1 ,A 2 ) (N 1 × N 2 ) defined by the action on R 2 , (t 1 , t 2 ) ↦ (exp(t 1 A 1 ), exp(t 2 A 2 )). Let (0, e N 1 ) ∈ H 1 and (0, e N 2 ) ∈ H 2 be the two base points, and let t ↦ (t, e N 1 ) and t ↦ (t, e N 2 ) be their respective vertical geodesic rays corresponding to the chosen Busemann functions. Then we have that for all (t, n) ∈ H i , h(t, n) = t. Under this setting we have that
H 1 ⋈ H 2 = {(t 1 , t 2 , n 1 , n 2 ) ∈ H 1 × H 2 | t 1 = −t 2 } = {(t, −t, n 1 , n 2 ) ∈ H 1 × H 2 }.
Thanks to this characterisation, we show that H 1 ⋈ H 2 is a subgroup of R 2 ⋉ (A 1 ,A 2 ) (N 1 × N 2 ). Furthermore the following map is an isomorphism:
H 1 ⋈ H 2 → R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 ), (t, −t, n 1 , n 2 ) ↦ (t, n 1 , n 2 ),
where R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 ) is determined by the action t ↦ (exp(tA 1 ), exp(−tA 2 )). Therefore, we have that
(R ⋉ A 1 N 1 ) ⋈ (R ⋉ A 2 N 2 ) ≅ iso R ⋉ Diag(A 1 ,−A 2 ) (N 1 × N 2 ).
The Sol geometries are specific cases of such solvable Lie groups when N i = R for i ∈ {1, 2}, and where the matrices A i are positive reals. In this context, for m > 0 we have that R ⋉ m R is the Log model of a real hyperbolic plane, that is, the Riemannian manifold with coordinates (x, z) ∈ R 2 endowed with the Riemannian metric ds 2 = e −2mz dx 2 + dz 2 .
Then (R ⋉ m R) ⋈ (R ⋉ n R) = R ⋉ Diag(m,−n) R 2 is a Sol geometry, or equivalently the Riemannian manifold with coordinates (x 1 , x 2 , z) ∈ R 3 endowed with the Riemannian metric ds 2 = e −2mz dx 2 1 + e 2nz dx 2 2 + dz 2 .
A first discrete example of horospherical product is the family of Diestel-Leader graphs defined by DL(n, m) = T n ⋈ T m with n, m ≥ 2 and where T n and T m are regular trees. We see T n and T m as connected metric spaces with the usual distance on them. By choosing half of the ℓ 1 path metric on DL(n, m), this horospherical product becomes a graph with the natural distance on it. Indeed, the set of vertices of DL(n, m) is then defined by the subset of couples of vertices of T n × T m included in DL(n, m). In this horospherical product, two points (p n , p m ) and (q n , q m ) of DL(n, m) are connected by an edge if and only if p n and q n are connected by an edge in T n and if p m and q m are connected by an edge in T m . Furthermore, when n = m, there is a one-to-one correspondence between DL(n, n) and the Cayley graph of the lamplighter group Z n ≀ Z, see [28] for further details.

Figure 6: The Sol geometry and two geodesics of embedded copies of H 2 .

Depending on the case, we either used the ℓ 1 path metric or the ℓ 2 path metric. However, we will see in Proposition 4.14 that it does not matter, up to an additive uniform constant. Quasi-isometric rigidity results in the Diestel-Leader graphs and the Sol geometry have been proved using the same techniques in [10] and [11].
The horospherical product of a hyperbolic plane and a regular tree has been studied as the 2-complex of Baumslag-Solitar groups in [2]; they are called the treebolic spaces. The distance they choose on the treebolic spaces is similar to ours. In fact our Proposition 4.13 and their Proposition 2.8 page 9 (in [2]) tell us they are equal up to an additive constant.
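As a quick numeric illustration (not from the original text) of the geometry of these model metrics: in the Log model ds 2 = e −2z dx 2 + dz 2 (taking m = 1), a horizontal run at constant height z costs e^(−z) per unit of x, so a path that first climbs, then crosses, then descends can be exponentially shorter than one that stays low. The climbing height used below is a convenient near-optimal choice, not the exact optimum.

```python
import math

# Path lengths in the Log model of the hyperbolic plane,
# ds^2 = e^(-2z) dx^2 + dz^2 (m = 1): a horizontal run at
# constant height z costs e^(-z) per unit of horizontal distance.

def up_over_down(X, H):
    """Climb from height 0 to H, cross a horizontal gap X, descend back to 0."""
    return 2 * H + math.exp(-H) * X

X = 1000.0
flat = math.exp(-0.0) * X              # stay at height 0 the whole way: length X
climb = up_over_down(X, math.log(X))   # near-optimal climbing height ln(X)

assert flat == 1000.0
assert climb < 20.0                    # 2*ln(1000) + 1, vastly shorter than 1000
```

This is the mechanism behind the length estimates of Section 4 below: a path forced to stay below a given height pays an exponential penalty in its horizontal component.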
Rigidity results on the quasi-isometry classification of the treebolic spaces were brought up in [12] and [13].

4 Estimates on the length of specific paths

4.1 Geodesics in Gromov hyperbolic Busemann spaces

This section focuses on length estimates in Gromov hyperbolic Busemann spaces. The central result is Proposition 4.9, which presents a lower bound on the length of a path staying between two horospheres. Before moving to the technical results of this section, let us introduce some notations. Given a path γ ∶ I → H, we denote:
h + (γ) = sup t∈I h(γ(t)) ; h − (γ) = inf t∈I h(γ(t)).
Let x and y be two points of H; we denote the height difference between them by:
∆h(x, y) = |h(x) − h(y)|.
We define the relative distance between two points x and y of H as:
d r (x, y) = d(x, y) − ∆h(x, y).
Let us denote by V x a vertical geodesic containing x, which we assume to be parametrised by arclength. Thanks to Proposition 2.7 we can choose a parametrisation by arclength such that ∀t ∈ R, h(V x (t)) = t. The relative distance between two points quantifies how far a point is from the nearest vertical geodesic containing the other point. In the sequel we want to apply the slim triangles property to ideal triangles, hence we need the following result of [5].
Property 4.2 (Proposition 2.2 page 19 of [5]). Let a, b and c be three points of X ∪ ∂X. Let α, β, γ be three geodesics of X linking respectively b to c, c to a, and a to b. Then every point of α is at distance less than 24δ from the union β ∪ γ.
The next lemma tells us that in order to connect two points, a geodesic needs to go sufficiently high. This height is controlled by the relative distance between these two points.
Lemma 4.3. Let H be a δ-hyperbolic and Busemann metric space, let x and y be two elements of H such that h(x) ≤ h(y), and let α be a geodesic linking x to y.
Let us denote z ∶= α(∆h(x, y) + 1 2 d r (x, y)), let x 1 ∶= V x (h(y) + 1 2 d r (x, y)) be the point of V x at height h(y) + 1 2 d r (x, y) and let y 1 ∶= V y (h(y) + 1 2 d r (x, y)) be the point of V y at the same height h(y) + 1 2 d r (x, y). Then we have:
1. h + (α) ≥ h(y) + 1 2 d r (x, y) − 96δ
2. d(z, x 1 ) ≤ 144δ
3. d(z, y 1 ) ≤ 144δ
4. d(x 1 , y 1 ) ≤ 288δ.
Proof. The lemma and its proof are illustrated in Figure 7. Following Property 4.2, the triple of geodesics α, V x and V y is a 24δ-slim triangle. Since the sets {t ∈ [0, d(x, y)] | d(α(t), V x ) ≤ 24δ} and {t ∈ [0, d(x, y)] | d(α(t), V y ) ≤ 24δ} are closed sets covering [0, d(x, y)], their intersection is non empty. Hence there exist t 0 ∈ [0, d(x, y)], x 2 ∈ V x and y 2 ∈ V y such that d(α(t 0 ), x 2 ) ≤ 24δ and d(α(t 0 ), y 2 ) ≤ 24δ.
Let us first prove that t 0 is close to ∆h(x, y) + 1 2 d r (x, y). By the triangle inequality we have that:
|t 0 − d(x, x 2 )| = |d(x, α(t 0 )) − d(x, x 2 )| ≤ d(x 2 , α(t 0 )) ≤ 24δ.
Let us denote x 3 ∶= V x (h(x) + t 0 ) the point of V x at height h(x) + t 0 , and y 3 ∶= V y (h(y) + d(x, y) − t 0 ) the point of V y at height h(y) + d(x, y) − t 0 . Then by the triangle inequality:
d(α(t 0 ), x 3 ) ≤ d(α(t 0 ), x 2 ) + d(x 2 , x 3 ) = d(α(t 0 ), x 2 ) + |d(x, x 2 ) − d(x, x 3 )| ≤ d(α(t 0 ), x 2 ) + |d(x, x 2 ) − t 0 | ≤ 48δ. (8)
In the last inequality we used that d(x, x 3 ) = t 0 , which holds by the definition of x 3 . We show in the same way that d(α(t 0 ), y 3 ) ≤ 48δ. By the triangle inequality we have d(x 3 , y 3 ) ≤ 96δ. As the height function is Lipschitz we have ∆h(x 3 , y 3 ) ≤ d(x 3 , y 3 ) ≤ 96δ, which provides us with:
| 1 2 d r (x, y) + ∆h(x, y) − t 0 | = 1 2 |d r (x, y) + ∆h(x, y) + h(y) − h(x) − 2t 0 | = 1 2 |(h(y) + d(x, y) − t 0 ) − (h(x) + t 0 )| = 1 2 ∆h(x 3 , y 3 ) ≤ 96δ/2 ≤ 48δ. (9)
In particular it gives us that d(z, α(t 0 )) ≤ 48δ.
We are now ready to prove the first point using inequalities (8) and (9):
h + (α) ≥ h(α(t 0 )) ≥ h(x 3 ) − ∆h(α(t 0 ), x 3 ) ≥ h(x) + t 0 − 48δ ≥ h(x) + 1 2 d r (x, y) + ∆h(x, y) − 96δ ≥ h(y) + 1 2 d r (x, y) − 96δ, as we have h(x) ≤ h(y).
The second point of our lemma is proved as follows:
d(z, x 1 ) ≤ d(z, α(t 0 )) + d(α(t 0 ), x 1 ) ≤ 48δ + d(α(t 0 ), x 3 ) + d(x 3 , x 1 ) ≤ 96δ + |t 0 + h(x) − ( 1 2 d r (x, y) + h(y))| = 96δ + |t 0 − ∆h(x, y) − 1 2 d r (x, y)| ≤ 144δ.
The proof of 3. is similar, and 4. is obtained from 2. and 3. by the triangle inequality.
The next lemma shows that in the case where h(x) ≤ h(y) a geodesic linking x to y is almost vertical until it reaches the height h(y).
Lemma 4.4. Let H be a δ-hyperbolic and Busemann space. Let x and y be two points of H such that h(x) ≤ h(y). We define x ′ ∶= V x (h(y)) to be the point of the vertical geodesic V x at the same height as y. Then:
|d r (x, y) − d(x ′ , y)| ≤ 54δ. (10)
Proof. The geodesic triangle [x, x ′ ] ∪ [x ′ , y] ∪ [x, y] is δ-slim. As in the proof of Lemma 4.3, there exists a point m of [x, y] at distance at most δ from both of the two other sides; let p 1 be a point of [x, x ′ ] such that d(m, p 1 ) ≤ δ. Since the height function is 1-Lipschitz:
h − ([x ′ , y]) − δ ≤ h(m) ≤ h + ([x, x ′ ]) + δ.
Let R x ′ and R y be two vertical geodesic rays respectively contained in V x and V y and respectively starting at x ′ and y. Then Property 4.2 used on the ideal triangle R x ′ ∪ R y ∪ [x ′ , y] implies that h − ([x ′ , y]) ≥ h(y) − 24δ; moreover we have h + ([x, x ′ ]) = h(y). Then h(y) − 25δ ≤ h(m) ≤ h(y) + δ holds. It follows that m and x ′ are close to each other:
d(m, x ′ ) ≤ d(m, p 1 ) + d(p 1 , x ′ ) ≤ δ + ∆h(p 1 , x ′ ) ≤ δ + ∆h(p 1 , m) + ∆h(m, y) + ∆h(y, x ′ ) ≤ δ + d(p 1 , m) + 25δ + 0 ≤ 27δ. (11)
Then we give an estimate on the distance between x and m:
|d(x, m) − ∆h(x, y)| = |d(x, m) − d(x, x ′ )| ≤ d(m, x ′ ) ≤ 27δ. (12)
However d r (x, y) = d(x, y) − ∆h(x, y) and d(x, y) = d(x, m) + d(m, y), therefore:
d r (x, y) = d(x, m) + d(m, y) − ∆h(x, y). (13)
Combining inequalities (12) and (13) we have |d r (x, y) − d(m, y)| ≤ 27δ. Then:
|d r (x, y) − d(x ′ , y)| ≤ 27δ + d(x ′ , m) ≤ 54δ.
We are now able to prove the estimates of the next section.
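The estimates of Lemma 4.3 can be checked concretely in the hyperbolic plane. In the upper half-plane model (the change of variable y = e^z from the Log model above), the height is h = log y, vertical geodesics are Euclidean vertical lines, and the other geodesics are semicircles centred on the real axis, so the maximal height of a geodesic segment has a closed form. The script below is an illustration only, with an empirical tolerance of 2, comfortably below the 96δ slack of the lemma since the hyperbolic plane is δ-hyperbolic for a small δ:

```python
import math, random

# Upper half-plane model of H^2: points (x, y) with y > 0, height h = log(y)
# (the change of variable y = e^z from the Log model).

def dist(p, q):
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1 + ((x1 - x2) ** 2 + (y1 - y2) ** 2) / (2 * y1 * y2))

def height(p):
    return math.log(p[1])

def top_height(p, q):
    # Geodesics are semicircles centred on the real axis (or vertical lines).
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return max(height(p), height(q))
    c = (x2 ** 2 + y2 ** 2 - x1 ** 2 - y1 ** 2) / (2 * (x2 - x1))
    r = math.hypot(x1 - c, y1)
    # The apex of the semicircle lies on the arc only if c is between x1 and x2.
    return math.log(r) if min(x1, x2) <= c <= max(x1, x2) else max(height(p), height(q))

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), math.exp(random.uniform(-3, 3)))
    q = (random.uniform(-5, 5), math.exp(random.uniform(-3, 3)))
    d_r = dist(p, q) - abs(height(p) - height(q))      # relative distance
    predicted = max(height(p), height(q)) + d_r / 2    # Lemma 4.3 prediction
    assert -1e-9 < predicted - top_height(p, q) <= 2.0
```

The apex test matters: when the semicircle's centre lies outside the horizontal span of the two points, the height is monotone along the arc and the top is simply the higher endpoint.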
4.2 Length estimate of paths avoiding horospheres

Consider a path γ and a geodesic α sharing the same end-points in a proper, Gromov hyperbolic, Busemann space. We prove in this section that if the height of γ does not reach the maximal height of the geodesic α, then γ is much longer than α. Furthermore, its length increases exponentially with respect to the difference of maximal height between γ and α. To do so, we make use of Proposition 1.6 p400 of [3], which we recall here. Let us denote by l(c) the length of a path c.
Proposition 4.5 (Proposition 1.6 p400 of [3]). Let X be a δ-hyperbolic geodesic space, let c be a continuous, rectifiable path of X, and let [p, q] be a geodesic segment linking the end-points of c. Then for every x ∈ [p, q]:
d(x, im(c)) ≤ δ |log 2 (l(c))| + 1.
This result implies that a path of X between p and q which avoids the ball of diameter [p, q] has length greater than an exponential of the distance d(p, q). From now on we adopt the convention that δ ≥ 1. For all δ 1 ≤ δ 2 a δ 1 -slim triangle is also δ 2 -slim, hence all δ 1 -hyperbolic spaces are δ 2 -hyperbolic spaces. That is why we can assume that all Gromov hyperbolic spaces are δ-hyperbolic with δ ≥ 1. It allows us to consider 1/δ as a well defined term, and we hence avoid separate cases in some of the proofs. We also use this assumption to simplify constants appearing in this document.
The next result is a control on the length of a path similar to Proposition 4.5, but where the path avoids a horosphere instead of avoiding a ball of H.
Lemma 4.6. Let δ ≥ 1 and H be a proper, geodesic, δ-hyperbolic, Busemann space. Let x and y ∈ H and let V x , respectively V y , be a vertical geodesic containing x, respectively y. Let us consider t 0 ≥ max(h(x), h(y)) and let us denote x 0 ∶= V x (t 0 ) and y 0 ∶= V y (t 0 ), the respective points of V x and V y at the height t 0 . Assume that d(x 0 , y 0 ) > 768δ. Then for every connected path γ ∶ [0, T ] → H such that γ(0) = x, γ(T ) = y and h + (γ) ≤ h(x 0 ) we have:
l(γ) ≥ ∆h(x, x 0 ) + ∆h(y, y 0 ) + 2^(−386) · 2^(d(x 0 ,y 0 )/(2δ)) − 24δ. (14)
For trees (when δ = 0) this Lemma still makes sense.
Indeed, if δ tends to 0 then the length of the path described in this Lemma tends to infinity, which is consistent with the fact that such a path does not exist in trees. The proof would use the fact that in Proposition 4.5 we have d(x, im(c)) = 0 when δ = 0, since 0-hyperbolic spaces are real trees.
Proof. One can follow the idea of the proof on Figure 8. We will consider γ to be parametrised by arclength. Let B(x, ∆h(x 0 , x)) ⊂ H be the ball of radius h(x 0 ) − h(x) centred on x, and let m ∈ B(x, ∆h(x 0 , x)) be a point in this ball. Then:
d r (m, x) = d(m, x) − ∆h(m, x) ≤ ∆h(x, x 0 ) − ∆h(m, x) ≤ ∆h(x 0 , m).
Let us first assume that h(m) ≥ h(x), then:
h(m) + d r (m, x)/2 ≤ h(m) + ∆h(x 0 , m)/2 ≤ h(m) + (h(x 0 ) − h(m))/2 = h(x 0 )/2 + h(m)/2 ≤ h(x 0 ). (15)
By Lemma 4.3 we have:
d(V x (h(m) + d r (m, x)/2), V m (h(m) + d r (m, x)/2)) ≤ 288δ.
We now assume that h(m) ≤ h(x), then:
h(x) + d r (x, m)/2 ≤ h(x) + d(x, m)/2 ≤ h(x) + ∆h(x, x 0 )/2 ≤ h(x 0 ).
Then Lemma 4.3 provides us with:
d(V x (h(x) + d r (m, x)/2), V m (h(x) + d r (m, x)/2)) ≤ 288δ.
Since H is a Busemann space, the function t ↦ d(V x (t), V m (t)) is convex. Furthermore t ↦ d(V x (t), V m (t)) is bounded on [0; +∞[ as H is Gromov hyperbolic, hence t ↦ d(V x (t), V m (t)) is a non increasing function. Therefore both cases h(m) ≤ h(x) and h(x) ≤ h(m) give us that:
d(x 0 , V m (h(x 0 ))) = d(V x (h(x 0 )), V m (h(x 0 ))) ≤ 288δ. (16)
In other words, all points of B(x, ∆h(x 0 , x)) belong to a vertical geodesic passing nearby x 0 . By the same reasoning we have ∀n ∈ B(y, ∆h(y 0 , y)):
d(y 0 , V n (h(y 0 ))) ≤ 288δ. (17)
Then by the triangle inequality:
d(V m (h(x 0 )), V n (h(y 0 ))) ≥ −d(x 0 , V m (h(x 0 ))) + d(x 0 , y 0 ) − d(y 0 , V n (h(y 0 ))) ≥ 768δ − 288δ − 288δ ≥ 192δ. (18)
In particular d(V m (h(x 0 )), V n (h(y 0 ))) = d(V m (h(x 0 )), V n (h(x 0 ))) > 0, which implies that m ≠ n.
Then B(x, ∆h(x 0 , x)) ∩ B(y, ∆h(y 0 , y)) = ∅. By continuity of γ we deduce the existence of the two following times t x ≤ t y :
t x = inf{t ∈ [0, T ] | d(γ(t), x) = ∆h(x, x 0 )},
t y = sup{t ∈ [0, T ] | d(γ(t), y) = ∆h(y, y 0 )}.
In order to have a lower bound on the length of γ we will need to split this path into three parts: γ = γ [0,tx] ∪ γ [tx,ty] ∪ γ [ty,T ] . As γ is parametrised by arclength and d(γ(0), γ(t x )) = ∆h(x, x 0 ) we have that:
l(γ [0,tx] ) ≥ ∆h(x, x 0 ). (19)
For similar reasons we also have:
l(γ [ty,T ] ) ≥ ∆h(y, y 0 ). (20)
We will now focus on proving a lower bound for the length of γ [tx,ty] . We want to construct a path γ ′ joining x 1 ∶= V γ(tx) (h(x 0 )) to y 1 ∶= V γ(ty) (h(x 0 )) that stays below h(x 0 ) and such that γ [tx,ty] is contained in γ ′ . We construct γ ′ by gluing the following paths together:
- V γ(tx) from x 1 to γ(t x ),
- γ from γ(t x ) to γ(t y ),
- V γ(ty) from γ(t y ) to y 1 .
Applying inequalities (16) and (17) to γ(t x ) and γ(t y ) we get:
d(x 0 , x 1 ) ≤ 288δ, (21)
d(y 0 , y 1 ) ≤ 288δ. (22)
In order to apply Proposition 4.5 to γ ′ we need to check that there exists a point A of the geodesic segment [x 1 , y 1 ] such that h(A) ≥ h(x 0 ). Applying Lemma 4.3 to [x 1 , y 1 ], and since h(x 1 ) = h(y 1 ), we get:
h + ([x 1 , y 1 ]) ≥ d r (x 1 , y 1 )/2 + h(x 0 ) − 96δ = d(x 1 , y 1 )/2 + h(x 0 ) − 96δ.
Thanks to the triangle inequality and inequalities (21) and (22):
h + ([x 1 , y 1 ]) ≥ (d(y 0 , x 0 ) − d(x 0 , x 1 ) − d(y 0 , y 1 ))/2 + h(x 0 ) − 96δ ≥ d(x 0 , y 0 )/2 + h(x 0 ) − 384δ.
Since by hypothesis d(x 0 , y 0 ) > 768δ, there exists a point A of [x 1 , y 1 ] exactly at the height:
h(A) = d(x 0 , y 0 )/2 + h(x 0 ) − 384δ.
We can then apply Proposition 4.5 to get:
δ log 2 (l(γ ′ )) + 1 ≥ d(A, γ ′ ) ≥ ∆h(A, x 0 ) ≥ d(x 0 , y 0 )/2 + h(x 0 ) − 384δ − h(x 0 ) ≥ d(x 0 , y 0 )/2 − 384δ.
Since δ ≥ 1, the last inequality implies that l(γ ′ ) ≥ 2^(−385) · 2^(d(x 0 ,y 0 )/(2δ)). Now we use this inequality to get a lower bound on the length of γ [tx,ty] :
l(γ [tx,ty] ) ≥ l(γ ′ ) − ∆h(γ(t x ), x 0 ) − ∆h(γ(t y ), y 0 ) ≥ 2^(−385) · 2^(d(x 0 ,y 0 )/(2δ)) − ∆h(γ(t x ), x 0 ) − ∆h(γ(t y ), y 0 ). (23)
We claim that l(γ [tx,ty] ) ≥ ∆h(γ(t x ), x 0 ) + ∆h(γ(t y ), y 0 ) − 48δ, hence:
l(γ [tx,ty] ) ≥ 2^(−386) · 2^(d(x 0 ,y 0 )/(2δ)) − 24δ, (24)
which ends the proof by combining inequality (24) with inequalities (19) and (20).
Proof of the claim. Inequality (18) with m = γ(t x ) and n = γ(t y ) gives d(x 1 , y 1 ) ≥ 192δ. We want to prove that h + ([γ(t x ), γ(t y )]) ≥ h(x 1 ) − 24δ. First, by Property 4.2 we have that [γ(t x ), γ(t y )] ∪ V γ(tx) ∪ V γ(ty) is a 24δ-slim triangle. Then there exist three times t 0 , t 1 and t 2 such that d(V γ(tx) (t 1 ), γ(t 0 )) ≤ 24δ and such that d(V γ(ty) (t 2 ), γ(t 0 )) ≤ 24δ. Then:
|t 1 − t 2 | = ∆h(V γ(tx) (t 1 ), V γ(ty) (t 2 )) ≤ d(V γ(tx) (t 1 ), V γ(ty) (t 2 )) ≤ d(V γ(tx) (t 1 ), γ(t 0 )) + d(γ(t 0 ), V γ(ty) (t 2 )) ≤ 48δ. (25)
We will show by contradiction that either t 1 = h(V γ(tx) (t 1 )) ≥ h(x 0 ) or t 2 = h(V γ(ty) (t 2 )) ≥ h(x 0 ). Assume that t 1 < h(x 0 ) and t 2 < h(x 0 ). Then by the triangle inequality:
d(V γ(tx) (t 1 ), V γ(ty) (t 2 )) ≥ d(V γ(ty) (t 2 ), V γ(tx) (t 2 )) − d(V γ(tx) (t 2 ), V γ(tx) (t 1 )) ≥ d(V γ(ty) (t 2 ), V γ(tx) (t 2 )) − 48δ, since |t 1 − t 2 | ≤ 48δ by equation (25).
As H is a Busemann space, the function t ↦ d(V γ(tx) (t), V γ(ty) (t)) is non increasing (being convex and bounded). Furthermore h(x 0 ) ≥ t 2 , hence:
48δ ≥ d(V γ(tx) (t 1 ), V γ(ty) (t 2 )) ≥ d(V γ(tx) (t 2 ), V γ(ty) (t 2 )) − 48δ ≥ d(V γ(tx) (h(x 0 )), V γ(ty) (h(x 0 ))) − 48δ ≥ d(x 1 , y 1 ) − 48δ ≥ d(x 0 , y 0 ) − d(x 0 , x 1 ) − d(y 0 , y 1 ) − 48δ ≥ d(x 0 , y 0 ) − 624δ, by inequalities (21) and (22), ≥ 49δ, since d(x 0 , y 0 ) > 768δ by assumption,
which is impossible. Therefore t 1 ≥ h(x 0 ) or t 2 ≥ h(x 0 ). We assume without loss of generality that t 1 ≥ h(x 0 ); then:
∆h(γ(t 0 ), V γ(tx) (t 1 )) ≤ d(γ(t 0 ), V γ(tx) (t 1 )) ≤ 24δ,
hence:
h + ([γ(t x ), γ(t y )]) ≥ h(γ(t 0 )) ≥ h(V γ(tx) (t 1 )) − ∆h(γ(t 0 ), V γ(tx) (t 1 )) ≥ h(x 0 ) − 24δ,
and this gives us:
l(γ [tx,ty] ) ≥ h + ([γ(t x ), γ(t y )]) − h(γ(t x )) + h + ([γ(t x ), γ(t y )]) − h(γ(t y )) ≥ h(x 0 ) − 24δ − h(γ(t x )) + h(x 0 ) − 24δ − h(γ(t y )) ≥ ∆h(γ(t x ), x 0 ) + ∆h(γ(t y ), y 0 ) − 48δ. (26)
The next lemma shows that we are able to control the relative distance of a couple of points travelling along two vertical geodesics. We recall that for all a, b ∈ H, d r (a, b) = d(a, b) − ∆h(a, b).
Lemma 4.7 (Backwards control). Let δ ≥ 0 and H be a proper, δ-hyperbolic, Busemann space. Let V 1 and V 2 be two vertical geodesics of H. Then for every couple of times (t 1 , t 2 ) and for all t ∈ [0, 1 2 d r (V 1 (t 1 ), V 2 (t 2 ))]:
|d r (V 1 (t 1 + 1 2 d r (V 1 (t 1 ), V 2 (t 2 )) − t), V 2 (t 2 + 1 2 d r (V 1 (t 1 ), V 2 (t 2 )) − t)) − 2t| ≤ 288δ.
Proof. To simplify the computations, we use the following notations: D ∶= t 2 + 1 2 d r (V 1 (t 1 ), V 2 (t 2 )) and ∆ ∶= |t 1 − t 2 |. The term ∆ is the difference of height between V 1 (t 1 ) and V 2 (t 2 ), since vertical geodesics are parametrised by their height. Then we have to prove that ∀t ∈ [0, 1 2 d r (V 1 (t 1 ), V 2 (t 2 ))], |d r (V 1 (D − ∆ − t), V 2 (D − t)) − 2t| ≤ 288δ. We can assume without loss of generality that t 1 ≤ t 2 . Lemma 4.3 applied with x = V 1 (t 1 ) and with y = V 2 (t 2 ) gives us d(V 1 (D), V 2 (D)) ≤ 288δ. Furthermore, the relative distance is smaller than the distance, hence d r (V 1 (D), V 2 (D)) ≤ 288δ.
Now, if we move the two points backward from V 1 (D − ∆) and V 2 (D) along V 1 and V 2 , we have for t ∈ [0, D]:
d r (V 1 (D − ∆ − t), V 2 (D − t)) = d(V 1 (D − ∆ − t), V 2 (D − t)) − ∆ (27)
≤ d(V 1 (D − ∆ − t), V 1 (D − ∆)) + d(V 1 (D − ∆), V 2 (D)) + d(V 2 (D), V 2 (D − t)) − ∆;
furthermore V 1 and V 2 are geodesics, then:
≤ t + d(V 1 (D − ∆), V 1 (D)) + d(V 1 (D), V 2 (D)) + t − ∆ ≤ t + ∆ + 288δ + t − ∆ ≤ 2t + 288δ. (28)
Let us consider a geodesic α between V 1 (t 1 ) and V 2 (t 2 ). Since H is a Busemann space, and thanks to Lemma 4.3, we have d(V 1 (D − ∆ − t), α(D − ∆ − t 1 − t)) ≤ 144δ and d(V 2 (D − t), α(D − t 1 + t)) ≤ 144δ. Then the second part of our inequality follows:
d r (V 1 (D − ∆ − t), V 2 (D − t)) = d(V 1 (D − ∆ − t), V 2 (D − t)) − ∆
≥ d(α(D − ∆ − t 1 − t), α(D − t 1 + t)) − d(V 1 (D − ∆ − t), α(D − ∆ − t 1 − t)) − d(V 2 (D − t), α(D − t 1 + t)) − ∆
≥ d(α(D − ∆ − t 1 − t), α(D − t 1 + t)) − 288δ − ∆ ≥ 2t + ∆ − 288δ − ∆ ≥ 2t − 288δ. (29)
The next lemma is a slight generalisation of Lemma 4.6; the difference is that we control the length of a path through its maximal height instead of through the distance between the projections of its extremities on a horosphere.
Lemma 4.8. Let δ ≥ 1 and H be a proper, geodesic, δ-hyperbolic, Busemann space. Let x and y be two points of H such that h(x) ≤ h(y), let α be a path linking x to y, and let ∆H ∶= h(y) + 1 2 d r (x, y) − h + (α). If ∆H > 555δ, then:
l(α) ≥ d(x, y) + 2^(−530) · 2^(∆H/δ) − 2∆H − 24δ.
Proof. This proof is illustrated in Figure 10. Since h + (α) ≥ h(y) we have that 1 2 d r (x, y) ≥ ∆H. Applying Lemma 4.7 with V 1 = V x , V 2 = V y , t 1 = h(x), t 2 = h(y) and t = ∆H we have:
|d r (V x (h(x) + 1 2 d r (x, y) − ∆H), V y (h(y) + 1 2 d r (x, y) − ∆H)) − 2∆H| ≤ 288δ.
Then we have:
d r (V x (h(x) + 1 2 d r (x, y) − ∆H), V y (h(y) + 1 2 d r (x, y) − ∆H)) ≥ 2∆H − 288δ.
Furthermore, Lemma 4.4 applied on V x (h(x) + 1 2 d r (x, y) − ∆H) and V y (h(y) + 1 2 d r (x, y) − ∆H) gives (notice that the only difference between the two sides of the following inequality is the height in the vertical geodesic V x ):
d r (V x (h(x) + 1 2 d r (x, y) − ∆H), V y (h(y) + 1 2 d r (x, y) − ∆H)) ≤ d(V x (h(y) + 1 2 d r (x, y) − ∆H), V y (h(y) + 1 2 d r (x, y) − ∆H)) + 54δ.
Then:
d(V x (h(y) + 1 2 d r (x, y) − ∆H), V y (h(y) + 1 2 d r (x, y) − ∆H)) ≥ 2∆H − 342δ > 768δ. (30)
Inequality (30) allows us to apply Lemma 4.6 with t 0 = h(y) + 1 2 d r (x, y) − ∆H, x 0 = V x (t 0 ) and y 0 = V y (t 0 ), since h + (α) ≤ h(x 0 ). It provides:
l(α) ≥ ∆h(x, x 0 ) + ∆h(y, y 0 ) + 2^(−386) · 2^(d(x 0 ,y 0 )/(2δ)) − 24δ
≥ (h(y) + 1 2 d r (x, y) − ∆H − h(x)) + (h(y) + 1 2 d r (x, y) − ∆H − h(y)) + 2^(−386) · 2^(d(x 0 ,y 0 )/(2δ)) − 24δ
≥ ∆h(y, x) + d r (y, x) − 2∆H + 2^(−386) · 2^(d(x 0 ,y 0 )/(2δ)) − 24δ
≥ d(x, y) − 2∆H + 2^(−386) · 2^((2∆H−288δ)/(2δ)) − 24δ, by the inequality above,
≥ d(x, y) + 2^(−530) · 2^(∆H/δ) − 2∆H − 24δ.
This previous lemma tells us that a path needs to reach a sufficient height for its length not to increase too much. We now give a generalisation of Lemma 4.8, where the path reaches a given low height before going to its end point. This proposition will be the central result for the understanding of the shapes of geodesics in a horospherical product.
Proposition 4.9. Let δ ≥ 1 and H be a proper, geodesic, δ-hyperbolic, Busemann space. Let x, y and m be three points of H such that h(m) ≤ h(x) ≤ h(y), let α be a path linking x to y and passing through m, and let ∆H ∶= h(y) + 1 2 d r (x, y) − h + (α). Then:
l(α) ≥ 2∆h(x, m) + d(x, y) + 2^(−850) · 2^(∆H/δ) − 1 − max(0, 2∆H) − 1700δ.
Proof. This proof is illustrated in Figure 11. We first assume that ∆H > 850δ; we postpone the other cases to the end of this proof. Let V x and V m be vertical geodesics respectively containing x and m. We call x 1 = V x (h(y)) and m 1 = V m (h(y)) the points of V x and V m at height h(y). First, Lemma 4.4 provides |d(x 1 , y) − d r (x, y)| ≤ 54δ. Then we consider a geodesic triangle between the three points x 1 , m 1 and y. Lemma 4.3 tells us that:
h + ([x 1 , y]) ≥ h(y) + 1 2 d r (x 1 , y) − 96δ ≥ h(y) + 1 2 d r (x, y) − 123δ.
Since [x 1 , y] is included in the δ-neighbourhood of the two other sides of the geodesic triangle, one of the two following inequalities holds:
1) h + ([x 1 , m 1 ]) ≥ h(y) + 1 2 d r (x, y) − 124δ
2) h + ([m 1 , y]) ≥ h(y) + 1 2 d r (x, y) − 124δ.
We first assume 1), that h + ([x 1 , m 1 ]) ≥ h(y) + 1 2 d r (x, y) − 124δ, hence:
d(x 1 , m 1 ) ≥ d r (x, y) − 248δ. (31)
Let us denote by m 0 = V m (h(x)) the point of V m at height h(x).
By considering the 2δ-slim quadrilateral between the points x, x 1 , m 0 , m 1 we have that [x 1 , m 1 ] is in the 2δ-neighbourhood of [x 1 , x] ∪ [x, m 0 ] ∪ [m 0 , m 1 ]. Furthermore d r (x, y) ≥ 2(h + (α) − h(y)) + 2∆H ≥ 2∆H ≥ 1700δ by assumption, hence h + ([x 1 , m 1 ]) ≥ h(y) + 1 2 d r (x, y) − 124δ ≥ h(y) + 726δ. Since h + ([x 1 , x]) = h + ([m 0 , m 1 ]) = h(y), we have that h + ([x, m 0 ]) ≥ h + ([x 1 , m 1 ]) − 2δ ≥ h(y) + 724δ. Moreover:
d r (x, m 0 ) = d(x, m 0 ) ≥ h + ([x, m 0 ]) − h(x) ≥ h(y) − h(x) + 724δ ≥ ∆h(x, y) + 724δ,
which allows us to use Lemma 4.7 on V x and V m with t = 1 2 d r (x, m 0 ) − ∆h(x, y) ≥ 0 and t 1 = t 2 = h(x). It gives:
|d r (V x (h(x) + ∆h(x, y)), V m (h(x) + ∆h(x, y))) − d r (x, m 0 ) + 2∆h(x, y)| ≤ 288δ,
which implies in particular:
d r (V x (h(y)), V m (h(y))) + 2∆h(x, y) − 288δ ≤ d r (x, m 0 ). (32)
Combining inequalities (31) and (32) with Lemma 4.4 we get:
d r (x, m) ≥ d(x, m 0 ) − 54δ ≥ d r (x, y) + 2∆h(x, y) − 590δ. (33)
Let us denote by α 1 the part of α linking x to m and by α 2 the part of α linking m to y. We have:
h + (α 1 ) ≤ h + (α) ≤ h(y) + 1 2 d r (x, y) − ∆H ≤ h(x) + ∆h(x, y) + 1 2 d r (x, y) − ∆H ≤ h(x) + 1 2 (2∆h(x, y) + d r (x, y)) − ∆H ≤ h(x) + 1 2 (d r (x, m) + 590δ) − ∆H, by inequality (33), ≤ h(x) + 1 2 d r (x, m) + 295δ − ∆H ≤ h(x) + 1 2 d r (x, m) − ∆H ′ , with ∆H ′ = ∆H − 295δ.
By assumption ∆H > 850δ, hence ∆H ′ > 555δ, which allows us to apply Lemma 4.8 on α 1 . It follows:
l(α 1 ) ≥ d(x, m) + 2^(−530) · 2^(∆H ′ /δ) − 2∆H ′ − 24δ
≥ ∆h(x, m) + d r (x, m) + 2^(−825) · 2^(∆H/δ) − 2∆H − 614δ, since ∆H ′ = ∆H − 295δ,
≥ ∆h(x, m) + d r (x, y) − 590δ + 2^(−825) · 2^(∆H/δ) − 2∆H − 614δ, by inequality (33),
≥ ∆h(x, m) + d r (x, y) + 2^(−825) · 2^(∆H/δ) − 2∆H − 1204δ.
We use in the following that l(α 2 ) ≥ d(m, y) ≥ ∆h(m, y); we have:
l(α) ≥ l(α 1 ) + l(α 2 ) ≥ ∆h(x, m) + d r (x, y) + 2^(−825) · 2^(∆H/δ) − 2∆H − 1204δ + ∆h(m, y)
≥ 2∆h(x, m) + ∆h(x, y) + d r (x, y) + 2^(−825) · 2^(∆H/δ) − 2∆H − 1204δ
≥ 2∆h(x, m) + d(x, y) + 2^(−825) · 2^(∆H/δ) − 2∆H − 1204δ
≥ 2∆h(x, m) + d(x, y) + 2^(−850) · 2^(∆H/δ) − 1 − max(0, 2∆H) − 1700δ, since ∆H > 850δ ≥ 0,
which ends the proof for case 1).
Now assume that 2) holds, which is h + ([m 1 , y]) ≥ h(y) + 1 2 d r (x, y) − 124δ. It implies d(m 1 , y) ≥ d r (x, y) − 248δ, then:
h + (α 2 ) ≤ h + (α) ≤ h(y) + 1 2 d r (x, y) − ∆H ≤ h(y) + 1 2 d r (m 1 , y) + 124δ − ∆H ≤ h(y) + 1 2 d r (m 1 , y) − ∆H ′′ , with ∆H ′′ = ∆H − 124δ.
Lemma 4.4 provides us with:
d r (m, y) ≥ d(m 1 , y) − 54δ ≥ d r (x, y) − 302δ. (34)
Since ∆H > 850δ, we have ∆H ′′ > 726δ, which allows us to apply Lemma 4.8 on α 2 . It follows that:
l(α 2 ) ≥ d(y, m) + 2^(−530) · 2^(∆H ′′ /δ) − 2∆H ′′ − 24δ
≥ ∆h(y, m) + d r (y, m) + 2^(−654) · 2^(∆H/δ) − 2∆H − 272δ, since ∆H ′′ = ∆H − 124δ,
≥ ∆h(y, m) + d r (x, y) + 2^(−654) · 2^(∆H/δ) − 2∆H − 574δ, by inequality (34).
Hence:
l(α) ≥ l(α 1 ) + l(α 2 ) ≥ ∆h(x, m) + ∆h(y, m) + d r (x, y) + 2^(−654) · 2^(∆H/δ) − 2∆H − 574δ
≥ 2∆h(x, m) + ∆h(y, x) + d r (x, y) + 2^(−654) · 2^(∆H/δ) − 2∆H − 574δ
≥ 2∆h(x, m) + d(x, y) + 2^(−654) · 2^(∆H/δ) − 2∆H − 574δ
≥ 2∆h(x, m) + d(x, y) + 2^(−850) · 2^(∆H/δ) − 1 − max(0, 2∆H) − 1700δ.
There remains to treat the case where ∆H ≤ 850δ, with ∆H = h(y) + 1 2 d r (x, y) − h + (α). Let n denote a point of α such that h(n) = h + (α). If m comes before n, we have l(α) ≥ d(x, m) + d(m, n) + d(n, y). Otherwise n comes before m and we have l(α) ≥ d(x, n) + d(n, m) + d(m, y). Since h(m) ≤ h(x) ≤ h(y) ≤ h(n) we always have:
l(α) ≥ ∆h(x, m) + ∆h(m, n) + ∆h(n, y) ≥ ∆h(x, m) + (∆h(m, x) + ∆h(x, y) + ∆h(y, n)) + ∆h(y, n) ≥ 2∆h(x, m) + ∆h(x, y) + 2(h + (α) − h(y)) ≥ 2∆h(x, m) + ∆h(x, y) + d r (x, y) − 2∆H ≥ 2∆h(m, x) + d(x, y) − 1700δ.
Furthermore ∆H ≤ 850δ implies 2^(−850) · 2^(∆H/δ) ≤ 1. Therefore:
l(α) ≥ 2∆h(m, x) + d(x, y) + 2^(−850) · 2^(∆H/δ) − 1 − max(0, 2∆H) − 1700δ,
which ends the proof for the remaining case.

4.3 Length of geodesic segments in horospherical products

From now on, unless otherwise specified, X and Y will always be two proper, geodesically complete, δ-hyperbolic, Busemann spaces with δ ≥ 1, and N will always be an admissible norm. Let p and q be two points of X ⋈ Y, and let α be a geodesic of X ⋈ Y connecting them. We first prove an upper bound on the length of α by computing the length of a path γ ⊂ X ⋈ Y linking p to q.
Lemma 4.10. Let p = (p X , p Y ) and q = (q X , q Y ) be points of the horospherical product X ⋈ Y. There exists a path γ connecting p to q such that:
l N (γ) ≤ d r (p Y , q Y ) + d r (p X , q X ) + ∆h(p, q) + 1152δC N .
Proof. Without loss of generality, we assume h(p) ≤ h(q). One can follow the idea of the proof on Figure 12. We consider V p X and V q X two vertical geodesics of X containing p X and q X respectively. Similarly let V p Y and V q Y be two vertical geodesics of Y containing p Y and q Y respectively. We will use them to construct γ. Let A 1 be the point of the vertical geodesic (V p X , V p Y ) ⊂ X ⋈ Y at height h(p) − 1 2 d r (p Y , q Y ) and A 2 be the point of the vertical geodesic (V p X , V q Y ) ⊂ X ⋈ Y at the same height h(p) − 1 2 d r (p Y , q Y ). Let A 3 be the point of the vertical geodesic (V p X , V q Y ) at height h(q) + 1 2 d r (p X , q X ) and A 4 be the point of the vertical geodesic (V q X , V q Y ) at the same height h(q) + 1 2 d r (p X , q X ). Then γ ∶= γ 1 ∪ γ 2 ∪ γ 3 ∪ γ 4 ∪ γ 5 is constructed as follows:
- γ 1 is the part of (V p X , V p Y ) linking p to A 1 . γ 2 is a geodesic linking A 1 to A 2 . Such a geodesic exists by Property 3.11.
- γ 3 is the part of (V p X , V q Y ) linking A 2 to A 3 . γ 4 is a geodesic linking A 3 to A 4 . Such a geodesic exists by Property 3.11.
- γ 5 is the part of (V q X , V q Y ) linking A 4 to q.
In fact A 1 and A 2 are close to each other. Indeed, the two points A 1 = (A 1,X , A 1,Y ) and A 2 = (A 2,X , A 2,Y ) are characterised by the two geodesics (V p X , V p Y ) and (V p X , V q Y ). Then, because −h(q) = h Y (q Y ) ≤ h Y (p Y ), Lemma 4.3 applied on p Y and q Y in Y gives us d Y (A 1,Y , A 2,Y ) ≤ 288δ. Furthermore Property 3.11 provides us with d ⋈ ≤ 2C N (d X + d Y ); however we have that A 1,X = A 2,X , hence:
d ⋈ (A 1 , A 2 ) ≤ 576δC N . (35)
Lemma 4.3 applied on p X and q X provides similarly:
d ⋈ (A 3 , A 4 ) ≤ 576δC N , (36)

Figure 12: Construction of the path γ when h(p) ≤ h(q) for Lemma 4.10.

which gives us:
l N (γ) = l N (γ 1 ) + l N (γ 2 ) + l N (γ 3 ) + l N (γ 4 ) + l N (γ 5 ) = d ⋈ (p, A 1 ) + d ⋈ (A 1 , A 2 ) + d ⋈ (A 2 , A 3 ) + d ⋈ (A 3 , A 4 ) + d ⋈ (A 4 , q).
Since γ 1 , γ 3 and γ 5 are vertical geodesics, we have:
= ∆h(p, A 1 ) + d ⋈ (A 1 , A 2 ) + ∆h(A 2 , A 3 ) + d ⋈ (A 3 , A 4 ) + ∆h(A 4 , q)
= 1 2 d r (p Y , q Y ) + d ⋈ (A 1 , A 2 ) + ( 1 2 d r (p Y , q Y ) + 1 2 d r (p X , q X ) + ∆h(p, q)) + d ⋈ (A 3 , A 4 ) + 1 2 d r (p X , q X )
≤ d r (p Y , q Y ) + d r (p X , q X ) + ∆h(p, q) + 1152δC N , by inequalities (35) and (36).
We are aiming to use Proposition 4.9 on the two components α X ⊂ X and α Y ⊂ Y of α to obtain lower bounds on their lengths. We hence need the following lemma to ensure that when α is a geodesic, the exponential term in the inequality of Proposition 4.9 will be small.
Lemma 4.11. Let C = 2853δC N + 2^851 and let e ∶ R → R be the map defined by ∀t ∈ R, e(t) = (1/C) · 2^(t/C) − 2 max(0, t). Then ∀t ∈ R:
1. e(t) ≥ −7C^2
2. ( e(t) ≤ 2853δC N ) ⇒ ( t ≤ 3C^2 ).
Proof. For all t, we have that e(t) = (1/C) · 2^(t/C) − 2 max(0, t) ≤ (1/C) · 2^(t/C) − 2t =∶ e 1 (t). The derivative of e 1 is e ′ 1 (t) = (log(2)/C^2) · 2^(t/C) − 2, which is non negative for all t ≥ C log 2 (2C^2 /log(2)) and non positive otherwise.
Then ∀t ∈ R: e 1 (t) ≥ e 1 log 2 2 log(2) C 2 ≥ 2C log(2) − 2C log 2 2 log(2) C 2 ≥ 2C log(2) − 4C log 2 2 log(2) C ≥ 2C log(2) − 4 2 log(2) C 2 ≥ −4 2 log(2) C 2 ≥ −7C 2 . Since C ≥ 2 log(2) we have 3C 2 ≥ C log 2 (C 3 ) ≥ C log 2 2 log(2) C 2 , then e 1 is non decreasing on [C log 2 (C 3 ); +∞[. We show that e 1 (3C 2 ) ≥ 2853δC N : e 1 (3C 2 ) ≥ e 1 (C log 2 (C 3 )) = 1 C 2 C log 2 (C 3 ) C − 2C log 2 (C 3 ) = C(C − 6 log 2 (C)). Since C ≥ 2 851 we have C − 6 log 2 (C) ≥ 1 and since C ≥ 2853δC N we have that e 1 (3C 2 ) ≥ C × 1 ≥ 2853δC N which provides ∀t ∈ [3C 2 ; +∞[ we have e 1 (t) ≥ 2853δC N . Furthermore ∀t ∈ R + , e 1 (t) = e(t), hence ∀t ∈ [3C 2 ; +∞[ we have e(t) ≥ 2853δC N which implies point 2. of this lemma. The following lemma provides us with a lower bound matching Lemma 4.10, and a first control on the heights a geodesic segment must reach. Lemma 4.12. Let p = (p X , p Y ) and q = (q X , q Y ) be two points of X ⋈ Y such that h(p) ≤ h(q). Let α = (α X , α Y ) be a geodesic segment of X ⋈ Y linking p to q. Let C 0 = (2853δC N + 2 851 ) 2 , we have: 1. l(α) ≥ ∆h(p, q) + d r (p Y , q Y ) + d r (p X , q X ) − 15C 0 2. h + (α) ≥ h(q) + 1 2 d r (p X , q X ) − 3C 0 3. h − (α) ≤ h(p) − 1 2 d r (p Y , q Y ) + 3C 0 . Proof. Let us denote ∆H + = h(q) + 1 2 d r (p X , q X ) − h + (α) and ∆H − = h − (α) − h(p) − 1 2 d r (p Y , q Y ) . Let m be a point of α at height h − (α) = h(p) − 1 2 d r (p Y , q Y ) + ∆H − , and n be a point of α at height h + (α) = h(q) + 1 2 d r (p X , q X ) − ∆H + . Then Proposition 4.9 used on α X gives us: l(α X ) ≥2∆h(p X , m X ) + d(p X , q X ) + 2 −850 2 1 δ ∆H + − 1 − 2 max(0, ∆H + ) − 1700δ ≥2h(p X ) − 2 h(p X ) − 1 2 d r (p Y , q Y ) + ∆H − + d(p X , q X ) + 2 −850 2 1 δ ∆H + − 1 − 2 max(0, ∆H + ) − 1700δ ≥d r (p Y , q Y ) + d r (p X , q X ) + ∆h(p, q) + 2 −850 2 1 δ ∆H + − 1 − 2 max(0, ∆H + ) − 2∆H − − 1700δ. 
Since h(p Y ) ≥ h(q Y ) and h(n Y ) = h(q Y ) − 1 2 d r (p X , q X ) + ∆H + , Proposition 4.9 used on α Y provides similarly: l(α Y ) ≥ d r (p X , q X ) + d r (p Y , q Y ) + ∆h(p, q) + 2 −850 2 1 δ ∆H − − 1 − 2 max(0, ∆H − ) − 2∆H + − 1700δ. Hence by Property 3.4: l N (α) ≥ 1 2 (l(α X ) + l(α Y )) ≥d r (p X , q X ) + d r (p Y , q Y ) + ∆h(p, q) − 1700δ + 2 −851 2 1 δ ∆H − + 2 −851 2 1 δ ∆H + − 2 max(0, ∆H − ) − 2 max(0, ∆H + ) − 1.(37) Furthermore, we know by Lemma 4.10 that l N (α) ≤ ∆h(p, q) + d r (p X , q X ) + d r (p Y , q Y ) + 1152δC N . Since C N ≥ 1 we have: 2852δC N ≥2 −851 2 1 δ ∆H − − 2 max(0, ∆H − ) + 2 −851 2 1 δ ∆H + − 2 max(0, ∆H + ) − 1. Let us denote S ∶= max{∆H − , ∆H + }. Therefore we have 2 −851 2 1 δ S − 2 max(0, S) − 1 ≤ 2852δC N . By assumption δ ≥ 1 hence 2 −851 2 1 δ S −2 max(0, S) ≤ 2853δC N . Furthermore, for C = 2853δC N +2 851 , we have both 2 −851 ≥ 1 C and 1 δ ≥ 1 C . Then we have 1 C 2 S C − 2 max(0, S) ≤ 2853δC N . Lemma 4.11 provides S ≤ 3C 2 = 3C 0 which implies points 2. and 3. of our lemma. Lemma 4.11 also provides us with: −14C 0 ≤2 −851 2 1 δ ∆H − − 2 max(0, ∆H − ) + 2 −851 2 1 δ ∆H + − 2 max(0, ∆H + ). Last inequality is a lower bound of the term we want to remove in inequality (37). The first point of our lemma hence follows since 1700δ + 1 ≤ C 0 . We recall that by definition: ∀p X , q X ∈ X, d r (p X , q X ) = d X (p X , q X ) − ∆h(p X , q X ) ∀p Y , q Y ∈ Y, d r (p Y , q Y ) = d Y (p Y , q Y ) − ∆h(p Y , q Y ) Hence combining Lemma 4.10 and 4.12 we get the following corollary. Corollary 4.13. Let N be an admissible norm and let C 0 = (2853δC N + 2 851 ) 2 . The length of a geodesic segment α connecting p to q in (X ⋈ Y, d ⋈ ) is controlled as follows: l N (α) − d X (p X , q X ) + d Y (p Y , q Y ) − ∆h(p, q) ≤ 15C 0 , which gives us a control on the N -path metric, for all points p and q in X ⋈ Y we have: d ⋈ (p, q) − d X (p X , q X ) + d Y (p Y , q Y ) − ∆h(p, q) ≤ 15C 0 . 
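To see how Corollary 4.13 follows, note that both components change height by the same amount, so each component distance splits into a relative part and a height part; a worked expansion (using ∆h(p_X, q_X) = ∆h(p_Y, q_Y) = ∆h(p, q)):

```latex
\begin{aligned}
d_X(p_X,q_X) + d_Y(p_Y,q_Y) - \Delta h(p,q)
  &= \bigl(d_r(p_X,q_X) + \Delta h(p,q)\bigr)
   + \bigl(d_r(p_Y,q_Y) + \Delta h(p,q)\bigr) - \Delta h(p,q)\\
  &= d_r(p_X,q_X) + d_r(p_Y,q_Y) + \Delta h(p,q),
\end{aligned}
```

which is precisely the quantity pinched between the lower bound of Lemma 4.12 (within 15C_0) and the upper bound of Lemma 4.10 (within C_0, since 1152δC_N ≤ C_0).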
This result is central as it shows that the shape of geodesics does not depend on the N-path metric chosen for the distance on the horospherical product.

Corollary 4.14. Let r ≥ 1. For all p and q in X ⋈ Y we have:

|d_{⋈,ℓ_r}(p, q) − d_{⋈,ℓ_1}(p, q)| ≤ 30(5706δ + 2^851)^2.

Proof. The ℓ_r norm inequalities provide us with:

(d_X^r + d_Y^r)^(1/r) ≤ d_X + d_Y ≤ 2^((r−1)/r) (d_X^r + d_Y^r)^(1/r).

Hence we have 2^((1−r)/r) (d_X + d_Y) ≤ (d_X^r + d_Y^r)^(1/r) ≤ d_X + d_Y. Then the ℓ_r norms are admissible norms with C_{ℓ_r} ≤ 2, which ends the proof.

The next corollary tells us that changing this distance does not change the large scale geometry of X ⋈ Y. The control on the distances of Corollary 4.13 will help us understand the shape of geodesic segments and geodesic lines in a horospherical product.

Shapes of geodesics and visual boundary of X ⋈ Y

Shapes of geodesic segments

In this section we focus on the shape of geodesics. We recall that in all the following X and Y are assumed to be two proper, geodesically complete, δ-hyperbolic, Busemann spaces with δ ≥ 1, and N is assumed to be an admissible norm. The next lemma gives a control on the maximal and minimal height of a geodesic segment in a horospherical product. It is similar to the travelling salesman problem, where one needs to walk from x to y passing through m and n. This result follows from the inequalities on maximal and minimal heights of Lemma 4.12 combined with Lemma 4.10.

Lemma 5.1. Let p = (p_X, p_Y) and q = (q_X, q_Y) be two points of X ⋈ Y such that h(p) ≤ h(q). Let N be an admissible norm and let α = (α_X, α_Y) be a geodesic of (X ⋈ Y, d_⋈) linking p to q. Let C_0 = (2853δC_N + 2^851)^2. We have:

1. |h_−(α) − (h(p) − (1/2) d_r(p_Y, q_Y))| ≤ 4C_0
2. |h_+(α) − (h(q) + (1/2) d_r(p_X, q_X))| ≤ 4C_0.

Figure 13: Notations of Lemma 5.2.

Proof. Let us consider a point m of α such that h(m) = h_−(α) and a point n of α such that h(n) = h_+(α).
Then m comes before n or n comes before m. In both cases, since h(m) ≤ h(p) ≤ h(q) ≤ h(n) and by Lemma 3.6 we have: l N (α) ≥ ∆h(p, q) + 2(h(p) − h − (α)) + 2(h + (α) − h(q)) ≥ ∆h(p, q) + 2(h(p) − h − (α)) + d r (p X , q X ) − 6C 0 , by Lemma 4.12. Furthermore Lemma 4.10 provides l N (α) ≤ ∆h(p, q) + d r (p X , q X ) + d r (p Y , q Y ) + C 0 , hence: ∆h(p, q) + d r (p X , q X ) + d r (p Y , q Y ) + C 0 ≥ ∆h(p, q) + 2(h(p) − h − (α)) + d r (p X , q X ) − 6C 0 , which implies h(p) − 1 2 d r (p Y , q Y ) − h − (α) ≤ 4C 0 . In combination with the third point of Lemma 4.12 it proves the first point of our Lemma 5.1. The second point is proved similarly. Lemma 5.2. Let N be an admissible norm and let C 0 = (2853δC N + 2 851 ) 2 . Let p = (p X , p Y ) and q = (q X , q Y ) be two points of X ⋈ Y . Let α = (α X , α Y ) be a geodesic of (X ⋈ Y, d ⋈ ) linking p to q. Then there exist two points a = (a X , a Y ), b = (b X , b Y ) of α such that h(a) = h(p), h(b) = h(q) with the following properties: 3. If h(p) − h(q) ≤ 7C 0 at least one of the two previous conclusions is satisfied. 1. If h(p) ≤ h(q) − 7C 0 then: (a) h − (α) = h − ([x, a]) and h + (α) = h + ([b, y]) (b) d r (p Y , a Y ) − d r (p Y , q Y ) ≤ 16C 0 and d r (p X , a X ) ≤ 22C 0 (c) d r (q X , b X ) − d r (p X , q X ) ≤ 16C 0 and d r (q Y , b Y ) ≤ 22C 0 (d) d ⋈ (a, b) − ∆h(a, b) ≤ 13C 0 . 2. If h(q) ≤ h(p) − 7C 0 then Lemma 5.2 is illustrated in Figure 13. Its notations will be used in all section 5. Proof. Let us consider a point m of α such that h(m) = h − (α) and a point n of α such that h(n) = h + (α). We first assume that m comes before n in α oriented from p to q. Let us call a the first point between m and n at height h(p) and b the last point between m and n at height h(q). Property (a) of our Lemma is then satisfied. Let us denote α 1 the part of α linking p to a, α 2 the part of α linking a to b and α 3 the part of α linking b to q. We have that m is a point of α 1 and that n is a point of α 3 . 
Inequalities 2. and 3. of Lemma 4.12 used on α 1 provide l N (α 1 ) ≥ d(p, m)+d(m, a) ≥ 2∆h(p, m) ≥ d r (p Y , q Y )−6C 0 and similarly l N (α 3 ) ≥ d r (p X , q X ) − 6C 0 . Furthermore we have l N (α 2 ) ≥ ∆h(p, q). Combining l N (α 1 ) = l N (α) − l N (α 2 ) − l N (α 3 ) and Lemma 4.10 we have: l N (α 1 ) ≤ ∆h(p, q) + d r (p X , q X ) + d r (p Y , q Y ) + C 0 − ∆h(p, q) − d r (p X , q X ) + 6C 0 ≤ d r (p Y , q Y ) + 7C 0 .(38) We have similarly that l N ( α 3 ) ≤ d r (p X , q X ) + 7C 0 and that d ⋈ (a, b) = l N (α 2 ) ≤ ∆h(p, q) + 13C 0 . It gives us d ⋈ (a, b) − ∆h(p, q) ≤ 13C 0 , point (d) of our lemma. Furthermore, using Lemma 5.1 on α and α 1 provides: h − (α) − h(p) − 1 2 d r (p Y , q Y ) ≤ 4C 0 , h − (α 1 ) − h(p) − 1 2 d r (p Y , a Y ) ≤ 4C 0 . Since h − (α) = h − (α 1 ) we have: d r (p Y , a Y ) − d r (p Y , q Y ) ≤ 16C 0 ,(39) which is the first inequality of (b). Using the first point of Lemma 4.12 on α 1 in combination with inequality (38) gives us: d r (p Y , q Y ) + 7C 0 ≥l N (α 1 ) ≥ ∆h(p, a) + d r (p X , a X ) + d r (p Y , a Y ) − 15C 0 ≥d r (p X , a X ) + d r (p Y , a Y ) − 15C 0 ≥d r (p X , a X ) + d r (p Y , q Y ) − 31C 0 , by inequality (39). Then d r (p X , q X ) ≤ 38C 0 the second inequality of point (b) holds. We prove similarly the inequality (c) of this lemma. This ends the proof when m comes before n. If n comes before m, the proof is still working by orienting α from q to p hence switching the roles between p and q. We will now prove that if h(p) ≤ h(q) − 7C 0 then m comes before n on α oriented from p to q. Let us assume that h(p) ≤ h(q) − 7C 0 . We will proceed by contradiction, let us assume that n comes before m, using h(m) ≤ h(p) ≤ h(q) ≤ h(n) it implies: l N (α) ≥d ⋈ (p, n) + d ⋈ (n, m) + d ⋈ (m, q) ≥ ∆h(p, n) + ∆h(n, m) + ∆h(m, q) ≥∆h(p, q) + ∆h(q, n) + ∆h(m, p) + ∆h(p, q) + ∆h(q, n) + ∆h(m, p) + ∆h(p, q) ≥2∆h(p, q) + ∆h(p, q) + 2∆h(m, p) + 2∆(q, n) ≥14C 0 + ∆h(p, q) + 2(h(p) − h − (α)) + 2(h + (α) − h(q)). 
However Lemma 4.12 applied on α provides h_+(α) ≥ h(q) + (1/2) d_r(p_X, q_X) − 3C_0 and h_−(α) ≤ h(p) − (1/2) d_r(p_Y, q_Y) + 3C_0. Then:

l_N(α) ≥ 14C_0 + ∆h(p, q) + d_r(p_X, q_X) + d_r(p_Y, q_Y) − 12C_0 ≥ ∆h(p, q) + d_r(p_X, q_X) + d_r(p_Y, q_Y) + 2C_0,

Figure 14: Theorem 5.3. The neighbourhood's shapes are distorted since when going upward, distances are contracted in the "direction" X and expanded in the "direction" Y.

This previous lemma essentially means that if p is sufficiently below q, the geodesic α first travels in a copy of Y in order to "lose" the relative distance between p_Y and q_Y, then it travels upward along a vertical geodesic from a to b until it can "lose" the relative distance between p_X and q_X by travelling in a copy of X. It looks like three successive geodesics of hyperbolic spaces glued together. The idea is that the geodesic follows a shape similar to the path γ we constructed in Lemma 4.10. The following theorem tells us that a geodesic segment is in a constant neighbourhood of three vertical geodesics. It is similar to the hyperbolic case, where a geodesic segment is in a constant neighbourhood of two vertical geodesics.

Theorem 5.3. Let N be an admissible norm. Let p = (p_X, p_Y) and q = (q_X, q_Y) be two points of X ⋈ Y and let α be a geodesic segment of (X ⋈ Y, d_⋈) linking p to q. Let C_0 = (2853δC_N + 2^851)^2. There exist two vertical geodesics V_1 = (V_{1,X}, V_{1,Y}) and V_2 = (V_{2,X}, V_{2,Y}) such that:

1. If h(p) ≤ h(q) − 7C_0 then α is in the 196C_0C_N-neighbourhood of V_1 ∪ (V_{1,X}, V_{2,Y}) ∪ V_2
2. If h(p) ≥ h(q) + 7C_0 then α is in the 196C_0C_N-neighbourhood of V_1 ∪ (V_{2,X}, V_{1,Y}) ∪ V_2
3. If |h(p) − h(q)| ≤ 7C_0 then at least one of the conclusions of 1. or 2. holds.

Specifically, V_1 and V_2 can be chosen such that p is close to V_1 and q is close to V_2.
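Numerically, the model shape described by Lemma 5.1 and Theorem 5.3 can be illustrated with a small helper computing the predicted extremal heights and the resulting model length (function and variable names are ours, purely for illustration; everything here is only accurate up to the additive error O(C_0) of the statements above):

```python
def extremal_heights(h_p, h_q, d_rX, d_rY):
    """Predicted minimal and maximal heights of a geodesic from p to q in
    X bowtie Y, up to O(C0) (Lemmas 4.12 and 5.1). Assumes h_p <= h_q;
    d_rX, d_rY stand for the relative distances d_r(p_X,q_X), d_r(p_Y,q_Y)."""
    assert h_p <= h_q
    h_minus = h_p - d_rY / 2.0   # dive down to lose the Y-relative distance
    h_plus = h_q + d_rX / 2.0    # overshoot upward to lose the X-relative distance
    return h_minus, h_plus

def length_estimate(h_p, h_q, d_rX, d_rY):
    """Length of the model path: down from h_p to h_minus, up to h_plus,
    back down to h_q (cf. Lemma 4.10 / Corollary 4.13, up to O(C0))."""
    h_minus, h_plus = extremal_heights(h_p, h_q, d_rX, d_rY)
    return (h_p - h_minus) + (h_plus - h_minus) + (h_plus - h_q)

# example: p at height 0, q at height 3, relative distances 4 and 6
print(extremal_heights(0.0, 3.0, 4.0, 6.0))  # (-3.0, 5.0)
print(length_estimate(0.0, 3.0, 4.0, 6.0))   # 3 + 8 + 2 = 13 = dh + d_rX + d_rY
```

The total 13 equals ∆h(p, q) + d_r(p_X, q_X) + d_r(p_Y, q_Y), matching the distance estimate of Corollary 4.13.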
Figure 14 pictures the 196C 0 C N -neighbourhood of such vertical geodesics when h(p) ≤ h(q)−7C 0 . When h(p) − h(q) ≤ 7C 0 , there are two possible shapes for a geodesic segment. In some cases, two points can be linked by two different geodesics, one of type 1 and one of type 2. Proof. Let m = (m X , m Y ) be a point of α such that h(m) = h − (α), and n = (n X , n Y ) be a point of α such that h(n) = h + (α). Then by Lemma 5.1 we have: ∆h(p, m) − 1 2 d r (p Y , q Y ) ≤ 4C 0 .(40) We show similarly that: ∆h(q, n) − 1 2 d r (p X , q X ) ≤ 4C 0 .(41) In the first case we assume that h(p) ≤ h(q) − 7C 0 . With notations as in Lemma 5.2, and by inequality (38), we have that l N ([p, a]) ≤ d r (p Y , q Y ) + 7C 0 , hence: l N ([p, m]) =l N ([p, a]) − l N ([a, m]) ≤ d r (p Y , q Y ) + 7C 0 − ∆h(a, m) ≤ 1 2 d r (p Y , q Y ) + 11C 0 , since ∆h(p, m) = ∆h(a, m).(42) It follows from this inequality that: d X (p X , m X ) =2d X×Y (p, m) − d Y (p Y , m Y ) ≤ 2d ⋈ (p, m) − d Y (p Y , m Y ) ≤2l N ([p, m]) − d Y (p Y , m Y ) ≤ d r (p Y , q Y ) + 22C 0 − ∆h(p, m) ≤ 1 2 d r (p Y , q Y ) + 26C 0 . Thus: d r (m X , n X ) =d X (m X , n X ) − ∆h(m, n) ≤ 34C 0 . In the same way we have d r (m Y , n Y ) ≤ 34C 0 . Let us denote n ′ X the point of V m X at the height h(n X ). Since d r (p X , m X ) ≤ 34C 0 , Lemma 4.4 applied on m X and n X provides: d X (m X , n ′ X ) ≤ 35C 0(44) Hence we have proved that α 2,X and [m X , n ′ X ] have their end points close to each other. Let us now prove that these paths have close lengths. 
We have that l_X([m_X, n′_X]) = ∆h(m, n), and from inequalities (40) and (41) we have:

l_X([m_X, n_X]) ≤ 2 l_N(α_2) − l_Y([m_Y, n_Y]) = 2( l_N(α) − l_N(α_1) − l_N(α_3) ) − ∆h(m, n)
≤ 2( 15C_0 + ∆h(p, q) + d_r(p_X, q_X) + d_r(p_Y, q_Y) − ∆h(p, m) − ∆h(n, q) ) − ∆h(m, n)
= 2( ∆h(p, q) + d_r(p_X, q_X) + d_r(p_Y, q_Y) − ∆h(p, m) − ∆h(n, q) ) − ∆h(m, n) + 30C_0
≤ 2( ∆h(p, q) + ∆h(p, m) + ∆h(n, q) + 16C_0 ) − ∆h(m, n) + 30C_0 ≤ ∆h(m, n) + 62C_0.

As l_X([m_X, n_X]) ≥ ∆h(m, n), we obtain:

|l_X([m_X, n_X]) − l_X([m_X, n′_X])| ≤ 62C_0. (45)

Then, by arguments similar to those for the path α_{1,X}, inequalities (44) and (45) show that α_{2,X} is in the (35 + 62 + 1)C_0 = 98C_0-neighbourhood of V_{m_X}. Similarly we prove that α_{2,Y} is in the 98C_0-neighbourhood of V_{n_Y}. Since N is an admissible norm, Property 3.11 gives us that α_2 is in the 196C_0C_N-neighbourhood of (V_{m_X}, V_{n_Y}).

In the second case, we assume that h(q) ≤ h(p) − 7C_0. Then by switching the roles of p and q, Lemma 5.2 gives us the result identically. In the third case, we assume that |h(p) − h(q)| ≤ 7C_0. Then Lemma 5.2 tells us that one of the two previous situations prevails, which proves the result.

Coarse monotonicity

We will see that the following definition is related to being close to a vertical geodesic.

Definition 5.4. Let C be a non-negative number. A geodesic α : I → X ⋈ Y of X ⋈ Y is called C-coarsely increasing if ∀t_1, t_2 ∈ I:

t_2 > t_1 + C ⇒ h(α(t_2)) > h(α(t_1)).

The geodesic α is called C-coarsely decreasing if ∀t_1, t_2 ∈ I:

t_2 > t_1 + C ⇒ h(α(t_2)) < h(α(t_1)).

The next lemma links coarse monotonicity to the fact that a geodesic segment is close to vertical geodesics.

Lemma 5.5. Let N be an admissible norm and let C_0 = (2853δC_N + 2^851)^2. Let p = (p_X, p_Y) and q = (q_X, q_Y) be two points of X ⋈ Y and let α be a geodesic segment of (X ⋈ Y, d_⋈) linking p to q. Let m ∈ α and n ∈ α be two points in X ⋈ Y such that h_−(α) = h(m) and h_+(α) = h(n).
We have: We will now make use of the rigidity property of quasi-geodesics in Gromov hyperbolic spaces, presented in Theorem 3.1 p.41 of [5]. Theorem 5.7 ([5]). Let H be a δ-hyperbolic geodesic space. If f ∶ R → H is a (λ, k)-quasi geodesic, then there exists a constant κ > 0 depending only on δ, λ and k such that the image of f is in the κ-neighbourhood of a geodesic in H. Lemma 5.8. Let N be an admissible norm and let T 1 and T 2 be two real numbers. Let α = (α X , α Y ) ∶ [T 1 , +∞[→ X ⋈ Y be a geodesic ray of (X ⋈ Y, d ⋈ ). Let K be a positive number such that α is Kcoarsely monotone. Then there exists a constant κ > 0 depending only on K, δ and N such that α is in the κ-neighbourhood of a vertical geodesic ray V ∶ [T 2 ; +∞[→ X ⋈ Y and such that d ⋈ α(T 1 ), V (T 2 ) ≤ κ. Proof. We assume without loss of generality that lim t→+∞ h(α(t)) = +∞. Let C 0 = (2853δC N + 2 851 ) 2 , by Lemma 5.6, α X is a (1, 26C 0 + 8K)-quasi geodesic ray. Then Theorem 5.7 says there exists κ X > 0 depending only on 26C 0 + 8K and δ such that α X is in the κ X -neighbourhood of a geodesic V X . Since C 0 depends only on δ and N , κ X depends only on K, δ and N . Then lim t→+∞ h(α(t)) = +∞ gives us lim t→+∞ h(V X (t)) = +∞ which implies that V X is a vertical geodesic of X. We will now build the vertical geodesic we want in Y . We have lim t→+∞ h(α Y (t)) = −∞ and by Lemma 5.6: ∆h(α Y (t 1 ), α Y (t 2 )) − 26C 0 − 8K ≤ d Y (α Y (t 1 ), α Y (t 2 )) ≤ ∆h(α Y (t 1 ), α Y (t 2 )) + 26C 0 + 8K. Since Y is Busemann, there exists a vertical geodesic ray β starting at α Y (T 1 ). Since β is parametrised by its height, α Y ∪ β is also a (1, 26C 0 + 8K)-quasi geodesic, hence there exists κ Y and V Y depending only on K, δ and N such that α Y ∪ β is in the κ Y -neighbourhood of V Y . Since lim t→−∞ h(V Y (t)) = +∞, V Y is a vertical geodesic of Y . 
Furthermore, by Property 3.11, d_⋈ ≤ 2C_N(d_X + d_Y), hence there exists κ depending only on K, δ and N such that α is in the κ-neighbourhood (for d_⋈) of (V_X, V_Y), a vertical geodesic of (X ⋈ Y, d_⋈). Since h(α(t)) ≥ h(α(T_1)) − 26C_0 − 8K =: M, α is in the κ-neighbourhood of (V_X([M − κ; +∞[), V_Y(]−∞; −M + κ])), which is a vertical geodesic ray. We will now show that the starting points of α and V are close to each other. Let us denote T′_1 a time such that d_⋈(α(T_1), V(T′_1)) ≤ κ; then ∆h(α(T_1), V(T′_1)) ≤ κ, hence |T′_1 − M| ≤ 26C_0 + 8K + κ. Then by the triangle inequality:

d_⋈(α(T_1), V(M − κ)) ≤ d_⋈(α(T_1), V(T′_1)) + d_⋈(V(T′_1), V(M − κ)) ≤ κ + 26C_0 + 8K + κ + κ = 26C_0 + 8K + 3κ.

Let us denote κ′ := 26C_0 + 8K + 3κ ≥ κ and T_2 := M − κ. Hence α : [T_1; +∞[ → X ⋈ Y is in the κ′-neighbourhood of a vertical geodesic ray V : [T_2; +∞[ → X ⋈ Y, we have d_⋈(α(T_1), V(T_2)) ≤ κ′, and κ′ depends only on δ and K.

Lemma 5.9. Let N be an admissible norm and let α : R_+ → X ⋈ Y be a geodesic ray of (X ⋈ Y, d_⋈). Then α changes its 17C_0-coarse monotonicity at most once.

Proof. Let α : R_+ → X ⋈ Y be a geodesic ray. Thanks to Lemma 5.5, α changes its 17C_0-coarse monotonicity at most twice. Indeed, assume it changes three times; applying Lemma 5.5 on a geodesic segment which includes these three times provides a contradiction. We will show in the following that it actually changes only once. Assume α changes its 17C_0-coarse monotonicity twice. Then α must first be 17C_0-coarsely increasing or 17C_0-coarsely decreasing. We assume without loss of generality that α is first 17C_0-coarsely decreasing. Then there exist t_1, t_2, t_3 ∈ R such that α is 17C_0-coarsely decreasing on [α(t_1), α(t_2)], then 17C_0-coarsely increasing on [α(t_2), α(t_3)], then 17C_0-coarsely decreasing on [α(t_3), α(+∞)[.
We have classified the possible shapes of geodesic rays. Since geodesic lines are constructed from two geodesic rays glued together, we will be able to classify their shapes too.

Definition 5.10. Let N be an admissible norm and let α = (α_X, α_Y) : R → X ⋈ Y be a path of (X ⋈ Y, d_⋈). Let κ ≥ 0.

1. α is called X-type at scale κ if and only if:
(a) α_X is in a κ-neighbourhood of a geodesic of X
(b) α_Y is in a κ-neighbourhood of a vertical geodesic of Y.

2. α is called Y-type at scale κ if and only if:
(a) α_Y is in a κ-neighbourhood of a geodesic of Y
(b) α_X is in a κ-neighbourhood of a vertical geodesic of X.

The X-type paths follow geodesics of X, meaning that they are close to a geodesic in a copy of X inside X ⋈ Y. The Y-type paths follow geodesics of Y.

Remark 5.11. In a horospherical product, being close to a vertical geodesic is equivalent to being both X-type and Y-type.

Theorem 5.12. Let N be an admissible norm. There exists κ ≥ 0 depending only on δ and N such that for any geodesic α : R → X ⋈ Y of (X ⋈ Y, d_⋈) at least one of the two following statements holds:

1. α is X-type at scale κ
2. α is Y-type at scale κ.

Proof. Let α_1 and α_2 be two geodesic rays such that, up to a parametrisation, α_1(0) = α_2(0) and α_1 ∪ α_2 = α. By Lemma 5.8 there exist κ_1 and κ_2 depending only on δ such that α_1 is in the κ_1-neighbourhood of a vertical geodesic ray V_1 = (V_{1,X}, V_{1,Y}) : [0; +∞[ → X ⋈ Y and such that α_2 is in the κ_2-neighbourhood of a vertical geodesic ray V_2 = (V_{2,X}, V_{2,Y}) : [0; +∞[ → X ⋈ Y. This lemma also gives us d_⋈(α_1(0), V_1(0)) ≤ κ_1 and d_⋈(α_2(0), V_2(0)) ≤ κ_2. Assume that lim_{t→+∞} h(V_{1,X}(t)) = lim_{t→+∞} h(V_{2,X}(t)) = +∞; then they are both vertical rays, hence close to a common vertical geodesic ray. Furthermore lim_{t→+∞} h(V_{1,Y}(t)) = lim_{t→+∞} h(V_{2,Y}(t)) = −∞ in that case. Let W_Y be the non-continuous path of Y defined as follows:

W_Y(t) = V_{1,Y}(−t) for all t ∈ ]−∞; 0], and W_Y(t) = V_{2,Y}(t) for all t ∈ ]0; +∞[.

We now prove that W_Y : R → Y is a quasigeodesic of Y.
Let t_1 and t_2 be two real numbers. Since V_{1,Y} and V_{2,Y} are geodesics, d_Y(W_Y(t_1), W_Y(t_2)) = |t_1 − t_2| if t_1 and t_2 are both non-positive or both positive. Thereby we can assume without loss of generality that t_1 is non-positive and that t_2 is positive. We also assume without loss of generality that |t_1| ≥ |t_2|. The quasi-isometric upper bound is given by:

d_Y(W_Y(t_1), W_Y(t_2)) = d_Y(V_{1,Y}(−t_1), V_{2,Y}(t_2))
≤ d_Y(V_{1,Y}(−t_1), V_{1,Y}(0)) + d_Y(V_{1,Y}(0), V_{2,Y}(0)) + d_Y(V_{2,Y}(0), V_{2,Y}(t_2))
≤ |t_1| + κ_1 + κ_2 + |t_2| = |t_1 − t_2| + κ_1 + κ_2,

since t_1 and t_2 have different signs. It remains to prove the lower bound of the quasi-geodesic definition on W_Y.

d_Y(W_Y(t_1), W_Y(t_2)) = d_Y(V_{1,Y}(−t_1), V_{2,Y}(t_2)) ≥ (1/(2C_N)) d_⋈(V_1(−t_1), V_2(t_2)) − d_X(V_{1,X}(−t_1), V_{2,X}(t_2))
≥ (1/(2C_N)) d_⋈(α(t_1), α(t_2)) − (κ_1 + κ_2)/C_N − d_X(V_{1,X}(−t_1), V_{2,X}(t_2)). (48)

The Busemann assumption on X provides us with:

d_X(V_{1,X}(−t_1), V_{2,X}(−t_1)) ≤ d_X(V_{1,X}(0), V_{2,X}(0)) ≤ κ_1 + κ_2.

Since α is a geodesic and by using the triangle inequality on (48) we have:

d_Y(W_Y(t_1), W_Y(t_2)) ≥ |t_1 − t_2|/(2C_N) − d_X(V_{1,X}(−t_1), V_{2,X}(−t_1)) − d_X(V_{2,X}(−t_1), V_{2,X}(t_2)) − (κ_1 + κ_2)/C_N
≥ |t_1 − t_2|/(2C_N) − ∆h(V_{2,Y}(−t_1), V_{2,Y}(t_2)) − (1/C_N + 1)(κ_1 + κ_2).

Assume that ∆h(V_{2,Y}(−t_1), V_{2,Y}(t_2)) ≤ |t_1 − t_2|/(4C_N); then:

d_Y(W_Y(t_1), W_Y(t_2)) ≥ |t_1 − t_2|/(4C_N) − (1/C_N + 1)(κ_1 + κ_2).

Hence W_Y is a (1/(4C_N), (1/C_N + 1)(κ_1 + κ_2)) quasi-geodesic, which was the remaining case. Since κ_1 and κ_2 depend only on δ and N, there exists a constant κ′ depending only on δ and N such that V_{1,Y} ∪ V_{2,Y} is in the κ′-neighbourhood of a geodesic of Y. The geodesic α is a Y-type geodesic in this case. Assume lim_{t→+∞} h(V_{1,X}(t)) = lim_{t→+∞} h(V_{2,X}(t)) = −∞; we prove similarly that α is an X-type geodesic. If a geodesic is both X-type at scale κ and Y-type at scale κ, then it is in a κ-neighbourhood of a vertical geodesic of X ⋈ Y.
Visual boundary of X ⋈ Y

We will now look at the visual boundary of our horospherical products. This notion is described for the Sol geometry in the work of Troyanov [27] through objects called geodesic horizons. We extend one of the definitions presented in page 4 of [27] to horospherical products.

Definition 5.13. Two geodesics of a metric space X are called asymptotically equivalent if they are at finite Hausdorff distance from each other.

Definition 5.14. Let X be a metric space and let o be a base point of X. The visual boundary of X is the set of asymptotic equivalence classes of geodesic rays α : R_+ → X such that α(0) = o; it is denoted by ∂_o X.

We will use a result of [23] to describe the visual boundary of horospherical products.

Property 5.15 ([23]). Let X be a proper Busemann space, let q be a point in X and let r : [0, +∞[ → X be a geodesic ray. Then there exists a unique geodesic ray r′ starting at q that is asymptotic to r.

Theorem 5.16. Let N be an admissible norm. We fix base points and directions (w_X, a_X) ∈ X × ∂X, (w_Y, a_Y) ∈ Y × ∂Y. Let X ⋈ Y be the horospherical product with respect to (w_X, a_X) and (w_Y, a_Y). Then the visual boundary of (X ⋈ Y, d_⋈) with respect to a base point o = (o_X, o_Y) is given by:

∂_o(X ⋈ Y) = ((∂X ∖ {a_X}) × {a_Y}) ∪ ({a_X} × (∂Y ∖ {a_Y}))
           = ((∂X × {a_Y}) ∪ ({a_X} × ∂Y)) ∖ {(a_X, a_Y)}.

The fact that (a_X, a_Y) is not allowed as a direction in X ⋈ Y is understandable, since both heights in X and Y would tend to +∞, which is impossible by the definition of X ⋈ Y.

Proof. Let α be a geodesic ray. Lemma 5.9 implies that there exists t_0 ∈ R such that α is coarsely monotone on [t_0, +∞[. Then Lemma 5.8 tells us that α([t_0, +∞[) is at finite Hausdorff distance from a vertical geodesic ray V = (V_X, V_Y), hence α is also at finite Hausdorff distance from V.
Since X is Busemann and proper, Property 5.15 ensures us that there exists a vertical geodesic ray V′_X such that V_X and V′_X are at finite Hausdorff distance, with V′_X(0) = o_X. Similarly, there exists a vertical geodesic ray V′_Y of Y with V′_Y(0) = o_Y such that V_Y and V′_Y are at finite Hausdorff distance. Furthermore, there is at least one vertical geodesic ray V′ = (V′_X, V′_Y) in every asymptotic equivalence class of geodesic rays, hence ∂_o(X ⋈ Y) is the set of asymptotic equivalence classes of vertical geodesic rays starting at o. Therefore, an asymptotic equivalence class can be identified by the couple of directions of a vertical geodesic ray. Then ∂_o(X ⋈ Y) can be identified with:

((∂X ∖ {a_X}) × {a_Y}) ∪ ({a_X} × (∂Y ∖ {a_Y})),

the union of the downward directions and the upward directions, which proves the theorem.

Example 5.17. In the case of Sol, X and Y are hyperbolic planes H^2, hence their boundaries are ∂X = ∂H^2 = S^1 and ∂Y = S^1. Then ∂_o Sol can be identified with the following set:

((S^1 ∖ {a_X}) × {a_Y}) ∪ ({a_X} × (S^1 ∖ {a_Y})). (49)

It can be seen as two lines at infinity, one upward, {a_X} × (S^1 ∖ {a_Y}), and the other one downward, (S^1 ∖ {a_X}) × {a_Y}. It is similar to Proposition 6.4 of [27].

Figure 2: Different types of geodesics in X ⋈ Y.

Property 2.8. Let H be a δ-hyperbolic Busemann geodesically complete space. Then for all x ∈ H there exists a vertical geodesic

Definition 3.3 (Admissible norm). Let N be a norm on the vector space R^2 such that N(1, 1) = 1. The norm N is called admissible if and only if for all real a and b we have:

Proposition 3.9. Let N be an admissible norm and let

Definition 3.10. A geodesic ray of X ⋈ Y is called vertical if it is a subset of a vertical geodesic.

Figure 5: A portion of the graph T.

Notation 4.1. Unless otherwise specified, H will be a Gromov hyperbolic Busemann geodesically complete proper space. Let γ : I → H be a connected path.
Let us denote the maximal height and the minimal height of this path as follows:

Figure 7: Proof of Lemma 4.3.

Proposition 4.5 ([3]). Let X be a δ-hyperbolic geodesic space. Let c be a continuous path in X. If [p, q] is a geodesic segment connecting the endpoints of c, then for every x ∈ [p, q]:

d(x, im(c)) ≤ δ log_2(l(c)) + 1.

Figure 8: Proof of Lemma 4.6.

Figure 9: Proof of Lemma 4.8.

Lemma 4.8. Let δ ≥ 1 and H be a proper, δ-hyperbolic, Busemann space. Let x, y ∈ H such that h(x) ≤ h(y). Let α be a path connecting x to y with h_+(α) ≤ h(y) + (1/2) d_r(x, y) − ∆H, where ∆H is a positive number such that ∆H > 555δ. Then:

l(α) ≥ d(x, y) + 2^(−530) 2^(∆H/δ) − 2∆H − 24δ.

Figure 10: Proof of Lemma 4.8.

Let us denote t_0 = h(y) + (1/2) d_r(x, y) − ∆H. Thanks to inequality (30), the hypothesis of Lemma 4.6 holds with x_0 = V_x(h(y) + (1/2) d_r(x, y) − ∆H) and y_0 = V_y(h(y) + (1/2) d_r(x, y) − ∆H). Applying this lemma on α provides:

Proposition 4.9. Let δ ≥ 1 and H be a proper, δ-hyperbolic, Busemann space. Let x, y, m ∈ H such that h(m) ≤ h(x) ≤ h(y) and let α : [0, T] → H be a path connecting x to y such that h_−(α) = h(m). With the notation ∆H = h(y) + (1/2) d_r(x, y) − h_+(α) we have:

Figure 11: Proof of Proposition 4.9.

and (32) we have d(x, m_0) = d_r(x, m_0) ≥ d_r(x, y) + 2∆h(x, y) − 536δ. Lemma 4.4 used on x and m then gives:

Corollary 4.15. Let N_1 and N_2 be two admissible norms. Then the metric spaces (X ⋈ Y, d_{⋈,N_1}) and (X ⋈ Y, d_{⋈,N_2}) are roughly isometric.

(a), (b), (c) and (d) hold by switching the roles of p and q and switching the roles of a and b.

which contradicts Lemma 4.10. Hence, if h(p) ≤ h(q) − 7C_0, the point m comes before the point n and, by the first part of the proof, 1. holds. Similarly, if h(q) ≤ h(p) − 7C_0, then n comes before m and then 2. holds. Otherwise, when |h(p) − h(q)| ≤ 7C_0, both cases can happen, then 1. or 2. hold.

Figure 15: Different types of geodesics in X ⋈ Y.
Hence Lemma 5.8 applied on [α(t_3), α(+∞)[ implies that there exists κ > 0 depending only on δ (since the constant of coarse monotonicity depends only on δ) and a vertical geodesic ray V = (V_X, V_Y) such that [α(t_3), α(+∞)[ is in the κ-neighbourhood of V. Since h_+([α(t_3), α(+∞)[) < +∞, we have that lim_{t→+∞} h(α(t)) = −∞, hence there exists t_4 ≥ t_3 such that h(α(t_4)) ≤ h(α(t_1)) − 7C_0. Then Lemma 5.5 tells us that α is first 17C_0-coarsely increasing, which contradicts what we assumed.

The goal of this section is to present what is a Gromov hyperbolic, Busemann space and what are vertical geodesics in such a space. Let (H, d_H) be a proper, geodesic, metric space.

2.1 Gromov hyperbolic spaces

A geodesic line, respectively ray, segment, of H is the isometric image of a Euclidean line, respectively half Euclidean line, interval, in H. By slight abuse, we may call geodesic, geodesic ray or geodesic segment the map α : I → H itself, which parametrises our given geodesic by arclength. Let δ ≥ 0 be a non-negative number. Let x, y and z be three points of H. The geodesic triangle [x, y] ∪ [y, z] ∪ [z, x] is called δ-slim if any of its sides is included in the δ-neighbourhood of the remaining two. The metric space H is called δ-hyperbolic if every geodesic triangle is δ-slim. A metric space H is called Gromov hyperbolic if there exists δ ≥ 0 such that H is a δ-hyperbolic space. An important property of Gromov hyperbolic spaces is that they admit a nice compactification thanks to their Gromov boundary. We call two geodesic rays of H equivalent if their images are at finite Hausdorff distance. Let w ∈ H be a base point. We define ∂_w H, the Gromov boundary of H, as the set of families of equivalent rays starting from w. The boundary ∂_w H does not depend on the base point w, hence we will simply denote it by ∂H. Both ∂H and H ∪ ∂H are compact when endowed with the Hausdorff topology. For more details, see [16] or chap. III.H p.399 of [3].
A metric space (H, d_H) is Busemann if and only if for every pair of geodesic segments parametrized by arclength γ : [a, b] → H and γ′ : [a′, b′] → H, the following function is convex:

In this section we fix δ ≥ 0, H a proper, geodesic, δ-hyperbolic space, w ∈ H a base point and a ∈ ∂H a point on the boundary of H. We call height function, denoted by h, the opposite of the Busemann function, h := −β_(a,w). Let us write Proposition 2, chap.8 p.136 of [16], with our notations.

Proposition 2.1 ([16], chap.8 p.136). Let H be a hyperbolic proper geodesic metric space. Let a ∈ ∂H and w ∈ H, then:

Corollary 2.2. Let H be a hyperbolic proper geodesic metric space. Let a ∈ ∂H and w ∈ H, and let α : [0, +∞[ → H be a geodesic ray. The two following properties are equivalent:

Proof. Since H is δ-hyperbolic, the geodesic triangle [x, y] ∪ [y, x′] ∪ [x′, x] is δ-slim. Then there exist p_1 ∈ [x, x′], p_2 ∈ [x′, y] and m ∈ [x, y] such that d(p_1, m) ≤ δ and d(p_2, m) ≤ δ. Hence,

Acknowledgement. This work was supported by the University of Montpellier. I thank my two advisers Jeremie Brieussel and Constantin Vernicos for their many relevant reviews and comments.

Then: ≤ 30C_0, by inequality (40). Similarly d_r(p_Y, m_Y) ≤ 30C_0. Let us consider the vertical geodesic V_{m_X} of X containing m_X, and the vertical geodesic V_{p_Y} of Y containing p_Y. Let us denote p′_X the point of V_{m_X} at the height h(p). Since d_r(p_X, m_X) ≤ 30C_0, Lemma 4.4 applied on p_X and m_X provides d_X(p_X, p′_X) ≤ 31C_0. We will then consider two paths of X. The first one is a piece of vertical geodesic linking m_X to p′_X. We show that these two paths have close lengths. Using Property 3.4 with inequalities (40) and (42) provides us with: We already proved that their end points are also close to each other, d(p_X, p′_X) ≤ 31C_0.
Since δ ≤ C 0 , the property of hyperbolicity of X gives us that α 1,X is in the (31+30+1 Since N is an admissible norm, Property 3.11 gives us that α 1 is in the 124C 0 C N -neighbourhood of (V m X , V p Y ). We show similarly that α 3 , the portion of α linking n to q, is in the 124C 0 C N -neighbourhood of (V q X , V n Y ).

We now focus on α 2 , the portion of α linking m to n. Let us denote [m X , n X ] the path α 2,X and [m Y , n Y ] the path α 2,Y . Then Lemma 5.1 provides us with:

However, from Lemma 4.10 and since 1152δC N ≤ C 0 :

by inequalities (40) and (41). It follows from this inequality and the fact that N is admissible that:

≤ ∆h(m, n) + 34C 0 , by inequality (43).

We will proceed by contradiction: assume that [p, m] is not 15C 0 -coarsely decreasing. Then there exists

which contradicts inequality (46). Then [p, m] is 15C 0 -coarsely decreasing. We show in a similar way that [m, n] is 17C 0 -coarsely increasing and that [n, q] is 15C 0 -coarsely decreasing. This proves the first point of our lemma. The second point is proved by switching the roles of p and q. We now assume h(p) − h(q) ≤ 7C 0 ; as in the proof of Theorem 5.3, the inequality (42) or a corresponding inequality holds, which ends the proof.

Shapes of geodesic rays and geodesic lines

In this section we are focusing on using the previous results to get information on the shapes of geodesic rays and geodesic lines. We first link the coarse monotonicity of a geodesic ray to the fact that it is close to a vertical geodesic. Let λ ≥ 1 and c ≥ 0; a (λ, c)-quasigeodesic of the metric space (X ⋈ Y, d ⋈ ) is the image of a function φ ∶ R → X ⋈ Y verifying that ∀t 1 , t 2 ∈ R:

Proof. Assume that h(p) ≤ h(q) − 7C 0 . Then from inequality (42) in the proof of Theorem 5.3, l N ([p, m]) ≤ 11C 0 .
Furthermore Lemma 5.1 gives us that ∆h(p, m) − (1/2) d r (p Y , q Y ) ≤ 4C 0 . Then:

Lemma 5.6. Let N be an admissible norm and let C 0 = (2853δC N + 2 851 ) 2 . Let α = (α X , α Y ) be a geodesic ray of (X ⋈ Y, d ⋈ ) and let K be a positive number such that α is K-coarsely monotone. Then α X and α Y are (1, 26C 0 + 8K)-quasigeodesics.

Proof. Let t 1 and t 2 be two times. Let us denote p = (p X , p Y ) = α(t 1 ) and q = (q X , q Y ) = α(t 2 ). We apply Lemma 5.2 on the part of α linking p to q, denoted by [p, q]. By K-coarse monotonicity of α we have that d ⋈ (p, a) ≤ K and d ⋈ (b, q) ≤ K. Hence, using d) of Lemma 5.2:

∆h(p, q) ≤ d ⋈ (p, q) ≤ d ⋈ (p, a) + d ⋈ (a, b) + d ⋈ (b, q) ≤ K + ∆h(a, b) + 13C 0 + K ≤ ∆h(p, q) + ∆h(p, a) + ∆h(b, q) + 13C 0 + 2K ≤ ∆h(p, q) + 13C 0 + 4K.

Furthermore, d X (p X , q X ) ≥ ∆h(p X , q X ) = ∆h(p, q) and d Y (p Y , q Y ) ≥ ∆h(p, q). Since N is an admissible norm we have:

∆h(p, q) ≤ d X (p X , q X ) = 2d X×Y (p, q) − d Y (p Y , q Y ) ≤ 2d ⋈ (p, q) − d Y (p Y , q Y ) ≤ 2∆h(p, q) + 13C 0 + 4K − ∆h(p, q) ≤ ∆h(p, q) + 13C 0 + 4K.

Hence:

d ⋈ (p, q) − 26C 0 − 8K ≤ d X (p X , q X ) ≤ d ⋈ (p, q) + 26C 0 + 8K.

By definition we have p X = α X (t 1 ), q X = α X (t 2 ) and d ⋈ (p, q) = t 1 − t 2 . Then α X is a (1, 26C 0 + 8K)-quasigeodesic ray.
We prove similarly that α Y is a (1, 26C 0 + 8K)-quasigeodesic ray.

α is an X-type geodesic at scale κ of (X ⋈ Y, d ⋈ ); α is a Y-type geodesic at scale κ of (X ⋈ Y, d ⋈ ).

Proof. It follows from Lemma 5.9 that α changes its coarse monotonicity at most once. Otherwise there would exist a geodesic ray included in α that changes its coarse monotonicity at least two times. We cut α in two coarsely monotone geodesic rays α 1 ∶ [0, +∞[→ X ⋈ Y and α 2 ∶ [0, +∞[→ X ⋈ Y.

References

[1] L. Bartholdi, M. Neuhauser, W. Woess, Horocyclic products of trees. Journal of the European Mathematical Society, Volume 10 (2008), 771-816.

[2] A. Bendikov, L. Saloff-Coste, M. Salvatori, W. Woess, Brownian motion on treebolic space: escape to infinity. Revista Matemática Iberoamericana, Volume 31.3 (2015), 935-976.
[3] M.R. Bridson, A. Haefliger, Metric Spaces of Non-Positive Curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Volume 319. Springer-Verlag, Berlin (1999).

[4] S. Brofferio, M. Salvatori, W. Woess, Brownian Motion and Harmonic Functions on Sol(p, q). International Mathematics Research Notices, Volume 22 (2011), 5182-5218.

[5] M. Coornaert, T. Delzant, A. Papadopoulos, Géométrie et théorie des groupes: Les groupes hyperboliques de Gromov. Lecture Notes in Mathematics 1441 (1990).

[6] M.G. Cowling, V. Kivioja, E. Le Donne, S. Nicolussi Golo, A. Ottazzi, From Homogeneous Metric Spaces to Lie Groups. arXiv:1705.09648 (2021).

[7] T. Dymarz, Large scale geometry of certain solvable groups. Geometric and Functional Analysis, Volume 19, 6 (2009), 1650-1687.

[8] A. Eskin, D. Fisher, Quasi-isometric rigidity of solvable groups. Proceedings of the International Congress of Mathematicians, Hyderabad, India (2010).

[9] A. Eskin, D. Fisher, K. Whyte, Quasi-isometries and rigidity of solvable groups. Pure and Applied Mathematics Quarterly, Volume 3, Number 4 (2007), 927-947.

[10] A. Eskin, D. Fisher, K. Whyte, Coarse differentiation of quasi-isometries I: Spaces not quasi-isometric to Cayley graphs. Annals of Mathematics, Volume 176 (2012), 221-260.
[11] A. Eskin, D. Fisher, K. Whyte, Coarse differentiation of quasi-isometries II: Rigidity for lattices in Sol and lamplighter groups. Annals of Mathematics, Volume 177 (2013), 869-910.

[12] B. Farb, L. Mosher, A rigidity theorem for the solvable Baumslag-Solitar groups. Inventiones mathematicae 131 (1998), 419-451.

[13] B. Farb, L. Mosher, Quasi-isometric rigidity for the solvable Baumslag-Solitar groups II. Inventiones mathematicae 137 (1999), 613-649.

[14] T. Ferragut, Geometric rigidity of quasi-isometries in horospherical products. arXiv:2211.04093 (2022).

[15] T. Foertsch, A. Lytchak, V. Schroeder, Nonpositive Curvature and the Ptolemy Inequality. International Mathematics Research Notices, Volume 2007 (2007).

[16] E. Ghys, P. de la Harpe, Sur les Groupes Hyperboliques d'après Mikhael Gromov. Progress in Mathematics, Volume 83 (1990).

[17] S. Gouëzel, V. Shchur, A corrected quantitative version of the Morse lemma. Journal of Functional Analysis, Volume 277 (2019), 1258-1268.

[18] M. Gromov, Asymptotic invariants of infinite groups. LMS Lecture Notes, Volume 182, Cambridge University Press (1993).
[19] J. Heinonen, Lectures on analysis on metric spaces. Universitext. Springer-Verlag, New York (2001).

[20] E. Heintze, On homogeneous manifolds of negative curvature. Mathematische Annalen, Volume 211, Issue 1 (1974).

[21] M. Kapovich, Lectures on quasi-isometric rigidity. Geometric Group Theory, Volume 21 (2014), 127-172.

[22] E. Le Donne, G. Pallier, X. Xie, Rough similarity of left-invariant Riemannian metrics on some Lie groups. arXiv:2208.06510 (2022).

[23] A. Papadopoulos, Metric spaces, convexity and nonpositive curvature. IRMA Lectures in Mathematics and Theoretical Physics 6 (2004).

[24] I. Peng, Coarse differentiation and quasi-isometries of a class of solvable Lie groups I. Geometry & Topology 15, No. 4 (2011), 1883-1925.

[25] I. Peng, Coarse differentiation and quasi-isometries of a class of solvable Lie groups II. Geometry & Topology 15 (2011), 1927-1981.

[26] P.M. Soardi, W. Woess, Amenability, unimodularity, and the spectral radius of random walks on infinite graphs. Mathematische Zeitschrift 205 (1990), 471-486.

[27] M. Troyanov, L'horizon de SOL. Expositiones Mathematicae, Volume 16 (1998), 441-479.
[28] W. Woess, Lamplighters, Diestel-Leader Graphs, Random Walks, and Harmonic Functions. Combinatorics, Probability & Computing 14 (2005), 415-433.

[29] W. Woess, What is a horocyclic product, and how is it related to lamplighters? Internationale Mathematische Nachrichten, Volume 224 (2013), 1-27.

[30] X. Xie, Large scale geometry of negatively curved R^n ⋊ R. Geometry & Topology 18, No. 2 (2014), 831-872.
[]
[ "anafi ros: from Off-the-Shelf Drones to Research Platforms" ]
[ "Andriy Sarabakha " ]
[]
[]
The off-the-shelf drones are simple to operate and easy to maintain aerial systems. However, due to proprietary flight software, these drones usually do not provide any opensource interface which can enable them for autonomous flight in research or teaching. This work introduces a package for ROS1 and ROS2 for straightforward interfacing with off-theshelf drones from the Parrot ANAFI family. The developed ROS package is hardware agnostic, allowing connecting seamlessly to all four supported drone models. This framework can connect with the same ease to a single drone or a team of drones from the same ground station. The developed package was intensively tested at the limits of the drones' capabilities and thoughtfully documented to facilitate its use by other research groups worldwide.* The author has no relations or conflicts of interest with Parrot SA.
10.48550/arxiv.2303.01813
[ "https://export.arxiv.org/pdf/2303.01813v1.pdf" ]
257,353,574
2303.01813
b8668401e54c0b114b1267f6cb56328d7e6e7d5c
anafi ros: from Off-the-Shelf Drones to Research Platforms

Andriy Sarabakha

Abstract—The off-the-shelf drones are simple to operate and easy to maintain aerial systems. However, due to proprietary flight software, these drones usually do not provide any open-source interface which can enable them for autonomous flight in research or teaching. This work introduces a package for ROS1 and ROS2 for straightforward interfacing with off-the-shelf drones from the Parrot ANAFI family. The developed ROS package is hardware agnostic, allowing connecting seamlessly to all four supported drone models. This framework can connect with the same ease to a single drone or a team of drones from the same ground station. The developed package was intensively tested at the limits of the drones' capabilities and thoughtfully documented to facilitate its use by other research groups worldwide.

* The author has no relations or conflicts of interest with Parrot SA.

I. INTRODUCTION

As one of the fastest-growing fields in the aerospace industry, unmanned aerial vehicles (UAVs) can provide a cost-efficient solution to many time-consuming tasks, such as subterranean exploration [1], power-line inspection [2] and additive manufacturing [3]. Different applications require drones equipped with appropriate sensors, for example, a thermal camera for wildfire monitoring [4], a camera with a high optical zoom for aerial observation [5], or a stereo camera for visual odometry [6]. Custom-built drones can be designed and manufactured to meet specific needs and requirements, allowing for more flexibility in their functionality [7]. However, assembling and configuring custom-made drones require technical expertise and labour time.

Unlike custom-made drones, off-the-shelf drones are readily available and can be easily purchased from many retailers.
This makes them an attractive option for individuals and organisations that want to use drones for a variety of purposes, such as teaching and research. Another advantage of using off-the-shelf drones is their reliability and durability, since they are built to operate in a wide range of conditions. While there are many advantages of using off-the-shelf drones, their main limitation is the possibility of autonomous deployments, making them less suitable for certain applications. To partially overcome this issue, proprietary software development kits (SDKs) were released by some drone manufacturers, such as DJI [8], Ryze [9], Parrot [10] and Bitcraze [11]. Still, only the discontinued Parrot Bebop [12] and the tiny-size Ryze Tello [13] and Bitcraze Crazyflie [14] have an interface to bridge them with the robot operating system (ROS). Nowadays, ROS has become a standard development environment for modern roboticists. The main advantage of ROS is the possibility for the modularization of software, making it easy to reuse and modify individual components. Besides, ROS provides a standard set of libraries, tools and conventions for communication between different parts of a robotic system. Moreover, ROS has a large and active community with many available resources. A huge variety of ROS packages is available for building the navigation stack for aerial robots, like visual-inertial localisation [15], environment perception [16], motion planning [17], and model-based [18] and model-free [19] control. This work introduces a bridge which allows a straightforward connection between the drones of the Parrot* ANAFI family (illustrated in Fig. 1) and both ROS1† and ROS2‡.

This research was supported by the NTU Presidential Postdoctoral Fellowship (award number 021820-00001).

1 Andriy Sarabakha is with the School of Electrical and Electronic Engineering (EEE), Nanyang Technological University (NTU), Singapore, 639798. [email protected]
Parrot ANAFI drones were chosen because each model in the family offers unique features making it suitable for various applications. A comprehensive comparison of the characteristics of Parrot ANAFI drones is provided. The developed ROS package is hardware agnostic, allowing connecting seamlessly to all supported drones. This framework also allows connecting to single or multiple drones from the same ground station. The developed package was intensively tested at the limits of the drones and thoughtfully documented to facilitate its use by other researchers.

This work is organised as follows. First, the technical details of the experimental platforms are provided in Section II. Next, Section III describes the structure of the developed framework. Then, Section IV provides experimental validation of the developed framework. Finally, Section V summarises this work with conclusions and future work.

II. EXPERIMENTAL PLATFORMS

Let the world fixed frame be $\mathcal{F}_W = \{\vec{x}_W, \vec{y}_W, \vec{z}_W\}$, and the drone body frame be $\mathcal{F}_D = \{\vec{x}_D, \vec{y}_D, \vec{z}_D\}$. The origin of the body frame is located at the centre of mass (COM) of the UAV. The configuration with the corresponding reference frames is illustrated in Fig. 2. The absolute position of the UAV, $\mathbf{p}_D^W = [x\;\, y\;\, z]^T$, is described by three Cartesian coordinates of its COM in $\mathcal{F}_W$, while the attitude of the UAV, $\boldsymbol{\theta}_D^W = [\phi\;\, \theta\;\, \psi]^T$, is described by three Euler angles: roll $\phi$, pitch $\theta$ and yaw $\psi$. The time derivative of the position $(x, y, z)$ gives the linear velocity of the UAV's COM expressed in $\mathcal{F}_W$:

$$\mathbf{v} = [\dot{x}\;\, \dot{y}\;\, \dot{z}]^T, \tag{1}$$

and the velocity expressed in $\mathcal{F}_D$ is

$$\mathbf{v}_D = [v_x\;\, v_y\;\, v_z]^T. \tag{2}$$

The relation between $\mathbf{v}$ and $\mathbf{v}_D$ is given by

$$\mathbf{v} = \mathbf{R}(\phi, \theta, \psi)\,\mathbf{v}_D, \tag{3}$$

in which $\mathbf{R}(\phi, \theta, \psi) \in SO(3)$ is the rotation matrix from $\mathcal{F}_D$ to $\mathcal{F}_W$:

$$\mathbf{R}(\phi, \theta, \psi) = \begin{bmatrix} c_\psi c_\theta & c_\psi s_\phi s_\theta - c_\phi s_\psi & s_\phi s_\psi + c_\phi c_\psi s_\theta \\ c_\theta s_\psi & c_\phi c_\psi + s_\phi s_\psi s_\theta & c_\phi s_\psi s_\theta - c_\psi s_\phi \\ -s_\theta & c_\theta s_\phi & c_\phi c_\theta \end{bmatrix}, \tag{4}$$

in which $c_\square$ and $s_\square$ denote $\cos(\square)$ and $\sin(\square)$, respectively.
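Equations (3)-(4) are easy to sanity-check numerically. The sketch below is illustrative plain Python, not part of anafi ros; `rotation_matrix` and `body_to_world` are hypothetical helper names.

```python
import math

def rotation_matrix(phi, theta, psi):
    """ZYX rotation matrix R(phi, theta, psi) from the body frame F_D
    to the world frame F_W, as in eq. (4)."""
    c, s = math.cos, math.sin
    return [
        [c(psi)*c(theta), c(psi)*s(phi)*s(theta) - c(phi)*s(psi), s(phi)*s(psi) + c(phi)*c(psi)*s(theta)],
        [c(theta)*s(psi), c(phi)*c(psi) + s(phi)*s(psi)*s(theta), c(phi)*s(psi)*s(theta) - c(psi)*s(phi)],
        [-s(theta),       c(theta)*s(phi),                        c(phi)*c(theta)],
    ]

def body_to_world(v_d, phi, theta, psi):
    """Apply eq. (3): v = R(phi, theta, psi) v_D."""
    R = rotation_matrix(phi, theta, psi)
    return [sum(R[i][j] * v_d[j] for j in range(3)) for i in range(3)]
```

For instance, at a yaw of 90° a body-frame forward velocity maps to the world-frame y axis, as expected for a ZYX convention.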
The time derivative of the attitude $(\phi, \theta, \psi)$ gives the angular velocity expressed in $\mathcal{F}_W$:

$$\boldsymbol{\omega} = [\dot{\phi}\;\, \dot{\theta}\;\, \dot{\psi}]^T, \tag{5}$$

and the angular velocity expressed in $\mathcal{F}_D$ is

$$\boldsymbol{\omega}_D = [\omega_\phi\;\, \omega_\theta\;\, \omega_\psi]^T. \tag{6}$$

The relation between $\boldsymbol{\omega}$ and $\boldsymbol{\omega}_D$ is given by

$$\boldsymbol{\omega} = \mathbf{T}\,\boldsymbol{\omega}_D, \tag{7}$$

in which $\mathbf{T}$ is the transformation matrix:

$$\mathbf{T} = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi\sec\theta & \cos\phi\sec\theta \end{bmatrix}. \tag{8}$$

The vector of control inputs $\mathbf{u}$ is considered as in [20]:

$$\mathbf{u} = [T^*\;\, \tau_\phi^*\;\, \tau_\theta^*\;\, \tau_\psi^*]^T, \tag{9}$$

where $T^*$ is the reference thrust acting along the $\vec{z}_D$ axis, whereas $\tau_\phi^*$, $\tau_\theta^*$ and $\tau_\psi^*$ are the reference moments acting around the $\vec{x}_D$, $\vec{y}_D$ and $\vec{z}_D$ axes, respectively.

For mobile robots equipped with a camera mounted on a gimbal, there is also a gimbal reference frame $\mathcal{F}_G = \{\vec{x}_G, \vec{y}_G, \vec{z}_G\}$. The origin of the gimbal frame is located in front of the UAV's COM. The position of the camera in $\mathcal{F}_D$ is $\mathbf{p}_G^D = [x_G\;\, y_G\;\, z_G]^T$, while the attitude of the gimbal in $\mathcal{F}_D$ is $\boldsymbol{\theta}_G^D = [\phi_G\;\, \theta_G\;\, \psi_G]^T$. The rotation matrix of the gimbal in $\mathcal{F}_W$ can be calculated with

$$\mathbf{R}_G^W = \mathbf{R}(\phi, \theta, \psi)\,\mathbf{R}(\phi_G, \theta_G, \psi_G), \tag{10}$$

while the position of the camera in $\mathcal{F}_W$ can be obtained with

$$\mathbf{p}_G^W = \mathbf{p}_D^W + \mathbf{R}(\phi, \theta, \psi)\,\mathbf{p}_G^D. \tag{11}$$

A. ANAFI Drones

Parrot ANAFI drones are small, lightweight UAVs mainly designed for aerial photography and videography. They are equipped with high-resolution high-dynamic-range (HDR) cameras mounted on a 2-axis gimbal, allowing them to capture smooth and stable footage from the air. These drones have a unique folding design, making them portable and easily deployable: it takes less than 1 min to unfold the drone, turn it on, connect to the remote controller and take off. Moreover, ANAFI drones have a reasonably long flight duration thanks to the lithium polymer (LiPo) battery, which has a built-in USB-C port for hassle-free charging. The Parrot ANAFI family has four drone models: the basic ANAFI 4K, ANAFI Thermal with a thermal camera, the water- and dust-resistant ANAFI USA, and ANAFI Ai with onboard computing and obstacle avoidance capabilities.
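The gimbal pose relations (10)-(11) above can likewise be sketched in plain Python; the helpers below are illustrative and not part of the anafi ros API.

```python
import math

def rot(phi, theta, psi):
    """ZYX rotation matrix, as in eq. (4)."""
    c, s = math.cos, math.sin
    return [
        [c(psi)*c(theta), c(psi)*s(phi)*s(theta) - c(phi)*s(psi), s(phi)*s(psi) + c(phi)*c(psi)*s(theta)],
        [c(theta)*s(psi), c(phi)*c(psi) + s(phi)*s(psi)*s(theta), c(phi)*s(psi)*s(theta) - c(psi)*s(phi)],
        [-s(theta),       c(theta)*s(phi),                        c(phi)*c(theta)],
    ]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def camera_pose_world(p_w_d, att_drone, p_d_g, att_gimbal):
    """Eqs. (10)-(11): orientation R_G^W and position p_G^W of the camera in F_W."""
    R_wd = rot(*att_drone)
    R_wg = matmul(R_wd, rot(*att_gimbal))  # eq. (10)
    p_w_g = [p_w_d[i] + sum(R_wd[i][j] * p_d_g[j] for j in range(3)) for i in range(3)]  # eq. (11)
    return R_wg, p_w_g
```

With a level drone, the camera position in the world frame is simply the drone position offset by the camera lever arm, while the camera orientation reduces to the gimbal rotation alone.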
These drones are illustrated in Fig. 1, while Table I summarises the properties of each model.

Remark 1: Since low-level stabilisation controllers as in [21] are included in ANAFI's autopilot, as illustrated in Fig. 3, the virtual control inputs in (9) can be considered as:

$$\mathbf{u} = [v_z^*\;\, \phi^*\;\, \theta^*\;\, \omega_\psi^*]^T. \tag{12}$$

[Fig. 3: Control scheme of the Parrot ANAFI UAV: a vertical velocity controller, a roll and pitch controller and a yaw rate controller feed the drone dynamics.]

1) ANAFI 4K: This model (shown in Fig. 1a) is the first and basic drone of the ANAFI family. ANAFI 4K is one of the quietest drones in its class, with a noise level of 64 dB at 1 m. With 25 min of flight time, the battery can be recharged via a USB-C cable in 90 min. The model of ANAFI 4K is available in Parrot's software-in-the-loop simulation environment, Sphinx.

[Table I footnotes: a) length × width × height; b) protected from limited dust ingress and water spray less than 60° from vertical; c) protected from water spray less than 60° from vertical; d) at 1 m; e) from nadir to zenith.]

2) ANAFI Thermal: This model (shown in Fig. 1b) is the upgrade of ANAFI 4K with a thermal camera. The optical unit of ANAFI Thermal combines the electro-optics with an infrared sensor, making it possible to identify temperatures between −10°C and +400°C. Thanks to the FLIR Lepton radiometric sensor, the absolute temperature of each pixel can be determined. The RGB image can be blended with thermal images. This enables the detection of hot spots with the thermal camera, while the RGB camera allows the viewing of important details. Despite the thermal camera, ANAFI Thermal is the smallest and lightest model of the family.

3) ANAFI USA: This model (shown in Fig. 1c) is the rescue-grade drone, featuring 32x zoom and thermal imaging capabilities to meet the demands of first responders and search-and-rescue teams.
To achieve this, ANAFI USA is equipped with three front-mounted cameras: a thermal camera, a 21 Mpx RGB wide-angle camera (for 1x to 5x zoom) and a 21 Mpx RGB telephoto camera (for 5x to 32x zoom), which guarantees a continuous zoom. The 32x zoom allows seeing details as small as 1 cm from a distance of 50 m. The image stabilisation system of ANAFI USA ensures high-quality footage even at 15 m/s wind gusts. Despite its compact design, ANAFI USA boasts 32 min of flight time. ANAFI USA has IP53 ingress protection (protected from limited dust ingress and water spray less than 60° from vertical), offering water and dust resistance and making it suitable to fly in rainy conditions. ANAFI USA has a service ceiling of 6 km and can operate in temperatures between −35°C and +43°C. The body of ANAFI USA is mainly made of polyamide, reinforced with carbon fibre and streamlined using hollow glass beads. The data stored on ANAFI USA or sent through the networks are encrypted, and the drone is protected against malicious software modification attempts.

4) ANAFI Ai: This model (shown in Fig. 1d) is the biggest and heaviest but most advanced of the ANAFI family. ANAFI Ai is the first drone to use a 4G cellular network connection, in addition to Wi-Fi, as an alternative encrypted data link between the drone and the remote controller, theoretically enabling control at any distance. Besides a high-resolution 48 Mpx RGB camera with an ISO range of 50-6400, ANAFI Ai is also equipped with a pair of multidirectional stereo cameras, which allow the computation of the occupancy grid to avoid obstacles automatically. ANAFI Ai has a 3-axis gimbal, differently from the other ANAFI models with 2-axis gimbals. The maximum horizontal speed of ANAFI Ai is 16 m/s, thanks to the optimized aerodynamic performance of the vehicle. ANAFI Ai's 6800 mAh battery allows 32 min of flight time and can be recharged in 150 min. ANAFI Ai has IPX3 ingress protection (protected from water spray less than 60° from vertical), offering water resistance and making it suitable to fly in rainy conditions. ANAFI Ai can
ANAFI Ai can f protected from limited dust ingress execute custom C++ and Python code onboard, thanks to Parrot's Air SDK. Air SDK allows loading and running code directly on ANAFI Ai and accessing all sensors, connectivity interfaces and autopilot features. The model of ANAFI Ai is available in the Sphinx simulation environment. B. Remote Controllers Parrot ANAFI drones come with a handheld remote radio controller called Skycontroller, allowing the user to control the drone and access its various features. Skycontrollers feature a light and compact design with a conventional layout of sticks and buttons. Moreover, they have a built-in battery rechargeable through the USB-C port. The Parrot ANAFI family has two Skycontroller models: Skycontroller 3 for ANAFI 4K, ANAFI Thermal and ANAFI USA, and Skycontroller 4 for ANAFI Ai. The two versions of Skycontrollers are illustrated in Fig. 4, while Table II summarises the properties of each model. Fig. 4a) is a remote radio controller designed for Parrot ANAFI 4K, Thermal and USA drones. Skycontroller 3 has a maximum range of up to 4 km. 1) Skycontroller 3: This model (shown in 2) Skycontroller 4: This model (shown in Fig. 4b) is a remote 4G controller designed for Parrot ANAFI Ai. Skycontroller 4 has IP5X f ingress protection, offering dust resistance. III. DEVELOPED FRAMEWORK The developed frameworkanafi ros -is a pythonbased ROS package which enables interfacing with all available Parrot ANAFI quadrocopters. Besides being compatible with all physical drones, anafi ros can connect to virtual drones in Parrot's simulation environment -Sphinx. The developed anafi ros is built on top of Parrot's official python SDK -Olympe -which provides a programming interface for Parrot ANAFI drones. The communication flow in anafi ros allows connecting directly to the drones via Wi-Fi interfaces or through Skycontrollers via USB ports, which is highly recommended. 
The developed framework makes connecting multiple drones to the same ground station easy by automatically assigning a different virtual IP address to each connected Skycontroller and managing port forwarding. The main functionalities of anafi ros include drone piloting, feedback of flight parameters from onboard sensors, gimbal control, drone state monitoring, video streaming from onboard cameras, picture capturing, video recording, file transferring between onboard storage and ground station, drone calibration and flight plan management.

Remark 2: For the complete list of subscribed and published topics, available services and parameters, please refer to the Appendix.

The developed package is organised in several sub-elements to facilitate the development, as depicted in Fig. 5. In other words, each physical component, such as the drone itself, gimbal, camera, battery, connection link, storage device and remote controller, has a respective software element.

1) Drone: The drone element is a core part of the package, which manages the connection to the drone, enables the control of the drone and provides feedback from the drone. This element allows piloting the drone in three modes: directly commanding the values in (12), commanding relative displacements $[\Delta x^*\;\, \Delta y^*\;\, \Delta z^*\;\, \Delta\psi^*]^T$, or commanding world references $[\lambda_x^*\;\, \lambda_y^*\;\, z^*\;\, \psi^*]^T$, where $\lambda_x^*$ and $\lambda_y^*$ are the desired latitude and longitude, respectively. The drone element retrieves and publishes real-time information, such as the drone's attitude, altitude, speed and GPS location. It forwards to the drone the requests for arming, taking off and landing. The drone element also allows bounding the altitude, distance, horizontal and vertical speed, pitch and roll angles, and attitude rates.

2) Gimbal: The gimbal element provides control and feedback on the camera's gimbal. This element controls the desired roll $\phi_G^*$ and pitch $\theta_G^*$ of the camera.
It also provides the actual attitude $\boldsymbol{\theta}_G^D$ of the gimbal and allows setting the maximum rotational speed of the gimbal.

3) Camera: The camera element provides essential capabilities for the camera, such as changing the zoom level, capturing pictures and recording videos. This element also publishes the real-time video stream, the camera calibration matrix and the actual zoom level. It also allows setting the camera mode, image style and streaming mode, and enabling the HDR mode.

4) Battery: The battery element provides the battery status, such as the battery's level, health and voltage.

5) Link: The link element provides information on the connection to the drone, such as the link quality, signal strength and connection throughput.

6) Storage: The storage element provides the available memory on the microSD cards, if installed. This element also allows downloading media (photos and videos) from the storage device and formatting it.

7) Controller: The controller element provides an alternative way to connect to the drone via the remote controller. This element reads and publishes the state of the sticks (gaz/yaw and pitch/roll), triggers (gimbal tilt and camera zoom) and buttons (return to home, centre camera and reset zoom). The remote controller also streams its real-time attitude.

A. Complementary Packages

A complementary ROS package, anafi autonomy (i), was developed on top of anafi ros to enable safe navigation of ANAFI drones by adding some high-level capabilities, like position and velocity control. Besides, other open-source ROS packages are available for building the navigation stack for aerial robots, like visual-inertial localisation [15] (ii), environment perception [16] (iii), motion planning [17] (iv), and model-based [18] (v) and model-free [19] (vi) control.
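As a minimal sketch of how a client of the drone element could saturate the virtual control inputs (12) before sending them, the snippet below uses the limit values exercised in the experiments of Section IV (±40° tilt, ±4 m/s vertical speed, ±200°/s yaw rate); the names and defaults are illustrative, not the package's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class CommandLimits:
    # Illustrative bounds, matching the maxima commanded in Section IV
    max_tilt_deg: float = 40.0        # roll/pitch references in eq. (12) [deg]
    max_vertical_speed: float = 4.0   # v_z* [m/s]
    max_yaw_rate_deg: float = 200.0   # omega_psi* [deg/s]

def clamp(value, limit):
    """Symmetrically saturate a command to the interval [-limit, +limit]."""
    return max(-limit, min(limit, value))

def bound_command(v_z, roll, pitch, yaw_rate, limits=None):
    """Saturate the virtual control inputs u = (v_z*, phi*, theta*, omega_psi*)."""
    limits = limits or CommandLimits()
    return (
        clamp(v_z, limits.max_vertical_speed),
        clamp(roll, limits.max_tilt_deg),
        clamp(pitch, limits.max_tilt_deg),
        clamp(yaw_rate, limits.max_yaw_rate_deg),
    )
```

Such client-side saturation mirrors the bounding options of the drone element and keeps out-of-range references from ever reaching the autopilot.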
i https://github.com/andriyukr/anafi_autonomy
ii https://github.com/raulmur/ORB_SLAM2
iii https://github.com/robot-perception-group/AirPose
iv https://github.com/HKUST-Aerial-Robotics/Fast-Planner
v https://github.com/uzh-rpg/data_driven_mpc
vi https://github.com/andriyukr/controllers

IV. EXPERIMENTAL VALIDATION

To verify the declared characteristics and validate the developed package, we tested the drones' flight characteristics, gimbal response and camera capabilities separately.

A. Drone

The drones were pushed to their limits by commanding the maximum control inputs to verify the drones' capabilities and validate the developed package. The tests were performed in an open field on a windless day.

For the pitch tracking response, first, the commanded pitch was set to 40° for 2 s; then, it was reversed to −40° for 2 s and, finally, set to 0°. As can be observed from Fig. 6a, all drones were able to achieve the desired pitch, while reaching speeds above 8 m/s, and to stabilise at around 0° in the end. ANAFI 4K and Thermal had smoother behaviour, while ANAFI USA had a more aggressive response.

Similarly, for the roll tracking response, first, the commanded roll was set to 40° for 2 s; then, it was reversed to −40° for 2 s and, finally, set to 0°. As can be observed from Fig. 6b, all drones were able to achieve the desired roll, while reaching speeds above 10 m/s, and to stabilise at around 0° in the end. ANAFI 4K and Thermal still had a stable response, while ANAFI USA and Ai had more twitching behaviour.

For the vertical velocity tracking response, first, the commanded velocity was set to 4 m/s for 2 s; then, it was reversed to −4 m/s for 2 s and, finally, set to 0 m/s. As can be observed from Fig. 6c, all drones have an initial delay between 100 ms and 200 ms, but later were able to achieve the desired climbing and descent speeds, reaching an altitude of almost 7 m in 2 s, and to stabilise at around 0 m/s in the end. All drones had smooth and stable behaviour.
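The doublet references commanded in these tests (+A for 2 s, −A for the next 2 s, then 0) can be generated with a small helper; this is an illustrative reconstruction of the test profile, not code from anafi ros.

```python
def step_reference(t, amplitude, hold=2.0):
    """Piecewise-constant reference used in the tracking tests:
    +amplitude for the first `hold` seconds, -amplitude for the next
    `hold` seconds, then 0 afterwards."""
    if t < 0.0:
        return 0.0
    if t < hold:
        return amplitude
    if t < 2.0 * hold:
        return -amplitude
    return 0.0
```

Sampling this profile at the command rate and feeding it as the pitch, roll, vertical speed or yaw rate reference reproduces the doublet manoeuvres of Fig. 6.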
For the yaw rate tracking response, the commanded yaw rate was first set to 200°/s for 2 s, then reversed to −200°/s for 2 s and finally set to 0°/s. As shown in Fig. 6d, all drones were able to achieve the desired yaw rate, making a 360° turn in less than 2.5 s, and stabilise at around 0°/s in the end. Similarly to the velocity tracking, ANAFI 4K, Thermal and USA have a response delay of approximately 100 ms, while for ANAFI Ai it is approximately 200 ms. After the transient phase, all drones had stable spins at the desired yaw rate.
B. Gimbal
All ANAFI drones have an active gimbal on which the main cameras are mounted. The gimbal can adjust its roll and pitch in orientation or angular velocity mode. Fig. 7 shows the responses of the gimbal in the two modes for the two controlled axes between the gimbal's operational limits. It is possible to observe that ANAFI Ai has the fastest response but a limited range compared to the other ANAFI drones.
Remark 3: ANAFI 4K, Thermal and USA are equipped with similar gimbals, so their response is almost identical.
C. Camera
The main difference between ANAFI drones is the set of cameras they are equipped with, as summarized in Fig. 8. The developed package allows switching online between streams from all available cameras. ANAFI 4K has one RGB front-mounted camera, which can stream live 1280 × 720 px images, shown in Fig. 8a. ANAFI Thermal, besides the same RGB camera as ANAFI 4K, also has a thermal camera, which can stream live 960 × 720 px images, shown in Fig. 8b. ANAFI USA has three front-mounted cameras: a thermal camera and two RGB wide-angle and telephoto cameras, which can stream highly detailed images, shown in Fig. 8c, where the road sign highlighted in red in Fig. 8a is zoomed in. ANAFI Ai, besides a high-resolution RGB camera, which can stream live 1920 × 1080 px images, also has a pair of frontal stereo cameras, which allow the computation of 3D environment information, like the 176 × 90 px disparity map, shown in Fig.
8d, where the palm leaves in the proximity are detected. Besides streaming the video feed live, all drones can shoot pictures, record videos and store them at maximum resolution on the memory card. In addition, anafi ros allows downloading the stored media from the drone.
V. CONCLUSIONS
This work introduces a ROS1 and ROS2 package, anafi ros, for simple interfacing with the drones from the Parrot ANAFI family. The developed ROS package is hardware agnostic, allowing seamless connection to the four supported models. The developed package was intensively tested on the drones at maximum roll and pitch angles of ±40°, corresponding to horizontal speeds above ±10 m/s, a maximum vertical speed of ±4 m/s and a maximum yaw rate of ±200°/s. All drone models demonstrated satisfactory performance and stable response. We hope the developed framework will provide new opportunities for further applications of aerial robots.
APPENDIX
Fig. 1. Parrot ANAFI family drones.
Fig. 2. Configuration of Parrot ANAFI with its reference frames.
Fig. 4. Parrot Skycontroller series remote controllers.
Fig. 5. UML diagram of the structure of anafi ros.
Fig. 6. Piloting response: (a) pitch tracking; (b) roll tracking; (c) vertical speed tracking; (d) yaw rate tracking.
Fig. 7. Gimbal response: (a) roll tracking in orientation mode; (b) roll tracking in velocity mode; (c) pitch tracking in orientation mode; (d) pitch tracking in velocity mode.
Remark 4: All ANAFI drones also have a down-facing grey-scale global shutter 320 × 240 px camera for optical flow. However, this video stream is not accessible yet.
Fig. 8. Camera features of different ANAFI drones: (a) RGB image from ANAFI 4K; (b) thermal image from ANAFI Thermal; (c) 32x zoomed RGB image from ANAFI USA; (d) disparity map image from ANAFI Ai.
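A disparity map like the one from ANAFI Ai's stereo pair converts to metric depth through the standard pinhole-stereo relation z = f·b/d. A minimal sketch; the default focal length and baseline are illustrative assumptions, not ANAFI Ai calibration values:

```python
# Standard pinhole-stereo relation: depth z = focal_length * baseline / disparity.
# Default focal length [px] and baseline [m] are illustrative placeholders only.

def depth_from_disparity(disparity_px, focal_px=60.0, baseline_m=0.07):
    """Depth [m] of a pixel with the given disparity [px]."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Nearby objects (like the detected palm leaves) produce large disparities and therefore small depths; distant objects approach zero disparity, which is why the zero case is rejected.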
TABLE I. TECHNICAL SPECIFICATIONS OF PARROT ANAFI DRONES.
Parameter | ANAFI 4K | Thermal | USA | Ai
Drone
Size folded (a) [mm]: 244 × 67 × 65 | 218 × 69 × 64 | 252 × 104 × 82 | 304 × 130 × 118
Size unfolded (a) [mm]: 242 × 315 × 65 | 242 × 315 × 64 | 303 × 398 × 84 | 378 × 498 × 118
Weight [g]: 320 | 315 | 499 | 898
Maximum horizontal speed [m/s]: 15 | 14.7 | 16
Maximum vertical speed [m/s]: 4
Maximum wind resistance [m/s]: 13.9 | 14.7 | 12.7
Service ceiling [m]: 4500 | 6000 | 5000
Operating temperatures [°C]: −10 to +40 | −35 to +43 | −10 to +40
Ingress protection: IP53 (b) | IPX3 (c)
Noise emission (d) [dB]: 64 | 66 | 79 | 82
Slots: MicroSD | MicroSD & SIM card
Satellite navigation: GPS & Glonass | GPS, Glonass & Galileo
EO Camera
Sensor: CMOS
Aperture: f/2.4 | f/2.0
ISO: 100−3200 | 50−6400
Shutter speed [s]: 1−1/10000 | 1/15−1/10000
Zoom [x]: 1−3 | 1−32 | 1−6
Video
Format: MP4 (H.264) | MP4 (H.264, H.265)
Resolution: 4K & FHD | 4K, FHD & HD | 4K & FHD
Framerate [fps]: 24−60 | 24−120 | 24−30 | 24−120
Horizontal field of view [°]: 69 | 68
Maximum video bandwidth [Mbps]: 100 | 5 | 200
Photo
Format: JPEG & DNG (RAW)
Resolution [MP]: 21 | 48
Horizontal field of view [°]: 84 | 75 | 73
Thermal Camera
Sensor: FLIR LEPTON 3.5 | FLIR BOSON
Resolution [pixels]: 160 × 120 | 320 × 256
Temperature range [°C]: −10 to +400 | −40 to +180
Thermal sensitivity [°C]: 0.05
Pixel pitch [µm]: 12
Horizontal field of view [°]: 57 | 50
Photo format: JPEG
Video format: MP4 (H.264)
Video framerate [fps]: 9
Gimbal
Mechanical: 2-axis (roll, pitch) | 3-axis
Electronic (EIS): 3-axis
Tilt range (e) [°]: from −90 to +90 | from −116 to +176
Battery
Maximum flight time [min]: 25 | 26 | 32
Type: LiPo (2 cells) | LiPo (3 cells)
Capacity [mAh]: 2700 | 3400 | 6800
Voltage [V]: 7.6 | 13.2
Weight [g]: 125 | 195 | 366
Maximum charging power [W]: 25 | 30 | 45
Charging port: USB-C
Controller: Skycontroller 3 | Skycontroller 4
Tools: Air SDK, Sphinx, anafi ros
1) ANAFI 4K: This model (shown in
TABLE II. TECHNICAL SPECIFICATIONS OF PARROT SKYCONTROLLERS.
3) ANAFI USA: This model (shown in
Parameter | Skycontroller 3 | Skycontroller 4
Size folded (a) [mm]: 94 × 152 × 72 | 147 × 238 × 55
Size unfolded (a) [mm]: 153 × 152 × 116 | 147 × 315 × 55
Weight [g]: 386 | 606
Transmission system: Wi-Fi 802.11a/b/g/n | Wi-Fi 802.11a/b/g/n & 4G
Frequency used [GHz]: 2.4, 5.8 | 2.4, 5
Maximum transmission distance [km]: 4 | ∞
Video stream resolution: HD 720p | 1080p
Battery capacity [mAh]: 2500 | 3350
Battery life [h]: 2.5
Ports: USB-C (charge) & USB-A (connection) | USB-C (charge and connection) & micro-HDMI
Compatible mobile devices: screen size up to 6" | screen size up to 8"
Ingress protection: IP5X (f)
Compatible drones: ANAFI 4K, ANAFI Thermal, ANAFI USA | ANAFI Ai
Support in anafi ros
A. Subscribed topics
• camera/command (CameraCommand): camera zoom commands
• drone/command (PilotingCommand): drone piloting commands
• drone/moveby (MoveByCommand): move the drone by the given displacement and rotate by the given angle
• drone/moveto (MoveToCommand): move the drone to the specified location
• gimbal/command (GimbalCommand): gimbal attitude commands
B.
Published topics
<topic name> (<message type>, <frequency>) ∈ {<set of values>} / [<range of values>]: <topic description> [<measurement units>]
• battery/health (UInt8, 1 Hz) ∈ [0: bad, 100: good]: battery health [%]
• battery/percentage (UInt8, 30 Hz) ∈ [0: empty, 100: full]: battery level [%]
• battery/voltage (Float32, 1 Hz): battery voltage [V]
• camera/awb_b_gain (Float32, 30 Hz): camera automatic white balance (AWB) blue gain
• camera/awb_r_gain (Float32, 30 Hz): camera automatic white balance (AWB) red gain
• camera/camera_info (CameraInfo, 30 Hz): main camera's info
• camera/exposure_time (Float32, 30 Hz): exposure time of the main camera [s]
• camera/image (Image, 30 Hz): image from the main front camera
• camera/hfov (Float32, 30 Hz): camera's horizontal field of view [°]
• camera/iso_gain (UInt16, 30 Hz): camera's sensitivity gain
• camera/vfov (Float32, 30 Hz): camera's vertical field of view [°]
• camera/zoom (Float32, 5 Hz): camera zoom level [x]
• …to (Float32, 5 Hz): drone's ground distance above the take-off point [m]
• drone/attitude (QuaternionStamped, 30 Hz): drone's attitude in north-west-up frame
• drone/gps/fix (Bool, 1 Hz) ∈ {true: GPS is fixed, false: GPS is not fixed}
• drone/gps/location (NavSatFix, 1 Hz): drone's GPS location
• drone/gps/satellites (UInt8): number of GPS satellites
• drone/rpy (Vector3Stamped, 30 Hz): drone's roll, pitch and yaw in north-west-up frame [°]
• drone/speed (Vector3Stamped, 30 Hz): drone's speed in body frame [m/s]
• drone/state (String, 30 Hz) ∈ {'CONNECTING', 'LANDED', 'TAKINGOFF', 'HOVERING', 'FLYING', 'LANDING', 'EMERGENCY', 'DISCONNECTED', ...}: drone's state
• gimbal/attitude/absolute (QuaternionStamped, 5 Hz): gimbal's attitude in north-west-up frame
• home/location (PointStamped): home location
• link/quality (UInt8, 30 Hz) ∈ [0: bad, 5: good]: link quality
• skycontroller/attitude (QuaternionStamped): SkyController's attitude in north-west-up frame
• skycontroller/command (SkycontrollerCommand, 100 Hz): command from SkyController
• skycontroller/rpy (Vector3Stamped, 20 Hz): SkyController's attitude in north-west-up frame [°]
• storage/available (UInt64): available storage space [B]
• time (Time, 30 Hz): drone's local time
C. Services
<service name> (<service type>): <service description>
• camera/photo/stop (Photo): stop photo capture
• camera/photo/take (Photo): take a photo
• camera/recording/start (Recording): start video recording
• camera/recording/stop (Recording): stop video recording
• camera/reset (Trigger): reset zoom level
• drone/arm (SetBool): {true: arm the drone; false: disarm the drone}
• drone/calibrate (Trigger): start drone's magnetometer calibration process
• drone/emergency (Trigger): cut out the motors
• drone/halt (Trigger): halt and start hovering
• drone/land (Trigger): land the drone
• drone/reboot (Trigger): reboot the drone
• drone/rth (Trigger): return home
• drone/takeoff (Trigger): take off the drone
• flightplan/pause (Trigger): pause the flight plan
• flightplan/start (FlightPlan): start the flight plan based on the Mavlink file existing on the drone
• flightplan/stop (Trigger): stop the flight plan
• flightplan/upload (FlightPlan): upload the Mavlink file to the drone
• gimbal/calibrate (Trigger): start gimbal calibration
• gimbal/reset (Trigger): reset the reference orientation of the gimbal
• home/navigate (SetBool): {true: start return home; false: stop return home} trigger navigate home
• home/set (Location): set the custom home location
• skycontroller/offboard (SetBool): {true: switch to offboard control; false: switch to manual control} change control mode
• storage/download (SetBool): {true: delete media after download; false: otherwise} download media from the drone
• storage/format (Trigger): format removable storage
D. Parameters
<parameter name> (<parameter type>) := <default value> ∈ {<set of values>} / [<range of values>]: <parameter description> [<measurement units>]
• camera/autorecord (bool) := false ∈ {true: enabled; false: disabled}: auto record at take-off
• camera/ev_compensation (int) := 9 ∈ {0: −3.00; 3: −2.00; 6: −1.00; 9: 0.00; 12: 1.00; 15: 2.00; 18: 3.00}: camera exposure compensation [EV]
• camera/hdr (bool) := true ∈ {true: enabled; false: disabled}: high dynamic range (HDR) mode
• camera/max_zoom_speed (float) := 10.0 ∈ [0.1, 10.0]: maximum zoom speed [tan(°)/s]
• camera/mode (int) := 0 ∈ {0: camera in recording mode; 1: camera in photo mode}: camera mode
• camera/relative (bool) := false ∈ {true: commands relative to the camera pitch; false: otherwise}
• camera/rendering (int) := 0 ∈ {0: visible; 1: thermal; 2: blended}: thermal image rendering mode (1 and 2 supported only by ANAFI Thermal and ANAFI USA)
• camera/streaming (int) := 0 ∈ {0: minimize latency with average reliability (best for piloting); 1: maximize reliability with an average latency; 2: maximize reliability using a frame-rate decimation}: streaming mode
• camera/style (int) := 0 ∈ {0: natural look; 1: flat and desaturated images, best for post-processing; 2: intense (bright colors, warm shade, high contrast); 3: pastel (soft colors, cold shade, low contrast)}: images style
• drone/banked_turn (bool) := true ∈ {true: enabled; false: disabled}: banked turn
• drone/max_altitude (float) := 2.0 ∈ [0.5, 4000.0]: maximum altitude [m]
• drone/max_pitch_roll_rate (float) := 200.0 ∈ [40.0, 300.0]: maximum pitch and roll rotation speed [°/s]
• drone/max_vertical_speed (float) := 1.0 ∈ [0.1, 4.0]: maximum vertical speed [m/s]
• drone/max_yaw_rate (float) := 180.0 ∈ [3.0, 200.0]: maximum yaw rotation speed [°/s]
• drone/model (string) := '' ∈ {'4k', 'thermal', 'usa', 'ai', 'unknown'}: drone's model
• gimbal/max_speed (float) := 180.0 ∈ [1.0, 180.0]: maximum gimbal speed [°/s]
• home/autotrigger (bool) := true ∈ {true: enabled; false: disabled}: auto trigger return-to-home
• home/ending_behavior (int) := 1 ∈ {0: land; 1: hover}: return-to-home ending behavior
• home/min_altitude (float) := 20.0 ∈ [20.0, 100.0]: return-to-home minimum altitude [m]
• home/precise (bool) := true ∈ {true: enabled; false: disabled}: precise return-to-home
• home/type (int) := 4 ∈ {1: take-off location; 3: user-set custom location; 4: pilot location}: home type for return-to-home
• storage/download_folder (string) := "~/Pictures/Anafi": path to the download folder
E. Custom messages
• CameraCommand
  - Header header: header of the message
  - uint8 mode ∈ {0: level; 1: velocity}: control mode
  - float32 zoom: zoom command [x] / [x/s]
• GimbalCommand
  - Header header: header of the message
  - uint8 mode ∈ {0: position; 1: velocity}: control mode
  - uint8 frame ∈ {0: none; 1: relative; 2: absolute}: gimbal's frame of reference
  - float32 roll: roll command [°] / [°/s]
  - float32 pitch: pitch command [°] / [°/s]
  - float32 yaw: yaw command [°] / [°/s]
• MoveByCommand
  - Header header: header of the message
  - float32 dx: x displacement [m]
  - float32 dy: y displacement [m]
  - float32 dz: z displacement [m]
  - float32 dyaw: yaw displacement [°]
• MoveToCommand
  - Header header: header of the message
  - float64 latitude: latitude [°]
  - float64 longitude: longitude [°]
  - float64 altitude: altitude [m]
  - float32 heading: heading w.r.t. North [°]
  - uint8 orientation_mode ∈ {0: none; 1: to target; 2: heading start; 3: heading during}: orientation mode
• PilotingCommand
  - Header header: header of the message
  - float32 roll: roll angle [°]
  - float32 pitch: pitch angle [°]
  - float32 yaw: yaw rate [°/s]
  - float32 gaz: vertical velocity [m/s]
• SkycontrollerCommand
  - Header header: header of the message
  - int8 x ∈ [-100, 100]: x-axis [%]
  - int8 y ∈ [-100, 100]: y-axis [%]
  - int8 z ∈ [-100, 100]: z-axis [%]
  - int8 yaw ∈ [-100, 100]: yaw-axis [%]
  - int8 camera ∈ [-100, 100]: camera-axis [%]
  - int8 zoom ∈ [-100, 100]: zoom-axis [%]
  - bool return_home ∈ {true: pressed; false: not pressed}: return-to-home (front top) button
  - bool takeoff_land ∈ {true: pressed; false: not pressed}: take-off/land (front bottom) button
  - bool reset_camera ∈ {true: pressed; false: not pressed}: reset camera (back left) button
  - bool reset_zoom ∈ {true: pressed; false: not pressed}: reset zoom (back right) button
F. Custom services
• FlightPlan
  - string file: path to the flight plan file on local computer
  - string uid: flight plan UID in drone's directory
• Location
  - float64 latitude: latitude [°]
  - float64 longitude: longitude [°]
  - float64 altitude: altitude [m]
• PilotedPOI
  - float64 latitude: latitude to look at [°]
  - float64 longitude: longitude to look at [°]
  - float64 altitude: altitude to look at [m]
  - bool locked_gimbal ∈ {true: gimbal is locked on the point of interest, false: gimbal is freely controllable}: gimbal is locked
• Photo
  → string media_id: media id
  - uint8 mode ∈ {0: single shot; 1: bracketing (burst of frames with a different exposure); 2: burst of frames; 3: time-lapse (frames at a regular time interval); 4: GPS-lapse (frames at a regular GPS position interval)}: photo mode
  - uint8 photo_format ∈ {0: full resolution, not dewarped; 1: rectilinear projection, dewarped}: photo format
  - uint8 file_format ∈ {0: jpeg; 1: dng; 2: jpeg and dng}: file format
• Recording
  → string media_id: media id
  - uint8 mode ∈ {0: standard; 1: hyperlapse; 2: slow motion; 3: high-framerate}: video recording mode
ACKNOWLEDGMENT
The author thanks the support team of Parrot for their assistance.
REFERENCES
[1] M. Tranzatto, T. Miki, M. Dharmadhikari, L. Bernreiter, M. Kulkarni, F. Mascarich, O. Andersson, S. Khattak, M. Hutter, R. Siegwart, and K. Alexis, "CERBERUS in the DARPA Subterranean Challenge," Science Robotics, vol. 7, no. 66, p. eabp9742, 2022.
[2] A. Suarez, R. Salmoral, A. Garofano-Soldado, G. Heredia, and A. Ollero, "Aerial Device Delivery for Power Line Inspection and Maintenance," in 2022 International Conference on Unmanned Aircraft Systems (ICUAS), 2022, pp. 30-38.
[3] K. Zhang, P. Chermprayong, F. Xiao, D. Tzoumanikas, B. Dams, S. Kay, B. Kocer, A. Burns, L. Orr, C. Choi, D. Darekar, W. Li, S. Hirschmann, V. Soana, S. Ngah, S. Sareh, A. Choubey, L. Margheri, V. Pawar, and M. Kovac, "Aerial additive manufacturing with multiple autonomous robots," Nature, vol. 609, pp. 709-717, 2022.
[4] M. M. Valero, S. Verstockt, B. Butler, D. Jimenez, O. Rios, C. Mata, L. Queen, E. Pastor, and E. Planas, "Thermal Infrared Video Stabilization for Aerial Monitoring of Active Wildfires," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2817-2832, 2021.
[5] L. Chen, Z. Fang, and Y. Fu, "Consistency-Aware Map Generation at Multiple Zoom Levels Using Aerial Image," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 5953-5966, 2022.
[6] H. X. Pham, A. Sarabakha, M. Odnoshyvkin, and E. Kayacan, "PencilNet: Zero-Shot Sim-to-Real Transfer Learning for Robust Gate Perception in Autonomous Drone Racing," IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 11847-11854, 2022.
[7] P. Foehn, E. Kaufmann, A. Romero, R. Penicka, S. Sun, L. Bauersfeld, T. Laengle, G. Cioffi, Y. Song, A. Loquercio, and D. Scaramuzza, "Agilicious: Open-source and open-hardware agile quadrotor for vision-based flight," Science Robotics, vol. 7, no. 67, p. eabl6259, 2022.
[8] "DJI Developer," 2023. [Online]. Available: https://developer.dji.com
[9] "Tello-Python," 2019. [Online]. Available: https://github.com/dji-sdk/Tello-Python
[10] "Parrot Drone SDK," 2023. [Online]. Available: https://developer.parrot.com/docs/index.html
[11] "SDL Bitcraze," 2023. [Online]. Available: https://www.bitcraze.io/documentation/repository/aideck-gap8-examples/master/getting-started/sdk/
[12] "bebop_autonomy," 2018. [Online]. Available: https://github.com/AutonomyLab/bebop_autonomy
[13] "tello_ros," 2022. [Online].
[14] J. A. Preiss, W. Honig, G. S. Sukhatme, and N. Ayanian, "Crazyswarm: A large nano-quadcopter swarm," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 3299-3304.
[15] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: A Versatile and Accurate Monocular SLAM System," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015.
[16] R. Tallamraju, E. Price, R. Ludwig, K. Karlapalem, H. H. Bülthoff, M. J. Black, and A. Ahmad, "Active Perception Based Formation Control for Multiple Aerial Vehicles," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 4491-4498, 2019.
[17] B. Zhou, J. Pan, F. Gao, and S. Shen, "RAPTOR: Robust and Perception-Aware Trajectory Replanning for Quadrotor Fast Flight," IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1992-2009, 2021.
[18] G. Torrente, E. Kaufmann, P. Föhn, and D. Scaramuzza, "Data-Driven MPC for Quadrotors," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3769-3776, 2021.
[19] A. Sarabakha and E. Kayacan, "Online Deep Fuzzy Learning for Control of Nonlinear Systems Using Expert Knowledge," IEEE Transactions on Fuzzy Systems, vol. 28, no. 7, pp. 1492-1503, 2020.
[20] A. Sarabakha and E. Kayacan, "Y6 Tricopter Autonomous Evacuation in an Indoor Environment Using Q-Learning Algorithm," in 2016 IEEE 55th Conference on Decision and Control (CDC), Dec. 2016, pp. 5992-5997.
[21] V. Mistler, A. Benallegue, and N. M'Sirdi, "Exact linearization and noninteracting control of a 4 rotors helicopter via dynamic feedback," in Proceedings 10th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN), 2001, pp. 586-593.
[ "3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds", "3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds" ]
[ "Aoran Xiao \nNanyang Technological University\n\n", "Jiaxing Huang \nNanyang Technological University\n\n", "Weihao Xuan \nWaseda University\n\n", "Ruijie Ren \nTechnical University of Denmark\n\n", "Kangcheng Liu \nNanyang Technological University\n\n", "Dayan Guan \nMohamed bin Zayed University of Artificial Intelligence\n\n", "Abdulmotaleb El Saddik \nMohamed bin Zayed University of Artificial Intelligence\n\n\nUniversity of Ottawa\n\n", "Shijian Lu \nNanyang Technological University\n\n", "Eric Xing \nMohamed bin Zayed University of Artificial Intelligence\n\n\nCarnegie Mellon University\n\n" ]
[ "Nanyang Technological University\n", "Nanyang Technological University\n", "Waseda University\n", "Technical University of Denmark\n", "Nanyang Technological University\n", "Mohamed bin Zayed University of Artificial Intelligence\n", "Mohamed bin Zayed University of Artificial Intelligence\n", "University of Ottawa\n", "Nanyang Technological University\n", "Mohamed bin Zayed University of Artificial Intelligence\n", "Carnegie Mellon University\n" ]
[]
(a) A LiDAR scan captured on a snowy day (b) Point-level annotations
Figure 1. We introduce SemanticSTF, an adverse-weather LiDAR point cloud dataset with dense point-level annotations that can be exploited for the study of point cloud semantic segmentation under all-weather conditions (including fog, snow, and rain). The graph on the left shows one scan sample captured on a snowy day, and the one on the right shows the corresponding point-level annotations.
Abstract
Robust point cloud parsing under all-weather conditions is crucial to level-5 autonomy in autonomous driving. However, how to learn a universal 3D semantic segmentation (3DSS) model is largely neglected, as most existing benchmarks are dominated by point clouds captured under normal weather. We introduce SemanticSTF, an adverse-weather point cloud dataset that provides dense point-level annotations and allows studying 3DSS under various adverse weather conditions. We study all-weather 3DSS modeling under two setups: 1) domain adaptive 3DSS that adapts from normal-weather data to adverse-weather data; 2) domain generalizable 3DSS that learns all-weather 3DSS models from normal-weather data. Our studies reveal the challenges existing 3DSS methods encounter with adverse-weather data, showing the great value of SemanticSTF in steering future endeavors along this very meaningful research direction. In addition, we design a domain randomization technique that alternately randomizes the geometry styles of point clouds and aggregates their embeddings, ultimately leading to a generalizable model that can effectively improve 3DSS under various adverse weather. The SemanticSTF dataset and related codes are available at https://github.com/xiaoaoran/SemanticSTF.
10.48550/arxiv.2304.00690
[ "https://export.arxiv.org/pdf/2304.00690v1.pdf" ]
257,912,745
2304.00690
7124f495399759ce089e6637dc48e073e9d168aa
3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds
Aoran Xiao, Nanyang Technological University; Jiaxing Huang, Nanyang Technological University; Weihao Xuan, Waseda University; Ruijie Ren, Technical University of Denmark; Kangcheng Liu, Nanyang Technological University; Dayan Guan, Mohamed bin Zayed University of Artificial Intelligence; Abdulmotaleb El Saddik, Mohamed bin Zayed University of Artificial Intelligence and University of Ottawa; Shijian Lu, Nanyang Technological University; Eric Xing, Mohamed bin Zayed University of Artificial Intelligence and Carnegie Mellon University
† Corresponding author
(a) A LiDAR scan captured on a snowy day (b) Point-level annotations
Figure 1. We introduce SemanticSTF, an adverse-weather LiDAR point cloud dataset with dense point-level annotations that can be exploited for the study of point cloud semantic segmentation under all-weather conditions (including fog, snow, and rain). The graph on the left shows one scan sample captured on a snowy day, and the one on the right shows the corresponding point-level annotations.
Abstract
Robust point cloud parsing under all-weather conditions is crucial to level-5 autonomy in autonomous driving. However, how to learn a universal 3D semantic segmentation (3DSS) model is largely neglected, as most existing benchmarks are dominated by point clouds captured under normal weather. We introduce SemanticSTF, an adverse-weather point cloud dataset that provides dense point-level annotations and allows studying 3DSS under various adverse weather conditions. We study all-weather 3DSS modeling under two setups: 1) domain adaptive 3DSS that adapts from normal-weather data to adverse-weather data; 2) domain generalizable 3DSS that learns all-weather 3DSS models from normal-weather data.
Our studies reveal the challenges existing 3DSS methods face when encountering adverse-weather data, showing the great value of SemanticSTF in steering future endeavors along this very meaningful research direction. In addition, we design a domain randomization technique that alternately randomizes the geometry styles of point clouds and aggregates their embeddings, ultimately leading to a generalizable model that improves 3DSS under various adverse weather conditions effectively. SemanticSTF and related code are available at https://github.com/xiaoaoran/SemanticSTF.

Introduction

3D LiDAR point clouds play an essential role in semantic scene understanding in various applications such as self-driving vehicles and autonomous drones. With the recent advance of LiDAR sensors, several LiDAR point cloud datasets [2, 11, 51] such as SemanticKITTI [2] have been proposed, which greatly advanced research in 3D semantic segmentation (3DSS) [19, 43, 64] for the task of point cloud parsing. As of today, most existing point cloud datasets for outdoor scenes are dominated by point clouds captured under normal weather. However, 3D vision applications such as autonomous driving require reliable 3D perception under all-weather conditions, including various adverse weather such as fog, snow, and rain. How to learn a weather-tolerant 3DSS model is largely neglected due to the absence of related benchmark datasets.

Although several studies [3, 34] attempt to include adverse weather conditions in point cloud datasets, such as the STF dataset [3] that consists of LiDAR point clouds captured under various adverse weather, these efforts focus on object detection benchmarks and do not provide any point-wise annotations, which are critical in tasks such as 3D semantic and instance segmentation.
To address this gap, we introduce SemanticSTF, an adverse-weather point cloud dataset that extends the STF Detection Benchmark by providing point-wise annotations of 21 semantic categories, as illustrated in Fig. 1. Similar to STF, SemanticSTF captures four typical adverse weather conditions that are frequently encountered in autonomous driving: dense fog, light fog, snow, and rain. SemanticSTF provides a great benchmark for the study of 3DSS and robust point cloud parsing under adverse weather conditions. Beyond serving as a well-suited test bed for examining existing fully-supervised 3DSS methods on adverse-weather point cloud data, SemanticSTF can be further exploited to study two valuable weather-tolerant 3DSS scenarios: 1) domain adaptive 3DSS that adapts from normal-weather data to adverse-weather data, and 2) domain generalizable 3DSS that learns all-weather 3DSS models from normal-weather data. Our studies reveal the challenges faced by existing 3DSS methods while processing adverse-weather point cloud data, highlighting the significant value of SemanticSTF in guiding future research efforts along this meaningful research direction.

In addition, we design PointDR, a new baseline framework for the future study and benchmarking of all-weather 3DSS. Our objective is to learn robust 3D representations that reliably represent points of the same category across different weather conditions while remaining discriminative across categories. However, robust all-weather 3DSS poses two major challenges: 1) LiDAR point clouds are typically sparse, incomplete, and subject to substantial geometric variations and semantic ambiguity. These challenges are further exacerbated under adverse weather conditions, with many missing points and geometric distortions due to fog, snow cover, etc. 2) More noise is introduced under adverse weather due to snowflakes, rain droplets, etc.
PointDR addresses these challenges with two iterative operations: 1) geometry style randomization, which expands the geometry distribution of point clouds under various spatial augmentations; 2) embedding aggregation, which introduces contrastive learning to aggregate the encoded embeddings of the randomly augmented point clouds. Despite its simplicity, extensive experiments over point clouds of different adverse weather conditions show that PointDR achieves superior 3DSS generalization performance.

The contribution of this work can be summarized in three major aspects. First, we introduce SemanticSTF, a large-scale adverse-weather point cloud benchmark that provides high-quality point-wise annotations of 21 semantic categories. Second, we design PointDR, a point cloud domain randomization baseline that can be exploited for future study and benchmarking of 3DSS under all-weather conditions. Third, leveraging SemanticSTF, we benchmark existing 3DSS methods over two challenging tasks on domain adaptive 3DSS and domain generalized 3DSS. These benchmarking efforts lay a solid foundation for future research on this highly meaningful problem.

Related Works

3D semantic segmentation aims to assign point-wise semantic labels to point clouds. It has developed rapidly over the past few years, largely through the development of various deep neural networks (DNNs) such as standard convolutional networks for projection-based methods [9, 30, 48, 52, 61], multi-layer perceptron (MLP)-based networks [19, 35], 3D voxel convolution-based networks [7, 64], and hybrid networks [6, 27, 43, 53, 59]. While existing 3DSS networks are mainly evaluated over normal-weather point clouds, their performance on adverse-weather point clouds is far under-investigated. The proposed SemanticSTF closes this gap and provides solid ground for the study and evaluation of all-weather 3DSS.
By enabling investigations into various new research directions, SemanticSTF represents a valuable tool for advancing the field.

Vision recognition under adverse conditions. Scene understanding under adverse conditions has recently attracted increasing attention due to the strict safety demands of various outdoor navigation and perception tasks. In 2D vision, several large-scale datasets have been proposed to investigate perception tasks under adverse visual conditions, including localization [29], detection [58], and segmentation [37]. On the other hand, learning from 3D point clouds of adverse conditions is far under-explored due to the absence of comprehensive dataset benchmarks. Recently proposed datasets such as STF [3] and CADC [34] contain LiDAR point clouds captured under adverse weather conditions. However, these studies focus on the object detection task [15, 16] with bounding-box annotations, without providing any point-wise annotations. To the best of our knowledge, our SemanticSTF is the first large-scale dataset that consists of LiDAR point clouds in adverse weather conditions with high-quality dense annotations.

Several unsupervised domain adaptation (UDA) methods for point clouds [38, 50, 51, 57] directly work in the 3D space. However, these methods either work for synthetic-to-real UDA scenarios [48, 51] or normal-to-normal point cloud adaptation [57], ignoring normal-to-adverse adaptation, which is highly practical in real applications. Our SemanticSTF dataset fills this gap and will inspire the development of new algorithms for normal-to-adverse adaptation.

The SemanticSTF Dataset

Background

LiDAR sensors send out laser pulses and measure their flight time based on the echoes received from targets. The travel distance derived from the time-of-flight and the registered angular information (between the LiDAR sensor and the targets) can be combined to compute the 3D coordinates of the target surface, which form point clouds that capture the 3D shape of the targets.
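The range-plus-angles geometry described in the Background above can be made concrete with a small sketch. This is an illustration only, not the sensor's actual processing pipeline; the function name and the spherical-angle convention (azimuth measured in the x-y plane, elevation measured from it) are assumptions.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_point(tof_s, azimuth_deg, elevation_deg):
    """Convert one pulse's time-of-flight and beam angles to a 3D point.

    The pulse travels to the target and back, so range = c * t / 2.
    The two angles then place the echo on a sphere of that radius.
    """
    r = SPEED_OF_LIGHT * tof_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)
```

Scattering media such as fog or falling snow shorten or perturb the measured `tof_s`, which is exactly why adverse-weather point clouds exhibit range shifts and missing points.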
However, the active LiDAR pulse system can be easily affected by scattering media such as rain droplets and snow particles [10, 18, 33, 36], leading to shifts in measured distances, variation of echo intensity, missing points, etc. Hence, point clouds captured under adverse weather usually have a clear distribution discrepancy compared with those collected under normal weather, as illustrated in Fig. 1. However, existing 3DSS benchmarks are dominated by normal-weather point clouds, which are insufficient for the study of universal 3DSS under all-weather conditions. To this end, we propose SemanticSTF, a point-wise annotated large-scale adverse-weather dataset that can be explored for the study of 3DSS and point cloud parsing under various adverse weather conditions.

Data Selection and Split

We collect SemanticSTF by leveraging the STF benchmark [3], a multi-modal adverse-weather dataset that was jointly collected in Germany, Sweden, Denmark, and Finland. The data in STF have multiple modalities, including LiDAR point clouds, and they are collected under various adverse weather conditions such as snow and fog. However, STF provides bounding-box annotations only, for the study of 3D detection tasks. In SemanticSTF, we manually selected 2,076 scans captured by a Velodyne HDL64 S3D LiDAR sensor from STF that cover various adverse weather conditions, including 694 snowy, 637 dense-foggy, 631 light-foggy, and 114 rainy scans (all rainy LiDAR scans in STF). During the selection, we paid special attention to the geographical diversity of the point clouds, aiming to minimize data redundancy. We ignore the factor of daytime/nighttime since LiDAR sensors are robust to lighting conditions.

We split SemanticSTF into three parts: 1,326 full 3D scans for training, 250 for validation, and 500 for testing. All three splits have approximately the same proportion of LiDAR scans from the different adverse weather conditions.
Data Annotation

Point-wise annotation of LiDAR point clouds is an extremely laborious task due to several factors, such as 3D view changes, inconsistency between the point cloud display and human visual perception, sweeping occlusion, point sparsity, etc. However, point-wise annotation of adverse-weather point clouds is even more challenging due to two additional factors. First, the perceived distance shifts under adverse weather often lead to various geometry distortions in the collected points, which make them different from those collected under normal weather. This presents significant challenges for annotators, who must recognize various objects and assign a semantic label to each point. Second, LiDAR point clouds collected under adverse weather often contain a significant portion of invalid regions that consist of indiscernible semantic content (e.g., thick snow cover) that makes it difficult to identify the ground type. The existence of such invalid regions makes point-wise annotation even more challenging.

We designed a customized labeling pipeline to handle these annotation challenges while performing point-wise annotation of point clouds in SemanticSTF. Specifically, we first provide labeling instructions and demo annotations and train a team of professional annotators to provide point-wise annotations of a set of selected STF LiDAR scans. To achieve reliable, high-quality annotations, the annotators leverage the corresponding 2D camera images and Google Street View as extra references while identifying the category of each point in this initial annotation process. After that, the annotators cross-check their initial annotations to identify and correct labeling errors. At the final stage, we engaged professional third parties who provide another round of annotation inspection and correction.

Annotation of SemanticSTF is a highly laborious and time-consuming task.
For instance, while labeling downtown areas with the most complex scenery, it took an annotator an average of 4.3 hours to label a single LiDAR scan. Labeling a scan captured in relatively simpler scenery, such as a highway, also takes an average of 1.6 hours. In addition, 30-60 more minutes are required per scan for verification and correction by professional third parties. In total, annotating the entire SemanticSTF dataset took over 6,600 man-hours.

While annotating SemanticSTF, we adopted the same set of semantic classes as the widely-studied semantic segmentation benchmark SemanticKITTI [2]. Specifically, we annotate the 19 evaluation classes of SemanticKITTI, which encompass most traffic-related objects in autonomous driving scenes. Additionally, following [37], we label points with indiscernible semantic content caused by adverse weather (e.g., ground covered by snowdrifts) as invalid. Furthermore, we label points that do not belong to the 20 categories or are indistinguishable as ignored; these are not utilized in either training or evaluation. Detailed descriptions of each class can be found in the appendix.

Data Statistics

SemanticSTF consists of point-wise annotations of 21 semantic categories, and Fig. 2 shows the detailed statistics of the point-wise annotations. It can be seen that the classes road, sidewalk, building, vegetation, and terrain appear most frequently, whereas the classes motor, motorcyclist, and bicyclist have clearly lower occurrence frequency. Such class imbalance is largely attributed to the various object sizes and unbalanced distribution of object categories in transportation scenes, and it is also very common in many existing benchmarks. Overall, the statistics and distribution of different object categories are similar to those of other 2D and 3D semantic segmentation benchmarks such as Cityscapes [8], ACDC [37], and SemanticKITTI [2].
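Per-class occurrence statistics like those summarized in Fig. 2 can be derived directly from the point-wise labels. A minimal sketch (the label strings shown are placeholders, not the dataset's actual label IDs):

```python
from collections import Counter

def class_frequencies(scans):
    """Relative occurrence frequency of each class over annotated scans.

    `scans` is an iterable of per-point label sequences, one entry per
    LiDAR scan. Returns {class: fraction}, most frequent class first.
    """
    counts = Counter()
    for labels in scans:
        counts.update(labels)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.most_common()}
```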
To the best of our knowledge, SemanticSTF is the first large-scale adverse-weather 3DSS benchmark that provides high-quality point-wise annotations. Table 1 compares it with several existing point cloud datasets that have been widely adopted for the study of 3D detection and semantic segmentation. We can observe that existing datasets are either collected under normal weather conditions or collected for object detection studies with bounding-box annotations only. A 3DSS benchmark under adverse weather has been largely absent, mainly due to the great challenge of point-wise annotation of adverse-weather point clouds as described in the previous subsections. In this sense, SemanticSTF fills this gap by providing a large-scale benchmark and test bed that will be very useful for future research in universal 3DSS under all weather conditions.

Data illustration

Compared with normal-weather point clouds, point clouds captured under adverse weather exhibit four distinct properties: 1) Snow coverage and snowflakes under snowy weather introduce many white points (labeled as "invalid"), as illustrated in Fig. 3(a); the thick snow coverage may lead to object deformation as well; 2) Rainy conditions may cause specular reflection of laser signals from water on the ground and produce many noise points, as shown in Fig. 3(b); 3) Dense fog may greatly reduce the working range of LiDAR sensors, leading to a small spatial distribution of the collected LiDAR points, as illustrated in Fig. 3(c); 4) Point clouds under light fog have similar characteristics to normal-weather point clouds, as illustrated in Fig. 3(d).

Point Cloud Domain Randomization

Leveraging SemanticSTF, we explore domain generalization (DG) for semantic segmentation of LiDAR point clouds under all weather conditions. Specifically, we design PointDR, a domain randomization technique that helps to train a generalizable segmentation model from normal-weather point clouds that works well on the adverse-weather point clouds in SemanticSTF.
Problem Definition

Given labeled point clouds of a source domain S = {S_k = (x_k, y_k)}_{k=1}^{K}, where x represents a LiDAR point cloud scan and y denotes its point-wise semantic annotations, the goal of domain generalization is to learn a segmentation model F, using the source-domain data only, that performs well on point clouds from an unseen target domain T. We consider a 3D point cloud segmentation model F that consists of a feature extractor E and a classifier G. Note that under the domain generalization setup, target data are not accessed during training, as they could be hard or even impossible to acquire at the training stage.

Point Cloud Domain Randomization

Inspired by domain randomization studies in 2D computer vision research [44, 45], we explore how to employ domain randomization to learn domain generalizable models for point clouds. Specifically, we design PointDR, a point cloud randomization technique that consists of two complementary designs, geometry style randomization and embedding aggregation, as illustrated in Fig. 4.

Geometry style randomization aims to enrich the geometry styles and expand the distribution of the training point cloud data. Given a point cloud scan x as input, we apply weak and strong spatial augmentation to obtain two copies of x: a weak view x_w = A_W(x) and a strong view x_s = A_S(x). For the augmentation schemes of A_W, we follow existing supervised learning methods [43] and adopt simple random rotation and random scaling. For the augmentation schemes of A_S, we further adopt random dropout, random flipping, random noise perturbation, and random jittering on top of A_W to obtain a more diverse and complex copy of the input point cloud scan x.

Embedding aggregation aims to aggregate the encoded embeddings of randomized point clouds for learning domain-invariant representations. We adopt contrastive learning [17], as illustrated in Fig. 4.
Given the randomized point clouds x_w and x_s, we first feed them into the feature extractor E and a projector P (a two-layer MLP), which output normalized point feature embeddings f_w and f_s, respectively (f = P(E(x))). f_C^w ∈ R^{D×C} (D: feature dimension; C: number of semantic classes) is then derived by class-wise averaging of the feature embeddings f_w in a batch, and is stored in a memory bank B ∈ R^{D×C} that receives no backpropagation and is momentum-updated over iterations (i.e., B ← m × B + (1 − m) × f_C^w with a momentum coefficient m). Finally, we employ each point feature embedding f_i^s of the strong view f_s as a query and the feature embeddings in B as keys for contrastive learning, where the key sharing the same semantic class as the query is the positive key B_+ and the rest are negative keys. The contrastive loss is defined as

L_ct = (1/N) Σ_{i=1}^{N} −log [ exp(f_i^s · B_+ / τ) / Σ_{j=1}^{C} exp(f_i^s · B_j / τ) ]    (1)

where τ is a temperature hyper-parameter [49]. Note that there is no backpropagation for the "ignore" class in optimizing the contrastive loss.

Contrastive learning pulls point feature embeddings of the same class closer while pushing away point feature embeddings of different classes. Therefore, optimizing the proposed contrastive loss aggregates randomized point cloud features and learns perturbation-invariant representations, ultimately leading to a robust and generalizable segmentation model. The momentum-updated memory bank provides feature prototypes of each semantic class for more robust and stable contrastive learning.

Combining the contrastive loss in Eq. 1 with the supervised cross-entropy loss L_ce for weakly-augmented point clouds, the overall training objective of PointDR can be formulated as:

L_PointDR = L_ce + λ_ct L_ct    (2)

Evaluation of Semantic Segmentation

SemanticSTF can be adopted for benchmarking different learning setups and network architectures on point cloud segmentation.
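The PointDR training step described in the previous section — strong-view geometry randomization, the momentum-updated memory bank, and the contrastive loss of Eq. (1) — can be sketched at toy scale in plain Python. This is an illustrative re-implementation, not the authors' released code; the augmentation ranges follow the paper's appendix, and the 10% dropout rate is an arbitrary illustrative choice.

```python
import math
import random

def strong_augment(points, rng):
    """A_S sketch: random Z-rotation and scaling (as in A_W) plus random
    per-point dropout and coordinate jittering."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    scale = rng.uniform(0.95, 1.05)
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y, z in points:
        if rng.random() < 0.1:  # random dropout of this point
            continue
        jx, jy, jz = (rng.uniform(-0.05, 0.05) for _ in range(3))
        out.append((scale * (c * x - s * y) + jx,
                    scale * (s * x + c * y) + jy,
                    scale * z + jz))
    return out

def update_bank(bank, protos, m=0.99):
    """Momentum update B <- m * B + (1 - m) * f_C^w; no gradient flows into B."""
    return {cls: [m * b + (1 - m) * p for b, p in zip(bank[cls], protos[cls])]
            for cls in bank}

def contrastive_loss(feats, labels, bank, tau=0.07):
    """Eq. (1): each strong-view feature is a query; the bank entry of its
    class is the positive key B_+, all other entries are negative keys."""
    classes = sorted(bank)
    total = 0.0
    for f, y in zip(feats, labels):
        logits = [sum(a * b for a, b in zip(f, bank[cls])) / tau
                  for cls in classes]
        mx = max(logits)  # stabilized log-sum-exp denominator
        log_z = mx + math.log(sum(math.exp(l - mx) for l in logits))
        total += log_z - logits[classes.index(y)]
    return total / len(feats)
```

A training iteration would compute L_ce on the weak view, `contrastive_loss` on the strong view against the bank, combine them as in Eq. (2), and then call `update_bank` with the weak view's class-averaged features.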
We perform experiments over two typical learning setups: domain generalization and unsupervised domain adaptation. In addition, we evaluate several state-of-the-art point cloud segmentation networks to examine their generalization capabilities.

Domain Generalization

We first study domain generalizable point cloud segmentation. For DG, we can only access an annotated source domain during training, and the trained model is expected to generalize well to unseen target domains. Leveraging SemanticSTF, we build two DG benchmarks and examine how PointDR helps learn a universal 3DSS model that works under different weather conditions.

The first benchmark is SemanticKITTI [2] → SemanticSTF, where SemanticKITTI is a large-scale real-world 3DSS dataset collected under normal weather conditions. This benchmark serves as a solid testing ground for evaluating domain generalization performance from normal to adverse weather conditions. The second benchmark is SynLiDAR [51] → SemanticSTF, where SynLiDAR is a large-scale synthetic 3DSS dataset. The motivation for this benchmark is that learning a universal 3DSS model from synthetic point clouds that works well across adverse weather is of high research and application value, considering the challenges in point cloud collection and annotation. Note that this benchmark is more challenging, as the domain discrepancy comes from both the normal-to-adverse weather distribution shift and the synthetic-to-real distribution shift.

Setup. We use all 19 evaluation classes of SemanticKITTI in both domain generalization benchmarks. The category invalid in SemanticSTF is mapped to ignored since SemanticKITTI and SynLiDAR do not cover this category. We adopt MinkowskiNet [7] (with the TorchSparse library [43]) as the backbone model, a sparse convolutional network that provides state-of-the-art performance with decent efficiency.
We adopt the evaluation metrics of Intersection over Union (IoU) for each segmentation class and the mean IoU (mIoU) over all classes. All experiments are run on a single NVIDIA 2080Ti (11GB). More implementation details are provided in the appendix.

Baseline Methods. Since domain generalizable 3DSS is far under-explored, there are few existing baselines that can be directly adopted for benchmarking. We thus select two closely related approaches as baselines against which to evaluate the proposed PointDR. The first approach is data augmentation, for which we select three related augmentation methods: Dropout [39], which randomly drops out points to simulate missing LiDAR points in adverse weather; noise perturbation, which adds random points in the 3D space to simulate noise points introduced by particles like falling snow; and PolarMix [50], which mixes point clouds of different sources for augmentation. The second approach is to adapt 2D domain generalization methods to 3DSS, for which we select two 2D domain generalization methods: the widely studied MMD [26] and PCL.

We also evaluate the compared domain generalization methods over each individual adverse weather condition, as shown in Table 2. It can be observed that the three data augmentation methods work for data captured in rainy and snowy weather only. The 2D generalization method MMD shows clear effectiveness for point clouds under dense fog and rain, while PCL works for point clouds under rainy and snowy weather instead. We conjecture that the performance variations are largely attributed to the different properties of point clouds captured under different weather conditions. For example, more points are missing in rain, while object points often deform due to snow cover (more illustrations are provided in the appendix). Such data variations lead to different domain discrepancies across weather conditions, which further lead to different performances of the compared methods.
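The per-class IoU and mIoU metrics used throughout these evaluations can be computed from point-wise predictions as follows. This is a plain-Python sketch; real toolkits compute the same quantity from an accumulated confusion matrix.

```python
def iou_per_class(pred, gt, num_classes, ignore=-1):
    """Point-wise IoU per class: |pred ∩ gt| / |pred ∪ gt|.

    Points whose ground truth equals `ignore` are excluded entirely,
    matching the treatment of the 'ignored' label.
    """
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, g in zip(pred, gt):
        if g == ignore:
            continue
        if p == g:
            inter[g] += 1
            union[g] += 1
        else:  # the point counts toward the union of both classes
            union[p] += 1
            union[g] += 1
    return [i / u if u else float("nan") for i, u in zip(inter, union)]

def mean_iou(ious):
    """mIoU over classes that actually occur (NaN entries are skipped)."""
    present = [v for v in ious if v == v]  # NaN != NaN
    return sum(present) / len(present)
```

Skipping NaN entries in `mean_iou` mirrors how per-condition results must be averaged when a class is entirely absent from a validation split.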
As PointDR learns perturbation-tolerant representations, it works effectively across different adverse weather conditions. We also provide qualitative results; please refer to the appendix for details.

Ablation study. We study different PointDR designs to examine how they contribute to the overall generalization performance. As Table 3 shows, we report three models over the benchmark "SemanticKITTI → SemanticSTF": 1) Baseline, trained with L_ce only; 2) PointDR-CT, jointly trained with L_ce and L_ct without using the memory bank B; 3) the complete PointDR, trained with L_ce, L_ct, and the memory bank B. We evaluate the three models over the validation set of SemanticSTF, and Table 3 shows the experimental results. We can see that the Baseline performs poorly at 24.4% due to the clear domain discrepancy between point clouds of normal and adverse weather. Leveraging the proposed contrastive loss, L_ct achieves clearly better performance at 27.4%, indicating that learning perturbation invariance is helpful for universal LiDAR segmentation under all-weather conditions. On top of that, introducing the momentum-updated memory bank B further improves the segmentation performance to 28.6%. This is because the feature embeddings in B serve as class prototypes that help the optimization of the segmentation network, finally leading to more robust 3DSS representations that perform better on adverse-weather point clouds.

Domain Adaptation

We also study SemanticSTF over a domain adaptive point cloud segmentation benchmark, SemanticKITTI → SemanticSTF. Specifically, we select four representative UDA methods, ADDA [46], entropy minimization (Ent-Min) [47], self-training [65], and CoSMix [38], for adaptation from the source SemanticKITTI [2] toward the target SemanticSTF. Following the state-of-the-art [38, 50, 51] on synthetic-to-real adaptation, we adopt MinkowskiNet [7] as the segmentation backbone for all compared methods.
Table 4 shows the experimental results over the validation set of SemanticSTF. We can see that all UDA methods outperform Source-only consistently under the normal-to-adverse adaptation setup. On the other hand, the performance gains are still quite limited, showing the large room for improvement in domain adaptive 3DSS from normal to adverse weather conditions.

In addition, we examined the adaptability of the four UDA methods to each individual adverse weather condition. Specifically, we trained each of the four methods for adaptation from SemanticKITTI to the SemanticSTF data of each adverse weather condition. Table 5 shows the experimental results over the validation set of SemanticSTF. We can see that all four methods outperform the Source-only method under dense fog and light fog, demonstrating their effectiveness in mitigating domain discrepancies. However, for rain and snow, only CoSMix achieved marginal performance gains, while the other three UDA methods achieved limited performance improvements. We conjecture that snow and rain introduce large deformations on object surfaces or much noise, making adaptation from normal to adverse weather more challenging. CoSMix works in the input space by directly mixing source and target points, allowing it to perform better under heavy snow and rain, which have larger domain gaps. However, all methods achieved relatively low segmentation performance, indicating the significance of our research and the large room for improvement on our constructed benchmarks.

Network Models vs All-Weather 3DSS

We also study how different 3DSS network architectures generalize when they are trained with normal-weather point clouds and evaluated over SemanticSTF. Specifically, we select five representative 3DSS networks [9, 19, 43, 64] that have been widely adopted in 3D LiDAR segmentation studies.
In the experiments, each selected network is first pre-trained with SemanticKITTI [2] and then evaluated over the validation set of SemanticSTF. We directly use the officially released code and pre-trained weights for evaluation. Table 6 shows the experimental results. We can observe that the five pre-trained models perform very differently, though they all achieve superior segmentation over SemanticKITTI. Specifically, RandLA-Net [19], SPVCNN [43], and SPVNAS [43] perform clearly better than SalsaNext [9] and Cylinder3D [64]. In addition, none of the five pre-trained models performs well, verifying the clear domain discrepancy between point clouds of normal and adverse weather conditions. The experiments further indicate the great value of SemanticSTF in the future exploration of robust point cloud parsing under all weather conditions. In addition, the supervised performance of these 3DSS networks over SemanticSTF is provided in the appendix.

Conclusion and Outlook

This paper presents SemanticSTF, a large-scale dataset and benchmark suite for semantic segmentation of LiDAR point clouds under adverse weather conditions.

Appendix

We provide more experiment details of domain generalization and domain adaptation in Section A and Section B, respectively, supervised learning on adverse conditions in Section C, and additional details on the SemanticSTF dataset in Section D.

A. Domain generalization

A.1. Implementation details

We provide the detailed training configurations for semantic segmentation of LiDAR point clouds, as described in Sec. 5.1 of the submitted paper. Specifically, we implement the backbone model MinkowskiNet [7] with the TorchSparse library [42]. For training, we use the SGD optimizer. The learning rate, momentum, and weight decay are set as 0.24, 0.9, and 1.4e-4, respectively. τ in Eq. 1 of the paper is set as 0.07 [17, 49], and λ_ct in Eq. 2 is set as 0.1. The momentum coefficient m is set at 0.99. We train for 50 epochs on one NVIDIA 2080Ti with 11GB GPU memory and set the batch size at 4.
The augmentations of the source-domain training data are implemented as follows. For rotation, LiDAR points are rotated within the range [0°, 360°] along the Z axis. For scaling, the coordinates of LiDAR points are randomly scaled within [0.95, 1.05]. For dropout, we randomly drop out 0-20% of the points of each input LiDAR scan with a probability of 0.5. For noise perturbation, 0-2,000 random points are added into the 3D space of each LiDAR scan with a probability of 0.5. For flipping, we randomly flip the coordinates of LiDAR point clouds along the x or y axis with a probability of 0.5. For jittering, random coordinate shifts in the range of [-0.05, 0.05] meters are added to LiDAR points with a probability of 0.5.

In training the oracle model, we employ the SGD optimizer with an initial learning rate of 0.1, momentum of 0.9, weight decay of 1.0e-4, and dampening of 0.1. We train the segmentation model for 500 epochs using a single NVIDIA 2080Ti with 11GB GPU memory. The batch size is set as 4. We use the Poly learning rate policy with power = 0.9. For data augmentation, we follow [43] and adopt random rotation ([-π, π]) and scaling ([0.95, 1.05]). We also adopt PolarMix [50] with the following parameter settings: rotation angles along the Z axis, denoted as Ω, are randomly sampled from a normal distribution with mean µ = 0 and standard deviation σ = (2/3)π. We keep the original instance classes for rotate-pasting in PolarMix.

A.2. Evaluation of individual adverse weather conditions

We noticed that for certain individual adverse weather conditions, some classes have no data captured in the validation set of SemanticSTF. Specifically, there are no points of bicycle and motorcycle in the validation set of dense fog, no points of bicyclist and motorcyclist in the validation set of snow, and no bicycle and motorcyclist in the validation set of rain.
This is reasonable, as the LiDAR data of SemanticSTF were collected in European countries (Germany, Sweden, Denmark, and Finland) where motorcycles are not widely used, partly for environmental reasons. In addition, people usually do not ride bicycles or motorcycles in adverse weather conditions. As a result, the classes motor, motorcyclist, and bicyclist have extremely low occurrence frequency, leading to the absence of these classes in the validation set of SemanticSTF under the relevant weather conditions. Tables 7, 8, 9, and 10 present the corresponding class-level IoU performance for each adverse weather condition in Table 3 of the submitted paper.

A.3. Ablation study

Data augmentation. We study how data augmentation techniques affect generalized semantic segmentation of point clouds (3DSS) and compare them with the proposed PointDR. As Table 11 shows, we report seven models over the benchmark "SemanticKITTI → SemanticSTF": 1) the Baseline, a source-only model trained on the training data of SemanticKITTI; 2) drop-out, noise perturbation, flipping, and jittering, segmentation models with the respective augmentation technique applied to the input data, and All, the model that combines all of these augmentation techniques; 3) our proposed PointDR. We can see that implementing each of these augmentation techniques improves the generalization capability of the segmentation model clearly and consistently. However, the combination of them all did not yield the best segmentation performance, largely because the combination brings too many distortions to the input point clouds. On the contrary, the proposed PointDR achieves the best segmentation performance, indicating its superior ability to learn universal representations for all-weather 3DSS.

Parameter analysis. We examine the parameter λ_ct in Eq. 2 of the paper, which balances the cross-entropy loss and the contrastive loss.
As Table 12 shows, optimizing the proposed contrastive loss improves segmentation performance consistently, while different λ_cl values produce quite different mIoUs. The best mIoU is obtained when λ_cl = 0.10.

In Tables 4 and 5 of the paper, we examine state-of-the-art UDA methods over the proposed normal-to-adverse UDA scenario. Specifically, we selected typical UDA methods from the popular synthetic-to-real UDA benchmark [38,51] as the baseline methods, as described in Section 5.2 of the paper. We adopt MinkowskiNet [7] as the segmentation model, as in synthetic-to-real UDA. When implementing ADDA [46], entropy minimization [47], and self-training [65], we follow the same implementation and training configurations as the synthetic-to-real UDA [51] and leverage the TorchSparse library [42] (version 1.1.0) built on the PyTorch [32] library. For CoSMix [38], we use the officially released code based on MinkowskiEngine with the default training parameters for adaptation. We report the mIoU of the covered classes for individual adverse weather conditions in Table 5.

B.2. Detailed class-level results

In Tables 14, 15, 16, and 17 below, we present the class-level IoU performance for the UDA methods that are examined in the setting of adaptation to individual conditions in Table 5 of the paper.

C. Supervised learning on adverse conditions

We use SemanticSTF to train five state-of-the-art 3DSS models in a supervised manner and report their segmentation performance in Table 18. Specifically, we use their officially released code and default training configurations for model training. We can see that these state-of-the-art models achieve much lower segmentation performance over SemanticSTF than over SemanticKITTI. The results indicate that SemanticSTF is a more challenging benchmark for supervised methods due to its diverse data distribution and hard geometric domains.
In addition, comparing Table 18 with Table 6 of the paper, we notice that the rankings of the supervised and the pre-trained 3DSS models are not well aligned, indicating that the ability of supervised representation learning may not be highly correlated with generalization ability. We also notice that the state-of-the-art network Cylinder3D [64] achieves much lower segmentation performance over SemanticSTF than over SemanticKITTI. This could be due to two major factors: 1) the design of Cylinder3D is sensitive to the complicated and noisy point cloud geometries introduced by various adverse weather conditions; 2) Cylinder3D is sensitive to training parameters, and the default training configurations for SemanticKITTI do not work well for SemanticSTF. The results further demonstrate the importance of studying universal 3DSS as well as the value of the proposed SemanticSTF dataset in steering future research along this meaningful direction.

D. Additional Details on SemanticSTF Dataset

D.1. Annotation

In this section, we explain the implementation of our point cloud labeling in more detail. We leveraged a professional labeling program that provides multiple annotation tools such as a brush, a polygon tool, and a bounding volume tool, as well as different filtering methods for hiding labeled points or selected labels. Corresponding 2D images are displayed to assist labeling. The program also supports cross-checking and correction as illustrated in the main paper. Fig. 5 shows the interface of our point cloud annotation program.

D.2. Semantic class definition

In the process of labeling such challenging data, we had to decide at some point which classes we wanted to annotate. In general, we followed the class definitions of the SemanticKITTI [2] and ACDC [37] datasets, but made some simplifications and adjustments for the data source used.
The annotated classes with their respective definitions are listed in Table 19.

bicyclist: Humans driving a bicycle or standing in close range to a bicycle (within arm reach).
motorcyclist: Humans driving a motorcycle or standing in close range to a motorcycle (within arm reach).
invalid: Indiscernible semantic contents caused by adverse weather, such as points of thick snow cover, falling snow or rain droplets, and the splash from the rear of moving vehicles when driving on snowy or wet roads.

Figure 2. Number of annotated points per class in SemanticSTF.

Fig. 3 provides examples of point cloud scans captured under adverse weather conditions in SemanticSTF (row 1) as well as the corresponding annotations (row 2).

Figure 3. Examples of LiDAR point cloud scans captured under different adverse weather including snow, rain, dense fog, and light fog (first row) and the corresponding dense annotations in SemanticSTF (second row).

Figure 4. The framework of our point cloud randomization method (PointDR): geometry style randomization creates different point cloud views with various spatial perturbations, while embedding aggregation encourages the feature extractor to aggregate randomized point embeddings to learn perturbation-invariant representations, ultimately leading to a generalizable segmentation model.

Table 18. Comparison of state-of-the-art 3DSS methods (trained in a supervised manner) over the test set of SemanticSTF.

Figure 5. The interface of the point cloud labeling program for annotating SemanticSTF.

pole: Lamp posts, the poles of traffic signs and traffic lights.
traffic sign: Traffic signs excluding their mounting.

Domain generalization [4,31] aims to learn a generalizable model from single or multiple related but distinct source domains where target data is inaccessible during model learning. It has been widely studied in 2D computer vision tasks [1,21,26,63] while few studies explore it in point cloud learning.
Recently, [25] studied domain generalization for 3D object detection by deforming point clouds via vector fields. In contrast, this work is the first attempt to explore domain generalization for 3DSS. Unsupervised domain adaptation transfers knowledge learned from a labeled source domain to a target domain by leveraging unlabeled target data. It has been widely studied for 2D images [12,14,20,22-24] and 3D point clouds [15,16,28,40,54,55,60]. Recently, domain adaptive 3D LiDAR segmentation has drawn increasing attention due to the challenge of point-wise annotation. Different UDA approaches have been designed to mitigate discrepancies across LiDAR point clouds of different domains. For example, [48,62] project point clouds into depth images and leverage 2D UDA techniques.

Table 2. Experiments on domain generalization with SemanticKITTI [2] or SynLiDAR [51] as source and SemanticSTF as target.

and the recently proposed PCL [56].

Results. Table 2 shows experimental results over the validation set of SemanticSTF.

Method       Lce  Lct  B    mIoU
Baseline     ✓              24.4
PointDR-CT   ✓    ✓         27.4
PointDR      ✓    ✓    ✓    28.6

Table 3. Ablation study of PointDR over the domain generalized segmentation task SemanticKITTI→SemanticSTF.

For both benchmarks, the Baseline is a source-only model trained on the training data of SemanticKITTI or SynLiDAR. We can see that the Baseline achieves very low mIoU when evaluated over the validation set of SemanticSTF, indicating the large domain discrepancy between point clouds of normal and adverse weather conditions. In addition, all three data augmentation methods improve model generalization consistently, but the performance gains are limited, especially for the challenging benchmark SynLiDAR→SemanticSTF. The two 2D generalization methods both help SemanticKITTI→SemanticSTF clearly but show very limited improvement over SynLiDAR→SemanticSTF.
The proposed PointDR achieves the best generalization consistently across both benchmarks, demonstrating its superior capability to learn perturbation-invariant point cloud representations and its effectiveness in handling all-weather 3DSS tasks.

Table 4. Comparison of state-of-the-art domain adaptation methods on SemanticKITTI→SemanticSTF adaptation. SemanticKITTI serves as the source domain and the entire SemanticSTF including all four weather conditions serves as the target domain.

Method              Dense-fog  Light-fog  Rain  Snow
Source-Only         26.9       25.2       27.7  23.5
ADDA [46]           31.5       27.9       27.4  23.4
Ent-Min [47]        31.4       28.6       30.3  24.9
Self-training [65]  31.8       29.3       27.9  25.1
CoSMix [38]         31.6       30.3       33.1  32.9

Table 5. Comparison of state-of-the-art domain adaptation methods on SemanticKITTI→SemanticSTF adaptation for individual adverse weather conditions. We train a separate model for each weather-specific subset of SemanticSTF and evaluate the trained model on the weather condition it has been trained for.

We also design PointDR, a domain randomization technique that aims to use normal-weather point clouds to train a domain generalizable 3DSS model that works well over adverse-weather point clouds. PointDR consists of two novel designs, geometry style randomization and embedding aggregation, which jointly learn perturbation-invariant representations that generalize well to various new point-cloud domains. Extensive experiments show that PointDR achieves superior point cloud segmentation performance compared with the state-of-the-art.

This study is funded by the Ministry of Education Singapore under the Tier-1 scheme with project number RG18/22. It is also supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from Singapore Telecommunications Limited (Singtel), through the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU).
Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621-11631, 2020. 4
[6] Ran Cheng, Ryan Razani, Ehsan Taghavi, Enxu Li, and
Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729-9738, 2020. 5, 12
[18] Robin Heinzler, Philipp Schindler, Jürgen Seekircher, Werner Ritter, and Wilhelm Stork.
Jiaxing Huang, Dayan Guan, Aoran Xiao, and Shijian Lu.
Aoran Xiao, Xiaofei Yang, Shijian Lu, Dayan Guan, and Jiaxing Huang. Fps-net: A convolutional fusion network for large-scale lidar point cloud segmentation. ISPRS Journal of
Qiangeng Xu, Yin Zhou, Weiyue Wang, Charles R Qi, and Dragomir Anguelov. Spg: Unsupervised domain adaptation for 3d object detection via semantic point generation. In Proceedings of the IEEE/CVF International Conference on
Feihu Zhang, Jin Fang, Benjamin Wah, and Philip Torr. Deep fusionnet for point cloud semantic segmentation. In
Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence,

3DSS Model       D-fog  L-fog  Rain  Snow  All
RandLA-Net [19]  26.5   26.0   25.1  22.7  25.3
SalsaNext [9]    16.0   9.6    7.8   3.5   9.1
SPVCNN [43]      30.4   22.8   21.7  18.3  22.4
SPVNAS [43]      25.5   18.3   17.0  13.0  18.0
Cylinder3D [64]  14.8   7.4    5.7   4.0   7.3

Table 6. Performance of state-of-the-art 3DSS models that are pre-trained over SemanticKITTI and tested on the validation set of SemanticSTF for individual weather conditions and jointly for all weather conditions.
point clouds under adverse weather conditions. SemanticSTF provides high-quality point-level annotations for point clouds captured under adverse weather including dense fog, light fog, snow, and rain. Extensive studies have been conducted to examine how state-of-the-art 3DSS methods perform over SemanticSTF, demonstrating its significance in directing future research on domain adaptive and domain generalizable 3DSS under all-weather conditions. We also design PointDR, a domain randomization technique.

Acknowledgement

References

[1] Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. Metareg: Towards domain generalization using meta-regularization. Advances in neural information processing systems, 31, 2018. 2
[2] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, and Jurgen Gall. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9297-9307, 2019. 1, 3, 4, 6, 7, 8, 16
[3] Mario Bijelic, Tobias Gruber, Fahim Mannan, Florian Kraus, Werner Ritter, Klaus Dietmayer, and Felix Heide. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11682-11692, 2020. 1, 2, 3, 4
[4] Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. Advances in neural information processing systems, 24, 2011. 2
[5] Bingbing Liu. 2-s3net: Attentive feature fusion with adaptive feature selection for sparse semantic segmentation network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12547-12556, 2021. 2
[7] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3075-3084, 2019. 2, 6, 7, 12, 14, 15
[8] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213-3223, 2016. 4
[9] Tiago Cortinhal, George Tzelepis, and Eren Erdal Aksoy. Salsanext: Fast, uncertainty-aware semantic segmentation of lidar point clouds. In International Symposium on Visual Computing, pages 207-222. Springer, 2020. 2, 8, 15
[10] A Filgueira, H González-Jorge, Susana Lagüela, L Díaz-Vilariño, and Pedro Arias. Quantifying the influence of rain in lidar performance. Measurement, 95:143-148, 2017. 3
[11] Whye Kit Fong, Rohit Mohan, Juana Valeria Hurtado, Lubing Zhou, Holger Caesar, Oscar Beijbom, and Abhinav Valada. Panoptic nuscenes: A large-scale benchmark for lidar panoptic segmentation and tracking. IEEE Robotics and Automation Letters, 7(2):3795-3802, 2022. 1, 4
[12] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180-1189. PMLR, 2015. 2
[13] The International Journal of Robotics Research, 32(11):1231-1237, 2013. 4
[14] Dayan Guan, Jiaxing Huang, Aoran Xiao, and Shijian Lu. Domain adaptive video segmentation via temporal consistency regularization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8053-8064, 2021. 2
[15] Martin Hahner, Christos Sakaridis, Mario Bijelic, Felix Heide, Fisher Yu, Dengxin Dai, and Luc Van Gool. Lidar snowfall simulation for robust 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16364-16374, 2022. 2
[16] Martin Hahner, Christos Sakaridis, Dengxin Dai, and Luc Van Gool.
Fog simulation on real lidar point clouds for 3d object detection in adverse weather. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15283-15292, 2021. 2
[17] Weather influence and classification with automotive lidar sensors. In 2019 IEEE intelligent vehicles symposium (IV), pages 1527-1534. IEEE, 2019. 3
[19] Qingyong Hu, Bo Yang, Linhai Xie, Stefano Rosa, Yulan Guo, Zhihua Wang, Niki Trigoni, and Andrew Markham. Randla-net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11108-11117, 2020. 1, 2, 8, 15
[20] Cross-view regularization for domain adaptive panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10133-10144, 2021. 2
[21] Jiaxing Huang, Dayan Guan, Aoran Xiao, and Shijian Lu. Fsdr: Frequency space domain randomization for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6891-6902, 2021. 2
[22] Jiaxing Huang, Dayan Guan, Aoran Xiao, and Shijian Lu. Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data. Advances in Neural Information Processing Systems, 34:3635-3649, 2021. 2
[23] Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu, and Ling Shao. Category contrast for unsupervised domain adaptation in visual tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1203-1214, 2022. 2
[47] Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, and Patrick Pérez. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2517-2526, 2019. 7, 8, 14, 15
[48] Bichen Wu, Xuanyu Zhou, Sicheng Zhao, Xiangyu Yue, and Kurt Keutzer.
Squeezesegv2: Improved model structure and unsupervised domain adaptation for road-object segmentation from a lidar point cloud. In 2019 International Conference on Robotics and Automation (ICRA), pages 4376-4382. IEEE, 2019. 2, 3
[49] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3733-3742, 2018. 5, 12
[50] Aoran Xiao, Jiaxing Huang, Dayan Guan, Kaiwen Cui, Shijian Lu, and Ling Shao. Polarmix: A general data augmentation technique for lidar point clouds. NeurIPS, 2022. 3, 6, 7, 12, 13, 14
[51] Aoran Xiao, Jiaxing Huang, Dayan Guan, Fangneng Zhan, and Shijian Lu. Transfer learning from synthetic to real lidar point cloud for semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2795-2803, 2022. 1, 3, 4, 6, 7, 14
[52] Photogrammetry and Remote Sensing, 176:237-249, 2021. 2
[53] Jianyun Xu, Ruixiang Zhang, Jian Dou, Yushi Zhu, Jie Sun, and Shiliang Pu. Rpvnet: A deep and efficient range-point-voxel fusion network for lidar point cloud segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16024-16033, 2021. 2
[54] Computer Vision, pages 15446-15456, 2021. 2
[55] Jihan Yang, Shaoshuai Shi, Zhe Wang, Hongsheng Li, and Xiaojuan Qi. St3d: Self-training for unsupervised domain adaptation on 3d object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10368-10378, 2021. 2
[56] Xufeng Yao, Yang Bai, Xinyun Zhang, Yuechen Zhang, Qi Sun, Ran Chen, Ruiyu Li, and Bei Yu. Pcl: Proxy-based contrastive learning for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7097-7107, 2022. 6, 13, 14
[57] Li Yi, Boqing Gong, and Thomas Funkhouser.
Complete & label: A domain adaptation approach to semantic segmentation of lidar point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15363-15373, 2021. 3
[58] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2636-2645, 2020. 2
[59] European Conference on Computer Vision, pages 644-663. Springer, 2020. 2
[60] Weichen Zhang, Wen Li, and Dong Xu. Srdan: Scale-aware and range-aware domain adaptation network for cross-dataset 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6769-6779, 2021. 2
[61] Yang Zhang, Zixiang Zhou, Philip David, Xiangyu Yue, Zerong Xi, Boqing Gong, and Hassan Foroosh. Polarnet: An improved grid representation for online lidar point clouds semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9601-9610, 2020. 2
[62] Sicheng Zhao, Yezhen Wang, Bo Li, Bichen Wu, Yang Gao, Pengfei Xu, Trevor Darrell, and Kurt Keutzer. epointda: An end-to-end simulation-to-real domain adaptation framework for lidar point cloud segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 3500-3509, 2021. 2
[63] 2022. 2
[64] Xinge Zhu, Hui Zhou, Tai Wang, Fangzhou Hong, Yuexin Ma, Wei Li, Hongsheng Li, and Dahua Lin. Cylindrical and asymmetrical 3d convolution networks for lidar segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9939-9948, 2021. 1, 2, 8, 15
[65] Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. Confidence regularized self-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5982-5991, 2019.
7, 8, 14, 15 Table 13 13below shows segmentation performance with different momentum values (m) used for updating the memory bank B. It performs reasonably well when m is 0.98 or 0.99, showing that a slowly progressing memory bank is beneficial.Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf. SemanticKITTI→SemanticSTF(dense fog) Baseline 74.7 - - 7.8 0.0 6.4 8.9 0.0 72.2 0.6 33.8 0.0 59.6 48.7 56.9 27.4 56.4 27.2 21.1 Dropout [39] 67.5 - - 1.9 0.0 8.9 2.8 0.0 70.9 5.6 29.0 0.8 64.6 44.0 60.0 31.6 60.6 28.1 21.3 Perturbation 68.6 - - 8.8 0.0 6.0 0.0 0.0 66.6 14.8 24.3 0.1 52.2 43.5 60.1 19.4 54.1 16.3 11.5 PolarMix [50] 52.3 - - 17.2 0.0 3.6 0.0 19.3 75.2 0.0 28.7 0.6 62.4 49.5 60.5 29.0 55.4 20.8 30.7 MMD [26] 75.5 - - 0.3 0.0 4.2 0.0 0.0 75.4 11.2 33.6 0.5 64.8 51.7 64.7 26.1 62.3 23.0 23.0 PCL [56] 64.3 - - 11.7 0.0 0.6 0.0 0.0 72.4 3.8 31.3 0.8 63.1 46.5 65.7 19.4 64.3 18.5 28.9 PointDR (Ours) 69.2 - - 7.1 0.0 2.4 6.7 0.0 73.5 8.5 33.6 0.2 65.6 47.6 63.6 31.0 60.7 24.4 38.8 SynLiDAR→SemanticSTF(dense fog) Baseline 21.6 - - 6.4 0.0 3.7 2.9 18.9 25.7 0.0 7.7 1.0 41.2 22.5 52.3 15.4 55.5 9.3 2.4 Dropout [39] 12.7 - - 7.7 0.0 1.9 0.4 2.5 38.3 0.1 10.2 0.3 37.3 21.8 57.4 13.1 44.5 10.1 1.0 Perturbation 13.3 - - 10.4 0.0 4.3 2.8 19.1 30.0 0.7 8.8 1.2 30.5 17.5 48.9 18.4 50.3 16.3 5.2 PolarMix [50] 15.8 - - 10.6 0.0 1.5 1.7 3.5 27.7 0.0 9.9 0.3 46.2 28.9 59.2 13.5 49.5 4.4 1.7 MMD [26] 26.5 - - 12.7 0.0 2.7 4.0 22.3 30.6 0.0 9.4 0.0 31.6 21.7 52.6 13.9 54.3 8.9 2.5 PCL [56] 22.9 - - 20.1 0.0 2.2 6.2 28.3 29.0 0.0 9.2 2.6 37.9 22.9 54.5 11.4 45.9 8.5 1.1 PointDR (Ours) 42.5 - - 16.6 0.0 2.4 3.2 12.2 31.9 0.2 9.0 0.8 42.8 27.1 59.8 18.3 44.0 15.4 5.7 Table 7. Class-wise IoU on domain generalization with SemanticKITTI or SynLiDAR as the source and validation set of dense fog in SemanticSTF as the target. '-' represents no samples captured in dense fog in the validation set of SemanticSTF. 
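The momentum update of the memory bank B discussed above can be sketched as follows. This is a NumPy illustration under the common formulation B ← m·B + (1−m)·f with re-normalized prototypes, not the exact training code; function and variable names are ours:

```python
import numpy as np

def update_memory_bank(bank, feats, labels, m=0.99):
    """Momentum-update per-class prototypes in the memory bank.

    bank:   (C, D) array of class-wise feature prototypes.
    feats:  (N, D) array of point embeddings from the current batch.
    labels: (N,) class index per point.
    """
    for c in np.unique(labels):
        f = feats[labels == c].mean(axis=0)           # mean embedding of class c
        bank[c] = m * bank[c] + (1.0 - m) * f         # slow, momentum-based update
        bank[c] /= np.linalg.norm(bank[c]) + 1e-12    # keep prototypes unit-norm
    return bank
```

With m close to 1 the bank changes slowly and stays stable across batches; with m too large (e.g., 0.999) it lags behind the current embeddings, which matches the performance drop observed in Table 13.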
Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf.
SemanticKITTI→SemanticSTF (light fog)

Table 8. Class-wise IoU on domain generalization with SemanticKITTI or SynLiDAR as the source and the validation set of light fog in SemanticSTF as the target.

Table 10. Class-wise IoU on domain generalization with SemanticKITTI or SynLiDAR as the source and the validation set of snow in SemanticSTF as the target. '-' represents no samples captured in snow in the validation set of SemanticSTF.

Table 11. Comparison of data augmentation techniques and the proposed PointDR. PointDR performs clearly the best over the domain generalized segmentation task SemanticKITTI→SemanticSTF.

Table 12. Performance of PointDR models with different contrastive loss weight λ_cl in Eq. 2 of the paper.

However, when m is too large (at 0.999), the memory bank updates too slowly to capture the latest and representative feature embeddings, which fails to serve as the class-wise proxy and ultimately leads to a clear segmentation performance drop.

Table 13. Performance of PointDR models with different momentum update weight m for the memory bank B.

Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf.
SemanticKITTI→SemanticSTF(rain) Baseline 72.4 0.0 - 0.0 16.3 6.9 0.0 - 71.6 12.7 58.1 0.0 70.0 33.0 51.8 9.9 24.2 33.3 22.9 Dropout [39] 81.3 0.0 - 0.0 21.2 5.6 0.0 - 62.2 11.8 44.8 0.6 76.8 44.7 56.0 16.3 23.3 32.8 22.2 Perturbation 83.9 0.0 - 2.4 0.0 20.9 0.0 - 73.2 12.6 54.7 7.0 71.7 43.2 58.3 5.9 29.4 29.4 16.9 PolarMix [50] 56.7 4.0 - 9.1 1.5 29.8 0.0 - 68.2 10.9 50.2 0.5 73.2 47.2 48.3 17.8 22.3 32.3 14.1 MMD [26] 83.9 0.0 - 0.0 8.9 31.6 0.0 - 77.9 17.9 60.2 0.3 69.6 39.3 58.4 14.1 32.5 34.0 30.0 PCL [56] 84.2 0.0 - 0.0 0.1 4.3 0.0 - 68.1 10.9 55.5 4.6 74.7 43.9 59.6 5.8 27.3 34.2 38.8 PointDR (Ours) 78.0 0.0 - 0.0 13.8 20.0 0.0 - 72.1 14.7 60.0 1.2 76.1 36.9 58.0 18.3 24.7 36.1 32.5 SynLiDAR→SemanticSTF(rain) Baseline 45.8 4.5 - 6.8 0.4 38.9 0.0 - 32.0 0.0 24.3 0.0 43.0 8.0 33.8 11.3 23.9 11.5 7.7 Dropout [39] 47.0 7.6 - 7.7 0.0 34.0 0.0 - 47.3 6.9 34.6 0.0 39.8 11.5 37.5 13.8 29.6 21.6 8.6 Perturbation 57.5 5.3 - 18.2 0.0 36.3 0.1 - 37.1 1.5 26.9 0.3 34.9 10.4 32.6 12.2 20.5 23.2 10.4 PolarMix [50] 59.6 1.5 - 6.0 5.2 24.6 1.0 - 31.4 0.1 30.4 0.0 55.5 12.2 44.6 13.1 25.0 11.0 4.7 MMD [26] 49.5 4.8 - 20.0 4.7 37.6 0.0 - 43.7 0.0 32.4 0.0 42.1 11.3 34.4 12.3 25.1 13.4 8.1 PCL [56] 51.3 0.9 - 4.3 2.1 35.6 0.0 - 41.4 0.0 32.0 0.0 54.8 9.7 37.1 11.4 24.2 16.6 6.3 PointDR (Ours) 42.2 3.3 - 21.9 0.0 30.4 1.7 - 35.8 3.2 31.9 0.0 54.0 14.4 40.7 12.5 31.9 23.6 11.8 Table 9. Class-wise IoU on domain generalization with SemanticKITTI or SynLiDAR as the source and validation set of rain in Seman- ticSTF as the target. '-' represents no samples captured in rain in the validation set of SemanticSTF. of the paper.Table 14. Comparison of state-of-the-art domain adaptation methods on SemanticKITTI→SemanticSTF adaptation for dense fog. '-' represents no samples captured in dense fog in the validation set of SemanticSTF.Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf. 
Source-only 56.4 - -10.1 0.0 0.6 15.4 0.0 68.0 0.6 22.8 0.0 63.6 36.6 62.8 29.4 53.5 17.7 19.5 ADDA [46] 63.4 - -14.3 0.0 2.1 8.0 38.7 68.0 0.1 25.6 0.0 60.6 45.4 64.8 30.4 52.6 20.4 41.9 Ent-Min [47] 68.0 - - 4.9 0.0 1.9 7.6 0.0 74.8 0.0 39.4 0.0 68.8 50.5 61.0 28.3 63.3 22.7 43.2 Self-training [65] 68.2 - -24.4 0.0 5.4 4.8 0.0 70.9 0.3 31.3 0.0 65.9 46.7 59.2 31.6 55.4 22.5 43.7 CoSMix [38] 76.5 - -27.0 0.0 4.7 0.0 0.0 74.2 0.5 29.9 1.8 62.1 48.0 62.6 37.3 59.6 23.4 28.8 Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf. Table 15 . 15Comparison of state-of-the-art domain adaptation methods on SemanticKITTI→SemanticSTF adaptation for light fog.Table 16. Comparison of state-of-the-art domain adaptation methods on SemanticKITTI→SemanticSTF adaptation for rain. '-' represents no samples captured in rain in the validation set of SemanticSTF.Table 17. Comparison of state-of-the-art domain adaptation methods on SemanticKITTI→SemanticSTF adaptation for snow. '-' represents no samples captured in snow in the validation set of SemanticSTF.Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf. Source-only 69.4 0.0 -0.1 0.1 12.1 0.0 -72.9 9.7 54.5 0.0 73.7 31.2 55.2 16.2 21.4 33.9 18.8 ADDA [46] 71.8 0.0 -0.1 0.7 3.8 0.0 -71.9 9.2 51.5 0.0 67.8 35.6 53.6 17.8 25.7 32.0 24.2 Ent-Min [47] 78.4 0.0 -0.4 2.9 0.1 0.0 -80.3 10.1 57.9 0.0 78.0 47.1 53.8 13.0 24.1 35.8 33.8 Self-training [65] 69.4 0.0 -0.1 0.1 12.1 0.0 -72.9 9.7 54.5 0.0 73.7 31.2 55.2 16.2 21.4 33.9 18.8 CoSMix [38] 83.6 0.1 -2.1 11.8 47.9 0.0 -64.7 10.9 51.1 2.5 72.6 47.2 59.8 25.7 20.9 27.2 35.2 Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf. 
Source-only 70.7 0.0 0.0 15.4 1.6 5.1 - -49.8 8.9 36.6 0.0 67.1 26.3 30.7 28.1 22.1 26.8 9.6 ADDA [46] 69.3 0.0 0.0 14.0 0.8 2.9 - -55.3 1.3 35.7 0.0 67.2 26.7 37.5 30.1 21.2 25.4 11.0 Ent-Min [47] 73.8 0.0 15.4 19.8 1.4 2.9 - -53.6 1.6 32.9 0.0 73.4 28.5 34.1 28.8 21.7 26.6 8.8 Self-training [65] 73.9 0.0 6.1 16.9 5.2 7.7 - -53.9 6.2 34.3 0.0 69.3 27.7 33.7 29.8 19.5 26.9 16.0 CoSMix [38] 79.2 1.3 0.0 0.6 14.2 38.9 - -70.1 15.1 54.1 6.3 74.6 44.1 58.3 20.5 20.4 26.9 35.6 Methods car bi.cle mt.cle truck oth-v. pers. bi.clst mt.clst road parki. sidew. oth-g. build. fence veget. trunk terra. pole traf. invalid below. Drivable areas where cars could drive on including main road, bike lanes, and crossed areas on the street. Road curb is excluded. sidewalk Paths along sides of the road, used for pedestrians and bicycles, but cars are not allowed to drive on. Also include private driveways. parking Areas for parking and are clearly different from sidewalk and road. If unclear then other-ground or sidewalk can be selected. Garages are labeled as building instead of parking. other-ground Ground that excludes sidewalk, terrain, road, and parking. It includes (paved/plastered) traffic islands that are not meant for walking. construction building All building parts including walls, doors, windows, stairs, and garages, etc. fence Separators including wood or metal fences, small walls and crash barriers. vehicle car Different types of cars, including cars, jeeps, SUVs, and vans. truck Trucks, vans with a body that is separate from the driver cabin, pickup trucks, as well as their attached trailers. bicycle Including different types of bicycles, without any riders or pedestrians nearby. motorcycle Including different types of motorcycles, without any riders or pedestrians nearby. other-vehicle Other types of vehicles that do not belong to previously defined vehicle classes, such as various trailers, excavators, forklifts, and fallbacks. 
nature
vegetation: Including bushes, shrubs, foliage, treetop except for trunks, and other clearly identifiable vegetation.
trunk: The tree trunk is labeled as trunk separately from the treetop.
terrain: Mainly includes grass and soil.
human
person: Humans that are standing, walking, sitting, or in any other pose, but not driving any vehicle. Trolley cases, strollers, and pets nearby are excluded.
cat. class definition
flat road

Table 19. Definitions of semantic classes in SemanticSTF.

Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4893-4902, 2019. 2
Alexander Lehner, Stefano Gasperini, Alvaro Marcos-Ramiro, Michael Schmidt, Mohammad-Ali Nikouei Mahani, Nassir Navab, Benjamin Busam, and Federico Tombari. 3d-vfield: Adversarial augmentation of point clouds for domain generalization in 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17295-17304, 2022. 2
the IEEE conference on computer vision and pattern recognition1314Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5400-5409, 2018. 2, 6, 13, 14 Pointvoxel cnn for efficient 3d deep learning. Zhijian Liu, Haotian Tang, Yujun Lin, Song Han, Advances in Neural Information Processing Systems. 32Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point- voxel cnn for efficient 3d deep learning. Advances in Neural Information Processing Systems, 32, 2019. 2 Unsupervised domain adaptive 3d detection with multi-level consistency. Zhipeng Luo, Zhongang Cai, Changqing Zhou, Gongjie Zhang, Haiyu Zhao, Shuai Yi, Shijian Lu, Hongsheng Li, Shanghang Zhang, Ziwei Liu, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionZhipeng Luo, Zhongang Cai, Changqing Zhou, Gongjie Zhang, Haiyu Zhao, Shuai Yi, Shijian Lu, Hongsheng Li, Shanghang Zhang, and Ziwei Liu. Unsupervised do- main adaptive 3d detection with multi-level consistency. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8866-8875, 2021. 2 1 year, 1000 km: The oxford robotcar dataset. Will Maddern, Geoffrey Pascoe, Chris Linegar, Paul Newman, The International Journal of Robotics Research. 361Will Maddern, Geoffrey Pascoe, Chris Linegar, and Paul Newman. 1 year, 1000 km: The oxford robotcar dataset. The International Journal of Robotics Research, 36(1):3-15, 2017. 2 Rangenet++: Fast and accurate lidar semantic segmentation. Andres Milioto, Ignacio Vizzo, Jens Behley, Cyrill Stachniss, 2019 IEEE/RSJ international conference on intelligent robots and systems (IROS). Andres Milioto, Ignacio Vizzo, Jens Behley, and Cyrill Stachniss. Rangenet++: Fast and accurate lidar semantic segmentation. 
In 2019 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 4213-4220. IEEE, 2019. 2 Domain generalization via invariant feature representation. Krikamol Muandet, David Balduzzi, Bernhard Schölkopf, PMLRInternational Conference on Machine Learning. Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. Domain generalization via invariant fea- ture representation. In International Conference on Machine Learning, pages 10-18. PMLR, 2013. 2 Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, 3214Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An im- perative style, high-performance deep learning library. Ad- vances in neural information processing systems, 32, 2019. 14 Towards reliable perception for unmanned ground vehicles in challenging conditions. Thierry Peynot, James Underwood, Steven Scheding, IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEEThierry Peynot, James Underwood, and Steven Scheding. Towards reliable perception for unmanned ground vehicles in challenging conditions. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1170- 1176. IEEE, 2009. 3 Canadian adverse driving conditions dataset. Matthew Pitropov, Danson Evan Garcia, Jason Rebello, Michael Smart, Carlos Wang, Krzysztof Czarnecki, Steven Waslander, The International Journal of Robotics Research. 404-5Matthew Pitropov, Danson Evan Garcia, Jason Rebello, Michael Smart, Carlos Wang, Krzysztof Czarnecki, and Steven Waslander. Canadian adverse driving conditions dataset. The International Journal of Robotics Research, 40(4-5):681-690, 2021. 
1, 2 Pointnet: Deep learning on point sets for 3d classification and segmentation. Hao Charles R Qi, Kaichun Su, Leonidas J Mo, Guibas, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionCharles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 2 Performance of laser and radar ranging devices in adverse environmental conditions. Julian Ryde, Nick Hillier, Journal of Field Robotics. 26Julian Ryde and Nick Hillier. Performance of laser and radar ranging devices in adverse environmental conditions. Jour- nal of Field Robotics, 26(9):712-727, 2009. 3 Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding. Christos Sakaridis, Dengxin Dai, Luc Van Gool, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer Vision16Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Acdc: The adverse conditions dataset with correspondences for se- mantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10765-10775, 2021. 2, 3, 4, 16 Cosmix: Compositional semantic mix for domain adaptation in 3d lidar segmentation. ECCV. Cristiano Saltori, Fabio Galasso, Giuseppe Fiameni, Nicu Sebe, Elisa Ricci, Fabio Poiesi, 15Cristiano Saltori, Fabio Galasso, Giuseppe Fiameni, Nicu Sebe, Elisa Ricci, and Fabio Poiesi. Cosmix: Compositional semantic mix for domain adaptation in 3d lidar segmenta- tion. ECCV, 2022. 3, 7, 8, 14, 15 Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research. 
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, 1514Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. 6, 13, 14 Adapting object detectors with conditional domain normalization. Peng Su, Kun Wang, Xingyu Zeng, Shixiang Tang, Dapeng Chen, Di Qiu, Xiaogang Wang, Computer Vision-ECCV 2020: 16th European Conference. Glasgow, UKSpringerProceedings, Part XI 16Peng Su, Kun Wang, Xingyu Zeng, Shixiang Tang, Dapeng Chen, Di Qiu, and Xiaogang Wang. Adapting object detec- tors with conditional domain normalization. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XI 16, pages 403-419. Springer, 2020. 2 Scalability in perception for autonomous driving: Waymo open dataset. Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionPei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In Proceed- ings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2446-2454, 2020. 4 TorchSparse: Efficient Point Cloud Inference Engine. Haotian Tang, Zhijian Liu, Xiuyu Li, Yujun Lin, Song Han, Conference on Machine Learning and Systems (MLSys). 1214Haotian Tang, Zhijian Liu, Xiuyu Li, Yujun Lin, and Song Han. TorchSparse: Efficient Point Cloud Inference Engine. In Conference on Machine Learning and Systems (MLSys), 2022. 12, 14 Searching efficient 3d architectures with sparse point-voxel convolution. 
Haotian Tang, Zhijian Liu, Shengyu Zhao, Yujun Lin, Ji Lin, Hanrui Wang, Song Han, European conference on computer vision. Springer1215Haotian Tang, Zhijian Liu, Shengyu Zhao, Yujun Lin, Ji Lin, Hanrui Wang, and Song Han. Searching efficient 3d architec- tures with sparse point-voxel convolution. In European con- ference on computer vision, pages 685-702. Springer, 2020. 1, 2, 5, 6, 8, 12, 15 Domain randomization for transferring deep neural networks from simulation to the real world. Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel, 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEEJosh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Woj- ciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23-30. IEEE, 2017. 4 Training deep networks with synthetic data: Bridging the reality gap by domain randomization. Jonathan Tremblay, Aayush Prakash, David Acuna, Mark Brophy, Varun Jampani, Proceedings of the IEEE conference on computer vision and pattern recognition workshops. the IEEE conference on computer vision and pattern recognition workshopsCem Anil, Thang To, Eric Cameracci, Shaad Boochoon, and Stan BirchfieldJonathan Tremblay, Aayush Prakash, David Acuna, Mark Brophy, Varun Jampani, Cem Anil, Thang To, Eric Camer- acci, Shaad Boochoon, and Stan Birchfield. Training deep networks with synthetic data: Bridging the reality gap by domain randomization. In Proceedings of the IEEE confer- ence on computer vision and pattern recognition workshops, pages 969-977, 2018. 4 Adversarial discriminative domain adaptation. Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell, Proceedings of the IEEE conference on computer vision and pattern recognition. 
the IEEE conference on computer vision and pattern recognition15Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017. 7, 8, 14, 15
[]
[ "Signals of a Sneutrino (N)LSP at the LHC", "Signals of a Sneutrino (N)LSP at the LHC" ]
[ "Andrey Katz \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMD\n", "Brock Tweedie \nDepartment of Physics and Astronomy\nJohns Hopkins University\n21218BaltimoreMD\n" ]
[ "Department of Physics\nUniversity of Maryland\n20742College ParkMD", "Department of Physics and Astronomy\nJohns Hopkins University\n21218BaltimoreMD" ]
[]
The sneutrino is a viable candidate for the NLSP in SUSY spectra with gravitino LSP. In this work we study the collider implications of this possibility. In particular, we investigate whether the LHC can distinguish it (at least, in some cases) from alternative spectra, such as those with a neutralino LSP. We show that there exists a complete family of experimentally allowed and theoretically motivated spectra with sneutrino NLSP, which exhibit very distinctive multilepton signals that are difficult to fake within the MSSM. We study these signals in detail, including the techniques necessary to find them. We demonstrate our analysis approach on simulations incorporating backgrounds.
10.1103/physrevd.81.035012
[ "https://arxiv.org/pdf/0911.4132v2.pdf" ]
119,113,655
0911.4132
e640ad10a4235154572e513ef51212e1e1d6f414
Signals of a Sneutrino (N)LSP at the LHC
(Dated: February 16, 2010)

Andrey Katz, Department of Physics, University of Maryland, College Park, MD 20742
Brock Tweedie, Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218

The sneutrino is a viable candidate for the NLSP in SUSY spectra with gravitino LSP. In this work we study the collider implications of this possibility. In particular, we investigate whether the LHC can distinguish it (at least, in some cases) from alternative spectra, such as those with a neutralino LSP. We show that there exists a complete family of experimentally allowed and theoretically motivated spectra with sneutrino NLSP, which exhibit very distinctive multilepton signals that are difficult to fake within the MSSM. We study these signals in detail, including the techniques necessary to find them. We demonstrate our analysis approach on simulations incorporating backgrounds.

I. INTRODUCTION

Supersymmetry (SUSY) has been a leading framework for physics beyond the Standard Model (SM) for decades. While the basic motivation is quite simple - to stabilize the electroweak sector against quantum corrections from arbitrarily high energy scales - SUSY has proved to be a seemingly never-ending source of investigation from the perspective of formal theory, model building, and phenomenology. The vast parameter space of even the simplest incarnation, the Minimal Supersymmetric Standard Model (MSSM), leads to a large variety of possible collider and cosmological signals, many of which are not yet fully understood.

In this paper, we turn our attention to some of the less-explored regions of the MSSM parameter space, regions where the sneutrino appears to be the lightest supersymmetric particle (LSP) from the perspective of colliders. There are a couple of reasons why the sneutrino is typically overlooked as a possible LSP.
In particular, it is unworkable as a dark matter candidate, since it would have already been discovered in direct detection experiments [1,2]. There are of course many ways to model-build around the direct detection constraints, but by far the simplest is to assume that the sneutrino decays, either through R-parity-violating interactions or into a lighter gravitino. The role of dark matter must then be played either by the gravitino or by some other sector of particles. Putting the sneutrino near the bottom of the spectrum therefore appears to offer no "benefit." However, even the standard scenario with a neutralino LSP is becoming progressively less viable (see, e.g., [3]), and we should definitely not take for granted that it is correct.

Cosmology aside, there is also a simple parametric bias that usually disfavors light sneutrinos. In both gravity mediation and gauge mediation (GMSB) [4][5][6][7][8], the masses of the sfermions acquire contributions proportional to gauge couplings. In more minimal scenarios, this tends to make the left-handed (LH) slepton doublet heavier than the right-handed (RH) slepton, as well as the bino and often the wino. However, as we will see in the next section, it is straightforward for the slepton doublet to end up much lighter when we go beyond minimal assumptions. Though not often considered, then, there is no reason why the sneutrino cannot be at the bottom of the MSSM spectrum.

Nonetheless, the most distinctive collider implications of this possibility remain mostly uninvestigated. If we assume that R-parity holds as a good symmetry, which is still very well-motivated from the perspective of forbidding proton decay, then the minimal cosmologically-viable scenario has the would-be LSP sneutrino decaying into the gravitino. However, since this decay is completely invisible, the sneutrino acts like an LSP in colliders, independent of its lifetime.^1 Here, we will begin an inquiry into the LHC signals of this sneutrino "(N)LSP".
We will find decay chain topologies and collider signatures that are noticeably different from the standard cases, such as a neutralino LSP. Of course, even assuming a sneutrino NLSP with gravitino LSP, the parameter space describing the rest of the spectrum remains enormous. In our analysis here, we will concentrate on spectra with O(TeV) colored superparticles and lighter electroweak gauginos, all sitting above approximately flavor-degenerate LH slepton doublets. However, many of our observations will also survive in more generic spectra. Given this setup, we can identify two broad classes of spectra which lead to somewhat different phenomenology: either the RH slepton participates in the decay chains, or it mostly does not. Roughly speaking, this depends on whether the RH slepton is heavier or lighter than the bino-like neutralino. In this paper, we will concentrate on the collider signatures of the simpler case where $m(\tilde e_R) > m(\tilde B)$. We will investigate spectra with active RH sleptons in a companion paper [9].

In every decay chain ending in a sneutrino NLSP, lepton number is being carried away. This must necessarily be compensated for by release of a charged lepton or a neutrino. In addition, the sneutrino is quasi-degenerate with its $SU(2)_L$ partner, the charged LH slepton, so that the two sfermions will usually sit together at the bottom of the spectrum. This means that every decay chain has a sizable chance of producing at least one charged lepton, significantly modifying the lepton accounting compared to many alternative spectra. In addition, the splitting within the slepton doublet is often large enough to produce visible particles via $W^*$ emission. In the leptonic mode, this can come in combination with a charged lepton produced with the slepton, leading to a rather unique excess of opposite-sign uncorrelated-flavor lepton pairs.
This excess will be accompanied by a somewhat larger number of completely sign- and flavor-uncorrelated dilepton events, as well as a somewhat smaller number of very distinctive trilepton events. The coexistence of all of these features together is highly non-generic in the MSSM, and will serve as a powerful indicator of the presence of a sneutrino NLSP.

Our paper is organized as follows. In the next section, we give top-down motivations for this scenario, discussing classes of mediation models in which a sneutrino NLSP may arise. In section III, we summarize the details of the slepton/sneutrino spectrum, decays, and cosmology. In section IV, we discuss generic event topologies and lepton accounting. (Readers mainly interested in collider phenomenology can start here.) We then perform a more detailed study of LHC signals in section V, including estimates of the dominant backgrounds. Section VI finishes with some closing thoughts and ideas for further studies.

II. HOW TO GET A SNEUTRINO NLSP

A. Gauge Mediation

Giving up on a cosmological role for a light sneutrino, it finds a natural home in gauge mediation, where all would-be LSPs are fated to decay into the gravitino. Indeed, a sneutrino NLSP readily arises in General Gauge Mediation [10]. GGM is defined as the class of UV completions of the MSSM which fulfill the following condition: in the limit where all gauge couplings are taken to zero, the theory decouples into the SUSY breaking sector and the MSSM with exact SUSY. This definition includes various perturbative messenger models (either of direct gauge mediation or including a separate messenger sector), as well as nonperturbative ones, where even the definition of the messenger field is obscure.

The soft spectrum in GGM can be fully parameterized by six independent parameters at the messenger scale (not including the Higgs sector^2 and the value of the messenger scale itself). One can think about this parametrization as follows: each gauge group of the standard model (SM) has some contribution to its gaugino and an independent contribution to the scalars charged under that group. We can parameterize the soft masses as

$$ M_r = g_r^2 M B_r , \qquad m_{\tilde f}^2 = \sum_{r=1}^{3} g_r^4 \, C_2(f,r) \, M^2 A_r . \qquad (1) $$

In these formulas $r$ runs over the three gauge groups of the SM. The $B_r$ are three complex numbers, and the $A_r$ are three real numbers. The $g_r$ are the gauge couplings, and $C_2(f,r)$ denotes the quadratic Casimir of $f$ with respect to the gauge group $r$. $M$ is an overall soft mass scale.

Footnote 2: One needs an additional mechanism (which might not fit the definitions of GGM) in order to produce reliable $\mu$ and $B\mu$ terms. Such mechanisms usually significantly modify the soft masses of the Higgses from the GGM value (see e.g. [11,12]). Therefore, in exploring the parameter space of GGM one should keep the Higgs sector scales as free parameters.

The GGM framework admits a large variety of different spectra, far beyond that of minimal GMSB models. Not all of these spectra are fully understood in terms of collider signatures and experimental constraints. For example, the signatures of the promptly decaying neutralino NLSP at the Tevatron have been only recently studied in full generality in [13]. The chargino, which can also be the NLSP in a narrow region of GGM parameter space, has also been neglected for a long time and was first seriously considered only recently [14]. Obtaining the sneutrino as the NLSP was found to be straightforward in [15,16], even with restrictions on the GGM parameters. For example, in order to ensure that the LH slepton doublet is lighter than the RH slepton, it is adequate simply to demand^3

$$ A_2 \lesssim \frac{3}{5} \frac{g_1^4}{g_2^4} A_1 \simeq (0.2) A_1 . \qquad (2) $$

The gauginos can subsequently be made heavier than the LH slepton completely independently by adjusting the $B_r$, and the squarks can easily be made heavier as well.

Footnote 3: Note that this is only a rough upper bound on $A_2$, since it does not account for radiative corrections or left/right mixing effects. In particular, near the boundary, stau mixing effects can become large, and $\tilde\tau_1$ may become the NLSP.

Realizing this situation with perturbative models is also possible, though not quite so trivial. Using the GGM notation, the $B$'s are now related to the $A$'s. Specifically, in order to make the wino arbitrarily heavy with respect to the LH slepton in models with purely F-term SUSY-breaking masses for the messengers, we would need arbitrarily large messenger Dynkin index.^4 However, perturbative gauge unification is spoiled unless the Dynkin is $\lesssim 5$, or the messenger scale is made very high. Taking the former constraint, the best we can do is to make the wino a little less than twice as heavy. At the same time, we must have a large mass for the RH slepton (which automatically also translates into a large mass for the bino), and this also feeds into the LH slepton via $A_1$, decreasing the gap with the wino. Without pushing to large Dynkin, the best we can manage is a rather squashed spectrum of sleptons and electroweak gauginos at the messenger scale, and mixing effects and radiative corrections can easily reorder it.

Footnote 4: The messenger Dynkin sets upper bounds on the mass ratios between the gauginos and the sfermions. Indeed, it is straightforward to engineer models that reduce these ratios, but we know of no examples of the opposite effect.

A simple model that manages to produce an acceptable spectrum contains two identical copies of $10 + \overline{10}$, with independent supersymmetric and supersymmetry-breaking masses for each of the SM irreducible representations.^5 From the perspective of the electroweak gauginos and sleptons, the Q-like ($(3,2)_{1/6}$ + c.c.) messenger fields act as an approximately pure source of $A_2$ and $B_2$, and the U-like ($(3,1)_{-2/3}$ + c.c.) and E-like ($(1,1)_{1}$ + c.c.) fields act as independently tunable sources of $A_1$ and $B_1$. The LH slepton is lightest by a comfortable margin when we choose parameters such that $B_1$ is 2-3 times larger than $B_2$.

Footnote 5: This can easily be accomplished by coupling in multiple singlet fields which feel SUSY breaking, assuming the couplings are not SU(5) symmetric [17]. However, since the Dynkin index here is 6, the messenger scale must be O(1000) TeV or larger to permit perturbative gauge coupling unification. In a more complete analysis, one also needs to worry about threshold corrections from the explicit global SU(5)-breaking in the messenger spectrum.

More generally, we can consider perturbative models with D-term contributions from a hidden gauge sector under which the messengers are charged. This was suggested in [18] as a way to fully realize GGM without resorting to strong coupling. In the case at hand, it allows us to independently decrease the LH slepton mass with respect to the wino mass without requiring arbitrarily large Dynkin index.

Clearly, then, a sneutrino NLSP in gauge mediation would point to a highly non-minimal, possibly nonperturbative messenger sector. Of course, even if the sneutrino NLSP is actually established at the LHC, determining whether the spectrum is due to non-minimal gauge mediation will be quite difficult, since even a prompt decay of the sneutrino is completely invisible. Still, there will be various clues encoded in the spectrum, namely the sum rules above, near flavor-universality (and small A-terms), and perhaps even the apparent high-scale non-unification of gaugino masses.

B. Other Scenarios

Obtaining the sneutrino at the bottom of the superparticle spectrum is also possible with other mediation mechanisms, such as gravity mediation or gaugino mediation [19,20].^6 One way to accomplish this [21] (see also [22,23]) is to invoke highly non-universal Higgs masses (NUHM) at the input scale. This leads to large hypercharge D-term loop contributions in the renormalization group equations, what is usually called the "S-term". It is defined as

$$ S \equiv \mathrm{Tr}\!\left[ Y m^2 \right] , \qquad (3) $$

where the trace runs over all soft scalar masses of the MSSM, including the Higgses. It contributes to the running of the superpartners as

$$ \Delta \frac{d m^2}{d \ln\mu} = \frac{1}{8\pi^2} \frac{3}{5} g_1^2 \, Y S . \qquad (4) $$

In parameterizations of high-scale mediation with universal soft masses, this term vanishes at the scale where the soft masses are produced. However, by taking the Higgs mass parameters independent of the sfermions, and making the down-type mass much larger than the up-type, the S-term can push the RH slepton mass higher than the LH slepton mass.

Since the down-type Higgs mass is very large in these scenarios, the contribution to the running of the third-generation masses is significant. In particular, the third-generation slepton doublet can be appreciably lighter than the other two, at the level of 10's of GeV. This has led to several discussions [24][25][26][27] that emphasize signals with taus. We will not pursue this approach here, but note that large flavor non-degeneracies within the sneutrinos can lead to additional variations on the more general signals that we will explore in section IV. We defer investigation of these to [9].

Another simple way to obtain a sneutrino NLSP in gravity mediation is by using independent (but flavor-universal) masses-squared for the $\bar 5$ and $10$ representations of SU(5). The RH slepton can be made arbitrarily heavier than the LH slepton in this way, but in order to end up with the LH slepton lighter than the bino, we should go to negative $m_{\bar 5}^2$. The tachyonic masses for the LH slepton and RH down squark will run positive in the IR, provided they are not too large in magnitude, and they should not pose any phenomenological difficulties.^7
For example, the LH slepton can be rendered light via renormalization group running induced by couplings to right-handed neutrinos, as recently proposed in [29]. Even though this paper did not consider spectra with sneutrino NLSP and gravitino LSP, it is likely possible to achieve with the same kind of setup. Since the physical mass spectrum here depends on new, unknown Yukawa couplings, flavor violation effects may be arbitrary. This is an interesting possibility, but falls outside the scope of our present work. In this paper, we will consider spectra that are relatively flavor-univeral. We have noted that a gravity mediation scenario with negative m 2 5 may fall into this class, but most of the alternatives involve some non-negligible degree of flavor violation, for example leading to tauenriched signatures. In particular, the discussions of [24][25][26][27]29] are essentially independent of our own observations below regarding flavor-universal multi-lepton signals. III. GENERAL PROPERTIES OF MODELS WITH A SNEUTRINO NLSP In this section, we discuss in more detail the generic features of the LH slepton and sneutrino states. We first work out the fine-structure of the mass spectrum, then discuss decays of the charged sleptons, and finally consider the role of the sneutrino NLSP in cosmology. A. Slepton and Sneutrino Mass Spectrum Before electroweak symmetry breaking, all three flavors of sneutrino are precisely degenerate with their charged partners. The measured difference between mass eigenstates 7 Universal negative masses-squared were considered in [28]. is dictated by electroweak D-terms and by the mixing between the left-and right-handed sleptons. The D-terms act to make the charged sleptons heavier than the sneutrinos for tan β > 1: ml+ L − mν = m 2 W (− cos(2β)) ml+ L + mν ≃ m 2 W sin 2 β ml+ L + mν ,(5) where in the last term we have displayed the large tan β limit, which becomes accurate to better than O(10%) for tan β > ∼ 3. 
The splitting is inversely proportional to the average doublet mass, and is always less than m W . For example, for masses near 200 GeV, the splitting is about 16 GeV or smaller. In the third generation, left/right mixing effects might become important. The mass- squared matrix of theτ s is   m 2 τ L + ∆ L −µvy τ sin β + vA * τ cos β −µ * vy τ sin β + vA τ cos β m 2 τ R + ∆ R   ,(6) where we have represented the electroweak D-term contributions to LH and RH sleptons as ∆ L and ∆ R , respectively. If we neglect A-terms, and if the the µ/Yukawa terms are not too large compared to the mass-squared difference, the (mostly) left-handed stau is shifted down by approximately − µ 2 v 2 y 2 τ sin 2 β m 2 τ R − m 2 τ L .(7) (The D-term contributions have been neglected in the denominator, but they are small, roughly (20 GeV) 2 .) This can potentially makeτ 1 the NLSP, if the mass correction from mixing is larger than the electroweak doublet splitting above. Clearly then, in any viable sneutrino NLSP scenario, we will require 8 µ 2 v 2 y 2 τ m 2 τ R − m 2 τ L < ∼ m 2 W .(8) In general then, spectra with sneutrino NLSP can have quite detailed fine-structure for the lowest-lying states. Here, we will be interested in cases with approximate flavor degeneracy between the three slepton doublets, which carries over to the three flavors of NLSP 8 The LH stau doublet also receives flavor-nonuniversal mass corrections at loop level. These tend to push the third generation lighter, making the tau-sneutrino lightest. If these corrections are large, such as in high-scale scenarios with large Higgs soft masses, thenτ L can be lighter than the sneutrinos of the first two generations, even before left/right mixing is taken into account. We will not consider such scenarios here, but see [9]. sneutrinos. The stau-sneutrino will be the true NLSP, but the splitting with the other sneutrinos will not be large enough to generate any visible activity. 
Stau mixing will always push the $\tilde\tau_1$ lighter than the first- and second-generation $\tilde\ell_L^+$, but as long as it stays heavier than the sneutrinos, the modifications to the phenomenology we consider here are minor.

B. Decays

In equation (5), we saw that in general the mass splitting between the sneutrino and its charged slepton partner is always less than $m_W$. If all of the charginos and neutralinos are heavier, then the charged slepton decays will be dominated by $W^*$ emission: $\tilde\ell \to \tilde\nu f \bar f'$ (see Fig. 1).^9 (Subsequently, $\tilde\ell$ will always refer to $\tilde\ell_L$, so we drop the subscript.) As long as $m_{\tilde\ell^+} - m_{\tilde\nu} \gtrsim$ few GeV (which for even modest $\tan\beta$ is almost always true), the branching fractions into different species of quarks and leptons will be very similar to those of on-shell $W$'s. In particular, it will produce $e$ or $\mu$ approximately 22% of the time. It can also produce leptons from secondary $\tau$ decays, but the vast majority of the other decay modes will contain low-multiplicity jets. These will be quite difficult to cleanly identify, and we will not explore the possibility of using them in our searches.

Seeing the products of the $e$ and $\mu$ decay modes could in principle be complicated by the fact that the slepton/sneutrino mass splitting can become quite small, and that it is three-body. In the example given in the previous subsection, of a 200 GeV doublet split by 16 GeV, the average lepton momentum in the slepton rest frame is about 8 GeV. This approaches the threshold for good quality lepton identification in the LHC experiments. So a fraction of the leptons, from the softer region of the emission spectrum, may be unobservable.

Footnote 9: Virtual gauginos will also contribute to the decays, opening up additional modes, and interfering with some of the $W^*$ modes in a flavor-dependent way. Practically, these tend to be much less important if the gauginos are well above $m_W$.
Since the hardness of the emitted leptons scales inversely with the doublet mass, heavier sleptons will yield softer leptons, possibly leading to an O(1) loss of signal. This may be partially compensated for in strong SUSY production, in which the sleptons may be produced with substantial boost from decay chains initiated by much heavier squarks and gluinos.

Since for cosmological reasons we work with models where the sneutrino is the NLSP and the gravitino the LSP, we should also be mindful of the possibility that the charged LH slepton can decay directly to the gravitino and a charged lepton, much like a RH slepton NLSP. Though suppressed by some mass scale significantly larger than m_W, this decay is two-body, and typically has much more phase space than the ν̃lν mode. Assuming m_{3/2} ≪ m_l̃, the decay width into gravitino and lepton is given by
$$
\Gamma(\tilde l \to \tilde G l) \simeq \frac{m_{\tilde l}^5}{16\pi F^2} , \qquad (9)
$$
where √F is the fundamental SUSY-breaking scale. The decay width into ν̃lν by W* emission goes as
$$
\Gamma(\tilde l \to \tilde\nu l \nu) \simeq \frac{\alpha_2^2}{30\pi} \frac{(m_{\tilde l^+} - m_{\tilde\nu})^5}{m_W^4} \simeq \frac{\alpha_2^2\,|\cos(2\beta)|^5}{2^6 \cdot 15\pi} \frac{m_W^6}{m_{\tilde l}^5} , \qquad (10)
$$
where we have used equation (5) and taken the small-splitting limit. The two-body decays become competitive if
$$
\sqrt{F} \lesssim \left( \frac{60}{\alpha_2^2\,|\cos(2\beta)|^5} \frac{m_{\tilde l}^{10}}{m_W^6} \right)^{1/4} \simeq (2~{\rm TeV}) \left( \frac{1}{|\cos(2\beta)|^5} \right)^{1/4} \left( \frac{m_{\tilde l}}{100~{\rm GeV}} \right)^{5/2} . \qquad (11)
$$
For modest values of m_l̃, such a small value of √F is difficult to obtain in known models. For bigger values of m_l̃, the two-body decays will be relevant for a small part of the viable parameter space. In subsequent sections, we will simply assume that the SUSY-breaking scale is not so small, and that two-body decays are subdominant. However, cases with significant two-body contributions would also be interesting to study.¹⁰

¹⁰ If two-body dominates, then the phenomenology would look very similar to RH slepton NLSP, but with modified lepton counting. Instead of every event containing at least two hard leptons (and/or taus), each decay chain would now have an O(50%) chance of ending via ν̃ → G̃ν. In a scenario with mixed two-body and three-body decay events, the former will introduce an additional population of opposite-sign same-flavor (OSSF) leptons from charged slepton production and decay.

C. Cosmology

A sneutrino NLSP decays mostly invisibly, into neutrino and gravitino, and its would-be relic density becomes attenuated by m_{3/2}/m_ν̃. For most choices of parameters, it decays before matter domination. Therefore the neutrinos generated in its decay would presently constitute a subdominant non-thermal component of the cosmic neutrino background. In the cases where the gravitino is heavy enough to deposit a visible amount of kinetic energy in terrestrial nuclear-recoil experiments, the gravitino-on-nucleon cross section is suppressed to a point well below present bounds. The possibility that the gravitino produced in the decay is in fact dark matter, or some component of it, has been considered in a number of papers (see, for example, [30,31]). Since we are mostly interested in collider signals, we will not explicitly consider this.

Despite the elusive nature of the sneutrino's main decay products, rare decays involving on-shell or off-shell gauge bosons do produce visible particles. If the lifetime of the sneutrino is longer than a few seconds, these can upset the light-element abundances of nucleosynthesis. This possibility was considered in detail in [32]. For sneutrinos lighter than about 300 GeV, and/or gravitinos lighter than about 4 GeV, there is no constraint. In particular, low-scale mediation scenarios like gauge mediation are automatically cosmologically safe.

IV. SUSY CASCADES WITH A SNEUTRINO NLSP

When all decay chains end in the LH slepton doublet, there can be significant modifications compared to more standard MSSM spectra. In this section, we will discuss these features and categorize promising decay topologies for searches at the LHC.

The simplest SUSY events available at the LHC are direct pair production of the sneutrino and/or its charged partner, accompanied by decays through W* in the latter case (Fig. 1). In a hadron-collider environment, none of these options is particularly easy to find. Sneutrino pair production is completely invisible, and could only be detected in principle through ISR, essentially impossible at the LHC. Including the charged slepton leads to relatively soft leptons and/or jets with very little missing energy. It will likely be swamped by physics backgrounds and fakes, even in the dileptonic mode.

We are therefore led to consider more complicated processes, where the sneutrino is the final particle emitted in cascades initiated by gauginos or squarks/gluinos. As usual, the most striking situation is O(TeV) super-QCD (SQCD) production, and we choose to focus on this case.¹¹ Electroweak (EW) production of gauginos may also be visible, particularly in trilepton (or higher) channels. These become particularly important when the squarks and gluinos are much heavier than a TeV, and therefore difficult to produce. We do not study these explicitly, but the general observations below for the structure of trilepton signals in SQCD production should carry over essentially unchanged.

Before we proceed, let us summarize our main assumptions, and describe more carefully the types of spectra we will analyze:

• The RH sleptons are heavier than the (mostly-)bino,
$$
m(\tilde e_R) > m(\tilde B) . \qquad (12)
$$
This is sufficient to ensure that the RH slepton is bypassed in the vast majority of the decay chains. We will consider spectra with this inequality reversed in [9].

• Approximate flavor degeneracy. In particular, the fine structure of the three LH slepton doublets is dominated by the D-term splitting in equation (5). The sneutrinos are mass-degenerate for all practical purposes. The mostly-LH stau may be somewhat lighter than the other charged sleptons, but still heavier than the sneutrinos.
• All other superparticles are heavier than the charged LH slepton states.

• O(TeV) colored superpartners, sitting above the EW gauginos.

• Modest gaugino-Higgsino mixing.

• Negligible A-terms at the mediation scale.

¹¹ Squarks near 1 TeV are actually favored to achieve m_H > 114 GeV (via loop corrections) in the MSSM, in the absence of large A-terms and with near flavor degeneracy. Of course, it is not difficult to circumvent this in extensions of the MSSM [33], for example [34,35]. Similar issues were earlier discussed in [36,37].

In the following subsections, we will explore the signals of this class of scenarios in more detail, as well as the difficulty of replicating them in alternative MSSM spectra. The discussion is at the conceptual level, without worrying about many of the complications of the hadron-collider environment. We will perform a more realistic analysis illustrating these ideas in simulation in section V.

A. Lepton Counting

Consider, as a first stage, simple counting of the leptonic events, including flavor/sign correlation between the leptons in dileptonic events. We will see that this lepton accounting is quite distinctive with a sneutrino NLSP. Starting from squark/gluino production, each decay chain must proceed down to the slepton doublet through charginos and neutralinos. Since we will be assuming that the colored superparticles are heaviest, these will always be treated as on-shell. However, this point is not crucial for many of our observations. We also neglect decay chains containing (mostly-)Higgsinos. These will be relevant for chains initiated by stops and sbottoms, but they will not typically serve as more than a correction to our signals.¹² We therefore concentrate on the decays of charginos and neutralinos that are mostly wino and bino. We illustrate the decay chains in Fig. 2.
Up to possible phase-space suppressions, and coupling shifts due to gaugino mixing, any EW gaugino decay has a 50% probability of producing a charged lepton in the first stage of the decay chain. Since the decays are flavor-blind, 1/3 of these leptons will be taus. At first pass we will treat these as "hadrons," considering only the prompt production of electrons and muons. This still leaves 1/3 of the gaugino decays, whether chargino or neutralino, going into one of these highly visible lepton modes. This leads to a significant chance for each SUSY event to contain one or two hard, isolated leptons. Specifically, the ratio of dilepton:monolepton:no-lepton would be roughly 1:4:4.

If this had been the only source of the isolated leptons, then the dilepton signal would have been completely charge- and flavor-uncorrelated, since each lepton is produced in a different chain.¹³ However, we also have leptons from the charged slepton's decay via W*. Although these leptons tend to be softer, due to the approximate slepton-sneutrino degeneracy, they are still often visible. To some extent, these add to the monoleptonic signal, but their effects at higher multiplicity are much more interesting.

In the dileptonic events, we now have contributions where both leptons come from a single neutralino decay chain (either bino or neutral wino), as in the lower-right diagram of Fig. 2. Since the probability of the first decay to produce an electron or muon (plus selectron or smuon) is 1/3, and the probability of the second to do the same is approximately 2/9, we get a roughly 7% chance to get two leptons in a neutralino decay.

¹² Spectra with strongly mixed charginos and neutralinos are an obvious exception to this. Also, in relatively unmixed cases with very light Higgsinos, they may contribute to the EW gaugino decay chains, via Higgs or electroweak boson emission.
These leptons are necessarily opposite-sign (OS), but they are flavor-uncorrelated. This is to be contrasted with the completely sign/flavor-uncorrelated dileptons discussed above.

¹³ Sign correlation across the event can occur to some extent. For example, we can have LH squark-antisquark pair production followed by decays into charged winos on both sides, each subsequently decaying into charged leptons. This leads to opposite-sign leptons. Similarly, squark-squark production (from valence quarks annihilating via gluino exchange) may have a bias towards same-sign leptons. Practically, neither of these tends to lead to an overwhelming charge bias, when all production and decay modes are taken into account.

Since this signal can be produced from neutralino decays, but not from chargino decays, its size depends on their relative production rates. For example, in the case of events that contain two binos, these OS dileptons would account for roughly half of all dileptonic events. Pure wino production would have a weaker signal, as there is only a 1/3 chance for a given side to produce W̃⁰. In such a case, the excess OS leptons account for about 15% of the dileptonic events. In both cases, the dilepton channels account for almost 20% of all events, with monolepton and no-lepton each accounting for 35∼45%.

We can further subdivide the dileptonic modes in the usual way, using the relative sign/flavor of the two leptons. They can be either opposite-sign opposite-flavor (OSOF), opposite-sign same-flavor (OSSF), same-sign opposite-flavor (SSOF), or same-sign same-flavor (SSSF). For the signals discussed above, we get flavor-universal ratios of 1:1 for OSOF:OSSF and SSOF:SSSF. But given the presence of our OS lepton production from neutralino decay followed by slepton decay, there will be an excess of OSOF and OSSF compared to SSOF and SSSF. The numbers above suggest that this excess will be O(0.1∼1).
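The counting estimates above follow from elementary probability. A short sketch, treating the two decay chains in an event as independent (as the text does), with a 1/3 per-chain probability of a prompt e/μ from gaugino decay and a ≈2/9 probability of a second e/μ from the subsequent slepton decay in neutralino chains:

```python
# Sketch of the lepton-counting arithmetic above (independent chains assumed).
p = 1.0 / 3.0  # probability per chain of one e/mu from gaugino decay

# Two chains per event, each yielding one e/mu with probability p:
dilepton = p * p              # 1/9
monolepton = 2 * p * (1 - p)  # 4/9
nolepton = (1 - p) ** 2       # 4/9
# -> ratio of dilepton : monolepton : no-lepton ~ 1 : 4 : 4
print(dilepton, monolepton, nolepton)

# Probability of a correlated OS pair within a single neutralino chain:
# 1/3 for the first e/mu (plus selectron/smuon), ~2/9 for the second.
os_pair_in_chain = (1.0 / 3.0) * (2.0 / 9.0)  # 2/27 ~ 7%
print(os_pair_in_chain)
```

Comparing 2/27 per neutralino chain against the 1/9 uncorrelated dilepton rate shows why, in bino-rich events, the correlated OS contribution can reach roughly half of the dilepton sample.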
In addition, there will be very distinctive trilepton modes, where one side of the event produces OS dileptons in neutralino decay, and the other side produces a single lepton in either chargino or neutralino decay. The rates for these processes are only about 2 times smaller than the OS dilepton excess. We will discuss these in more detail in subsection IV C, but at this point we can already see that the three leptons will be completely flavor-uncorrelated, and that there will be no events where all three leptons have the same sign. There will even be a population of 4-lepton events, when both chains produce two OS leptons. This tends to be smaller than the trileptons by about an order of magnitude. We will not carefully investigate these events, but they will serve as further confirmation if they are observable.

To summarize, we expect the following pattern of multilepton events when the sneutrino is the (N)LSP:

• 35∼45% of all SUSY events will have no leptons, another 35∼45% will contain one lepton, and close to 20% will be dileptonic.

• On top of a general sign/flavor-uncorrelated signal from leptons produced in independent chains, the dileptonic channel will contain a flavor-uncorrelated excess of OS events. The relative size of this excess depends on the relative rates of gauginos produced, but it will account for between 15% and 50% of the entire dilepton sample.

• Trilepton events will be present, with numbers roughly half as large as the correlated OS excess (OS minus SS signal).

• 4-lepton events may also be observable, though they will have much smaller rates.

We emphasize that these are just rough estimates. In particular, phase space and mixing of gaugino couplings can in principle modify these numbers at the O(1) level.

B. Opposite-Sign Dilepton Excess

A standard signal of many SUSY spectra is an excess of OSSF dileptons, from the neutralino chain depicted in Fig. 3. This is generally accompanied by flavor-uncorrelated backgrounds, from both the SM and from SUSY. If we plot the dilepton invariant-mass distributions from both the OSOF and OSSF channels, we see purely uncorrelated lepton pairs for the former, and a combination of uncorrelated and correlated leptons for the latter (left panel of Fig. 4). In order to isolate the correlated contribution, one can perform a flavor subtraction, OSSF minus OSOF (right panel of Fig. 4). This reveals a dilepton mass distribution which contains information about the mass splittings between the neutralinos and the slepton. For example, when the slepton is on shell, we get the characteristic ramp-and-edge shape.

With the sneutrino NLSP spectra, we find ourselves in a very analogous situation. As discussed above, opposite-sign uncorrelated-flavor lepton pairs are produced in the decays of neutralinos into charged sleptons, followed by decay down to the sneutrino (bottom-right panel of Fig. 2). These decays will produce their own distinctive dilepton invariant-mass distribution (discussed in more detail below), now encoded in equal excesses in the OSOF and OSSF channels. Immediately, we can see that the above subtraction scheme will completely miss this signal. In order to find it, one should perform a subtraction not in flavor, but in sign.¹⁴ This is illustrated in Fig. 5. In order to further test flavor universality, the subtraction can be performed in individual opposite-flavor and same-flavor categories (OSOF minus SSOF, and OSSF minus SSSF). Subsequently, we will leave this option implicit, and simply refer to OS and SS categories. Two important qualifications to this procedure are in order.
First, it is possible that the SUSY production will have sign correlations, for example if it is biased towards squark-antisquark pairs from s-channel gluons. In such a case, the uncorrelated OS and SS distributions may have different normalizations. However, they will still have the same shape. The OS excess could then still be revealed with a weighted subtraction, such that the subtracted shape is left with no high-mass tail. Second, the SM backgrounds (mainly tt̄) are dominantly OS. Indeed, the discovery of the SS signal will be a cleaner first indication that there is new physics at play. In the standard flavor subtraction, these backgrounds cancel out. In our case, they do not. Interpretation of the SS-subtracted OS excess as additional new physics will therefore require greater care. Nonetheless, as we will show in section V, the backgrounds can be brought to a manageable level, such that the signal becomes dominant with reasonable spectra.

¹⁴ A similar subtraction has recently been advocated in [38], in studying scenarios with mostly right-handed sneutrino LSP in the MRSSM [39]. However, the framework there is genuinely flavor-violating, and incorporates different analysis tools.

In addition to a somewhat unconventional distribution between sign/flavor bins, the shape of the OS excess has its own unique features. We can first observe that instead of a sequence of two 2-body decays or one 3-body decay, we have a 2-body decay followed by a 3-body decay. In fact, this possibility has already been discussed in the context of models with mostly right-handed sneutrinos as the (unqualified) LSP [40]. There it was pointed out that the dilepton mass distribution has a shape distinct from both of the ordinary two options. We display this shape for a sample spectrum, with and without spin effects in the slepton decay, in Fig. 6. (We also include a distribution obtained with the program BRIDGE [41], which we further utilize in the analysis of section V.)
As usual, this shape contains information on the masses (and even the spins) of the particles participating in the decays. The endpoint is given by the same kind of expression which describes the endpoint of the χ̃₂⁰ → l̃ → χ̃₁⁰ sequence with an on-shell slepton:
$$
m_{\max} = \sqrt{\frac{(m_\chi^2 - m_{\tilde l}^2)(m_{\tilde l}^2 - m_{\tilde\nu}^2)}{m_{\tilde l}^2}}
\simeq \sqrt{\frac{2\,(m_\chi^2 - m_{\tilde l}^2)(m_{\tilde l} - m_{\tilde\nu})}{m_{\tilde l}}} . \qquad (13)
$$
We also note that in the small-splitting limit, the entire distribution achieves a universal shape, described by a ninth-degree polynomial. For details, see appendix A. The peak, which may be much easier to measure in practice than the endpoint, occurs near (0.48) m_max.

Several other aspects of the OS dilepton excess are worth noting:

• The distribution may be bimodal, corresponding to the two subchains χ̃₂⁰ → l̃ → ν̃ and χ̃₁⁰ → l̃ → ν̃. If the two neutralinos have similar masses, then they may be difficult to disentangle.

• While the completely sign- and flavor-uncorrelated dilepton events act like a background here, their presence is also a crucial point of verification of the sneutrino NLSP interpretation. As per the accounting in the previous subsection, the total number of uncorrelated dilepton events will be O(1∼10) larger than the observed OS excess.

• As we have already mentioned, the slepton/sneutrino mass difference scales inversely with the average doublet mass, and can be relatively small compared to the superpartner mass scale. Consequently, the lepton produced in the slepton decay will not necessarily have much available energy, and it must share this with the neutrino and (to a much lesser extent) the sneutrino. This can complicate the clean identification of the lepton, which on average acquires an energy of approximately (m_l̃ − m_ν̃)/2 in the slepton rest frame. (For example, for a 200 GeV doublet, the average energy is about 8 GeV.) With strict identification requirements to reject fakes, and tight isolation cuts to reject heavy-flavor decays, O(1) of these leptons may be unusable.
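To illustrate equation (13), here is a short numerical sketch comparing the exact endpoint with its small-splitting approximation. The masses below are illustrative placeholders of our own choosing, not the benchmark points of section V:

```python
# Sketch of the dilepton endpoint of equation (13) for chi -> slepton -> sneutrino,
# with the exact expression and its small-splitting approximation.
# The masses used below are hypothetical, for illustration only.
import math

def m_max_exact(m_chi, m_sl, m_snu):
    """Exact endpoint: sqrt((m_chi^2 - m_sl^2)(m_sl^2 - m_snu^2)) / m_sl."""
    return math.sqrt((m_chi**2 - m_sl**2) * (m_sl**2 - m_snu**2)) / m_sl

def m_max_approx(m_chi, m_sl, m_snu):
    """Small-splitting limit: sqrt(2 (m_chi^2 - m_sl^2)(m_sl - m_snu) / m_sl)."""
    return math.sqrt(2.0 * (m_chi**2 - m_sl**2) * (m_sl - m_snu) / m_sl)

m_chi, m_sl, m_snu = 300.0, 210.0, 194.0  # GeV (illustrative)
exact = m_max_exact(m_chi, m_sl, m_snu)
print(exact, m_max_approx(m_chi, m_sl, m_snu))  # agree to a few percent
# The peak of the distribution sits near 0.48 * m_max:
print(0.48 * exact)
```

For this illustrative 16 GeV splitting, the approximation tracks the exact endpoint to within a few percent, consistent with the small-splitting limit invoked in the text.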
We will demonstrate that a significant fraction can still be used in realistic cases in section V.

C. Trileptons

Whenever one side of the event produces an OS dilepton pair, the other side has an O(1/3) chance of also producing a lepton via gaugino decay. This leads to a roughly 2:1 ratio between rates for correlated OS dilepton events and very distinctive trilepton events, which have smaller SM backgrounds. If the OS excess is generated with high enough statistics to be clearly observable, the trilepton events will be observable as well. Indeed, the presence of these trilepton events may very well be established before the OS minus SS subtraction is possible. Beyond simple counting, it should also be possible to extract the correlated dilepton mass distribution (Fig. 6) from these events, leading to a powerful confirmation of the overall picture.

However, there is an immediate combinatoric problem to be overcome. The only sign/flavor structure among these three leptons is that they cannot all have the same sign. In general, in every event we will be able to find two possible OS dilepton pairings, only one of which is the correct one. This is to be contrasted with the analogous production of trileptons in more standard SUSY spectra, where we are always guaranteed an OSSF pair. There we can focus on events where this pair is generated with a lepton of the opposite flavor. For our case, we propose a simple strategy which breaks the degeneracy and directly extracts the correct distribution to good approximation. The lepton emitted in the slepton decay will tend to be the softest lepton in the event. Assume that this is indeed the case. Focus on the subsample of events where only one of the hardest two leptons can form an OS pair with this soft lepton. (Equivalently, the two hardest leptons should be OS with respect to each other.)
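The pairing strategy just described is simple to state in code. A minimal sketch, where the (pT, charge) tuple representation of leptons is our own toy format, not the paper's analysis chain:

```python
# Sketch of the trilepton pairing strategy described above:
# sort by pT, require the two hardest leptons to be opposite-sign,
# then pair the softest lepton with the hard lepton of opposite charge.
# Leptons are modeled as (pT_in_GeV, charge) tuples -- a toy representation.

def select_os_pair(leptons):
    """Return the (soft, hard) OS pair, or None if the event is ambiguous."""
    l = sorted(leptons, key=lambda x: x[0], reverse=True)  # hardest first
    hard1, hard2, soft = l[0], l[1], l[2]
    if hard1[1] * hard2[1] > 0:
        return None  # two hardest same-sign: two possible pairings, discard
    partner = hard1 if hard1[1] != soft[1] else hard2
    return (soft, partner)

# Unambiguous event: hardest two are OS, so the soft lepton pairs uniquely.
print(select_os_pair([(60.0, +1), (45.0, -1), (12.0, +1)]))
# Ambiguous event (hardest two same-sign): discarded from the subsample.
print(select_os_pair([(60.0, +1), (45.0, +1), (12.0, -1)]))
```

Since the all-same-sign configuration never occurs, requiring the two hardest leptons to be mutually OS is exactly the condition for the soft lepton to have a unique OS partner, at the cost of roughly halving the subsample.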
When we form the invariant-mass distribution from this (half-sized) subsample, we should have a much higher probability of having guessed correctly, and the mass distribution should reflect this. We find that this technique works well in practice, and demonstrate its application in section V.

The trilepton signal could also be useful for discovery via electroweak production of gaugino pairs. As usual, the high lepton multiplicity can make the events distinctive enough that SM backgrounds can be controlled, even though the accompanying jet activity and missing energy are not necessarily very large. We will not explicitly investigate this discovery channel here, but note that the observations of this section clearly still apply. In particular, the signal can immediately be discriminated from typical trilepton SUSY production by the complete lack of flavor correlations. It is possible that the Tevatron is already capable of placing interesting limits on sneutrino NLSP spectra in EW production. We save investigation of this for future work.

D. Jet-Lepton Invariant Mass

As in more traditional searches, we can also attempt to construct kinematic observables from the jets produced in the cascades. In particular, the sequence q̃ → qχ̃ → ql̃(l/ν) results in a jet and a lepton, from which we can construct an invariant-mass distribution. Since we assume that the gaugino is on-shell, this distribution has the same ramp-and-edge shape that characterizes the dilepton mass distribution in χ̃₂⁰ → l⁺l̃⁻ → l⁺l⁻χ̃₁⁰ (right panel of Fig. 4).¹⁵ Extraction of such a shape would help further confirm the origin of the leptons. It would also provide information on the squark/gaugino mass splittings, since the edge is predicted to occur as in equation (13) with appropriate replacements. Of course, reality is not so straightforward. Even assuming high-quality jet measurements, we are always left with combinatoric ambiguities, since each side of the event will produce jets.
Still, even incorporating multiple pairing possibilities into the construction of the mass distribution (or simply guessing at random), nontrivial edge features will remain. With high enough statistics, we may be able to cleanly identify these edges, recovering some kinematic information and achieving greater confidence that we are seeing leptons produced in two-body gaugino decays.¹⁶ In events with gluino decays, these edges can be washed out, or disappear entirely. A heavier gluino decaying via an on-shell squark injects an extra hard jet into the event, further complicating the combinatorics. A lighter gluino decaying via an off-shell squark effectively destroys the ramp-and-edge structure. In cases where gluino production dominates, there may therefore be no edge-like features in the jet-lepton invariant-mass distribution.

It should also be noted that we may accidentally use a lepton produced in slepton decay, rather than gaugino decay, in the jet-lepton mass. However, this is not a major issue. Monoleptonic events are already dominated by leptons produced in gaugino decay (by a factor of 3∼6), and can be further purified by demanding harder leptons. In multilepton events, we usually make a correct choice by taking the hardest lepton.

E. How Unique are These Signals?

Will these signals be enough to confidently infer that the decay chains are ending in sleptons/sneutrinos? In order to address this question, let us consider how leptons are produced in alternative scenarios within the MSSM. SUSY cascades primarily generate leptons in three ways: decays of gauginos into sleptons/sneutrinos, decays of sleptons/sneutrinos into gauginos, and decays of electroweak gauge bosons emitted in transitions between charginos/neutralinos. Any of these particles could also be off-shell. In sneutrino NLSP spectra, a fourth option opens up, from the decays l̃ → ν̃lν discussed in subsection III B. This is a crucial component of our signal.
But when the sneutrino is not at the bottom of the spectrum, these three-body decays are bypassed in favor of the two-body decays l̃⁺ → χ̃⁰l⁺ or l̃⁺ → χ̃⁺ν. The most commonly considered way to produce multiple leptons in the same chain is χ̃₂⁰ → χ̃₁⁰ via an intermediate slepton, as shown in Fig. 3. This leads to a very distinctive excess in the OSSF dilepton channel, typically constituting an O(1) fraction of all dilepton production. Such an excess will not be present with our sneutrino NLSP unless we also introduce the RH slepton into the cascades. (But this has its own set of novelties, which we will discuss in [9].) This OSSF excess is a very generic signal whenever charged sleptons participate in the decay chains.¹⁷ But it can be hidden if there is an accidental mass degeneracy with one of the charginos/neutralinos, such that one of the emitted leptons becomes very difficult to find. Assuming this occurs, then we are left with uncorrelated production, where the leptons in dileptonic events always come from opposite chains. If the sleptons are rarely bypassed, this could lead to a similar dilepton:monolepton ratio.

Another way to get single leptons in each chain is through emission of a W^(*), from a χ̃⁺ → χ̃⁰ transition, or vice versa. The branching fraction into electron and muon is 22%, which is not so different from our 33% branching fraction for gaugino decays into sleptons. The counting may therefore be different only at the O(1) level, assuming the W^(*) is produced in most decay chains.

While neither of these two possibilities (slepton transitions with a near-degeneracy, and W^(*) emission) leads to correlated dileptons within a single chain, there may nonetheless be sign correlations across an event, originating at production. (Again, domination by squark-antisquark production is a simple example.)
The naive sign-subtraction procedure in the dilepton invariant-mass distribution will leave over an OS excess, but, unlike the sneutrino NLSP case, this will have exactly the same shape as the SS distribution (up to SM backgrounds). In addition, if we only get one lepton produced in each of the two decay chains in an event, we will never get a trilepton excess. In order to get either a distinctive OS excess or trileptons, we generally need the possibility of two leptons being produced in the same chain. To get multiple lepton emissions, we can string together these types of transitions. Indeed, a W* emission in association with slepton production or decay could closely mimic the pattern of our signal, particularly if there is a modest degeneracy analogous to the slepton/sneutrino mass splitting.

While these are indeed logical possibilities, they are clearly highly non-generic. Besides the requirement of strong accidental mass degeneracies to hide the OSSF signals whenever sleptons contribute, at least three distinct mass levels of charginos/neutralinos must be participating in almost every decay chain. This might become possible when there is large mixing, or if the mostly-Higgsinos are much lighter than the mostly-gauginos. We will not investigate these alternatives in detail, since, in any case, there are too many variations to explore exhaustively.

¹⁷ An obvious way around this is to have large flavor violation in the slepton sector. This is clearly dangerous from the perspective of μ → eγ, so we assume flavor universality in our own analysis. But for some interesting ways to have large SUSY flavor violation while avoiding disagreement with low-energy precision experiments, see [43,44] and [39]. In order to fake our flavor-uncorrelated signal, this non-universality would need to be nearly maximal, but this possibility is excluded (see [45] for review).
But these arguments do suggest that the pattern of leptons in spectra with a sneutrino NLSP is quite distinctive.

V. LHC SIMULATIONS

In this section we will use the tools developed in section IV in order to analyze two example spectra. We perform simulations including showering/hadronization, event reconstruction at particle level, and SM backgrounds. We do not attempt to incorporate a detector simulation or energy-resolution effects. The purpose here is to demonstrate the plausibility of a real search at the LHC, given the presence of backgrounds and a jetty environment where leptons might be lost, or simply fall below the energy threshold of good-quality reconstruction. This last point is particularly relevant for the leptons produced in slepton decay down to the sneutrino, which can be somewhat soft due to the small mass splitting. In the following, we will assume that the LHC will ultimately reach its design energy of 14 TeV. We do not expect that a lower final operating energy will qualitatively change our conclusions. For example, at a 10 TeV LHC, the signal cross sections are reduced by a factor of about 3∼4.

A. Sample Spectra

There are many possible spectra with sneutrino NLSP that we might consider, but most of them lead to qualitatively similar phenomenology if we restrict ourselves to the assumptions outlined in section IV. In particular, the mass ordering between the wino and bino is not very important, as long as they are lighter than the squarks and gluinos. Other than very detailed variations involving large gaugino-Higgsino mixings, or introduction of approximate mass degeneracies, the main flexibility available to us is the mass of the LH slepton doublet and the production of the gauginos in squark/gluino decays. The former controls the mass splitting between the slepton and sneutrino, and hence the energy of the leptons produced in slepton decays.
The relative rates of bino versus wino production will control the size of our OS dilepton excess and trilepton signal. Weaker signals occur if winos dominate, since fewer neutralinos will be produced. In addition, the proportion of gauginos produced directly in squark decays versus in gluino decays will determine the visibility of edge structures in the jet-lepton invariant-mass distribution.

We concentrate on two SUSY spectra with sneutrino NLSP within the framework of GGM with arbitrary Higgs sector. The spectra have similar slepton and gaugino masses, but the ordering of squarks versus gluino is different. In the first spectrum, labeled "q̃GGM," the physical squark masses are just under 1 TeV, while the physical gluino is about 1.4 TeV. SQCD production is dominated by squarks, and decays into winos and binos are roughly democratic. In the second spectrum, "g̃GGM," the situation is approximately reversed, with an 800 GeV gluino and 1.5 TeV squarks. Production in this spectrum is dominated by gluinos, which decay through off-shell squarks. These decays are highly biased towards winos, due to the larger SU(2)_L couplings.¹⁸ A full list of input parameters for both spectra is given in table I. We used SOFTSUSY v3.0.7 [46] to extract the physical mass spectrum from our messenger-scale parameters. These are presented in table II.¹⁹

¹⁸ SQCD does not discriminate between squark chiralities, but the diagrams with off-shell squarks are weighted by the couplings into the final electroweak gauginos. This is to be contrasted with gluino decays into on-shell squarks, where both chiralities are produced equally, and the couplings to winos and binos simply determine the squark decay rates. This latter observation also applies to the q̃GGM spectrum for events with gluinos.

¹⁹ Note that while the spectrum g̃GGM is perfectly viable, the spectrum q̃GGM has a 112 GeV Higgs, which is excluded in the Higgs-sector decoupling limit. We use this spectrum for illustration purposes only.
Uplifting the Higgs to the experimentally allowed range is not difficult [33], and practically will not change our conclusions (but note that other particles in the Higgs sector might be modified).

B. Generation and Reconstruction of Events

We utilized several programs in order to generate and reconstruct our signal events. These include:

• BRIDGE v2.15 [41] for calculating branching ratios of SUSY particles and simulating decay chains.

• PYTHIA v6.4.14 [48] interface for MadGraph to shower and hadronize the events.

• FastJet v2.3.4 [49] for jet reconstruction of the particle-level events.

After hadronization, event reconstruction proceeds as follows. We separate out leptons (electrons and muons) with p_T above 5 GeV and |η| < 2.5, and check them for isolation. We scalar-sum the p_T of the lepton with the p_T's of all other non-leptonic (and non-invisible) particles within an η-φ cone of size 0.4. If the lepton constitutes 90% or more of the total p_T, then we consider it "tight." Failing this, if the p_T of the other particles tallies to less than 10 GeV, we consider it "loose." (This second class of leptons will be used to keep more signal in trileptonic events.) We set aside leptons which fail both of these criteria for clustering into jets.

In the spectra discussed above, nearly 90% of leptons produced in gaugino decay will be identified as tight, and most of the rest as loose. The leptons produced in the three-body slepton decays will be softer, and therefore more difficult to identify and isolate. In particular, we lose up to 30% of them due to our 5 GeV p_T threshold. (See Fig. 7 for the complete p_T distributions. Note the large tails due to the boost of the sleptons.) The efficiency for detecting the leptons that pass the threshold depends on the amount of jet activity in the event, varying from almost unity for squark pair production down to about 50% for gluino pair production. O(50%) of these leptons are identified in the loose category.
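The isolation classification just described (a 0.4 cone, a 90% tight fraction, and a 10 GeV loose allowance) can be sketched as follows. The (pT, eta, phi) tuple layout and function names are our own illustration, not the analysis code used for the paper.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in the eta-phi plane, with the phi difference wrapped to [0, pi]."""
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def classify_lepton(lep, hadrons, cone=0.4, tight_frac=0.90, loose_max_pt=10.0):
    """Classify a (pt, eta, phi) lepton as 'tight', 'loose', or 'jet'.

    'hadrons' are the other non-leptonic, non-invisible particles in the event;
    leptons failing both isolation criteria are set aside for jet clustering.
    """
    pt, eta, phi = lep
    if pt < 5.0 or abs(eta) > 2.5:
        return "rejected"  # below the 5 GeV threshold or outside |eta| < 2.5
    # Scalar-sum the pT of everything else inside the cone around the lepton.
    extra = sum(h[0] for h in hadrons if delta_r(eta, phi, h[1], h[2]) < cone)
    if pt / (pt + extra) >= tight_frac:
        return "tight"   # lepton carries >= 90% of the summed pT
    if extra < loose_max_pt:
        return "loose"   # less than 10 GeV of other activity in the cone
    return "jet"
```

For example, a 20 GeV lepton with 5 GeV of hadronic activity in its cone carries only 80% of the summed pT, so it fails the tight criterion but passes the loose one.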
After identifying the set of isolated leptons, we proceed to cluster all of the remaining non-invisible particles in the event into jets using the Cambridge/Aachen algorithm with R = 0.4. We keep jets with p_T > 20 GeV and |η| < 2.5.

Since we will focus on heavy SQCD production events, we apply cuts on jet activity and missing energy. We demand at least two jets above 300 GeV of p_T, and missing E_T of at least 200 GeV.^20 We do not consider events with zero leptons, though these would of course be interesting to investigate in a more complete analysis. We consider events with one lepton if it is tight, and if the transverse mass of the lepton and missing energy vector is above 100 GeV, to veto W backgrounds. An event with more than one lepton must have at least two tight leptons, or else we discard it. In particular, trilepton events may have a single loose lepton. This keeps O(50%) more of the signal in that channel, while the backgrounds remain small. We also neglect events with any OSSF dilepton pairs between 80 GeV and 100 GeV, in order to avoid incorporating Zs (either from backgrounds or produced in SUSY cascades) into our analysis.

We present leading-order cross sections for the qGGM and gGGM spectra, before and after reconstruction and cuts, in appendix B.

C. Backgrounds

We simulate the leading backgrounds from SM processes that generate hard leptons and neutrinos through electroweak boson decays. These include:

• τ+τ− jj. ΔR_jj > 0.4, p_T,j > 150 GeV. (500k events)

We decay the tt and WWjj samples with BRIDGE both in semileptonic and in dileptonic channels, including taus. The all-hadronic mode was found to give negligible contribution given our cuts above. There will also be contributions from (l+l−)+jets (in addition to the tau mode above), (l+l−)W+jets, and (l+l−)(l+l−)+jets. None of these backgrounds are expected to be significant, and they are indeed found to be subdominant in other SUSY analyses (see, e.g., [50]).
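The kinematic selection described above (two jets above 300 GeV, missing E_T above 200 GeV, the 100 GeV transverse-mass cut for single tight leptons, and the Z-window veto on OSSF pairs) can be sketched as a single filter. The dict-based event layout is our own toy representation, not the paper's code.

```python
import math

def p4(pt, eta, phi):
    """Massless four-vector (E, px, py, pz) from detector coordinates."""
    return (pt * math.cosh(eta), pt * math.cos(phi),
            pt * math.sin(phi), pt * math.sinh(eta))

def inv_mass(l1, l2):
    """Invariant mass of two (massless) leptons."""
    a, b = p4(l1["pt"], l1["eta"], l1["phi"]), p4(l2["pt"], l2["eta"], l2["phi"])
    e, px, py, pz = a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """mT of the lepton plus missing-energy system."""
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(lep_phi - met_phi)))

def select_channel(jets, leptons, met, met_phi):
    """Return the analysis channel ('1l', '2l', '3l', ...) or None if the event fails.

    jets: jet pT values; leptons: dicts with pt, eta, phi, charge, flavor,
    and quality ('tight' or 'loose'). All conventions here are illustrative.
    """
    # Jet-activity and missing-energy cuts.
    if sum(1 for j in jets if j > 300.0) < 2 or met < 200.0:
        return None
    if len(leptons) == 0:
        return None  # zero-lepton events are not considered here
    if len(leptons) == 1:
        l = leptons[0]
        # A single lepton must be tight, with mT > 100 GeV to veto W backgrounds.
        if l["quality"] == "tight" and transverse_mass(l["pt"], l["phi"], met, met_phi) > 100.0:
            return "1l"
        return None
    if sum(1 for l in leptons if l["quality"] == "tight") < 2:
        return None  # multi-lepton events need at least two tight leptons
    # Veto any opposite-sign same-flavor pair in the Z window (80-100 GeV).
    for i in range(len(leptons)):
        for j in range(i + 1, len(leptons)):
            a, b = leptons[i], leptons[j]
            if a["charge"] * b["charge"] < 0 and a["flavor"] == b["flavor"]:
                if 80.0 < inv_mass(a, b) < 100.0:
                    return None
    return f"{len(leptons)}l"
```

An OSSF pair whose invariant mass lands in the 80-100 GeV window causes the whole event to be discarded, while an opposite-flavor pair at the same mass is kept.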
We have explicitly checked (l+l−)W+jets, which should be the dominant electroweak background in the trilepton channel, in particular (τ+τ−)W+jets in light of our transverse mass cut. We have verified that it is small, with a total cross section in the trileptonic channel, after cuts, of 0.04 fb. We do not explicitly include it in the analysis.

The monoleptonic channel may be polluted by generic QCD events with heavy flavor decays. However, our cuts on the tightness of the lepton and on missing E_T will significantly reduce these. We do not investigate this background here, but note that generic QCD has been shown to be negligible in other monolepton SUSY analyses (see again [50]). These usually use a higher p_T cut on the lepton, but we do not find that this has a significant effect on our results below.

There is an additional subtlety with the monoleptonic channel. Without additional activity from heavy flavor, events with a single leptonic W decay will almost never pass our 100 GeV cut on transverse mass (section V B). In a more realistic simulation, most of the passing events would probably come from the resolution tail of the detector. However, we do not incorporate any resolution smearing in our nominal analysis. To get some sense of how large of an effect this might be, we smeared the transverse mass by 20% and checked how many events pass the cut. The signal is only marginally affected, but the surviving semileptonic tt+jets, Wjj, and semileptonic WWjj backgrounds increase by factors of roughly 3, 5, and 10, respectively. These are still quite small with respect to the signals. (They can also be significantly attenuated with modest additional cost to the signal by simply using a higher transverse mass cut.)

Leading-order cross sections for the backgrounds, before and after reconstruction and cuts, can be found in appendix B.
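The 20% resolution check described above amounts to smearing each event's transverse mass with a Gaussian and re-applying the cut. The sketch below, with purely illustrative mT populations (not the simulated samples of the paper), shows the qualitative effect: the bounded W background leaks past the cut only through the smearing tail, while signal well above the cut is only marginally affected.

```python
import random

def smeared_pass_fraction(mt_values, cut=100.0, resolution=0.20, seed=1):
    """Fraction of events whose Gaussian-smeared mT exceeds the cut."""
    rng = random.Random(seed)
    passed = sum(1 for mt in mt_values
                 if mt * (1.0 + resolution * rng.gauss(0.0, 1.0)) > cut)
    return passed / len(mt_values)

# Leptonic W decays have mT bounded near m_W, so with perfect resolution
# nothing passes the 100 GeV cut; a 20% smearing lets a tail leak through.
background = [80.0] * 2000   # illustrative: mT endpoint region of W -> l nu
signal = [150.0] * 2000      # illustrative: signal events well above the cut
frac_bkg = smeared_pass_fraction(background)
frac_sig = smeared_pass_fraction(signal)
```

With zero resolution the background pass fraction is exactly zero; with 20% smearing a tail of order 10% appears, while the signal retains the large majority of its events.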
D. Analysis of Events

Here we present the results of the analyses of section IV, as applied to our two sample spectra, qGGM and gGGM, incorporating backgrounds. Events are weighted to correspond to 100 fb⁻¹ of integrated luminosity. While pure counting in multiple channels will already indicate the presence of SUSY (and even suggest the presence of a sneutrino NLSP) long before this amount of luminosity is acquired, we choose to present this longer-term goal so that statistics are good enough to clearly see the shapes in all of our proposed kinematic distributions. Indeed, if lepton fakes and missing energy are well-understood early on, a clear excess of events in the monoleptonic channel may already be visible with as little as a few 100 pb⁻¹ of data, even at the planned initial operating energy of 10 TeV. Subsequent progress will depend somewhat on how soon 14 TeV becomes available, with 3-4 times higher signal cross sections.^21 In any case, relatively background-free samples of same-sign dilepton and trilepton events would start to become available with a few fb⁻¹. Opposite-sign dileptons should also emerge with statistical significance over backgrounds around this point. The presence of a near-flavor-universal excess of OS over SS could probably be inferred by a few 10's of fb⁻¹, with shape information becoming progressively better above that. Of course, spectra with lighter colored superpartners would have higher rates, and might require less running to achieve the same level of statistics.

Fig. 9 shows a more refined view of the individual channels in the dileptonic events. We see that the ratio of monoleptonic:dileptonic is of order 3:1, which is close to what we expect from the naive counting presented in section IV. Backgrounds, dominated by tt+jets,^22 are clearly not obscuring the qGGM signal. They are more important for gGGM, though the dilepton mass shape is quite different.
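The two purification techniques used in this analysis, the OS-minus-SS sign subtraction and the trilepton pairing rule of subsection IV C, can be sketched together as follows. The tuple-based event layout is our own toy representation, not the paper's code.

```python
from collections import Counter

def sign_subtracted_hist(dileptons, bin_width=10.0):
    """OS-minus-SS dilepton invariant-mass histogram (net counts per bin).

    dileptons: iterable of (mass_GeV, charge1, charge2). Uncorrelated
    pairings populate OS and SS equally and cancel on average; only the
    sign-correlated OS excess from the decay chains survives.
    """
    hist = Counter()
    for mass, q1, q2 in dileptons:
        hist[int(mass // bin_width)] += 1 if q1 * q2 < 0 else -1
    return dict(hist)

def trilepton_pair(leptons):
    """Pairing rule for trilepton events.

    leptons: three (pt, charge) tuples in any order. Keep the event only if
    the softest lepton (usually the one from the W* in the slepton decay)
    has a unique opposite-sign partner among the two hardest leptons;
    return that (hard, soft) pair, otherwise None.
    """
    hard1, hard2, soft = sorted(leptons, key=lambda l: l[0], reverse=True)
    partners = [h for h in (hard1, hard2) if h[1] * soft[1] < 0]
    return (partners[0], soft) if len(partners) == 1 else None

# Toy input: a correlated OS excess at 35 GeV on top of an uncorrelated
# population that splits evenly between OS and SS at 60 GeV.
pairs = [(35.0, +1, -1)] * 100 + [(60.0, +1, -1)] * 50 + [(60.0, +1, +1)] * 50
excess = sign_subtracted_hist(pairs)   # only the 35 GeV bin survives
```

In the subtraction, the uncorrelated 60 GeV population cancels bin by bin, leaving only the correlated 35 GeV excess; in the pairing rule, events where both hard leptons are opposite in sign to the soft one are discarded to suppress combinatorics.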
The background can of course be further reduced with more aggressive or more tailored cuts, but we do not investigate this explicitly.

In Fig. 10, we show the dilepton invariant mass spectrum in the OS and SS channels. Fig. 11 displays the OS minus SS subtraction, which picks out the OS excess. In both plots, the Z mass window 80 GeV < m_ll < 100 GeV is blinded. Note that we could avoid this if we separately considered opposite-flavor channels, which are free of Z contamination. In the qGGM spectrum we expect to see two bumps of comparable size, corresponding to wino and bino, sitting one on top of the other. Using equation (13), we predict peaks at 31 and 40 GeV. This is quite consistent with what we see on the sign-subtracted invariant mass plot. In the gGGM spectrum, most of the decays proceed through the wino, rather than the bino. Hence we expect to see only one bump, around 50 GeV, and this is indeed what is observed.

We turn now to the dileptonic invariant mass in the trileptonic channel, forming lepton pairs according to the procedure discussed in subsection IV C. We show this distribution in Fig. 12. This channel essentially lacks any backgrounds after the cuts we impose, and reproduces the same features as the dileptonic sign-subtracted invariant mass.

Finally, we analyze the leading-jet/leading-lepton invariant mass in Fig. 13. Since in the gGGM spectrum gluinos decay through off-shell squarks, we do not see any clear feature. However, in the qGGM spectrum we see an edge in the distribution, corresponding to the cascade decay q̃ → χ̃ → l̃/ν̃. The theoretically predicted endpoint is around 625 GeV, and is easily visible. (The histograms are stacked.) Note that OSSF is slightly reduced due to the Z veto.

VI. CONCLUSIONS AND OUTLOOK

In this paper we began investigating the collider signals of the largely unexplored region of MSSM parameter space with NLSP sneutrinos. We showed that a large portion of this region has distinctive collider signatures at the LHC.
In particular, we focused on strong production modes for spectra with O(TeV) colored superpartners, approximate flavor degeneracy in the LH sleptons, and RH sleptons mostly bypassed in the decay chains. We found that these spectra lead to interesting multilepton signals. A large fraction of SUSY events are monoleptonic or dileptonic. The dileptons are mostly characterized by a broad distribution with no sign or flavor correlations, but they are accompanied by a sizable excess of sign-correlated dileptons. Unlike many SUSY dilepton signals, this excess is completely flavor-universal. It has a unique shape which contains information about the mass splittings within the spectrum. In addition, the trilepton channel has an appreciable rate. If analyzed carefully, it can provide strong confirmation of the physics inferred from the dileptons. All together, these leptonic signatures are quite difficult to fake within alternative spectra in the MSSM, and their observation should be taken as highly suggestive evidence for a sneutrino NLSP.

We also proposed specific ways to analyze these signals. In particular, we used simulations of two representative spectra to demonstrate that one can extract the signal in the dileptonic channel using sign subtraction. The signature in the trileptonic channel can also be purified, and combinatorial background significantly reduced, by choosing events with a unique opposite-sign pairing between the softest lepton and one of the two hardest leptons. This technique works because the softest lepton is usually the one emitted from the W* in the decay of the slepton down to the sneutrino.

Of course, we could not cover all possible MSSM spectra with ν̃-NLSP. In particular, we did not analyze spectra where the RH slepton plays an active role in the decay chains.
Though these certainly share some common features with the spectra analyzed here, their collider signatures will be much more "leptogenic," producing up to three or four leptons from a single decay chain. Indeed, spectra with active RH sleptons can be treated as close relatives of the spectra studied in [51, 52], with the roles of the LH and RH sleptons interchanged. We will study spectra with RH sleptons in detail in a forthcoming paper [9].

In this paper, we tried to address only a very broad question, namely whether we can identify spectra with ν̃-NLSP utilizing some well-defined collider signals. Taking our results as evidence that this is possible (at least in a large portion of the allowed parameter space), we face further interesting questions. For example, if these signals are actually discovered, then are there any additional clues that tell us whether we are observing a high-scale or low-scale mediation scenario? One way to answer this question might be to study flavor non-degeneracy in the sleptons and sneutrinos. We have avoided explicit analysis of τs, but dedicated study of their production in SUSY events could provide useful information. It would be very interesting to see how feasible such a study might be at the LHC.

It would also be interesting to extend these studies to more remote parts of the parameter space. For example, we might consider spectra where a neutralino resides between the slepton and sneutrino. It is also very important to understand the current experimental bounds on all these scenarios from LEP and the Tevatron. To the best of our knowledge, these studies have not been performed yet.

In this appendix, we summarize leading-order production cross sections for our signals and backgrounds in section V. The following table shows the cross sections (in fb) for the different super-QCD pair production modes for our two sample spectra.
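Throughout, the quoted cross sections convert to expected event counts by multiplying by the integrated luminosity. A one-line helper (ours, purely for bookkeeping) makes explicit the connection used below between a 0.01 fb cross section and roughly one event in a 100 fb⁻¹ run.

```python
def expected_events(sigma_fb, lumi_fb_inv=100.0):
    """Expected yield = cross section [fb] x integrated luminosity [fb^-1]."""
    return sigma_fb * lumi_fb_inv

# A cross section of about 0.01 fb corresponds to roughly one event in a
# 100 fb^-1 run, which is why a "0" table entry is read as a limit of <~ 1 event.
```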
The last table lists our major backgrounds, including cross sections at generator level and in the various multi-lepton channels after reconstruction. The matched tt+jets sample is inclusive, while the rest of the backgrounds were produced with generator-level cuts, as described in subsection V C. When we observed no events after cuts, we placed a "0" in the entry, but this is roughly to be understood as a limit of ≲1 event after a 100 fb⁻¹ run (i.e., cross section less than about 0.01 fb).

In spite of the fact that this definition covers a very broad class of models with potentially very different collider signatures, all of these models possess some common features, like maintaining the sum rules Tr[Y m²] = Tr[(B − L) m²] = 0 for the sfermion soft masses (at the messenger scale), as well as parametrically suppressed A-terms.

FIG. 1: Possible decay modes of the LH slepton. While the first diagram produces low-multiplicity jets, the second produces a relatively soft lepton. This lepton can be visible above backgrounds, if accompanied by other hard activity.

FIG. 2: Possible decay modes of gauginos, with W* decaying leptonically. Notice the sign and flavor flow on the last diagram, with two charged leptons in a single decay chain. While the leptons' signs are correlated, their flavors are not.

FIG. 3: The diagram responsible for the χ̃⁰₂ decay down to χ̃⁰₁ via an intermediate (possibly off-shell) slepton, which occurs in more conventional spectra. The leptons are correlated in both charge and flavor, falling into the OSSF category.

FIG. 4: Schematic illustration of the standard SUSY dilepton flavor subtraction. The left panel shows the OSSF (magenta, solid) and OSOF (blue, dot-dash) dilepton invariant mass distributions.

FIG. 5: Schematic illustration of our dilepton sign subtraction. The left panel shows the OS (magenta, solid) and SS (blue, dot-dash) dilepton invariant mass distributions.
To extract the correlated contribution to OS, we subtract the SS shape. The right panel shows the result of the subtraction.

FIG. 6: The OS dilepton invariant mass distribution from the sequence χ̃⁰ → l⁻_i l̃⁺_i → l⁻_i (l⁺_j ν_j ν̃*_i), with spectrum m_χ̃⁰ = 362 GeV, m_l̃ = 232 GeV, m_ν̃ = 218 GeV. The black line shows the distribution from flat phase space. The red line shows the distribution incorporating matrix elements.

TABLE I: Input parameters for the sample spectra in units of GeV (GeV² for the masses squared).

• MadGraph/MadEvent v4.3.0 [47] for generation of generic 2 → 2 SUSY pair production in 14 TeV pp collisions. (100k events for each spectrum)

FIG. 7: The parton-level p_T spectrum of leptons produced in three-body slepton decay, for the qGGM spectrum (left panel) and gGGM spectrum (right panel). In the slepton rest frame, the lepton energy ranges from 0 to 14 GeV, with an average of about 7 GeV.

• tt+jets. Matched using k_T-MLM at 20 GeV, up to two additional jets. (6M events, 1.9M after matching veto)

• WWjj (opposite- and same-sign). In MadGraph, we use cuts ΔR_jj > 0.4, p_T,j > 150 GeV. (70k events)

• lνjj via on-shell W (including τs). ΔR_jj > 0.4, p_T,j > 150 GeV, E_T > 150 GeV. (500k events)

Fig. 8 shows the lepton counting at 100 fb⁻¹, 14 TeV, after application of our analysis cuts.

FIG. 8: Counts for number of observed leptons for the qGGM spectrum (left panel) and gGGM spectrum (right panel). (The histograms are stacked.)

FIG. 9: Dilepton channels for the qGGM spectrum (left panel) and gGGM spectrum (right panel).

FIG. 10: Dilepton invariant mass distribution for OS and SS categories, for the qGGM spectrum (left panel) and gGGM spectrum (right panel). The Z mass window has been blinded. (Backgrounds are almost purely OS.)

FIG. 11: Dilepton invariant mass distribution, applying the OS minus SS subtraction (including backgrounds), for the qGGM spectrum (left panel) and gGGM spectrum (right panel). The Z mass window has been blinded.
Error bars are representative of 100 fb⁻¹ statistics.

FIG. 12: Trilepton invariant mass distribution, obtained using the technique described in subsection IV C, for the qGGM spectrum (left panel) and gGGM spectrum (right panel).

FIG. 13: Invariant mass of leading lepton and leading jet for the qGGM spectrum (left panel) and gGGM spectrum (right panel). (The histograms are stacked.)

TABLE II: Physical masses (in units of GeV) in the example spectra.

Appendix B: Cross sections

spectrum   g̃g̃    g̃q̃    q̃q̃*   q̃q̃
gGGM       650    310    8      36
qGGM       10     240    250    440

The next table lists the inclusive super-QCD pair production cross sections (in fb) for our sample spectra, broken down into multilepton channels, before and after reconstruction and cuts.

        qGGM partonic   qGGM reco   gGGM partonic   gGGM reco
total   950             310         1000            148
0l      450             200         350             85
1l      350             80          450             48
2l      120             26          190             15
3l      19              3.6         27              1.4
4l      1.9             0.3         1.7             0.04

              dilep tt+jets   semilep tt+jets   dilep WWjj   semilep WWjj   Wjj → (l/τ)νjj   τ+τ− jj
generated     6.3×10⁴         2.5×10⁵           104          418            4000             4400
1l reco       2.1             0.40              0.60         ∼0.02          0.31             0
2l (OS) reco  1.7             0                 0.88         0              0                0.75
2l (SS) reco  0               0                 0.09         0              0                0
3l reco       ∼0.03           0                 0            0              0                ∼0.008

Our analysis will also trivially extend to those cases with R-parity violation where the sneutrino decays outside the detector. In spite of the fact that the gravitino is not a guaranteed LSP candidate in these scenarios, it is a logical possibility which we further consider.

Because the intermediate particle in the decay chain is now spin-1/2, there will be spin effects modifying the distributions from flat phase space. However, these average out when we sum over quark and lepton charges [42].

It was also suggested in [40] that a kind of subtraction, analogous to what is done for dilepton distributions, might be possible in order to remove the shape contribution from uncorrelated jet-lepton pairings.

^20 These cuts have not been optimized. A more complete analysis may achieve better signal vs background discrimination. For example, we might demand larger jet multiplicities but with a looser p_T cut. Such a cut could be more inclusive of gluinos decaying through off-shell squarks.

^21 This factor applies both before and after our cuts on the signal. We have not explicitly investigated the backgrounds at 10 TeV, but do not expect them to become significantly more important relative to the signal.

^22 We parenthetically note that in most of the events passing cuts, one or both of the two leading jets typically do not come from the top's decay.

Acknowledgments

We are grateful to Kaustubh Agashe, Zacharia Chacko, Sarah Eno, Beate Heinemann,

Appendix A: Analytic approximations to opposite-sign dilepton mass distributions

The dilepton invariant mass spectrum from the chain χ̃⁰ → l⁻_i l̃⁺_i → l⁻_i (l⁺_j ν_j ν̃*_i) asymptotes to a polynomial in the limit where the sneutrino goes nonrelativistic in the slepton's rest frame (or equivalently, when m_l̃ − m_ν̃ ≪ m_l̃). Assuming a constant matrix element, the normalized distribution takes the form [...], where m_max is as in equation (13). This distribution peaks at √((1 + √28)/27) m_max ≃ (0.48) m_max.

[1] T. Falk, K. A. Olive, and M. Srednicki, Heavy Sneutrinos as Dark Matter, Phys. Lett. B339, 248 (1994), hep-ph/9409270.
[2] C. Arina and N. Fornengo, Sneutrino cold dark matter, a new analysis: Relic abundance and detection rates, JHEP 11, 029 (2007), 0709.4477.
[3] N. Arkani-Hamed, A. Delgado, and G. F. Giudice, The well-tempered neutralino, Nucl. Phys. B741, 108 (2006), hep-ph/0601041.
[4] M. Dine and W. Fischler, A Phenomenological Model of Particle Physics Based on Supersymmetry, Phys. Lett. B110, 227 (1982).
[5] M. Dine and W. Fischler, A Supersymmetric GUT, Nucl. Phys. B204, 346 (1982).
[6] L. Alvarez-Gaume, M. Claudson, and M. B. Wise, Low-Energy Supersymmetry, Nucl. Phys. B207, 96 (1982).
[7] M. Dine, A. E. Nelson, and Y. Shirman, Low-energy dynamical supersymmetry breaking simplified, Phys. Rev. D51, 1362 (1995), hep-ph/9408384.
[8] M. Dine, A. E. Nelson, Y. Nir, and Y. Shirman, New tools for low-energy dynamical supersymmetry breaking, Phys. Rev. D53, 2658 (1996), hep-ph/9507378.
[9] A. Katz and B. Tweedie, to appear (2009).
[10] P. Meade, N. Seiberg, and D. Shih, General Gauge Mediation, Prog. Theor. Phys. Suppl. 177, 143 (2009), 0801.3278.
[11] C. Csaki, A. Falkowski, Y. Nomura, and T. Volansky, New Approach to the µ-Bµ Problem of Gauge-Mediated Supersymmetry Breaking, Phys. Rev. Lett. 102, 111801 (2009), 0809.4492.
[12] Z. Komargodski and N. Seiberg, µ and General Gauge Mediation, JHEP 03, 072 (2009), 0812.3900.
[13] P. Meade, M. Reece, and D. Shih, Prompt Decays of General Neutralino NLSPs at the Tevatron (2009), 0911.4130.
[14] G. D. Kribs, A. Martin, and T. S. Roy, Supersymmetry with a Chargino NLSP and Gravitino LSP, JHEP 01, 023 (2009), 0807.4936.
[15] L. M. Carpenter, Surveying the Phenomenology of General Gauge Mediation (2008), 0812.2051.
[16] A. Rajaraman, Y. Shirman, J. Smidt, and F. Yu, Parameter Space of General Gauge Mediation, Phys. Lett. B678, 367 (2009), 0903.0668.
[17] L. M. Carpenter, M. Dine, G. Festuccia, and J. D. Mason, Implementing General Gauge Mediation, Phys. Rev. D79, 035002 (2009), 0805.2944.
[18] M. Buican, P. Meade, N. Seiberg, and D. Shih, Exploring General Gauge Mediation, JHEP 03, 016 (2009), 0812.3668.
[19] D. E. Kaplan, G. D. Kribs, and M. Schmaltz, Supersymmetry breaking through transparent extra dimensions, Phys. Rev. D62, 035010 (2000), hep-ph/9911293.
[20] Z. Chacko, M. A. Luty, A. E. Nelson, and E. Ponton, Gaugino mediated supersymmetry breaking, JHEP 01, 003 (2000), hep-ph/9911323.
[21] J. R. Ellis, T. Falk, K. A. Olive, and Y. Santoso, Exploration of the MSSM with Non-Universal Higgs Masses, Nucl. Phys. B652, 259 (2003), hep-ph/0210205.
[22] W. Buchmuller, J. Kersten, and K. Schmidt-Hoberg, Squarks and sleptons between branes and bulk, JHEP 02, 069 (2006), hep-ph/0512152.
[23] J. L. Evans, D. E. Morrissey, and J. D. Wells, Higgs boson exempt no-scale supersymmetry and its collider and cosmology implications, Phys. Rev. D75, 055017 (2007), hep-ph/0611185.
[24] L. Covi and S. Kraml, Collider signatures of gravitino dark matter with a sneutrino NLSP, JHEP 08, 015 (2007), hep-ph/0703130.
[25] J. R. Ellis, K. A. Olive, and Y. Santoso, Sneutrino NLSP Scenarios in the NUHM with Gravitino Dark Matter, JHEP 10, 005 (2008), 0807.3736.
[26] A. D. Medina, N. R. Shah, and C. E. M. Wagner, A Heavy Higgs and a Light Sneutrino NLSP in the MSSM with Enhanced SU(2) D-terms, Phys. Rev. D80, 015001 (2009), 0904.1625.
[27] Y. Santoso, Signatures of Sneutrino NLSP in Gravitino Dark Matter Scenario at the LHC (2009), 0909.4742.
[28] J. L. Feng, A. Rajaraman, and B. T. Smith, Minimal supergravity with m²_0 < 0, Phys. Rev.
[29] K. Kadota and J. Shao, Enhanced Tau Lepton Signatures at LHC in Constrained Supersymmetric Seesaw, Phys. Rev. D80, 115004 (2009), 0910.5517.
[30] M. Bolz, A. Brandenburg, and W. Buchmuller, Thermal Production of Gravitinos, Nucl. Phys. B606, 518 (2001), hep-ph/0012052.
[31] J. L. Feng, A. Rajaraman, and F. Takayama, Superweakly-interacting massive particles, Phys. Rev. Lett. 91, 011302 (2003), hep-ph/0302215.
[32] T. Kanzaki, M. Kawasaki, K. Kohri, and T. Moroi, Cosmological Constraints on Gravitino LSP Scenario with Sneutrino NLSP, Phys. Rev. D75, 025011 (2007), hep-ph/0609246.
[33] M. Dine, N. Seiberg, and S. Thomas, Higgs Physics as a Window Beyond the MSSM (BMSSM), Phys. Rev. D76, 095004 (2007), 0707.0005.
[34] Y. Nomura, D. Poland, and B. Tweedie, µB-driven electroweak symmetry breaking, Phys. Lett. B633, 573 (2006), hep-ph/0509244.
[35] Y. Nomura, D. Poland, and B. Tweedie, Minimally fine-tuned supersymmetric standard models with intermediate-scale supersymmetry breaking, Nucl. Phys. B745, 29 (2006), hep-ph/0509243.
[36] A. Brignole, J. A. Casas, J. R. Espinosa, and I. Navarro, Low-scale supersymmetry breaking: Effective description, electroweak breaking and phenomenology, Nucl. Phys. B666, 105 (2003), hep-ph/0301121.
[37] J. A. Casas, J. R. Espinosa, and I. Hidalgo, The MSSM fine tuning problem: A Way out, JHEP 01, 008 (2004), hep-ph/0310137.
[38] A. Kumar, D. Tucker-Smith, and N. Weiner, Neutrino Mass, Sneutrino Dark Matter and Signals of Lepton Flavor Violation in the MRSSM (2009), 0910.2475.
[39] G. D. Kribs, E. Poppitz, and N. Weiner, Flavor in supersymmetry with an extended R-symmetry, Phys. Rev. D78, 055010 (2008), 0712.2039.
[40] Z. Thomas, D. Tucker-Smith, and N. Weiner, Mixed Sneutrinos, Dark Matter and the LHC, Phys. Rev. D77, 115015 (2008), 0712.4146.
[41] P. Meade and M. Reece, BRIDGE: Branching ratio inquiry / decay generated events (2007), hep-ph/0703031.
[42] D. J. Miller, P. Osland, and A. R. Raklev, Invariant mass distributions in cascade decays, JHEP 03, 034 (2006), hep-ph/0510356.
[43] J. L. Feng, C. G. Lester, Y. Nir, and Y. Shadmi, The Standard Model and Supersymmetric Flavor Puzzles at the Large Hadron Collider, Phys. Rev. D77, 076002 (2008), 0712.0674.
[44] Y. Nomura, M. Papucci, and D. Stolarski, Flavorful Supersymmetry, Phys. Rev. D77, 075006 (2008), 0712.2074.
[45] Y. Nir, Probing new physics with flavor physics (and probing flavor physics with new physics) (2007), 0708.1872.
[46] B. C. Allanach, SOFTSUSY: A C++ program for calculating supersymmetric spectra, Comput. Phys. Commun. 143, 305 (2002), hep-ph/0104145.
[47] J. Alwall et al., MadGraph/MadEvent v4: The New Web Generation, JHEP 09, 028 (2007), 0706.2334.
[48] T. Sjostrand, S. Mrenna, and P. Skands, PYTHIA 6.4 Physics and Manual, JHEP 05, 026 (2006), hep-ph/0603175.
[49] M. Cacciari and G. P. Salam, Dispelling the N³ myth for the k_t jet-finder, Phys. Lett. B641, 57 (2006), hep-ph/0512210.
[50] G. Aad et al. (The ATLAS), Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics (2009), 0901.0512.
[51] A. De Simone, J. Fan, M. Schmaltz, and W. Skiba, Low-scale gaugino mediation, lots of leptons at the LHC, Phys. Rev. D78, 095010 (2008), 0808.2052.
[52] A. De Simone, J. Fan, V. Sanz, and W. Skiba, Leptogenic Supersymmetry, Phys. Rev. D80, 035010 (2009), 0903.5305.
[]
[ "Search for galactic axions with a high-Q dielectric cavity" ]
[ "D Alesini \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "D Babusci \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "C Braggio \nINFN\nSezione di Padova\nPadovaItaly\n\nDipartimento di Fisica e Astronomia\nPadovaItaly\n", "G Carugno \nINFN\nSezione di Padova\nPadovaItaly\n\nDipartimento di Fisica e Astronomia\nPadovaItaly\n", "N Crescini \nDipartimento di Fisica e Astronomia\nPadovaItaly\n\nINFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly\n", "D D'Agostino \nDipartimento di Fisica E.R. Caianiello\nFisciano, SalernoItaly\n\nINFN\nSezione di Napoli\nNapoliItaly\n", "A D'Elia \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "D Di Gioacchino \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "R Di Vora \nINFN\nSezione di Padova\nPadovaItaly\n\nDipartimento di Scienze Fisiche\ndella Terra e dell'Ambiente\nUniversità di Siena\n53100SienaItaly\n", "P Falferi \nIstituto di Fotonica e Nanotecnologie\nCNR Fondazione Bruno Kessler\nI-38123Povo, TrentoItaly\n\nINFN\nTIFPA\nPovo, TrentoItaly\n", "U Gambardella \nDipartimento di Fisica E.R. Caianiello\nFisciano, SalernoItaly\n\nINFN\nSezione di Napoli\nNapoliItaly\n", "C Gatti \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "G Iannone \nDipartimento di Fisica E.R. Caianiello\nFisciano, SalernoItaly\n\nINFN\nSezione di Napoli\nNapoliItaly\n", "C Ligi \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "A Lombardi \nINFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly\n", "G Maccarrone \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "A Ortolan \nINFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly\n", "R Pengo \nINFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly\n", "A Rettaroli \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n", "G Ruoso \nINFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly\n", "L Taffarello \nINFN\nSezione di Padova\nPadovaItaly\n", "S Tocci \nINFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly\n" ]
[ "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nSezione di Padova\nPadovaItaly", "Dipartimento di Fisica e Astronomia\nPadovaItaly", "INFN\nSezione di Padova\nPadovaItaly", "Dipartimento di Fisica e Astronomia\nPadovaItaly", "Dipartimento di Fisica e Astronomia\nPadovaItaly", "INFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly", "Dipartimento di Fisica E.R. Caianiello\nFisciano, SalernoItaly", "INFN\nSezione di Napoli\nNapoliItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nSezione di Padova\nPadovaItaly", "Dipartimento di Scienze Fisiche\ndella Terra e dell'Ambiente\nUniversità di Siena\n53100SienaItaly", "Istituto di Fotonica e Nanotecnologie\nCNR Fondazione Bruno Kessler\nI-38123Povo, TrentoItaly", "INFN\nTIFPA\nPovo, TrentoItaly", "Dipartimento di Fisica E.R. Caianiello\nFisciano, SalernoItaly", "INFN\nSezione di Napoli\nNapoliItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "Dipartimento di Fisica E.R. Caianiello\nFisciano, SalernoItaly", "INFN\nSezione di Napoli\nNapoliItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly", "INFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly", "INFN\nLaboratori Nazionali di Legnaro\nLegnaro, PadovaItaly", "INFN\nSezione di Padova\nPadovaItaly", "INFN\nLaboratori Nazionali di Frascati\nFrascati, RomaItaly" ]
[]
A haloscope of the QUAX-aγ experiment, composed of a high-Q resonant cavity immersed in an 8 T magnet and cooled to ∼ 4.5 K, is operated to search for galactic axions with mass ma ≈ 42.8 µeV. The design of the cavity, with hollow dielectric cylinders concentrically inserted in an OFHC Cu cavity, allowed us to maintain a loaded quality factor Q ∼ 300000 during the measurements in the presence of the magnetic field. Through the cavity tuning mechanism it was possible to modulate the resonance frequency of the haloscope in the region 10.35337 − 10.35345 GHz and thus acquire different datasets at different resonance frequencies. Acquiring each dataset for about 50 minutes, combining them, and correcting for the axion-signal estimation efficiency, we set a limit on the axion-photon coupling gaγγ < 0.731 × 10⁻¹³ GeV⁻¹ with the confidence level set at 90%.
10.1103/physrevd.106.052007
[ "https://export.arxiv.org/pdf/2208.12670v1.pdf" ]
251,881,687
2208.12670
75dc4d36a0cf6c936bb1207a15f78065f9cba984
Search for galactic axions with a high-Q dielectric cavity
(Dated: August 29, 2022)

I. INTRODUCTION

The axion is a hypothetical particle that was theorized to solve the strong CP problem. It arises from the spontaneous breaking of the Peccei-Quinn symmetry of QCD [1-3]. In addition, the properties predicted for the axion (charge neutrality, spin 0, and negligible interaction with ordinary matter) make this particle a strong candidate for dark matter [4].
Cosmological and astrophysical considerations suggest an axion mass range 1 µeV < m_a < 10 meV [5]. The hunt for the axion is now worldwide, and most of the experiments involved in this search use detectors based on the haloscope design proposed by Sikivie [6,7]. Among them are ADMX [8-11], HAYSTAC [12,13], ORGAN [14], CAPP-8T [15,16], CAPP-9T [17], CAPP-PACE [18], CAPP-18T [19], GrAHal [20], RADES [21-23], TASEH [24], QUAX [25-29], and KLASH [30,31]. Dielectric and plasma haloscopes have also been proposed, such as MADMAX [32] and ALPHA [33], respectively. The haloscope concept is based on immersing a resonant cavity in a strong magnetic field in order to stimulate the inverse Primakoff effect, converting an axion into an observable photon [34]. To maximize the power of the converted axions, it is necessary to maximize the cavity quality factor (Q) and to tune the resonance frequency to match the axion mass. Different solutions have been adopted to maximize the signal-to-noise ratio, facing the problem from different angles. Resonant cavities made of superconductive and dielectric materials are becoming increasingly popular because of their high Q [35-38].

In this work we describe the results obtained by operating the haloscope of the QUAX-aγ experiment with a high-Q dielectric cavity immersed in a static magnetic field of 8 T and cooled down to ∼ 4.5 K. The results allow us to exclude values of g_aγγ > 0.729 × 10⁻¹³ GeV⁻¹ at 90% confidence level (C.L.) in a mass region 1.32 neV wide, centered at 42.8216 µeV.

II. EXPERIMENTAL SETUP

General description

The core of the haloscope is an extremely high-Q resonant cavity, extensively described in [39]: it is based on a right circular copper cavity with hollow sapphire cylinders that confine higher-order modes around the cylinder axis.
The useful mode is a TM030, which has an effective volume V · C_030 = 3.4 × 10⁻² liters at the resonant frequency of 10.3 GHz, where C_030 is a geometrical factor entering the signal-power estimate of Eq. (4). Under an 8 T field we measured an internal quality factor of more than 9 × 10⁶. The cavity and the magnet are hosted inside a liquid-He cryostat at a temperature of about 4 K.

The principle scheme of the measurement setup is shown in Figure 1. The microwave cavity is immersed in an 8 T maximum magnetic field (not shown in the figure), generated by a superconducting magnet with a 150 mm diameter bore and a 500 mm length. When the magnet is driven by a 92 A current, the effective squared field over the cavity length amounts to 50.8 T². The microwave cavity is read by a tunable monopole antenna with coupling β. This is obtained by acting on a manually controlled mechanical feed-through, which allows for β values in the range 0.01 to 20. A weakly coupled port (coupling about 0.01) is used for calibration purposes and is connected to the room-temperature electronics by means of line L1. To avoid thermal power inputs from room temperature, a 20 dB attenuation is inserted on L1. Cavity tuning is obtained by displacing triplets of 2 mm diameter sapphire rods relative to the top and bottom cavity endcaps [39]. Again, independent motion of the two triplets is obtained by manually controlled mechanical feed-throughs.

The power collected by the tunable antenna is amplified by a cryogenic high-electron-mobility-transistor amplifier (cryo HEMT in the figure), isolated from the cavity by means of the circulators C1 and C2. The output of the cryo HEMT is filtered and then transmitted along line L3 to the room-temperature electronics, where it is first amplified by a room-temperature HEMT and then processed for data storage.
The room-temperature chain is the same used in our previous measurements [26]: the HEMT output is frequency down-converted using a mixer with the local-oscillator frequency set to a value about 500 kHz below the cavity resonance. The low-frequency in-phase and quadrature outputs of the mixer are amplified and then sampled with a 2 Ms/s analog-to-digital converter (ADC) and stored on a computer for offline data analysis. Data storage is done in blocks of about 4 s of sampled data for both output channels of the mixer. An auxiliary line L2 is used for calibration purposes: it is connected to the line L3 by means of the circulator C1, and 32 dB of attenuation prevents thermal leakage from room-temperature components. The room-temperature electronics also features a Vector Network Analyser (VNA) for the measurement of the scattering parameters S12 (input from line L2, output from line L1), S31 and S32. From these scattering parameters it is possible to derive the loaded quality factor Q_L, the resonance frequency f_c and the coupling β of the tunable antenna. A diode noise source, having an equivalent noise temperature of about 10⁴ K, can be fed to line L1 for testing, after being amplified in such a way as to have an equivalent noise temperature inside the microwave cavity slightly in excess of the thermodynamic temperature. A microwave signal generator and a microwave spectrum analyser are used for the measurement of the system noise temperature as described below. All rf generators, the VNA and the spectrum analyser are frequency-locked to a GPS-disciplined reference oscillator. In Figure 1, all components below the horizontal blue line sectioning the 4 K region are enclosed in a vacuum chamber immersed in a liquid-helium cryostat. A Ruthenium Oxide thermometer measures the temperature of the cavity.
Data taking

In a data-taking run performed over a two-week period in June 2021 we searched for an axion signal as a cavity excess power in a small frequency band around 10.353 GHz, i.e. around an axion mass of 42.8 µeV. As shown in [39], the unloaded Q factor of our microwave cavity is of several millions, well in excess of the axion quality factor Q_a = 10⁶. We decided to perform measurements for three different values of the cavity loaded quality factor Q_L = Q_0/(1 + β):

1) β ≲ 1, i.e. Q_L > Q_a
2) β ≈ 6, i.e. Q_L ≈ Q_a
3) β ≥ 14, i.e. Q_L < Q_a

The total data-taking session comprised 8 sub-runs in regime 1, 33 sub-runs in regime 2 and 11 sub-runs in regime 3. We performed the following steps for each sub-run:

a. Looking at the S32 spectra with the VNA, we moved the cavity frequency to the desired value by acting on (inserting) the sapphire tuning triplet. Normally, for each sub-run, a shift of half the cavity linewidth with respect to the previous sub-run was applied.

b. We stored the S32 spectra for the resulting cavity configuration. This spectrum corresponds to a reflection-type measurement for the tunable port of the cavity.

c. We injected along the line L1 a white noise produced with the amplified noise source and collected down-converted low-frequency I and Q spectra with the ADC. For this run, thermal-input spectra were usually integrated for about 2-3 minutes.

d. We removed any input to the system and collected data with the ADC. This step was chosen to last 750 data blocks, for a total time of 3000 s.

For each sub-run both the cavity temperature and the liquid-helium level in the cryostat were recorded. The total time needed for a single sub-run is about one hour. We performed measurements only during the day, so that a typical day started with the liquid-helium refilling of the cryostat, followed by the magnet charging lasting about 40 minutes.
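The three coupling regimes above follow directly from Q_L = Q_0/(1 + β). A minimal sketch, using the unloaded Q_0 quoted in the Figure 3 caption and illustrative (not measured) β values for the three regimes:

```python
# Loaded quality factor Q_L = Q_0 / (1 + beta) for the three antenna-coupling
# regimes. Q0 is the unloaded Q from the reflection fit (Fig. 3 caption);
# the beta values below are illustrative stand-ins, not measured couplings.

Q0 = 5.565e6        # unloaded quality factor from the S32 reflection fit
Q_AXION = 1.0e6     # axion quality factor Q_a

def loaded_q(q0, beta):
    """Loaded quality factor for antenna coupling beta."""
    return q0 / (1.0 + beta)

for beta in (0.1, 6.0, 14.59):   # roughly regimes 1, 2, 3
    ql = loaded_q(Q0, beta)
    print(f"beta = {beta:5.2f} -> Q_L = {ql:9.0f} (Q_L/Q_a = {ql / Q_AXION:.2f})")
```

With β ≥ 14 the loaded Q drops to a few 10⁵, below Q_a, as in regime 3.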
Considering that at the end of the day we needed to discharge the magnet for safety reasons, in a typical day we recorded about 5-6 sub-runs. In this paper we present the results obtained for the sub-runs with strong cavity coupling such that Q_L < Q_a (regime 3), while the other data will be the subject of a different paper. Indeed, extracting the axion signal in a regime where Q_L ≈ Q_a or higher poses a series of issues regarding systematics that necessitate a dedicated study. For the measurements in regime 3 we dealt with a loaded quality factor Q_L ∼ 3 × 10⁵, which is much larger than the typical values used by other running haloscopes, where such a value was never in excess of 10⁵ (with the exception of our previous measurement [26]).

Noise temperature and gain

We measured the system noise temperature at the beginning, at the end and in the middle of the global data-taking period. This procedure [40] consists in measuring precisely the gains of the three lines L1, L2 and L3 from the point A1 in Figure 1. The knowledge of the gains allows us to extract the system noise temperature from the measurement of the noise level at the output of line L3. The gain measurement is obtained by feeding a calibrated power level from the signal generator either into L1 or L2, and reading the outputs at L3 for inputs from both input lines, or at L1 for an input from L2 only. The last measurement is only possible when the tunable antenna has a significant coupling to the cavity, therefore only at the cavity resonance frequency. On the contrary, to exploit the cavity reflection, the measurement from L2 to L3 is done at a frequency just off the cavity resonance. Figure 2 shows the power measured at the output of line L3 by feeding power into L1 (red points) or L2 (blue points). For each measurement, we estimated the gains and the zero-input intercepts, P_0, with a linear fit. The transmission coefficient from input L2 to output L1 is measured in a similar way.
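The measured gains feed directly into the noise-temperature formula, Eq. (1). As a hedged numerical cross-check, using the zero-input intercept P0 ≈ 3.78 × 10⁻¹¹ W and resolution bandwidth B = 1 MHz from the Figure 2 caption and the line-L3 gain g3 = 52 dB quoted in the text:

```python
# Numerical check of Eq. (1): T_sys = P0 / (k_B * B * g3), with the dB gain
# converted to linear. Inputs are the values quoted in the text and in the
# Figure 2 caption; this is a sanity check, not a new measurement.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def noise_temperature(p0_watt, bandwidth_hz, gain_db):
    """System noise temperature from noise-power intercept, bandwidth, gain."""
    gain_linear = 10.0 ** (gain_db / 10.0)
    return p0_watt / (K_B * bandwidth_hz * gain_linear)

t_sys = noise_temperature(p0_watt=3.78e-11, bandwidth_hz=1.0e6, gain_db=52.0)
print(f"T_sys = {t_sys:.1f} K")   # consistent with the 17.3 +/- 1 K in the text
```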
Combining these results, we computed the gains from A1 to L1, L2, L3 as (g₁, g₂, g₃) = (−48.3, −39.3, 52) dB, respectively. We computed the system noise temperature as

$$T_{\rm sys} = \frac{P_0}{k_B B g_3} \tag{1}$$

obtaining T_sys = 17.3 ± 1 K. Here, k_B is the Boltzmann constant and B the resolution bandwidth. This particular value of the noise temperature was obtained with the magnetic field on, at the end of the last day of the run, after the sessions in regime 3. Previous measurements, performed with the magnet off, were in agreement with this one. The cause of the large noise temperature observed was identified in a malfunction of the cryo HEMT, exhibiting a quite high added noise. After the run, we measured its added noise separately, finding a value of about 10-12 K.

Raw data processing

As described in II 2, for each sub-run we measured the values of the cavity parameters by taking a reflection spectrum on the pick-up antenna, and a transmission spectrum with a thermal-noise source feeding power into the weakly coupled antenna. The parameters are extracted by fitting the spectra. A standard Lorentzian line shape is used to fit the transmission spectrum:

$$|S31|^2(\nu) = \frac{A}{2\pi} \frac{\Gamma}{(\nu - \nu_c)^2 + (\Gamma/2)^2} \tag{2}$$

With this equation we extract from the fit the cavity resonance frequency ν_c and the linewidth Γ, directly related to the loaded factor of merit Q_L = ν_c/Γ; A is a normalization constant. A modified reflection function is used for the reflection spectrum, to take into account some impedance mismatch between the cavity and the first-stage amplifier:

$$S32(\delta) = C \left[ \frac{\beta - 1 - i Q_0 \delta}{\beta + 1 + i Q_0 \delta} + i c \right] \tag{3}$$

where C is a normalization constant, δ = ν/ν_c − ν_c/ν, with ν_c the cavity resonance frequency and ν the frequency, c is a free parameter related to the impedance mismatch, and Q_0 is the cavity unloaded quality factor. From this second fit we obtain the value of the coupling β.
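The transmission fit of Eq. (2) can be sketched with a standard nonlinear least-squares fit. The spectrum below is synthetic, built with the ν_c and Q_L of one dataset of Table I (the noise level, sample count and reference frequency are arbitrary choices for the sketch):

```python
# Sketch of the Eq. (2) Lorentzian fit: recover nu_c and Gamma from a
# transmission spectrum, hence Q_L = nu_c / Gamma. Fitting is done in
# offset coordinates delta = nu - NU0 for numerical conditioning.

import numpy as np
from scipy.optimize import curve_fit

NU0 = 10.3534e9   # arbitrary reference frequency, Hz

def lorentzian(delta, amp, delta_c, gamma):
    """|S31|^2 line shape of Eq. (2), with delta = nu - NU0."""
    return (amp / (2.0 * np.pi)) * gamma / ((delta - delta_c) ** 2 + (gamma / 2.0) ** 2)

true_nu_c, true_ql = 10.3534149e9, 354000.0     # one dataset of Table I
true_gamma = true_nu_c / true_ql                # ~29.2 kHz linewidth
delta = np.linspace(-3 * true_gamma, 3 * true_gamma, 601) + (true_nu_c - NU0)
rng = np.random.default_rng(0)
signal = lorentzian(delta, 1.0, true_nu_c - NU0, true_gamma)
data = signal + rng.normal(0.0, 0.01 * signal.max(), delta.size)

popt, _ = curve_fit(lorentzian, delta, data, p0=(1.0, 0.0, 2.0 * true_gamma))
fit_nu_c = NU0 + popt[1]
fit_ql = fit_nu_c / popt[2]
print(f"fitted nu_c = {fit_nu_c:.0f} Hz, Q_L = {fit_ql:.0f}")
```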
III. DATA ANALYSIS AND RESULTS

By tuning the cavity resonance frequency ν_c we acquired 11 different datasets, one for each ν_c in the range 10.35337 − 10.35345 GHz (Table I). For each dataset we calculated a power spectrum. In this section we discuss the cumulative results obtained from the combined spectra; for the sake of simplicity, examples from a single dataset are reported when necessary. The expected power generated by the axion conversion inside the haloscope is given by [34, 41]:

$$P_a = \left( \frac{g_{a\gamma\gamma}^2}{m_a^2} \hbar^3 c^3 \rho_a \right) \left( \frac{\beta}{1+\beta}\, \omega_c \frac{1}{\mu_0} B_0^2\, V C_{030} Q_L \right) \left( \frac{1}{1 + (2 Q_L \Delta_\omega/\omega_c)^2} \right) \tag{4}$$

In the first set of parentheses, ρ_a ∼ 0.45 GeV/cm³ [42] is the local dark-matter density, g_aγγ is the coupling constant of the axion-photon interaction, and m_a is the axion mass. The second set of parentheses contains the vacuum permeability µ_0, the magnetic field B_0 and the volume V of the cavity; ω_c = 2πν_c is the angular resonance frequency of the cavity, and β and Q_L are the antenna coupling and loaded quality factor described above. C_030 is a geometrical factor, equal to about 0.028 for the TM030 mode of this cylindrical dielectric cavity. In the third set of parentheses, a Lorentzian function describes the effect of the detuning Δ_ω = ω_c − ω_a between the cavity and an axion having angular frequency ω_a.

In the presence of a signal due to axion conversion, a power excess would be observable in the residuals of the power spectrum. The residuals are obtained by subtracting a fourth-order Savitzky-Golay (SG) filter [43] from the cavity power spectrum. The dynamic interval of the SG filter was optimized to 59.2 kHz (91 bins). For each dataset, we applied the SG filter to a window [ν_c − 3Γ, ν_c + 3Γ] of about 180 kHz, corresponding to six linewidths Γ and centered on the cavity resonance frequency ν_c, as shown in Fig. 4. In this frame, the axion signal is expected to have a width of about 10 kHz [6, 44].
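The residual extraction can be sketched with scipy's Savitzky-Golay filter. The spectrum below is synthetic stand-in noise (the baseline level and random seed are arbitrary); only the window length (91 bins) and polynomial order (4) follow the text:

```python
# Sketch of the residual extraction: subtract a 4th-order Savitzky-Golay
# filter with a 91-bin window (~59.2 kHz at 651 Hz/bin) from the power
# spectrum. An axion would appear as a ~16-bin excess in the residuals.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
n_bins = 277                  # ~180 kHz analysis window at 651 Hz per bin
baseline = 1.0e-15            # arbitrary flat noise floor (illustrative units)
spectrum = baseline * (1.0 + 0.05 * rng.standard_normal(n_bins))

smooth = savgol_filter(spectrum, window_length=91, polyorder=4)
residuals = spectrum - smooth

print(f"mean residual / baseline = {residuals.mean() / baseline:.2e}")
```

The filter tracks the slowly varying cavity line shape while leaving narrow (few-bin to ~16-bin) structures in the residuals.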
With a power-spectrum bin width of Δν = 651 Hz, we expect the axion signal to be distributed over 16 consecutive bins. We normalized the residuals of each dataset to the expected noise power σ_Dicke, calculated with the Dicke radiometer equation [45]

$$\sigma_{\rm Dicke} = k_B T_{\rm sys} \sqrt{\Delta\nu/\Delta t} \tag{5}$$

where T_sys is the system noise temperature, Δν is the bin width (651 Hz) and Δt is the integration time (3000 s). The distribution of the cumulative normalized residuals from all the datasets is shown in Fig. 5, along with a Gaussian fit showing a standard deviation compatible with 1.

We use the least-squares method to estimate the best value ĝ_aγγ for the axion-photon coupling, by minimizing

$$\chi^2 = \sum_{\alpha=1}^{N_{\rm scan}} \sum_{i=1}^{N_{\rm bin}} \left( \frac{R_i^{(\alpha)} - S_i^{(\alpha)}(m_a, g_{a\gamma\gamma}^2)}{\sigma_{\rm Dicke}^{(\alpha)}} \right)^2 \tag{6}$$

where the index α runs over the N_scan datasets taken with different cavity resonance frequencies, the index i runs over the frequency bins of each power spectrum, and R_i^(α) and S_i^(α) are the residuals and the expected power signals for scan α and frequency bin i, respectively. S_i^(α) is calculated as the integral in the frequency domain of Eq. (4) multiplied by the spectrum of the full standard halo model distribution [44]. We express the expected power as S_i^(α)(m_a, g²_aγγ) = g²_aγγ T_i^(α)(m_a), and analytically minimize Eq. (6) by solving ∂χ²/∂g²_aγγ = 0, calculating the uncertainty according to the formula (with ξ = g²_aγγ):

$$\frac{1}{\sigma_\xi^2} = \frac{1}{2} \frac{\partial^2 \chi^2}{\partial \xi^2} \tag{7}$$

Solving this equation we get (with $\sum \equiv \sum_{\alpha=1}^{N_{\rm scan}} \sum_{i=1}^{N_{\rm bin}}$):

$$\overline{g^2} = \sigma^2(g^2) \sum \frac{R_i^{(\alpha)} T_i^{(\alpha)}(m_a)}{(\sigma_{\rm Dicke}^{(\alpha)})^2} \tag{8}$$

where $\overline{g^2}$ is the average squared coupling constant that accounts for the contributions of all the frequency bins of all the datasets, and

$$\sigma^2(g^2) = \left[ \sum \left( \frac{T_i^{(\alpha)}(m_a)}{\sigma_{\rm Dicke}^{(\alpha)}} \right)^2 \right]^{-1} \tag{9}$$

is its variance. We repeated this procedure for different values of m_a, and calculated $\overline{g^2}$ and σ(g²) for axion masses in the range 42.8210 − 42.8223 µeV.
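Eqs. (8) and (9) amount to a weighted least-squares combination over all bins and datasets. A minimal numerical sketch (with hypothetical residuals, templates and noise levels, not experiment data):

```python
# Weighted least-squares combination of Eqs. (8)-(9): each dataset alpha
# contributes residuals R, signal templates T and a Dicke noise sigma.
# All numbers below are toy inputs chosen for a closure test.

import numpy as np

def combine_g2(residuals, templates, sigmas):
    """Return (g2_hat, var_g2): the Eq. (8) estimate and Eq. (9) variance."""
    inv_var = 0.0   # sum of (T/sigma)^2, i.e. the inverse of Eq. (9)
    weighted = 0.0  # sum of R*T/sigma^2, the numerator of Eq. (8)
    for R, T, s in zip(residuals, templates, sigmas):
        inv_var += np.sum((np.asarray(T) / s) ** 2)
        weighted += np.sum(np.asarray(R) * np.asarray(T)) / s ** 2
    var_g2 = 1.0 / inv_var
    return var_g2 * weighted, var_g2

# Closure test: residuals built from a known coupling are recovered.
rng = np.random.default_rng(2)
g2_true, sigma = 4.0, 0.1
templates = [rng.uniform(0.5, 1.0, 16) for _ in range(3)]   # 3 datasets, 16 bins
residuals = [g2_true * T + rng.normal(0.0, sigma, T.size) for T in templates]
g2_hat, var_g2 = combine_g2(residuals, templates, [sigma] * 3)
print(f"g2_hat = {g2_hat:.3f} +/- {var_g2 ** 0.5:.3f}")
```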
A candidate discovery requires the detection of a power excess larger than 5σ above the noise, hence in the distribution of g²/σ(g²). We did not find any candidate (see Fig. 6) and the result is interpreted as an exclusion test in this axion-mass range.

So far we have not considered the efficiency of the SG filter in estimating the axion signal. In order to quantify it, we ran a Monte Carlo simulation where a fake axion signal, with a known g²_injected, is numerically inserted in simulated power spectra with different ν_c. We used Eq. (8) to estimate g² for each injected signal (g²_calculated), and determined the efficiency from the relation between g²_calculated and g²_injected. We simulated the cavity power spectra by adding random Gaussian noise (mean 0, standard deviation σ_Dicke, values extracted according to a Gaussian PDF) to the SG filters. For a given axion mass m_a, the estimation of the efficiency works as follows:

1) for each dataset we calculate a simulated spectrum;
2) a fake axion signal with a known g²_injected is injected in the simulated spectra;
3) Eq. (8) is used to compute g²_calculated;
4) points 2 and 3 are repeated for different values of g²_injected;
5) points 2-4 are repeated for a new set of simulated spectra in order to increase the statistics.

The output of this procedure is a distribution of g²_calculated for each value of g²_injected. The relation between g²_injected and the mean of g²_calculated, computed accounting for the contribution of all the datasets, is shown in Figure 7 for an axion proper frequency f_axion = 10.35341562 GHz. The distribution of g²_injected vs g²_calculated shows a linear relation with slope very close to 3 and an intercept different from zero. These features are valid for all the axion masses not in the immediate proximity of the edge of the power spectrum, where the slope deviates considerably from 3.
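The injection procedure can be mimicked with a toy estimator. Here the attenuation factor is set by hand to 1/3 to mimic the filter efficiency reported in the text, and the noise-induced intercept of the real analysis is ignored, so all numbers are illustrative only:

```python
# Toy version of the signal-injection efficiency study: fake signals with
# known g2_injected pass through an estimator that, by construction, keeps
# only a fraction EFFICIENCY of the signal (in the real analysis this
# attenuation comes from the SG filter, not from a hand-set constant).

import numpy as np

rng = np.random.default_rng(3)
EFFICIENCY = 1.0 / 3.0   # hand-set to mimic the ~0.33 reported in the text

def estimate(g2_injected):
    """Toy estimator: attenuated signal plus Gaussian noise."""
    return EFFICIENCY * g2_injected + rng.normal(0.0, 0.01)

injected = np.linspace(0.0, 10.0, 21)
calculated = np.array([np.mean([estimate(g) for _ in range(200)]) for g in injected])

# The slope of calculated vs injected is the estimation efficiency;
# its inverse is the slope ~3 quoted for injected vs calculated.
slope, intercept = np.polyfit(injected, calculated, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.4f}")
```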
The slope of this linear relation is interpreted as the inverse of the estimation efficiency. If we neglect the intercept, then g 2 calculated /g 2 injected ∼ 1/3, i.e. an estimation efficiency of about 0.33. The intercept represents a contribution to the g 2 calculated given by the average noise of the simulated spectra for a given m a . Once we corrected the estimated g 2 by the filter efficiency, we calculated the limit on the axion-photon coupling with a 90% confidence level as in [26], using a power constrained procedure for the g 2 that under fluctuates below −σ [46]. In Fig. 8 The 90% single-sided C.L. upper limit for the axion coupling constant gaγγ as a function of the axion mass. The red solid curve represents the expected limit in the case of no signal. The yellow region indicates the QCD axion model band. Image realized using: https://github.com/cajohare/AxionLimits at the maximum sensitivity (the minimum spectrum reported in Fig. 8, g CL aγγ < 0.731 × 10 −13 GeV −1 at 90% C.L. IV. CONCLUSIONS We reported the results of the search of galactic axions using an high-Q dielectric haloscope. The investigated mass range is 42.8210 − 42.8223 µeV. We set a limit for the axion-photon coupling a factor about 4 from the axion-QCD band. We demonstrated the robustness of this detection approach and the importance of working with a high Q cavity in the search for axions. In fact, we managed to reach a sensitivity that almost touches the QCD band, even though the equivalent thermal noise of our system was very high (about 17 K) due to an experimental setback, and the detection efficiency was considered. In future experiments of this kind the sensitivity could be further improved, reducing the overall noise and improving the thermalization of the cavity. the magnet power supply, and M. Zago who realized the technical drawings of the system. 
We deeply acknowledge the Cryogenic Service of the Laboratori Nazionali di Legnaro for providing us with large quantities of liquid helium on demand. This work was supported by INFN and partially supported by EU through FET Open SU-PERGALAX project, Grant N.863313 FIG. 3 . 3Reflection spectrum obtained with the VNA and fit with the function (3). The fit results in the following values: β = 14.59 ± 0.01, νc = 10353366689 ± 20 Hz, Q0 = 5565000 ± 8000, c = 0.0127 ± 0.0001, C = 3.6526 ± 0.0002. Figure 4 . 3535 [FIG. 4 . 435354In the laboratory 10.35333 10.35337 10.35342 10.35346 10.GHzFFT cavity power spectrum (blue dots) and SG filter (black line). νc = 10.3534149 GHz, QL = 354000 FIG. 5 . 5Distribution of the cumulative residuals from each dataset normalized to the σDicke. FIG. 6 . 6Histogram of the g 2 /σ(g 2 ) distribution calculated using Eqs.(8) and(9). No excess above 5σ was observed.. FIG. 7 . 7Relation between g 2 injected and the mean of the g 2 calculated distribution (blue points), along with the best linear fit parameters. The belt represents the standard deviation of the distribution of g 2 calculated obtained after 100 simulations. we show the calculated upper-limit g CL aγγ in the axion mass range 42.821 − 42.8223 µeV, i.e. a mass window of about 1.32 neV centered in 42.8216 µeV. The reference upper-limit of this analysis is the value FIG. 2. Power output at the line L3 with variable input at the lines L1 (red) and L2 (blue). For the L1 input a rf signal at the cavity resonance-frequency is used, while for the L2 input the frequency is detuned by 1 MHz from the cavity resonance. Measurements are performed with a spectrum analyser taking 500 RMS averages of a 100 MHz window with a resolution bandwidth B = 1 MHz.Output power [W] Input power [W] y = m1 + m2 * M0 Error Value 9.73e-13 3.78e-11 m1 0.0486 2.32 m2 NA 8.76 Chisq 0.998 R y = m1 + m2 * M0 Error Value 1.5e-12 3.78e-11 m1 0.306 18.6 m2 NA 6.81 Chisq 0.999 R TABLE I . 
TABLE I. Cavity resonance frequency, quality factor and cavity-antenna coupling for each dataset.

νc [GHz]       QL       β
10.3533667   365730   14.59
10.3533711   337630   15.91
10.3533792   315100   17.00
10.3533874   288190   18.00
10.3533955   286620   17.87
10.3534036   284810   17.66
10.3534159   283410   17.61
10.3534150   354000   13.74
10.3534250   292510   16.20
10.3534354   290290   16.42
10.3534464   285760   17.25

ACKNOWLEDGMENTS

We are grateful to E. Berto, A. Benato, and M. Rebeschini for the mechanical work; F. Calaon and M. Tessaro for help with the electronics and cryogenics. We thank G. Galet and L. Castellani for the development of the magnet power supply, and M. Zago, who realized the technical drawings of the system.

[1] S. Weinberg, Phys. Rev. Lett. 40, 223 (1978).
[2] F. Wilczek, Phys. Rev. Lett. 40, 279 (1978).
[3] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977).
[4] J. Preskill, M. B. Wise, and F. Wilczek, Phys. Lett. B 120, 127 (1983).
[5] I. G. Irastorza and J. Redondo, Prog. Part. Nucl. Phys. 102, 89 (2018).
[6] P. Sikivie, Phys. Rev. Lett. 51, 1415 (1983).
[7] P. Sikivie, Phys. Rev. D 32, 2988 (1985).
[8] T. Braine et al., Phys. Rev. Lett. 124, 101303 (2020).
[9] N. Du et al., Phys. Rev. Lett. 120, 151301 (2018).
[10] C. Boutan et al., Phys. Rev. Lett. 121, 261302 (2018).
[11] C. Bartram et al., Phys. Rev. Lett. 127, 261803 (2021).
[12] K. Backes et al., Nature 590, 238 (2021).
[13] L. Zhong et al., Phys. Rev. D 97, 092001 (2018).
[14] B. T. McAllister, G. Flower, E. N. Ivanov, M. Goryachev, J. Bourhill, and M. E. Tobar, Phys. Dark Univ. 18, 67 (2017).
[15] J. Choi, S. Ahn, B. Ko, S. Lee, and Y. K. Semertzidis, Nucl. Inst. Meth. Phys. Res. A 1013, 165667 (2021).
[16] S. Lee, S. Ahn, J. Choi, B. Ko, and Y. K. Semertzidis, Phys. Rev. Lett. 124, 101802 (2020).
[17] J. Jeong, S. Youn, S. Bae, J. Kim, T. Seong, J. E. Kim, and Y. K. Semertzidis, Phys. Rev. Lett. 125, 221302 (2020).
[18] O. Kwon et al., Phys. Rev. Lett. 126, 191802 (2021).
[19] Y. Lee, B. Yang, H. Yoon, M. Ahn, H. Park, B. Min, D. Kim, and J. Yoo, Phys. Rev. Lett. 128, 241805 (2022).
[20] T. Grenet, R. Ballou, Q. Basto, K. Martineau, P. Perrier, P. Pugnat, J. Quevillon, N. Roch, and C. Smith, arXiv:2110.14406 (2021).
[21] A. Álvarez Melcón et al., JHEP 07, 084 (2020).
[22] A. Álvarez Melcón et al., JCAP 05, 040 (2018).
[23] A. Álvarez Melcón et al., JHEP 10, 075 (2021).
[24] H. Chang et al., arXiv:2205.05574 (2022).
[25] D. Alesini et al., Phys. Rev. D 99, 101101 (2019).
[26] D. Alesini et al., Phys. Rev. D 103, 102004 (2021).
[27] R. Barbieri, C. Braggio, G. Carugno, C. S. Gallo, A. Lombardi, A. Ortolan, R. Pengo, G. Ruoso, and C. C. Speake, Phys. Dark Univ. 15, 135 (2017).
[28] N. Crescini et al., Eur. Phys. J. C 78, 1 (2018).
[29] N. Crescini et al., Phys. Rev. Lett. 124, 171801 (2020).
[30] C. Gatti et al., arXiv:1811.06754 (2018).
[31] D. Alesini et al., arXiv:1911.02427 (2019).
[32] A. Caldwell et al., Phys. Rev. Lett. 118, 091801 (2017).
[33] M. Lawson, A. J. Millar, M. Pancaldi, E. Vitagliano, and F. Wilczek, Phys. Rev. Lett. 123, 141802 (2019).
[34] S. Al Kenany et al., Nucl. Instr. Meth. Phys. Res. A 854, 11 (2017).
[35] D. Di Gioacchino et al., IEEE Trans. App. Sup. 29, 1 (2019).
[36] D. Ahn, O. Kwon, W. Chung, W. Jang, D. Lee, J. Lee, S. W. Youn, D. Youm, and Y. K. Semertzidis, arXiv:1904.05111 (2019).
[37] D. Alesini et al., Rev. Sci. Instr. 91, 094701 (2020).
[38] D. Alesini et al., Nucl. Instr. Meth. Phys. Res. A 985, 164641 (2021).
[39] R. Di Vora et al., Phys. Rev. Applied 17, 054013 (2022).
[40] C. Braggio et al., arXiv:2205.02053 (2022).
[41] B. M. Brubaker et al., Phys. Rev. Lett. 118, 061302 (2017).
[42] P. A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
[43] A. Savitzky and M. J. Golay, Anal. Chem. 36, 1627 (1964).
[44] M. S. Turner, Phys. Rev. D 42, 3572 (1990).
[45] R. H. Dicke, The measurement of thermal radiation at microwave frequencies, in Classics in Radio Astronomy (Springer, Dordrecht, 1946) pp. 106-113.
[46] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, arXiv:1105.3166 (2011).
[ "https://github.com/cajohare/AxionLimits" ]
[ "Muon spin rotation study of the (T MT SF ) 2 ClO 4 system", "Muon spin rotation study of the (T MT SF ) 2 ClO 4 system" ]
[ "A J Greer \nPhysics Department\nGonzaga University\n99258-0051SpokaneWAUSA\n", "D R Harshman ", "W J Kossler \nPhysics Department\nCollege of William and Mary\n23187-8795WilliamsburgVAUSA\n", "A Goonewardene \nPhysics Department\nCollege of William and Mary\n23187-8795WilliamsburgVAUSA\n", "D Ll ", "Williams \nDepartment of Physics\nUniversity of British Columbia\nV6T 1Z1VancouverBCCanada\n", "E Koster \nDepartment of Physics\nUniversity of British Columbia\nV6T 1Z1VancouverBCCanada\n", "W Kang \nDepartment of Physics\nUniversity of Chicago\n60637ChicagoILUSA\n", "R N Kleiman \nLucent Technologies\nBell Labs\n07974Murray HillNJUSA\n", "R C Haddon \nDepartment of Chemistry\nUniversity of California at Riverside\n92521RiversideCAUSA\n", "\nPhysikon Research Corporation\nPO Box 101498264-1014LyndenWAUSA\n", "\nDepartment of Physics\nUniversity of Notre Dame\nNotre Dame\n46556INUSA\n" ]
[ "Physics Department\nGonzaga University\n99258-0051SpokaneWAUSA", "Physics Department\nCollege of William and Mary\n23187-8795WilliamsburgVAUSA", "Physics Department\nCollege of William and Mary\n23187-8795WilliamsburgVAUSA", "Department of Physics\nUniversity of British Columbia\nV6T 1Z1VancouverBCCanada", "Department of Physics\nUniversity of British Columbia\nV6T 1Z1VancouverBCCanada", "Department of Physics\nUniversity of Chicago\n60637ChicagoILUSA", "Lucent Technologies\nBell Labs\n07974Murray HillNJUSA", "Department of Chemistry\nUniversity of California at Riverside\n92521RiversideCAUSA", "Physikon Research Corporation\nPO Box 101498264-1014LyndenWAUSA", "Department of Physics\nUniversity of Notre Dame\nNotre Dame\n46556INUSA" ]
[]
We report a study of the organic compound (T M T SF )2ClO4 in both a sample cooled very slowly through the anion ordering temperature (relaxed state) and a sample cooled more rapidly (intermediate state). For the relaxed state the entire sample is observed to be superconducting below about Tc ≃ 1.2 K. The second moment of the internal field distribution was measured for the relaxed state yielding an in-plane penetration depth of ≃ 12000Å. The intermediate state sample entered a mixed phase state, characterized by coexisting macroscopic sized regions of superconducting and spin density wave (SDW) regions, below Tc ≃ 0.87 K. These data were analyzed using a back-to-back cutoff exponential function, allowing the extraction of the first three moments of the magnetic field distribution. Formation of a vortex lattice is observed below 0.87 K as evidenced by the diamagnetic shift for the two fields in which we took intermediate state data.
10.1016/s0921-4534(03)01323-6
[ "https://export.arxiv.org/pdf/cond-mat/0301241v1.pdf" ]
119,424,501
cond-mat/0301241
025b687235f4e9e7305a546ce474176a9e329a1e
Muon spin rotation study of the (TMTSF)2ClO4 system

14 Jan 2003

A. J. Greer (Physics Department, Gonzaga University, Spokane, WA 99258-0051, USA); D. R. Harshman (Physikon Research Corporation, PO Box 1014, Lynden, WA 98264-1014, USA; Department of Physics, University of Notre Dame, Notre Dame, IN 46556, USA); W. J. Kossler and A. Goonewardene (Physics Department, College of William and Mary, Williamsburg, VA 23187-8795, USA); D. Ll. Williams and E. Koster (Department of Physics, University of British Columbia, Vancouver, BC V6T 1Z1, Canada); W. Kang (Department of Physics, University of Chicago, Chicago, IL 60637, USA); R. N. Kleiman (Bell Labs, Lucent Technologies, Murray Hill, NJ 07974, USA); R. C. Haddon (Department of Chemistry, University of California at Riverside, Riverside, CA 92521, USA)

Keywords: organic superconductor, penetration depth, gap symmetry, spin density wave

We report a study of the organic compound (TMTSF)2ClO4 in both a sample cooled very slowly through the anion ordering temperature (relaxed state) and a sample cooled more rapidly (intermediate state). For the relaxed state the entire sample is observed to be superconducting below about Tc ≃ 1.2 K. The second moment of the internal field distribution was measured for the relaxed state, yielding an in-plane penetration depth of ≃ 12000 Å. The intermediate state sample entered a mixed phase state, characterized by coexisting macroscopic sized regions of superconducting and spin density wave (SDW) phases, below Tc ≃ 0.87 K. These data were analyzed using a back-to-back cutoff exponential function, allowing the extraction of the first three moments of the magnetic field distribution. Formation of a vortex lattice is observed below 0.87 K, as evidenced by the diamagnetic shift for the two fields in which we took intermediate state data.
Introduction

The organic superconductors (TMTSF)2X (Bechgaard salts, where TMTSF stands for tetramethyltetraselenafulvalene and X is PF6, ClO4, etc.) have been under much scrutiny of late due to their rich array of magnetic behavior [1,2,3,4,5]. The PF6 compound undergoes a metal-insulator transition at ambient pressure and below 12 K enters a spin density wave (SDW) state. Under relatively low pressure, this material goes into a superconducting state with Tc ≃ 1.1 K [6,7]. The ClO4 compound, however, becomes superconducting at ambient pressure if it is cooled slowly enough through the anion (ClO4) ordering temperature, ≃ 24 K [8]. The slow cooling through the ordering temperature allows the anions to become ordered; this is termed the relaxed state of the material. If, in contrast, the ClO4 compound is cooled rapidly through the ordering temperature, it enters an SDW phase below about 4 K. Between these two extremes lies a whole series of intermediate states. These are achieved by first slowly cooling the sample from about 40 K down to a quenching temperature, and then quenching the sample by rapidly cooling it to liquid helium temperatures at a rate of at least 60 K/min. When the quenching temperature is varied around the anion ordering temperature of 24 K [6], a partial degree of anion order results. Samples treated in this manner have been observed to contain coexisting, macroscopically sized regions of both superconducting and SDW phases [9].

Recently there has been much investigation into the superconducting state of these materials. As with the high-Tc cuprates, much of the interest has focused on the nature of the pairing state of the superconducting quasiparticles. In the early 1980s there was evidence in favor of both conventional BCS-like pairing [1,10,11,12] and p-wave pairing [13,14,15,16].
More recently there has been much evidence from upper critical field measurements [17,18], Knight shift measurements [19,20], and corresponding theoretical support [21,22,23,24] suggesting that the pairing state may be triplet, or f-wave, in nature. Reported here are the results of one of the few studies employing µSR to investigate this material. We present results on relaxed ClO4 in a transverse magnetic field of 300 Oe, as well as results on an intermediate state at two different transverse magnetic fields, 100 Oe and 190 Oe.

Experiment

All samples were made by the usual electrochemical technique [6]. The first sample consisted of long needles of typical dimensions 2 cm × 2 mm × 0.5 mm along the a, b, and c directions, respectively, and was studied in the relaxed state. The second was assembled from pieces of typical dimensions 5 mm × 1 mm × 0.5 mm and was studied in the intermediate state. For both samples, the crystals were oriented into a mosaic with axes parallel. The data were acquired at the M15 muon beam line at the TRIUMF cyclotron facility in Vancouver, BC, Canada. The samples were mounted on 99.999% pure annealed silver (in which muons do not depolarize) and cooled in a dilution refrigerator. The relaxed state sample was first cooled from 80 → 32 K at a rate of 267 mK/min and then from 32 → 15 K at a rate of 30 mK/min. The intermediate state sample was cooled at a rate of 50 mK/min from 30 → 16 K through the anion ordering temperature. Cooling was again done in zero external field. All data were acquired with the external field applied parallel to the crystal c axis. The initial muon polarization direction was approximately perpendicular to the applied field in a standard transverse-field geometry. For a more complete discussion of the µSR technique, see, e.g., [25].
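As a quick arithmetic check on the transverse-field geometry described above (this helper is illustrative, not from the paper), the expected muon precession frequency at the 300 Oe applied field follows directly from the muon gyromagnetic ratio quoted later in the text:

```python
import math

GAMMA_MU = 85.137e6 / 1e3  # muon gyromagnetic ratio [rad s^-1 G^-1] (85.137 Mrad/s/kG)

def precession_frequency_hz(field_gauss):
    """Larmor precession frequency f = gamma_mu * B / (2 pi) for a muon in field B."""
    return GAMMA_MU * field_gauss / (2.0 * math.pi)

# At the 300 Oe transverse field used for the relaxed sample:
f_300 = precession_frequency_hz(300.0)  # roughly 4.07 MHz
```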
Results for the relaxed state sample

The analysis of the relaxed data was performed using the following Gaussian relaxation function:

$G(t) = A e^{-\sigma^2 t^2} \cos(\omega t + \phi)$   (1)

We have omitted the 1/2 factor in the exponent for simplicity, and we used a single-component function because the large sample size prevented any appreciable background in the data. The results of fits with this function are shown in Fig. 1. The superconducting transition is clearly evident at about 1.1 K, where the formation of a flux lattice develops. This is consistent with previously published values for Tc [6,7]. The increase in the relaxation rate σ with decreasing temperature indicates a broadening of the field distribution due to the formation of a flux line lattice (FLL) in the superconducting state. The shape of this plot can often yield information on the pairing state of the superconducting quasiparticles [26]. The curve through the data is a guide to the eye using the two-fluid model for the temperature dependence of the penetration depth. That is [27]:

$\langle \Delta B^2 \rangle = 2\sigma^2/\gamma_\mu^2 = 0.00371\,\phi_o^2/\lambda^4(T)$   (2)

with

$\lambda(T) = \lambda(T=0)\left[1 - (T/T_c)^4\right]^{-1/2}$   (3)

Here, $\phi_o = 2.068 \times 10^{-7}$ G cm² is the flux quantum, $\langle \Delta B^2 \rangle$ is the second moment of the field distribution, and $\gamma_\mu = 85.137$ Mrad/s/kG is the muon gyromagnetic ratio. Equation (2) assumes a triangular lattice, and the value of σ is found from Fig. 1 by subtracting the above-Tc value from the T → 0 K value in quadrature. The low-temperature penetration depth for these data was previously determined to be $\lambda_{ab} = 12000 \pm 2000$ Å [7], where $\lambda_{ab}$ is an average penetration depth obtained from Eq. (2). Since the penetration depths along a and b are different, $\lambda_{ab} = \sqrt{\lambda_a \lambda_b}$. Field distributions expected when $\lambda_a \neq \lambda_b$ can be seen in Ref. [28]. Our data appear to become temperature independent as 0 K is approached, which is consistent with s-wave pairing; however, it is not possible to precisely determine the pairing state due to the size of the error bars.
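Equations (2) and (3) translate a measured Gaussian relaxation rate into a penetration depth. A minimal sketch of this conversion follows; the baseline subtraction in quadrature is as described in the text, while the example input rates are illustrative round numbers consistent with the Fig. 1 caption (0.0835 µs⁻¹ baseline, λ(T → 0 K) ≈ 12300 Å):

```python
import math

PHI_0 = 2.068e-7     # flux quantum [G cm^2]
GAMMA_MU = 85137.0   # muon gyromagnetic ratio [rad s^-1 G^-1]

def penetration_depth_cm(sigma_low_T, sigma_baseline):
    """Penetration depth from Gaussian relaxation rates (both in s^-1).

    The above-Tc (nuclear dipole) baseline is subtracted in quadrature,
    then Eq. (2) for a triangular flux lattice is inverted for lambda.
    """
    sigma_sc = math.sqrt(sigma_low_T**2 - sigma_baseline**2)
    dB2 = 2.0 * sigma_sc**2 / GAMMA_MU**2        # second moment [G^2]
    return (0.00371 * PHI_0**2 / dB2) ** 0.25    # [cm]

# Illustrative rates: 0.0974 us^-1 at low T, 0.0835 us^-1 baseline above Tc
lam_angstrom = penetration_depth_cm(0.0974e6, 0.0835e6) * 1e8  # ~12300 Angstrom
```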
The data here, in contrast to those of L. P. Le et al. [29], have many more low-temperature points. The clear rise in relaxation rate seen in our data below about 1 K may even have been present in their data, but was discounted, presumably due to the sparseness and lower statistics of their data.

Results for the sample in the intermediate state

Before taking data and lowering the sample temperature, an external field of 100 Oe was applied and held fixed during this phase of the experiment. The analysis was performed using a back-to-back exponential function of the form [30]

$n(\omega) = \begin{cases} a_L\, e^{(\omega-\omega_p)\tau_L} & (\omega < \omega_p) \\ a_R\, e^{(\omega_p-\omega)\tau_R} & (\omega > \omega_p) \end{cases}$   (4)

to represent the fields associated with superconductivity. Here $\omega_p$ is the frequency of the peak of the frequency distribution, $\omega$ is the frequency, $\tau_L$ and $\tau_R$ are the decay factors to the left and right of the peak frequency, $a_L$ and $a_R$ are constants, and $n(\omega)$ is the probability per unit frequency interval of a given frequency. An assumed Gaussian-like distribution of fields arising from nuclear dipoles was convoluted with this. This convoluted form has an analytical Fourier transform, which was used as the first component of a two-component fitting function. The second (background) component is attributed to muons which do not stop in the sample. This sample was smaller than the relaxed sample, and we found it appropriate to include a background signal. The overall fitting function is shown below.

$G(t) = A_1 B(t)\, e^{-(\sigma_1 t)^2} + A_2\, e^{-(\sigma_2 t)^2} \cos(\omega_2 t + \phi)$   (5)

Subscripts here refer to first and second components. Also,

$B(t) = \left(r_1(t) + r_2(t)\right) \cos(\omega_1 t + \phi) + t\left(r_1(t)/\tau_L - r_2(t)/\tau_R\right) \sin(\omega_1 t + \phi)$   (6)

and

$r_1(t) = \frac{\tau_R}{(\tau_L + \tau_R)\left(1 + (t/\tau_L)^2\right)}, \qquad r_2(t) = \frac{\tau_L}{(\tau_L + \tau_R)\left(1 + (t/\tau_R)^2\right)}$   (7)

The nuclear dipole field spread parameter, $\sigma_1$, and the background parameters ($A_2$, $\sigma_2$, and $\omega_2$) were determined by fits above Tc and were held fixed for subsequent fits.
(For the 190 Oe data these parameters were found at Tc, due to the behavior above Tc; see below.) The parameters in B(t) then reflect changes in the magnetic environment seen by the muons. Results of fits are expressed as moments of the field distribution. First, second, and third moments for the applied field of 100 Oe are shown in Fig. 2 as diamonds. The onset of superconductivity is at Tc ≃ 0.87 K, a little lower than for the relaxed state discussed above, and consistent with earlier studies of intermediate state samples [9].

Figure 2. Plot of the first, second, and third moments of the field distributions derived from the back-to-back exponential function fits: 100 Oe applied field (diamonds), 190 Oe applied field (circles, squares).

One can see immediately from the second moment data that the behavior as T → 0 K differs both from the relaxed sample data and from what conventional BCS theory predicts. The increasing second moment denotes very unusual behavior, which we attribute to the mixed phase state of the sample. The first and third moment graphs show increases in diamagnetic shift and skewness, respectively, as T → 0 K. Contrary to expectations, the skew is toward lower fields. This lower field skewness may be seen directly in Fig. 3, for example; the tail to the left of the peak is responsible for the skewness. The lower field tail is not predicted for either a triangular or square flux lattice alone [27], and is most likely due to the SDW phase.

The results of fits to data with H_ext = 190 Oe applied parallel to the crystal c axis are also shown in Fig. 2. The second moment graph (circles and squares) again clearly shows the onset of superconductivity at Tc ≃ 0.87 K. Below this temperature the field distribution broadens as fluxons in the superconducting phase form into some type of lattice. Above Tc the fits surprisingly show evidence of a broad field distribution which increases with temperature.
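The moments extracted from the fits can be reproduced numerically from the distribution of Eq. (4). The sketch below uses illustrative parameter values and assumes a_L = a_R so the density is continuous at the peak; it shows how a heavier low-frequency tail (τ_L < τ_R) yields a mean below the peak and a negative third central moment, i.e. the low-field skewness discussed above:

```python
import numpy as np

def btb_exponential(omega, omega_p, tau_L, tau_R):
    """Back-to-back exponential distribution of Eq. (4), with a_L = a_R
    chosen so the density is continuous at omega_p and normalized."""
    a = tau_L * tau_R / (tau_L + tau_R)
    return np.where(omega < omega_p,
                    a * np.exp((omega - omega_p) * tau_L),
                    a * np.exp((omega_p - omega) * tau_R))

def central_moments(omega, pdf):
    """First moment and second/third central moments on a uniform grid."""
    w = pdf / pdf.sum()                  # grid spacing cancels in the ratio
    mean = (w * omega).sum()
    m2 = (w * (omega - mean) ** 2).sum()
    m3 = (w * (omega - mean) ** 3).sum()
    return mean, m2, m3

omega = np.linspace(-200.0, 200.0, 400001)
# Heavier tail on the low-frequency side: tau_L < tau_R
mean, m2, m3 = central_moments(omega, btb_exponential(omega, 0.0, 0.2, 1.0))
# mean sits below the peak and m3 < 0 (low-field skew)
```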
This must be due to the SDW phase, since the superconducting regions are not present here. As Tc is approached from above, the field distribution seen by the muons becomes more uniform, reflecting the decreasing influence of the SDW phases in favor of the uniform applied field. Below Tc the behavior is generally the same as for the 100 Oe data, but with more scatter at low temperature. The reason for this scatter, we think, is the order in which the data were taken. The square points indicate data taken while lowering the temperature monotonically from 1.25 K to 0.025 K. From 0.025 K the sample was warmed up to 1.43 K, as indicated by arrow 1. The circles show results from fits as the sample was warmed to 1.90 K. Finally, the sample was cooled (more circles) to 0.15 and 0.075 K, as indicated by arrow 2. These last points are higher than the previous low-temperature data points for the 190 Oe field, and not much higher than the circles above Tc. We believe this final, higher second moment results from the interplay between the superconducting regions and the SDW regions. The first and third moment graphs for 190 Oe show similar, but less dramatic, effects than for 100 Oe. In the first moment graph, the diamagnetic shift is much smaller, which may indicate that a smaller fraction of the sample is superconducting. Similarly, the third moment is less pronounced for this higher applied field. Both of these effects can be understood if one considers that the data fit by the first component of the fitting function contain signals from muons which stop in two different magnetic environments. The amount of signal from each is proportional to the number of muons stopping in each, and a higher field should decrease the superconducting fraction in favor of the SDW phase.

Conclusion

We have studied the relaxed and some intermediate states of the organic superconductor (TMTSF)2ClO4 with µSR.
While the error bars are too large to rule out higher-order quasiparticle pairing, the results of fits to relaxed state data indicate a temperature dependence which appears to be consistent with s-wave pairing. The low-temperature penetration depth is found to be λ_ab = 12000 ± 2000 Å. Recent, subsequent data taken on similar samples also reveal behavior consistent with s-wave pairing, although the authors claim otherwise [31]. The intermediate state, characterized by coexisting regions of superconducting and SDW phases, has a suppressed Tc and a lineshape with a low-field tail below Tc. Both of these effects are proposed to be due to the existence of this SDW phase in the material.

Figure 1. Plot of the relaxation rate σ as a function of temperature for the relaxed (superconducting) state at H_ext = 300 Oe. The curve is a fit using the two-fluid model assuming a baseline of 0.0835 µs⁻¹ and Tc = 1.15 K, yielding λ(T → 0 K) = 12300 Å.

Figure 3. The lineshape at T = 0.025 K for 190 Oe, showing the low-field tail giving the negative third moment.

The authors would like to thank Mel Goode, Bassam Hitti, and the rest of the TRIUMF support personnel for their help in making this work possible.

[1] Stéphane Belain and Kamran Behnis, Phys. Rev. Lett. 79 (1997) 2125.
[2] J. R. Cooper et al., Phys. Rev. Lett. 63 (1989) 1984.
[3] D. Jérome et al., J. Phys. (Paris) Lett. 56 (1980) L-95.
[4] T. Osada et al., Phys. Rev. Lett. 66 (1991) 1525.
[5] S. T. Hannahs et al., Phys. Rev. Lett. 63 (1989) 1988.
[6] T. Ishiguro et al., Organic Superconductors (Springer, Berlin, 1998).
[7] D. R. Harshman and A. P. Mills, Phys. Rev. B 45 (1992) 10684;
D. R. Harshman, R. N. Kleiman, R. C. Hadden, W. J. Kossler, T. Pfiz, and D. Ll. Williams, unpublished; D. R. Harshman, invited talk presented at the Gordon Research Conference on Organic Superconductivity, Irsee, Germany, 22-27 September, 1991.
[8] K. Bechgaard et al., Phys. Rev. Lett. 46 (1981) 852.
[9] H. Schwenk et al., Phys. Rev. B 29 (1984) 500.
[10] P. Garoche et al., J. Physique Lett. 43 (1982).
[11] P. M. Chaikin et al., J. Magn. Magn. Mater. 31 (1993) 1268.
[12] K. Murata et al., Jpn. J. Appl. Phys. 26 (1987) 1367.
[13] M. Y. Choi et al., Phys. Rev. B 25 (1982) 6208.
[14] S. Bouffard et al., J. Phys. C: Solid State Phys. 15 (1982) 2951.
[15] C. Coulon et al., J. Physique 43 (1982) 1721.
[16] S. Tomic et al., J. Physique Coll. C3 (1983) 1075.
[17] I. J. Lee et al., Synth. Metals 70 (1995) 747.
[18] I. J. Lee et al., Phys. Rev. Lett. 78 (1997) 3555.
[19] I. J. Lee et al., unpublished, cond-mat/0001332.
[20] I. J. Lee et al., Phys. Rev. Lett. 88 (2002) 017004.
[21] R. D. Duncan et al., unpublished, cond-mat/0102439.
[22] A. G. Lebed et al., Phys. Rev. B 62 (2000) R795.
[23] K. Kuroki et al., Phys. Rev. B 63 (2001) 094509.
[24] M. Miyazaki et al., unpublished, cond-mat/9908488.
[25] A. Schenck, Muon Spin Rotation Spectroscopy: Principles and Applications in Solid State Physics (Hilger, Bristol, 1986).
[26] M. Tinkham, Introduction to Superconductivity (McGraw-Hill, New York, 1975).
[27] E. H. Brandt, J. Low Temp. Phys. 73 (1988) 355.
[28] A. J. Greer and W. J. Kossler, Low Magnetic Fields in Anisotropic Superconductors (Springer-Verlag, Berlin, 1995).
[29] L. P. Le et al., Phys. Rev. B 48 (1993) 7284.
[30] D. R. Harshman et al., Phys. Rev. Lett. 67 (1991) 3152.
[31] G. M. Luke et al., to be published in Physica B.
[]
[ "Published as a conference paper at ICLR 2017 ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN", "Published as a conference paper at ICLR 2017 ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN", "Published as a conference paper at ICLR 2017 ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN", "Published as a conference paper at ICLR 2017 ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN" ]
Authors: Janarthanan Rajendran, Aravind S. Lakshminarayanan, Mitesh M. Khapra, Balaraman Ravindran
Affiliations: University of Michigan; Indian Institute of Technology Madras; McGill University
Abstract: Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.
PDF: https://arxiv.org/pdf/1510.02879v5.pdf
Corpus ID: 7301499
arXiv: 1510.02879
PDF SHA-1: fb32b93eb070c71aa3b98b7987cd416884e447cc
Published as a conference paper at ICLR 2017

ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN

Janarthanan Rajendran, Aravind S. Lakshminarayanan, Mitesh M. Khapra ([email protected]), Balaraman Ravindran
University of Michigan; Indian Institute of Technology Madras; McGill University
INTRODUCTION

One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn and adapt to new environments. Reinforcement Learning (RL) is a key technique for achieving such adaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions that maximize some notion of long-term performance. Transferring knowledge gained from tasks solved earlier to solve a new target task can help, either in terms of speeding up the learning process or in terms of achieving a better solution, among other performance measures. When applied to RL, transfer could be accomplished in many ways (see Taylor & Stone (2009) for a very good survey of the field). One could use the value function from the source task as an initial estimate in the target task to cut down exploration [Sorg & Singh (2009)]. Alternatively, one could use policies from the source task(s) in the target task. This can take one of two forms: (i) the derived policies can be used as initial exploratory trajectories [Atkeson & Schaal (1997); Niekum et al. (2013)] in the target task, and (ii) the derived policy could be used to define macro-actions which may then be used by the agent in solving the target task [Mannor et al. (2004); Brunskill & Li (2014)].

While transfer in RL has been much explored, there are two crucial issues that have not been adequately addressed in the literature. The first is negative transfer, which occurs when the transfer results in a performance that is worse when compared to learning from scratch in the target task. This severely limits the applicability of many transfer techniques only to cases for which some measure of relatedness between source and target tasks can be guaranteed beforehand.
This brings us to the second problem with transfer, which is the issue of identifying an appropriate source task from which to transfer. In some scenarios, different source tasks might be relevant and useful for different parts of the state space of the target task. As a real-world analogy, consider multiple players (experts) who are good at different aspects of a game (say, tennis). For example, Player 1 is good at playing backhand shots while Player 2 is good at playing forehand shots. Consider the case of a new player (agent) who wants to learn tennis by selectively learning from these two experts. We handle such a situation in our architecture by allowing the agent to learn how to pick and use solutions from multiple and different source tasks while solving a target task, selectively applicable for different parts of the state space. We call this selective transfer. Our agent can transfer knowledge from Player 1 when required to play backhand shots and from Player 2 for playing forehand shots. Further, consider the situation where both Player 1 and Player 2 are bad at playing drop shots. Apart from the source tasks, we maintain a base network that learns from scratch on the target task. The agent can pick and use the solution of the base network when solving the target task in the parts of the state space where transferring from the source tasks is negative. Such a situation could arise when the source task solutions are irrelevant for solving the target task over a specific portion of the state space, or when transferring from the source tasks is negative over a specific portion of the state space (for example, transferring the bad drop-shot abilities of Players 1 and 2). This situation also entails the first problem of avoiding negative transfer. Our framework allows an agent to avoid transferring from both Players 1 and 2 while learning to play drop shots, and rather acquire the drop-shot skill by learning to use the base network.
The architecture is trained such that the base network uses not just the experience obtained through the usage of its own solutions in the target task, but the overall experience acquired using the combined knowledge of the source tasks and itself. This enables the base network's solutions to get closer to the behavior of the overall architecture (which uses the source task solutions as well). This makes it easier for the base network to assist the architecture in fine-tuning the useful source task solutions to suit the target task perfectly over time. The key contribution in the architecture is a deep attention network that decides which solutions to attend to for a given input state. The network learns solutions as a function of the current state, thereby aiding the agent in adopting different solutions for different parts of the state space in the target task.

To this end, we propose A2T: Attend, Adapt and Transfer, an attentive deep architecture for adaptive transfer that avoids negative transfer while performing selective transfer from multiple source tasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework that can be used to selectively transfer different skills available from different experts as appropriate to the situation. For instance, a household robot can appropriately use skills from different experts for different household chores. This would require the ability to transfer manipulation skills across objects, tasks, and robotic actuators. With a well-developed attention mechanism, the most appropriate and helpful combination of object-skill-controller can be identified for aiding the learning on a related new task. Further, A2T is generic enough to effect transfer of either action policies or action-value functions, as the case may be.
We also adapt different algorithms in reinforcement learning as appropriate for the different settings and empirically demonstrate that A2T is effective for transfer learning in each setting.

RELATED WORK

As mentioned earlier, transfer learning approaches could deal with transferring policies or value functions. For example, Banerjee & Stone (2007) describe a method for transferring value functions by constructing a game tree. Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task. Another method to achieve transfer is to reuse policies derived in the source task(s) in the target task. Probabilistic Policy Reuse, as discussed in Fernández & Veloso (2006), maintains a library of policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from the knowledge obtained. This differs from the proposed approach in that our approach can transfer policies at the granularity of individual states, which is not possible in policy reuse, rendering it unable to learn a customized policy at that granularity. Atkeson & Schaal (1997) and Niekum et al. (2013) evaluated the idea of having the transferred policy from the source tasks as explorative policies instead of having a random exploration policy. This provides better exploration behavior provided the tasks are similar. Talvitie & Singh (2007) try to find the most promising policy from a set of candidate policies that are generated using different action mappings to a single solved task. In contrast, we make use of one or more source tasks to selectively transfer policies at the granularity of states. Apart from policy transfer and value transfer as discussed above, Ferguson & Mahadevan (2006) discuss representation transfer using Proto-Value Functions.

The ideas of negative and selective transfer have been discussed earlier in the literature.
For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks. The two recent works most relevant to the proposed architecture are Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the available source tasks and directly fine-tuning the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative-transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task depend, through lateral connections, on the lower-level filters of the networks already learned for previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks can be learned through these lateral connections. Even though they observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments.

PROPOSED ARCHITECTURE

Let there be N source tasks and let K_1, K_2, . . . , K_N be the solutions of these source tasks 1, . . . , N respectively. Let K_T be the solution that we learn in the target task T. Source tasks refer to tasks that we have already learned to perform, and target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces. We propose a setting where K_T is learned as a function of K_1, . . .
, K_N, K_B, where K_B is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain K_T:

K_T(s) = w_{N+1,s} K_B(s) + Σ_{i=1}^{N} w_{i,s} K_i(s)    (1)

Σ_{i=1}^{N+1} w_{i,s} = 1,  w_{i,s} ∈ [0, 1]    (2)

w_{i,s} is the weight given to the i-th solution at state s. The agent uses K_T to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions K_1, . . . , K_N remain fixed, the base network solutions are learnt and hence K_B can change over time. There is a central network which learns the weights (w_{i,s}, i ∈ {1, 2, . . . , N+1}), given the input state s. We refer to this network as the attention network. The [0, 1] weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions, depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero [Bahdanau et al. (2014)], as opposed to a hard-attention mechanism [Mnih et al. (2014)] where we are forced to have only one non-zero weight:

w_{i,s} = exp(e_{i,s}) / Σ_{j=1}^{N+1} exp(e_{j,s}),  i ∈ {1, 2, . . . , N+1}    (3)

where (e_{1,s}, . . . , e_{N+1,s}) = f(s; θ_a). Here, f(s; θ_a) is a deep neural network (the attention network), which could consist of convolutional layers and fully connected layers depending on the representation of the input. It is parametrised by θ_a and takes as input a state s and outputs a vector of length N+1, which gives the attention scores for the N+1 solutions at state s. Eq. (3) normalises this score to get weights that satisfy Eq. (2). If the i-th source task solution is useful at state s, then w_{i,s} is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks for different parts of the state space of the target task, thus giving it the ability to perform selective transfer.
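The convex combination and its softmax normalisation (Eqs. 1-3) amount to only a few lines of NumPy. The following is a minimal sketch; the function and variable names are ours, for illustration only:

```python
import numpy as np

def combine_solutions(attention_scores, source_solutions, base_solution):
    """Combine N fixed source-task solutions and a base solution (Eqs. 1-3).

    attention_scores: length-(N+1) score vector e_s = f(s; theta_a).
    source_solutions: list of N arrays K_i(s) (e.g. per-action values or probs).
    base_solution: array K_B(s) with the same shape as each K_i(s).
    """
    # Softmax normalisation (Eq. 3): weights lie in [0, 1] and sum to 1 (Eq. 2).
    e = np.asarray(attention_scores, dtype=float)
    w = np.exp(e - e.max())
    w /= w.sum()
    # Convex combination (Eq. 1); the last weight belongs to the base network.
    solutions = list(source_solutions) + [base_solution]
    K_T = sum(w_i * K_i for w_i, K_i in zip(w, solutions))
    return K_T, w
```

With equal scores, every solution gets weight 1/(N+1); as one score grows, K_T smoothly approaches that single solution, which is exactly the selective-transfer behavior described above.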
For parts of the state space in the target task where the source task solutions cause negative transfer or are not relevant, the attention network learns to give a high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer. Depending on the feedback obtained from the environment upon following K_T, the attention network's parameters θ_a are updated to improve performance.

As mentioned earlier, the source task solutions K_1, . . . , K_N remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in weaker transfer, which we observed empirically. This also enables the use of source task solutions as long as we have their outputs alone, irrespective of how and where they come from.

Even though the agent follows K_T, we update the parameters of the base network that produces K_B as if the action taken by the agent was based only on K_B. Due to this special way of updating K_B, apart from the experience gained through the unique and individual contribution of K_B to K_T in parts of the state space where the source task solutions are not relevant, K_B also uses the valuable experience gained by using K_T, which uses the solutions of the source tasks as well. This also means that if there is a source task whose solution K_j is useful for the target task in some parts of its state space, then K_B tries to replicate K_j in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of the modifications required to make the useful source task solutions perfect for the target task. The special way of training the base network assists the architecture in achieving this faster.
Note that the agent could follow/use K_j through K_T even when K_B has not yet attained this replication in the corresponding parts of the state space. This allows for good performance of the agent in the earlier stages of training itself, when a useful source task is available and identified. Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions, without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for the transfer of solutions such as policies and values.

POLICY TRANSFER

The solutions that we transfer here are the source task policies, taking advantage of which we learn a policy for the target task. Thus, we have K_1, . . . , K_N, K_B, K_T ← π_1, . . . , π_N, π_B, π_T. Here π represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution π_T. The target task policy π_T is obtained as described in Eq. (1) and Eq. (2). The attention network that produces the weights for the different solutions is trained using the feedback obtained after taking an action following π_T. The base network that produces π_B is trained as if the sampled action came from π_B (though it originally came from π_T), the implications of which were discussed in the previous section. When the attention network's weight for the policy π_B is high, the mixture policy π_T is dominated by π_B, and the base network learning is nearly on-policy. In the other cases, π_B undergoes off-policy learning. But if we look closely, even in the latter case, since π_B moves towards π_T, it tries to be nearly on-policy all the time.
Empirically, we observe that π_B converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy. Here we describe two instantiations of A2T for policy transfer, one for direct policy search using the REINFORCE algorithm and another in the Actor-Critic setup.

POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T:

REINFORCE algorithms [Williams (1992)] can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig. 1a with K ← π. We do direct policy search, and the parameters are updated using REINFORCE. Let the attention network be parametrised by θ_a and the base network which outputs π_B be parametrised by θ_b. The updates are given by:

θ_a ← θ_a + α_{θ_a} (r − b) ∂[Σ_{t=1}^{M} log π_T(s_t, a_t)] / ∂θ_a    (5)

θ_b ← θ_b + α_{θ_b} (r − b) ∂[Σ_{t=1}^{M} log π_B(s_t, a_t)] / ∂θ_b    (6)

where α_{θ_a}, α_{θ_b} are non-negative factors, r is the return obtained in the episode, b is some baseline, and M is the length of the episode. a_t is the action sampled by the agent at state s_t following π_T. Note that while π_T(s_t, a_t) is used in the update of the attention network, π_B(s_t, a_t) is used in the update of the base network.

POLICY TRANSFER IN ACTOR-CRITIC USING A2T:

Actor-Critic methods [Konda & Tsitsiklis (2000)] are Temporal Difference (TD) methods that have two separate components, viz., an actor and a critic. The actor proposes a policy whereas the critic estimates the value function to critique the actor's policy. The updates to the actor happen through the TD error, which is the one-step estimation error that helps in reinforcing an agent's behaviour. We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig. 1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use those solution policies for its benefit.
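As a concrete illustration of Eqs. (5)-(6), the two updates can be written in closed form for a small tabular agent, where the mixture policy and both log-probability gradients are computable by hand. This toy parametrisation (tabular logits, one update per visited state) is our sketch, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def a2t_reinforce_step(theta_a, theta_b, sources, episode, ret, baseline=0.0,
                       lr_a=0.1, lr_b=0.1):
    """One REINFORCE update (Eqs. 5-6) for a tabular A2T agent.

    theta_a: (S, N+1) attention logits; the last column belongs to the base net.
    theta_b: (S, A) base-policy logits.
    sources: list of N fixed (S, A) source-task policies.
    episode: list of (state, action) pairs sampled from pi_T; ret: episode return.
    """
    g = ret - baseline
    for s, a in episode:
        w = softmax(theta_a[s])                        # attention weights (Eq. 3)
        pi_b = softmax(theta_b[s])                     # base policy
        pis = [p[s] for p in sources] + [pi_b]         # N+1 candidate policies
        pi_t = sum(w_i * p for w_i, p in zip(w, pis))  # mixture policy (Eq. 1)
        # Attention update uses the gradient of log pi_T (Eq. 5):
        # d log pi_T(a) / d e_k = w_k * (pi_k(a) - pi_T(a)) / pi_T(a)
        grad_e = np.array([w_k * (p[a] - pi_t[a])
                           for w_k, p in zip(w, pis)]) / pi_t[a]
        theta_a[s] += lr_a * g * grad_e
        # Base update pretends the action came from pi_B (Eq. 6):
        # d log pi_B(a) / d theta_b[s, :] = onehot(a) - pi_B
        grad_b = -pi_b
        grad_b[a] += 1.0
        theta_b[s] += lr_b * g * grad_b
    return theta_a, theta_b
```

Repeatedly rewarding an action shifts attention toward the source policy that prefers it while the base policy drifts toward the same action, mirroring the replication behavior described for K_B above.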
The critic evaluates the action selection from π_T on the basis of the performance on the target task. With the same notation as REINFORCE for s_t, a_t, θ_a, θ_b, α_{θ_a}, α_{θ_b}, π_B, π_T: let action a_t dictated by π_T lead the agent to the next state s_{t+1} with a reward r_{t+1}, let V(s_t) represent the value of state s_t, and let γ be the discount factor. Then, the update equations for the actor are as below:

δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t)    (7)

θ_a ← θ_a + α_{θ_a} δ_t ∂ log π_T(s_t, a_t) / ∂θ_a    (8)

θ_b ← θ_b + α_{θ_b} δ_t ∂ log π_B(s_t, a_t) / ∂θ_b    (9)

Here, δ_t is the TD error. The state-value function V of the critic is learnt using TD learning.

VALUE TRANSFER

In this case, the solutions being transferred are the source tasks' action-value functions, which we will refer to as Q functions. Thus, K_1, . . . , K_N, K_B, K_T ← Q_1, . . . , Q_N, Q_B, Q_T. Let A represent the discrete action space for the tasks and Q_i(s) = {Q(s, a_j) ∀ a_j ∈ A}. The agent acts by using Q_T in the target task, which is obtained as described in Eq. (1) and Eq. (2). The attention network and the base network of A2T are updated as described in the architecture.

VALUE TRANSFER IN Q-LEARNING USING A2T:

The state-action value function Q is used to guide the agent in selecting the optimal action a at a state s, where Q(s, a) is a measure of the long-term return obtained by taking action a at state s. One way to learn optimal policies for an agent is to estimate the optimal Q(s, a) for the task. Q-learning [Watkins & Dayan (1992)] is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation [Puterman (1994)] with the rewards obtained from the task as below:

Q(s, a) ← E[r(s, a, s′) + γ max_{a′} Q(s′, a′)]

In high-dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs.
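The TD error of Eq. (7), together with the TD(0) critic update mentioned above, can be sketched in a few lines (tabular V; the helper name is ours):

```python
def critic_update(V, s, r, s_next, alpha=0.1, gamma=0.99, terminal=False):
    """Compute the TD error delta_t of Eq. (7) and move V(s) toward the
    bootstrapped target r + gamma * V(s').  V is a dict/array of state values."""
    target = r if terminal else r + gamma * V[s_next]
    delta = target - V[s]
    V[s] += alpha * delta
    return delta
```

The actor updates of Eqs. (8)-(9) then scale the log-probability gradients of π_T and π_B by the δ_t this helper returns.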
One way to address this issue is by approximating Q(s, a) through a parametrised function approximator Q(s, a; θ), thereby generalizing over states and actions by operating on higher-level features [Sutton & Barto (1998)]. The DQN [Mnih et al. (2015)] approximates the Q-value function with a deep neural network to be able to predict Q(s, a) over all actions a, for all states s. The loss function used for learning a Deep Q Network is as below:

L(θ) = E_{s,a,r,s′}[(y^{DQN} − Q(s, a; θ))²], with y^{DQN} = r + γ max_{a′} Q(s′, a′; θ⁻)

Here, L represents the expected TD error corresponding to the current parameter estimate θ. θ⁻ represents the parameters of a separate target network, while θ represents the parameters of the online network. The usage of a target network is to improve the stability of the learning updates. The gradient descent step is shown below:

∇_θ L(θ) = E_{s,a,r,s′}[(y^{DQN} − Q(s, a; θ)) ∇_θ Q(s, a)]

To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay [Lin (1993)] buffer D (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.

We use DQN to learn our experts Q_i, i ∈ {1, 2, . . . , N}, on the source tasks. Q-learning is used to ensure that Q_T(s) is driven to a good estimate of the Q function for the target task. Taking advantage of the off-policy nature of Q-learning, both Q_B and Q_T can be learned from the experiences gathered by an ε-greedy behavioral policy based on Q_T. Let the attention network that outputs w be parametrised by θ_a and the base network outputting Q_B be parametrised by θ_b. Let θ_a⁻ and θ_b⁻ represent the parameters of the respective target networks. Note that the usage of "target" here signifies the parameters (θ_a⁻, θ_b⁻) used to calculate the target value in the Q-learning update and is different from its usage in the context of the target task.
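The ε-greedy behavioral policy on the combined Q_T and the shared bootstrapped target described above can be sketched as follows (helper names are ours; `next_q_t` stands for the attention-combined Q_T values at s′ computed with the target-network parameters θ_a⁻, θ_b⁻):

```python
import numpy as np

def epsilon_greedy(q_t, epsilon, rng):
    """Behavioral policy based on the attention-combined Q_T(s)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_t)))      # explore: uniform random action
    return int(np.argmax(q_t))                  # exploit: greedy w.r.t. Q_T

def q_t_target(r, next_q_t, gamma=0.99, terminal=False):
    """y = r + gamma * max_a' Q_T(s', a'); both the attention network and the
    base network are regressed onto this same value (Eqs. 10-12)."""
    if terminal:
        return float(r)
    return float(r + gamma * np.max(next_q_t))
```

Transitions gathered by `epsilon_greedy` go into the replay buffer, and minibatches of targets from `q_t_target` drive the gradient steps of the update equations that follow.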
The update equations are:

y^{Q_T} = r + γ max_{a′} Q_T(s′, a′; θ_a⁻, θ_b⁻)    (10)

L_{Q_T}(θ_a, θ_b) = E_{s,a,r,s′}[(y^{Q_T} − Q_T(s, a; θ_a, θ_b))²]    (11)

L_{Q_B}(θ_b) = E_{s,a,r,s′}[(y^{Q_T} − Q_B(s, a; θ_b))²]    (12)

∇_{θ_a} L_{Q_T} = E[(y^{Q_T} − Q_T(s, a)) ∇_{θ_a} Q_T(s, a)]    (13)

∇_{θ_b} L_{Q_B} = E[(y^{Q_T} − Q_B(s, a)) ∇_{θ_b} Q_B(s, a)]    (14)

θ_a and θ_b are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network (Eq. (11)) and the base network (Eq. (12)) use the target value generated by Q_T. We use target networks for both Q_B and Q_T to stabilize the updates and reduce the non-stationarity, as in DQN training. The parameters of the target networks are periodically updated to those of the online networks.

EXPERIMENTS AND DISCUSSION

We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., a chain world and a puddle world, as described below. The main goal of these experiments is to test the consistency of the results with the algorithm's motivation.

Chain world: Figure 2a shows the chain world, where the goal of the agent is to go from one point in the chain (the starting state) to another point (the goal state) in the least number of steps. At each state the agent can choose to either move one position to the left or to the right. After reaching the goal state, the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the Reinforcement Learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of these four actions: move one position to the north, south, east or west. With 0.9 probability the agent moves in the chosen direction, and with 0.1 probability it moves in a random direction irrespective of its choice of action.
On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid, the agent gets different penalties as mentioned in the legend of the figures. We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform [Bellemare et al. (2012)].

Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark tasks for deep reinforcement learning algorithms [Mnih et al. (2015), Mnih et al. (2016), Parisotto et al. (2015), Rusu et al. (2016)]. We perform our adaptive transfer learning experiments on the Atari 2600 game Pong.

ABILITY TO DO SELECTIVE TRANSFER

In this section, we consider the case when multiple partially favorable source tasks are available, such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network. This is illustrated for the policy transfer setting using the chain world shown in Fig. 2a.

Consider that the target task LT is to start in A or B with uniform probability and reach C in the least number of steps. Now, consider that two learned source tasks, viz., L1 and L2, are available. L1 is the source task where the agent has learned to reach the left end (A) starting from the right end (B). In contrast, L2 is the source task where the agent has learned to reach the right end (B) starting from the left end (A). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks L1 and L2. We learn to solve the task LT using REINFORCE given the policies learned for L1 and L2.
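The chain world above can be captured by a minimal environment. The sketch below is ours; the paper only states that the terminal reward is inversely proportional to the number of steps, so the exact 1/steps scaling is an assumption:

```python
class ChainWorld:
    """Toy chain environment in the spirit of the chain world above.
    States 0..n-1; action 0 moves left, action 1 moves right."""

    def __init__(self, n_states=8, goal=4):
        self.n, self.goal = n_states, goal

    def reset(self, start):
        self.pos, self.steps = start, 0
        return self.pos

    def step(self, action):
        self.pos = max(0, self.pos - 1) if action == 0 else min(self.n - 1, self.pos + 1)
        self.steps += 1
        done = self.pos == self.goal
        # Reward inversely proportional to the steps taken, paid on success
        # (the 1/steps form is our assumption).
        reward = 1.0 / self.steps if done else 0.0
        return self.pos, reward, done
```

Starting episodes from either end with equal probability reproduces the target task LT, while fixing the start to one end reproduces the source tasks L1 and L2.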
Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore L1 for the left half, and L2 for the right half, of the state space of the target task. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. Again, the attention network has learned to ignore L1 for the left half, and L2 for the right half, of the state space of the target task. As the base network replicates π_T over time, it has a high weight throughout the state space of the target task.

We also evaluate our architecture in a relatively more complex puddle world, shown in Figure 2c. In this case, L1 is the task of moving from S1 to G1, and L2 is the task of moving from S2 to G1. In the target task LT, the agent has to learn to move to G1 starting from either S1 or S2, chosen with uniform probability. We learn the task LT using the Actor-Critic method, where the following are available: (i) the learned policy for L1, (ii) the learned policy for L2, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results. We observe that Actor-Critic using A2T is able to use the policies learned for L1 and L2 and performs better than a network learning from scratch without any knowledge of the source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong.
We take inspiration from a real-world scenario in the sport of tennis, where one could imagine two different right-handed (or left-handed) players, the first being an expert on the forehand but weak on the backhand, while the second is an expert on the backhand but weak on the forehand. For someone who is learning to play tennis with the same style (right/left) as the experts, it is easy to follow the forehand expert whenever he receives a ball on the forehand and follow the backhand expert whenever he receives a ball on the backhand. We try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want to force the agent to be weak at returning the ball. The blurring we use is to simply black out all pixels in the required region. To make sure the blurring does not contrast with the background, we modify Pong to be played with a black background (pixel value 0) instead of the existing gray (pixel value 87).

We construct two partially helpful source task experts L1 and L2. L1 is constructed by training a DQN on Pong with the upper quadrant (the agent's side) blurred, while L2 is constructed by training a DQN with the lower quadrant (the agent's side) blurred. This essentially results in the ball being invisible when it is in the upper quadrant for L1 and in the lower quadrant for L2. We therefore expect L1 to be useful in guiding the agent to return balls in the lower quadrant, and L2 in the upper quadrant.

Figure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: green and blue bars signify the attention probabilities for Expert-1 (L1) and Expert-2 (L2) respectively. In the first two snapshots, the ball is in the lower quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.
The goal of the attention network is to learn suitable filters and parameters so that it will focus on the correct source task for a specific situation in the game. The source task experts L1 and L2 scored an average of 9.2 and 8 respectively on Pong game play with a black background. With an attention network to suitably weigh the value functions of L1 and L2, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training (the score in Pong lies in the range [−21, 21]). This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig. 4 shows a visualisation of the attention weights for the same.

We then evaluate our full architecture (A2T) in this setting, i.e., with the addition of a DQN learning from scratch (the base network) to the above setting. The architecture can take advantage of the knowledge of the source task experts selectively early on during the training while using the expertise of the base network wherever required, to perform well on the target task. Figure 5 summarizes the results, where it is clear that learning with both partially useful experts is better than learning with only one of them, which in turn is better than learning from scratch without any additional knowledge.

ABILITY TO AVOID NEGATIVE TRANSFER AND ABILITY TO TRANSFER FROM A FAVORABLE TASK

We first consider the case when only one learned source task is available, such that its solution K_1 (policy or value) can hamper the learning process of the new target task. We refer to such a source task as an unfavorable source task. In such a scenario, the attention network shown in Figure 1a should learn to assign a very low weight to (ignore) K_1. We also consider a modification of this setting by adding another source task whose solution K_2 is favorable to the target task. In such a scenario, the attention network should learn to assign a high weight to (attend to) K_2 while ignoring K_1.
We now define an experiment using the puddle world from Figure 2b for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state G1 starting from any one of the states S1, S2, S3, S4. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network. We then add a favorable task to this setting: we artificially construct a favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results.

Figure 7: Avoiding negative transfer and transferring value from a favorable task (higher is better): (a) with Pong as the negative expert, (b) with Freeway as the negative expert. Specific training and architecture details are mentioned in the appendix. The plots are averaged over two runs with different random seeds.

The target task for the value transfer experiment is to reach expert-level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is with R′(s, a) = −R(s, a), where R(s, a) is the reward provided by the ALE emulator for choosing action a at state s. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and the same action space as Pong. We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes it a good proxy for a negative source task expert even though the target task Pong has a different state space. We artificially construct a favorable source task by learning a DQN to achieve expertise on the target task (Pong) and using the learned network.
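The Inverse-Pong construction above amounts to flipping the sign of every reward before the learner sees it. A sketch of how such an unfavorable expert could be produced with an environment wrapper follows; the reset/step API and all names here are illustrative conventions, not taken from the paper.

```python
class NegatedRewardEnv:
    """Wrapper that flips the sign of every reward: R'(s, a) = -R(s, a).
    Training a DQN inside this wrapper would yield an 'Inverse-Pong'
    style unfavorable expert."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        return obs, -reward, done

class _ToyEnv:
    """Hypothetical stand-in environment that always pays +1."""
    def reset(self):
        return 0
    def step(self, action):
        return 0, 1.0, False

neg = NegatedRewardEnv(_ToyEnv())
obs, r, done = neg.step(0)
```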
Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway. From all the above results, we can clearly see that A2T is not hampered by the unfavorable source task: it learns to ignore that task and performs competitively with a randomly initialized learner on the target task without any expert available. Secondly, in the presence of an additional favorable source task, A2T learns to transfer useful knowledge from it while ignoring the unfavorable task, thereby reaching expertise on the target task much faster than in the other scenarios.

VISUALIZATION: EVOLUTION OF ATTENTION WEIGHTS WITH ONE POSITIVE AND ONE NEGATIVE EXPERT

We present the evolution of attention weights for the experiment described in Section 4.2, where we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and to transfer from a favorable source task (a perfect expert). Figure 8 depicts the evolution of the attention weights (normalised to the range [0, 1]) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts: a perfectly trained Pong-playing DQN (serving as the positive expert) and the Inverse-Pong DQN trained with negated reward functions (serving as the negative expert). Additionally, there is the base network, which learns from scratch using the experience gathered by the attentively combined behavioral policy formed from the expert networks, the base network, and itself. We train the framework for 30 epochs, and the plot illustrates the attention weights every second epoch. We clearly see from Figure 8 that no undesirable co-adaptation happens during training, and the attention on the negative expert stays uniformly low throughout.
Initially, the framework needs to collect some experience to figure out that the positive expert is optimal (or close to optimal). Until then, the attention is mostly on the base network, which is learning from scratch. The attention then shifts to the positive expert, which in turn provides more rewarding episodes and transition tuples to learn from. Finally, the attention drifts slowly from the positive expert back to the base network, after which the attention is roughly random in choosing between executing the positive expert and the base network. This is because the base network has by then acquired expertise comparable to the positive expert, which happens to be optimal for the target task. This visualization clearly shows that A2T ignores a negative expert throughout and uses a positive expert appropriately, learning quickly from the gathered experience and acquiring sufficient expertise on the target task.

WHEN A PERFECT EXPERT IS NOT AVAILABLE AMONG THE SOURCE TASKS

In our experiments in the previous subsection, dealing with prevention of negative transfer and use of a favorable source task, we considered a positive expert that is a perfect (close to optimal) expert on the very task we treat as the target task. This raises the question of relying on the presence of a perfect expert as the positive expert. If such an expert is available, the obvious solution is to execute each of the experts on the target task and vote for them with probabilities proportional to the average performance of each. The A2T framework is, however, generic and not intended merely to perform source task selection. We illustrate this with an additional baseline experiment, where the positive source task is an imperfect expert on the target task.
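The naive performance-proportional voting baseline mentioned above could be sketched as follows. This is a hypothetical illustration of the baseline, not the A2T attention mechanism; the score-shifting scheme is one simple way to handle negative average scores.

```python
def vote_probabilities(avg_scores):
    """Probability of executing each source expert, proportional to its
    (shifted, non-negative) average score on the target task."""
    lo = min(avg_scores)
    weights = [s - lo for s in avg_scores]
    total = sum(weights)
    if total == 0:               # all experts tie -> uniform vote
        return [1.0 / len(avg_scores)] * len(avg_scores)
    return [w / total for w in weights]

# A negative expert (score -21) and two equally good partial experts.
p = vote_probabilities([-21.0, 8.0, 8.0])
```

Note that such voting is upper bounded by the best available expert, which is exactly why it fails when every expert is imperfect.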
In such a case, simple weighted-average voting among the available source task networks based on their individual average rewards is upper bounded by the performance of the best available positive expert, which here is an imperfect expert on the target task. Instead, the base network has to acquire new skills not present in the source task networks. We choose a partially trained network on Pong that scores an average of 8 (max: 21). The graph in Figure 9 clearly shows that the A2T framework with a partial Pong expert and a negative expert performs better than (i) learning from scratch and (ii) A2T with only one negative expert, and performs worse than A2T with one perfect positive expert and one negative expert. This is expected, because a partial expert cannot provide as much expert knowledge as a perfect expert, but it still provides some useful knowledge that speeds up solving the target task. An important conclusion from this experiment is that the A2T framework is capable of discovering new skills not available among any of the experts when such skills are required for optimally solving the target task. To maintain consistency, we perform the same number of runs for averaging scores, and we experimented with both learning rates and picked the better-performing one (0.00025).

CONCLUSION AND FUTURE WORK

In this paper we present a very general deep neural network architecture, A2T, for transfer learning that avoids negative transfer while enabling selective transfer from multiple source tasks in the same domain. We show simple ways of using A2T for policy transfer and value transfer. We empirically evaluate its performance with different algorithms, using simulated worlds and games, and show that it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be used for transferring other useful knowledge such as a model of the world.
While in this work we focused on transfer between tasks that share the same state and action spaces and lie in the same domain, the use of deep networks opens up the possibility of going beyond this setting. For example, a deep neural network can be used to learn common representations [Parisotto et al. (2015)] for multiple tasks, thereby enabling transfer between related tasks that may have different state-action spaces. A hierarchical attention over the lower-level filters across source task networks, while learning the filters for the target task network, is another natural extension for transfer across tasks with different state-action spaces. The setup from Progressive Neural Networks [Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained for the policy/value transfer. Exploring this setting for continuous control tasks, so as to transfer from modular controllers as well as avoid negative transfer, is also a potential direction for future research. The nature of the tasks considered in our experiments is naturally connected to hierarchical reinforcement learning and continual learning. For instance, the blurring experiments inspired by tennis, based on experts for specific skills like the forehand and backhand, could be viewed as learning from sub-goals (program modules) like Forehand and Backhand to solve a more complex and broader task like Tennis by invoking the relevant sub-goals. This structure could be very useful for building a household robot for general-purpose navigation and manipulation, whereby specific skills such as manipulating different objects or navigating between different source-destination points could be invoked when necessary. The attention network in the A2T framework is essentially a soft meta-controller and hence presents itself as a powerful differentiable tool for continual and meta learning.
Meta-controllers have typically been designed with a discrete decision structure over high-level subgoals. This paper presents an alternative differentiable meta-controller with a soft-attention scheme. We believe this aspect can be exploited in differentiable meta-learning architectures for hierarchical reinforcement learning. Overall, we believe that A2T is a novel way to approach problems like transfer learning, meta-learning, and hierarchical reinforcement learning, and that further refinements on top of this design are a good direction to explore.

APPENDIX A: DETAILS OF THE NETWORK ARCHITECTURE IN VALUE TRANSFER EXPERIMENTS

For the source task expert DQNs, we use the same architecture as [Mnih et al. (2015)], where the input is 84 × 84 × 4, followed by 32 convolution filters of size 8 × 8 with stride 4 × 4, then 64 convolution filters of size 4 × 4 with stride 2 × 2, then 64 convolution filters of size 3 × 3 with stride 1 × 1. This is followed by a fully connected layer of 512 units and finally a fully connected output layer with as many units as the number of actions in Pong (Freeway), which is 3. We use ReLU nonlinearity in all the hidden layers. With respect to the A2T framework architecture, we have experimented with two possible architectures:

• The base and attention networks following the NIPS architecture of Mnih et al. (2013), except that the output layer is a softmax for the attention network.

• The base and attention networks following the Nature architecture of Mnih et al. (2015), with a softmax output layer for the attention network.

We use the gradient update scheme of [Mnih et al. (2015)]. For policy transfer, since the tasks were simple, stochastic gradient descent was sufficient to provide stable updates. We also use reward clipping, target networks, and experience replay for our value transfer experiments in exactly the same way (all hyperparameters retained) as [Mnih et al.
(2015)]. A training epoch is 250,000 frames, and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report the average score over the completed episodes in each testing epoch. The average scores obtained this way are averaged over 2 runs with different random seeds. In the testing epochs, we use ε = 0.05 in the ε-greedy policy.

APPENDIX B: LEARNING RATE

In all our experiments, we trained the architecture using the learning rates 0.0025 and 0.0005. In general, the lower learning rate provided more stable (lower-variance) training curves. When comparing across algorithms, we picked the better-performing learning rate of the two for each training curve.

APPENDIX C: BLURRING EXPERIMENTS ON PONG

The experts are trained with blurring (hiding the ball) and a black background, as illustrated in APPENDIX E. Therefore, to compare the learning with that of a random network without any additional knowledge, we ran the baseline DQN on Pong with a black background too. A black background provides a rich contrast between the white ball and the background, making training easier and faster; this is why the performance curves in that setting differ from the other two settings reported for the Inverse-Pong and Freeway negative transfer experiments, where no blacking out is done and Pong is played with a gray background. The blurring mechanism in Pong is illustrated in APPENDIX E.

APPENDIX D: BLURRING EXPERIMENTS ON BREAKOUT

Similar to our blurring experiment on Pong, we ran an additional experiment on the Atari 2600 game Breakout to validate the efficiency of our attention mechanism. We consider a setup with two experts L1 and L2 along with our attention network. The experts L1 and L2 were trained by blurring the lower left and lower right quadrants of the Breakout screen respectively.
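The ε-greedy evaluation policy used in the testing epochs above can be sketched as follows; the function and variable names are illustrative.

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon take a uniformly random action,
    otherwise take the greedy (argmax-Q) action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

rng = random.Random(0)
# epsilon = 0 always returns the greedy action.
greedy = epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0, rng=rng)
# epsilon = 1 always explores, but stays within the action set.
explored = {epsilon_greedy([0.1, 0.9, 0.3], epsilon=1.0, rng=rng) for _ in range(50)}
```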
We don't have to make the background black as in the case of Pong, because the background is already black in Breakout, and direct blurring is sufficient to hide the ball in the respective regions without introducing any contrast. We blur only the lower part of the screen so that the agent can at least anticipate the ball from its movement at the top; we empirically observed that blurring the top half as well makes it hard to learn any meaningful partially useful experts L1 and L2. The goal of this experiment is to show that the attention network can learn suitable filters so as to dynamically adapt and select the expert appropriate to the situation (game screen) in the task. The expert L1, blurred on the lower left, is bound to be weak at returning balls in that region, while L2 is expected to be weak on the lower right. This is in the same vein as the forehand-backhand example in tennis and its synthetic simulation for Pong by blurring the upper and lower quadrants. During game play, the attention mechanism is expected to ignore L2 when the ball is in the lower right half (while focusing on L1), and similarly ignore L1 when the ball is in the lower left half (while focusing on L2). We learn experts L1 and L2 which score 42.2 and 39.8 respectively. Using the attention mechanism to select the correct expert, we were able to achieve a score of 94.5 after training for 5 epochs. Each training epoch corresponds to 250,000 decision steps, while the scores are averaged over completed episodes run for 125,000 decision steps. This shows that the attention mechanism learns to select the suitable expert. Though the performance is limited by the weaknesses of the respective experts, our goal is to show that the attention paradigm is able to take advantage of both experts appropriately, which is evident from the scores achieved by the standalone experts versus the attention mechanism.
Additionally, we present a visualization of the attention weights assigned to the experts L1 and L2 during game play in APPENDIX G; the weights assigned are in agreement with what we expect from selective attention. The blurring mechanism is visually illustrated in APPENDIX F: we perform blurring by ensuring X1 = 0 and X2 = 0 for all pixels within the respective quadrants when training L1 and L2. Effectively, this is equivalent to hiding the ball in the appropriate quadrants; blurring X1 simulates weakness in the lower left quadrant, while blurring X2 simulates weakness in the lower right quadrant. We don't blur all the way down to the last row, to ensure the paddle controlled by the agent remains visible on the screen, and we don't black out the rectangular border of width 4 pixels surrounding the screen. Figures 11a and 11b illustrate the lower left quadrant before and after blurring; Figures 11c and 11d do the same for the lower right quadrant.

APPENDIX G: BLURRING ATTENTION VISUALIZATION ON BREAKOUT

Figure 12: Visualisation of the attention weights in Selective Transfer with Attention for Breakout. Green and blue bars signify the attention probabilities for Expert-1 (L1) and Expert-2 (L2) respectively, on a scale of [0, 1]. In the first two snapshots the ball is in the lower right quadrant and, as expected, the attention is high on Expert-1; in the third and fourth snapshots the ball is in the lower left quadrant and hence the attention is high on Expert-2.

APPENDIX J: CASE STUDY OF TARGET TASK PERFORMANCE LIMITED BY DATA AVAILABILITY

Figure 13: This experiment is a case study on a target task where the performance is limited by data availability.

So far, we focused on experiments where the target task is to solve Pong (normal or black background) for value transfer, and puddle worlds for policy transfer.
In both these cases, a randomly initialized value (or policy) network learning without the aid of any expert network is able to solve the target task within a reasonable number of epochs (or iterations). We want to illustrate a case where solving the target task in reasonable time is hard and the presence of a favorable source task significantly impacts the speed of learning. To do so, we consider a variant of Pong as our target task. In this variant, only a small probability ρ of the transition tuples (s, a, r, s′) with non-zero reward r are added to the replay memory (and used for learning through random batch sampling). This way, performance on the target task is limited by the availability of rewarding (positive or negative) transitions in the replay memory. This synthetically makes the target task of Pong a sparse-reward problem, because the replay memory is largely filled with transition tuples that have zero reward. We do not use any prioritized sampling, so as to make sure the sparsity has a negative effect on learning to solve the target task. We use a version of Pong with a black background (as used in Section 4.1 for the blurring experiments) for faster experimentation; ρ = 0.1 was used for the plots shown in Figure 13. Figure 13a clearly shows the difference between the normal Pong task without any synthetic sparsity and the new variant we introduce: learning is much slower and is clearly limited by data availability even after 20 epochs (20 million frames) due to reward sparsity. Figure 13b describes a comparison between the A2T setting with one positive expert (which expertly solves the target task) and one negative expert, learning from scratch, and direct fine-tuning from a negative expert.
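The synthetic sparsity described above can be sketched as a filter in front of the replay memory: rewarding transitions are kept only with probability ρ, while zero-reward transitions always pass through. All names here are illustrative, not from the paper's implementation.

```python
import random

def add_to_replay(memory, transition, rho, rng):
    """Append (s, a, r, s') to the replay memory, but keep a rewarding
    transition (r != 0) only with probability rho."""
    _, _, r, _ = transition
    if r != 0 and rng.random() >= rho:
        return                      # drop most rewarding transitions
    memory.append(transition)

rng = random.Random(0)
memory = []
for i in range(1000):
    r = 1.0 if i % 2 == 0 else 0.0  # half the stream is rewarding
    add_to_replay(memory, (i, 0, r, i + 1), rho=0.1, rng=rng)

rewarding = sum(1 for t in memory if t[2] != 0)
```

With ρ = 0.1, only about a tenth of the 500 rewarding transitions survive, so the memory is dominated by zero-reward tuples, which is precisely the sparsity the experiment induces.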
We clearly see the effect of having the positive expert among the source tasks: it speeds up the learning process significantly compared to learning from scratch, while fine-tuning on top of a negative expert severely limits learning even after 20 epochs of training. We also see that the A2T framework works well in sparse-reward settings and avoids negative transfer even in such cases, while clearly learning to benefit from the presence of a target task expert among the source task networks. Importantly, this experiment demonstrates that transfer learning has a significant effect on tasks which may be hard (infeasible to solve within a reasonable training time) without any expert available. Further, A2T is also beneficial in such (sparse-reward) situations when accessing the weights of an expert network is not possible and only the outputs of the expert (policy or value function) can be used. Such synthetic sparse variants of existing tasks are a good way to explore future directions at the intersection of inverse reinforcement learning and reward-based learning, with A2T providing a viable framework for off-policy and on-policy learning.

Figure 1: (a) The A2T architecture; the dotted arrows represent the path of back-propagation. (b) Actor-critic using A2T. The attention weights are computed as

$w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \quad i \in \{1, 2, \ldots, N+1\}, \qquad (3)$

where $(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a)$.

Figure 2: Different worlds for the policy transfer experiments.

Figure 3: Results of the selective policy transfer experiments.

Figure 5: Selective value transfer.

Figure 6: Avoiding negative transfer and transferring policy from a favorable task (lower is better).

Figure 8: Evolution of attention weights with one positive and one negative expert.
Figure 9: Partial positive expert experiment.

Figure 11: (a) Ball in lower-left quadrant; (b) blurred lower-left quadrant; (c) ball in lower-right quadrant; (d) blurred lower-right quadrant. These figures illustrate the blurring mechanism used for the selective transfer experiments on Breakout. The background of the screen is already black. Let X (84 × 84) denote an array containing the pixels of the screen. We focus on the two quadrants X1 = X[31 : 81, 4 : 42] and X2 = X[31 : 81, 42 : 80].

Figure 13: Case study of target task performance limited by data availability: (a) comparison of sparse Pong to normal Pong; (b) A2T with a positive and a negative expert.

The following explains the blurring mechanism for the selective transfer experiments on Pong. The background of the screen is made black. Let X (84 × 84) denote an array containing the pixels of the screen. The paddle controlled by the agent is the one on the right. We focus on the two quadrants X1 = X[: 42, 42 :] and X2 = X[42 :, 42 :] of the Pong screen relevant to the agent-controlled paddle. To simulate an expert that is weak at returning balls in the upper quadrant, the portion of X1 up to the horizontal location of the agent's paddle, i.e. X1[:, : 31], is blacked out; similarly, to simulate weakness in the bottom quadrant, we blur the corresponding portion of X2, i.e. X2[:, : 31] = 0. Figures 10a and 10b illustrate the upper quadrant before and after blurring; Figures 10c and 10d do the same for the lower quadrant. Effectively, blurring this way on a black screen is equivalent to hiding the ball (white pixels) in the quadrant where weakness is to be simulated. Hence, Figures 10b and 10d show the mechanisms used while training a DQN on Pong to hide the ball in the respective quadrants, creating the partially useful experts analogous to the forehand-backhand experts in tennis.
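The slicing convention for the Pong upper quadrant above can be sketched directly in NumPy; the function name is hypothetical, and the all-white test frame is just a stand-in for a real game frame.

```python
import numpy as np

def blur_upper_quadrant(frame):
    """Black out the agent-side upper quadrant of an 84x84 Pong frame,
    following the convention X1 = X[:42, 42:], with X1[:, :31] set to 0
    (i.e. up to the paddle's column)."""
    out = frame.copy()
    out[:42, 42:42 + 31] = 0   # hide the ball in the upper quadrant
    return out

frame = np.full((84, 84), 255, dtype=np.uint8)  # all-white test frame
blurred = blur_upper_quadrant(frame)
```

The lower-quadrant variant is symmetric: it zeroes `frame[42:, 42:42 + 31]` instead.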
APPENDIX E: BLURRING MECHANISM IN PONG - DETAILS

Figure 10: (a) Ball in upper quadrant; (b) blurred upper quadrant; (c) ball in lower quadrant; (d) blurred lower quadrant. Here X[: a, : b] denotes the subarray of X with all rows up to row index a and all columns up to column index b.

ACKNOWLEDGEMENTS

Thanks to the anonymous reviewers of ICLR 2017, who provided thoughtful remarks and helped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, Yoshua Bengio, Sarath Chandar, Caglar Gulcehre and Charu Chauhan for useful feedback about the work.

REFERENCES

Christopher G. Atkeson and Stefan Schaal. Robot learning from demonstration. In Proceedings of the International Conference on Machine Learning, volume 97, 1997.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Bikramjit Banerjee and Peter Stone. General game learning using knowledge transfer. In The 20th International Joint Conference on Artificial Intelligence, 2007.

Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. arXiv preprint arXiv:1207.4708, 2012.

Emma Brunskill and Lihong Li. PAC-inspired option discovery in lifelong reinforcement learning. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 316-324, 2014.

Kimberly Ferguson and Sridhar Mahadevan. Proto-transfer learning in Markov decision processes using spectral methods. Computer Science Department Faculty Publication Series, pp. 151, 2006.

Fernando Fernández and Manuela Veloso. Probabilistic policy reuse in a reinforcement learning agent. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 720-727. ACM, 2006.

Vijay Konda and John Tsitsiklis. Actor-critic algorithms. In SIAM Journal on Control and Optimization, pp. 1008-1014. MIT Press, 2000.

George Konidaris, Ilya Scheidwasser, and Andrew G. Barto. Transfer in reinforcement learning via shared features. The Journal of Machine Learning Research, 13(1):1333-1371, 2012.

Alessandro Lazaric and Marcello Restelli. Transfer from multiple MDPs. In Advances in Neural Information Processing Systems, pp. 1746-1754, 2011.

Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.

Shie Mannor, Ishai Menache, Amit Hoze, and Uri Klein. Dynamic abstraction in reinforcement learning via clustering. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 71. ACM, 2004.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Scott Niekum, Sachin Chitta, Andrew G. Barto, Bhaskara Marthi, and Sarah Osentoski. Incremental semantically grounded learning from demonstration. In Robotics: Science and Systems, volume 9, 2013.

Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. CoRR, abs/1511.06342, 2015.

Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. 1994.

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR, abs/1606.04671, 2016.

Jonathan Sorg and Satinder Singh. Transfer via soft homomorphisms. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pp. 741-748. International Foundation for Autonomous Agents and Multiagent Systems, 2009.

Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.

Erik Talvitie and Satinder Singh. An experts algorithm for transfer learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pp. 1065-1070. Morgan Kaufmann Publishers Inc., 2007.

Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633-1685, 2009.

Matthew E. Taylor and Peter Stone. An introduction to intertask transfer for reinforcement learning. AI Magazine, 32(1):15, 2011.

Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3):279-292, 1992.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
[]
[ "Pseudomechanics of supersymmetric oscillators", "Pseudomechanics of supersymmetric oscillators", "Pseudomechanics of supersymmetric oscillators", "Pseudomechanics of supersymmetric oscillators" ]
[ "Akash Sinha \nSchool of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia\n", "Aritra Ghosh \nSchool of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia\n", "Akash Sinha \nSchool of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia\n", "Aritra Ghosh \nSchool of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia\n" ]
[ "School of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia", "School of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia", "School of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia", "School of Basic Sciences\nIndian Institute of Technology Bhubaneswar\n752050Argul, Jatni, KhurdaOdishaIndia" ]
[]
In this note, we study some classical aspects of supersymmetric oscillators, in one and two spatial (bosonic) dimensions. Our main ingredient is a generalized Poisson bracket, which emerges as a classical counterpart to commutators and anticommutators from supersymmetric quantum mechanics. In one dimension, i.e. in presence of one bosonic and one fermionic coordinate, the Hamiltonian admits a U (1, 1) symmetry for which we explicitly compute the first integrals. It is found that the supercharges emerge from the formalism in a natural way as fermionic conserved quantities. Following this, we describe classical supercharge operators based on the generalized Poisson bracket, which define supersymmetry transformations. The results are generalized to two spatial dimensions, where there is a U (2, 2) symmetry in the Hamiltonian. We comment on supersymmetric generalizations of the Pais-Uhlenbeck and isotonic oscillators. The possibility of defining a generalized Nambu bracket in this framework is also discussed.
null
[ "https://export.arxiv.org/pdf/2304.04747v2.pdf" ]
258,048,527
2304.04747
2c754ca8a2fdc342d1e9c91a45a0495cdd57ca7d
Pseudomechanics of supersymmetric oscillators

Akash Sinha and Aritra Ghosh
School of Basic Sciences, Indian Institute of Technology Bhubaneswar, Argul, Jatni, Khurda, Odisha 752050, India
(Dated: April 18, 2023)
arXiv:2304.04747v2 [math-ph], 16 Apr 2023

In this note, we study some classical aspects of supersymmetric oscillators, in one and two spatial (bosonic) dimensions. Our main ingredient is a generalized Poisson bracket, which emerges as a classical counterpart to commutators and anticommutators from supersymmetric quantum mechanics. In one dimension, i.e. in the presence of one bosonic and one fermionic coordinate, the Hamiltonian admits a U(1, 1) symmetry for which we explicitly compute the first integrals. It is found that the supercharges emerge from the formalism in a natural way as fermionic conserved quantities. Following this, we describe classical supercharge operators based on the generalized Poisson bracket, which define supersymmetry transformations. The results are generalized to two spatial dimensions, where there is a U(2, 2) symmetry in the Hamiltonian. We comment on supersymmetric generalizations of the Pais-Uhlenbeck and isotonic oscillators. The possibility of defining a generalized Nambu bracket in this framework is also discussed.

I. INTRODUCTION

In recent times, there has been a considerable interest in supersymmetric quantum mechanics [1,2], which has become a discipline of research in its own right following Witten's remarkable work [3,4]. The formulation of supersymmetric quantum mechanics involves the introduction of fermionic coordinates [1,2], in addition to the usual bosonic ones, with certain supersymmetry transformations between them.
This leads to the emergence of supercharges, which are the generators of the supersymmetry transformations and therefore, in a sense, characterize the supersymmetric theory. The supercharges commute with the Hamiltonian of the system and are described by a super-Poincare algebra. For a quantum mechanical system with one bosonic and one fermionic degree of freedom, one has supercharge operators Q and Q̄ (themselves fermionic) whose anticommutator gives the Hamiltonian of the system, i.e.

{Q, Q̄} = 2H, (1)

while [Q, H] = [Q̄, H] = 0. One could now ask: what is the appropriate classical limit of a supersymmetric quantum theory? Since the quantum theory involves both commutators and anticommutators, the corresponding classical theory must incorporate a general bilinear bracket structure on the space of observables which could be realized as the ℏ → 0 limit of the quantum theory. Such a bracket was introduced in [5] (see also [2], chapter 3 therein), and has been called a 'generalized' Poisson bracket. It is noteworthy that the corresponding 'classical' theory shall incorporate anti-commuting as well as c-number variables in the form of coordinates and momenta, a framework dubbed pseudomechanics by Casalbuoni [6].

In order to appreciate the idea of the generalized Poisson bracket, it is important to realize that a supersymmetric quantum theory has associated with it both bosonic and fermionic variables. The latter are described in terms of Grassmann numbers, and as such do not have a straightforward classical analogue. Instead, we would like to view their presence in the classical limit as mere anti-commuting variables in the Hamiltonian. The notion of fermionic variables shall lead to the concepts of left and right differentiation [5]. We shall review the details in the next section [section-(II)].

In the present work, we consider the pseudo-classical mechanics, or pseudomechanics, of supersymmetric oscillators with respect to the generalized Poisson bracket. In section-(III), we consider an oscillator in one spatial dimension, i.e. one which is described by a bosonic coordinate-momentum pair (q, p) and similarly a fermionic pair (θ, π). It is found that the generalized Poisson bracket allows us to define a supersymmetry transformation between the bosonic and fermionic sectors. The Hamiltonian is associated with a U(1, 1) symmetry, and the supercharges emerge in a natural manner as conserved quantities associated with the system. Since one can construct a phase space, locally spanned by the variables {q, p, θ, π}, we may define a generalized Nambu 4-bracket {·, ·, ·, ·} on the algebra of functions on this phase space [7-9]. This bracket generalizes the notion of the generalized Poisson bracket and can describe the Hamilton's equations. Following this, in section-(IV), we extend our analysis to two spatial dimensions by considering the supersymmetric isotropic oscillator. The features from the one-dimensional case are generalized appropriately to two dimensions. Finally, we comment on classical supersymmetric generalizations of two physically interesting oscillator models, namely, the Pais-Uhlenbeck oscillator and the isotonic oscillator, in section-(V). We conclude the paper in section-(VI).

II. GENERALIZED POISSON BRACKETS

In this section, we will briefly review the basic formulas involving generalized Poisson brackets and refer the reader to [2,5] for the details. Consider a classical system with phase space variables (q, p). Let us introduce a new 'fermionic coordinate' θ and an associated momentum π. We want to construct a classical mechanics framework such that we end up getting the bracket relations

{q, p} = 1,  {θ, π} = 1, (2)

with all others vanishing. This can be achieved using the 'generalized' Poisson bracket discussed in [2,5].
To this end, let us define the notions of even and odd operators in the quantum theory. For a generic operator A, the permutation operator P is such that P⁻¹AP = (−1)^Π(A) A, where Π(A) = +1 if A is an even operator, while Π(A) = −1 if A is odd. Since fermionic variables are 'odd' with respect to permutations, it is important to consider even and odd functions with care. We preserve the same definition of odd-ness and even-ness in the classical theory, in that all the bosonic variables are even, while the fermionic ones are odd. Quite naturally then, the products of variables, i.e. q², p² and qp, are even, and similarly θ² (= 0), π² (= 0) and θπ are even too. For a given system, let us note that, following [5], the Hamiltonian function is defined as

H = −L + θ̇π + q̇p. (3)

Therefore, noting that the Lagrangian is even, while (q, p) and (θ, π) are pairwise even and odd respectively, one must conclude that the Hamiltonian H is even [2]. Prior to introducing generalized Poisson brackets, we must identify the notions of left and right differentiation. For any function F(Q, P) on the phase space, where Q and P collectively indicate the coordinates and momenta respectively (both bosonic and fermionic), one defines its total derivative as

dF = F_{,Q} dQ + dP ∂_P F. (4)

Here, F_{,Q} denotes the right derivative, while ∂_P F denotes the left derivative. We shall preserve this convention throughout the paper, i.e. derivatives with respect to the coordinates shall be performed as right derivatives, while those with respect to the momenta shall be left derivatives. However, right derivatives can be converted to left derivatives by accounting for all the permutations. Explicitly, one has

∂_Q F = (−1)^{Π(Q)[Π(Q)+Π(F)]} F_{,Q}. (5)

The Hamilton's equations then read

q̇ = ∂H/∂p,  ṗ = −∂H/∂q,  θ̇ = ∂H/∂π,  π̇ = ∂H/∂θ, (6)

where we have used the fact that the Hamiltonian is an even function.
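In the purely c-number sector the generalized bracket reduces to the ordinary Poisson bracket, so the bosonic pair of the Hamilton's equations (6) can be sanity-checked numerically. The following short Python sketch (ours, not from the paper; sample point and step size are arbitrary) does this for the bosonic oscillator Hamiltonian H = (p² + q²)/2 using finite-difference derivatives:

```python
def d(f, x, i, h=1e-6):
    """Central finite-difference derivative of f with respect to x[i]."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def pb(f, g, x):
    """Ordinary (bosonic) Poisson bracket {f, g} at the phase point x = [q, p]."""
    return d(f, x, 0) * d(g, x, 1) - d(g, x, 0) * d(f, x, 1)

H = lambda x: 0.5 * (x[1] ** 2 + x[0] ** 2)  # bosonic part of the oscillator Hamiltonian
q = lambda x: x[0]
p = lambda x: x[1]

pt = [0.7, -1.3]       # arbitrary sample phase-space point
q_dot = pb(q, H, pt)   # should equal  dH/dp =  p
p_dot = pb(p, H, pt)   # should equal -dH/dq = -q
```

Both brackets reproduce the right-hand sides of Eq. (6) at the sample point, as expected for an even Hamiltonian.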
This leads to the following expression for the Hamiltonian vector field:

X_H(F) = {F, H} = F_{,Q} ∂_P H − H_{,Q} ∂_P F, (7)

or equivalently, X_H(·) = {·, H}. Therefore, one has now introduced a generalized Poisson bracket, by taking into account the properties of left and right differentiation. As has been argued in [5], this can be realized as the classical limit of a supersymmetric quantum theory, and it takes into account the algebraic properties analogous to both commutator and anticommutator brackets. Moreover, this bracket satisfies a generalized Jacobi identity, Eq. (8) (see [5] for other algebraic properties).

In what follows, we shall denote a generic even function by E, i.e. Π(E) = +1, while an odd function shall be denoted by O, i.e. Π(O) = −1. Subscripts shall be used to distinguish between multiple even/odd functions in the same equation. Although the expression for the generalized Poisson bracket indicated in Eq. (7) involves the fact that the Hamiltonian is an even function, one may write an analogous expression for the bracket between two odd functions. We quote the following expressions from [2]:

{E₁, E₂}_{q,p,θ,π} = (∂E₁/∂q)(∂E₂/∂p) − (∂E₂/∂q)(∂E₁/∂p) − (∂E₁/∂θ)(∂E₂/∂π) + (∂E₂/∂θ)(∂E₁/∂π), (9)

{E, O}_{q,p,θ,π} = (∂E/∂q)(∂O/∂p) − (∂O/∂q)(∂E/∂p) − (∂E/∂θ)(∂O/∂π) + (∂O/∂θ)(∂E/∂π), (10)

{O, E}_{q,p,θ,π} = (∂O/∂q)(∂E/∂p) − (∂E/∂q)(∂O/∂p) + (∂O/∂θ)(∂E/∂π) + (∂E/∂θ)(∂O/∂π), (11)

{O₁, O₂}_{q,p,θ,π} = (∂O₁/∂q)(∂O₂/∂p) + (∂O₂/∂q)(∂O₁/∂p) + (∂O₁/∂θ)(∂O₂/∂π) + (∂O₂/∂θ)(∂O₁/∂π). (12)

III. ONE-DIMENSIONAL CASE

Let us now consider a classical supersymmetric oscillator consisting of one bosonic and one fermionic coordinate. The Hamiltonian describing this system is given by

H = (1/2)(p² + q²) + iπθ. (13)

In order to make supersymmetry explicit, we define a pair of c-number coordinates on M_B,

P(q, p) = (p − iq)/√2,  X(q, p) = (q − ip)/√2. (14)

Using the coordinate expressions for the generalized Poisson bracket [Eqs.
(9)-(12)], it is easy to verify that

{X, X}_{q,p} = 0 = {P, P}_{q,p},  {X, P}_{q,p} = 1, (15)

making the transformation canonical. These new coordinates allow us to re-write the Hamiltonian as

H = i(PX + πθ), (16)

where the bosonic and fermionic variables appear in a similar way. Now, for the theory to be supersymmetric, there must exist a 'supersymmetry transformation' which relates the bosonic and fermionic sectors. Such a transformation can be introduced in a natural manner by employing the generalized Poisson bracket in the following way. Let us consider two (fermionic/odd) quantities

Q = αPθ + βXπ,  Q̄ = ǫPθ + δXπ, (17)

for some arbitrary non-zero complex constants {α, β, ǫ, δ} which satisfy α/ǫ ≠ β/δ, so that we avoid Q and Q̄ becoming linearly dependent. If we impose Q̄ = Q*, with * denoting complex conjugation, one finds

Q = αPθ + βXπ,  Q̄ = β*Pθ + α*Xπ, (18)

and the condition for linear independence of Q and Q̄ is just |α|² ≠ |β|². Using the above construction, it is easy to check that {Q, Q̄} = (αδ + βǫ)(PX + πθ), and thus, for Q̄ = Q*, we have (αδ + βǫ) = (|α|² + |β|²) ≠ 0. This finally gives the result

{Q, Q̄} ∼ H, (19)

analogous to Eq. (1), where {·, ·} denotes the generalized Poisson bracket in the equation above, while the same notation is used to denote the anticommutator in Eq. (1). The appearance of this relation is imperative in any supersymmetric theory, and it is therefore suggestive to interpret Q and Q̄ as 'classical supercharges'. For that, however, one also needs to satisfy the relations

{Q, Q} = 0 = {Q̄, Q̄}. (20)

The conditions given above are satisfied by considering exactly one amongst the parameters (α, β) to be non-zero while the other vanishes. For instance, if one takes α = √2 and β = 0, Eqs. (20) are satisfied and we get

(i/2){Q, Q̄} = H, (21)

which is the pseudomechanics analogue of Eq. (1). Eq. (21) is also obtained for α = 0 and β = √2.
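The canonicity of the transformation (14), and the fact that iPX reproduces the bosonic part (p² + q²)/2 of Eq. (13) [cf. Eq. (16)], can be confirmed numerically on the c-number sector. A short Python sketch (ours; the sample point is arbitrary, and derivatives are taken by central differences):

```python
from math import sqrt

SQ2 = sqrt(2.0)
X = lambda q, p: (q - 1j * p) / SQ2   # Eq. (14)
P = lambda q, p: (p - 1j * q) / SQ2

def pb(f, g, q, p, h=1e-6):
    """Bosonic Poisson bracket {f, g} via central differences."""
    fq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    fp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    gq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    gp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return fq * gp - gq * fp

q0, p0 = 0.4, 1.1                       # arbitrary sample point
canonical = pb(X, P, q0, p0)            # Eq. (15): {X, P} = 1
bosonic_H = 1j * P(q0, p0) * X(q0, p0)  # i P X  =  (p^2 + q^2)/2
```

The bracket evaluates to 1 and iPX comes out real and equal to (p₀² + q₀²)/2, in agreement with Eqs. (15) and (16).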
Now, with any choice of α ≠ 0 and β = 0, one can verify that

{X, Q} ∼ θ,  {P, Q} = 0,  {θ, Q} = 0,  {π, Q} ∼ P,
{X, Q̄} = 0,  {P, Q̄} ∼ π,  {θ, Q̄} ∼ X,  {π, Q̄} = 0, (22)

where all the generalized Poisson brackets given above are evaluated in the (X, P, θ, π) basis, and the symbol '∼' indicates that the relations are true up to a constant factor, proportional to α. Eqs. (22) are a set of supersymmetry transformations, generated by Q and Q̄ for α ≠ 0 and β = 0. Similarly, if one would have chosen α = 0 and β ≠ 0, then Eqs. (19) and (20) are still satisfied, but Eqs. (22) become

{X, Q} = 0,  {P, Q} ∼ π,  {θ, Q} ∼ X,  {π, Q} = 0,
{X, Q̄} ∼ θ,  {P, Q̄} = 0,  {θ, Q̄} = 0,  {π, Q̄} ∼ P, (23)

where the symbol '∼' indicates that the relations are true up to a constant factor, proportional to β. Eqs. (23) are a set of supersymmetry transformations for α = 0 and β ≠ 0. Therefore, we make the following proposition:

Proposition 1. The operators X_Q(·) = {·, Q} and X_Q̄(·) = {·, Q̄}, acting on the algebra of functions on the phase space, define a supersymmetry transformation.

We would interpret X_Q(·) = {·, Q} and X_Q̄(·) = {·, Q̄} as 'classical supercharge operators' in the present context. The following result can be proved:

Theorem 1. The classical supercharge operators satisfy X_Q² = X_Q̄² = 0.

Proof - By a straightforward computation using the generalized Jacobi identity [Eq. (8)].

A. Conserved quantities

We shall now show that one actually need not define the supercharges by hand; rather, they emerge as conserved quantities of the theory. Consider the following sets of quantities, P = (P π) and Q = (X θ). Then the total Hamiltonian reads

H = iPᵀQ. (24)

It is not hard to see that P = −iσ₃Q*. Therefore, the Hamiltonian shall admit a U(1, 1) symmetry [10], i.e. Q → uQ, P → (σ₃u*σ₃)P. The conserved quantities can be calculated by demanding that under the action of u ∈ U(1, 1), the Hamiltonian remains invariant.
Now, the infinitesimal variation of Q and P under the action of u ≡ exp[iφ_μ T_μ] is given by

δQ = iφ_μ T_μ Q,  δP = −iφ_μ (σ₃(T_μ)*σ₃)P, (25)

where φ_μ ∈ R and μ = 0, 1, 2, 3. Here, the T_μ's are the generators of the U(1, 1) group and are given by T₀ = σ₀/2, T₃ = σ₃/2, T₁ = iσ₁/2, T₂ = iσ₂/2, where σ₀ = I and σ_i, i = 1, 2, 3, are the Pauli matrices. Demanding that the infinitesimal variation of the Hamiltonian vanishes under this transformation, we get the conserved quantities to be

Z_μ ∼ (σ_μ)_{jk} P_j Q_k,  μ = 0, 1, 2, 3. (26)

One may now verify that

Bosonic Hamiltonian ∼ (Z₀ + Z₃)/2,  Fermionic Hamiltonian ∼ (Z₀ − Z₃)/2,  Z₁ ∼ (Q + Q̄),  Z₂ ∼ i(Q − Q̄). (27)

This means that the classical supercharges emerge out of the formalism as conserved quantities, along with the Hamiltonians describing the bosonic and fermionic parts. Since there is a supersymmetry transformation given by proposition-(1), one could ask if the Hamiltonians of the bosonic and fermionic parts map to each other under this transformation. As it turns out, the answer is positive: one may, for instance, take α = √2 and β = 0 to show that

{{PX, Q}, Q̄} ∼ πθ, (28)

where the operators X_Q(·) and X_Q̄(·) are made to act subsequently on the bosonic Hamiltonian.

At this stage, we pause to make a few comments. The bosonic oscillator, in the absence of supersymmetry, admits a U(1) symmetry which can be found as a subgroup of the extended U(1, 1) symmetry group of the supersymmetric oscillator. Moreover, an isotropic two-dimensional bosonic oscillator, which admits two pairs of bosonic variables rather than one bosonic and one fermionic pair, admits a different symmetry, given by U(2). This is rather distinct from the U(1, 1) group encountered presently, and thus the supersymmetric oscillator in four-dimensional phase space has a structure very different from a purely bosonic harmonic oscillator on a four-dimensional phase space.
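The identities in Eq. (27) are bilinear in the entries of P = (P π) and Q = (X θ), so no Grassmann reordering is involved in checking them; the entries may therefore be treated as ordinary complex placeholders. A minimal Python sketch of this check (ours; the sample values are arbitrary):

```python
# 2x2 identity and Pauli matrices, as nested lists
sigma = [
    [[1, 0], [0, 1]],     # sigma_0
    [[0, 1], [1, 0]],     # sigma_1
    [[0, -1j], [1j, 0]],  # sigma_2
    [[1, 0], [0, -1]],    # sigma_3
]

def bilinear(m, Pv, Qv):
    """(m)_{jk} P_j Q_k, as in Eq. (26)."""
    return sum(m[j][k] * Pv[j] * Qv[k] for j in range(2) for k in range(2))

# arbitrary complex placeholders for (P, pi) and (X, theta)
Pv = [0.6 + 0.2j, -0.5 + 0.9j]
Qv = [1.3 - 0.7j, 0.8 + 0.4j]

Z = [bilinear(m, Pv, Qv) for m in sigma]
bosonic   = (Z[0] + Z[3]) / 2    # = P X     (bosonic piece)
fermionic = (Z[0] - Z[3]) / 2    # = pi theta (fermionic piece)
```

One finds (Z₀ ± Z₃)/2 equal to the bosonic and fermionic bilinears, while Z₁ and Z₂ are the symmetric combinations Pθ + πX and i(πX − Pθ), i.e. the combinations of the supercharges stated in Eq. (27) up to constant factors.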
We should also note that, since the phase space in the present case is four-dimensional, one cannot have four functionally independent integrals of motion in involution, i.e. with mutually commuting generalized Poisson brackets. It is easy to verify that the conserved quantities satisfy

{Z₁, Z₂} = 0,  {Z₁, Z₁} ∼ Z₀ ∼ {Z₂, Z₂}, (29)

and therefore, all the conserved quantities are not independent of each other. As a matter of fact, Eqs. (29) are equivalent to Eqs. (20) and (21).

B. Generalized Nambu brackets

Since we are considering a system with a four-dimensional phase space, we may as well consider the possibility of defining Nambu 4-brackets on the algebra of functions [7-9]. Following [9], we may define a generalized Nambu bracket as

{F, Z₀, Z₃, Z₁} = −(1/2Z₂) ∂(F, Z₀, Z₃, Z₁)/∂(P, X, π, θ). (30)

One should notice that although the right hand side involves a 'Jacobian' which intrinsically appears to be antisymmetric, following the discussion presented in section-(II), we must be careful to perform the left (right) derivatives with respect to momenta (coordinates). If this prescription is followed, one can check that, taking F = P, we get the equation of motion

Ṗ = {P, Z₀, Z₃, Z₁} = −iP, (31)

which exactly matches the Hamilton's equation obtained using Eq. (7), involving the generalized Poisson bracket. One further gets Ẋ = {X, Z₀, Z₃, Z₁}, θ̇ = {θ, Z₀, Z₃, Z₁} and π̇ = {π, Z₀, Z₃, Z₁}, meaning that the generalized Nambu bracket describes the classical dynamics of the supersymmetric oscillator. Thus, the conserved quantities Z₀, Z₁ and Z₃ serve as Nambu-Hamiltonians in the present problem. We remind the reader that in standard (non-supersymmetric) classical mechanics, the definition of the Nambu bracket is not unique [9].
The same is true even in the pseudomechanics formalism, where one can pick any three amongst the conserved quantities {Z_i}, or their linear combinations, to serve as Nambu-Hamiltonians, and choose an appropriate normalization of the bracket to describe the equations of motion. In doing so, one should keep in mind that all the derivatives with respect to position variables are right derivatives, while those with respect to the momenta are left derivatives. This shall ensure that the generalized Nambu bracket is no longer totally anti-symmetric. Instead, the generalized Nambu bracket shall have as subordinate structures all generalized Poisson bracket relations. In any case, the bracket {Z_μ, Z_ν, Z_ρ, Z_σ} = 0. We conclude this section by noting that although we have introduced a generalized Nambu bracket for pseudomechanics, its quantum counterpart remains unclear.

IV. GENERALIZATION TO TWO DIMENSIONS

We now consider a generalization of the oscillator considered in the previous section to two spatial (bosonic) dimensions. Thus, one is considering a supersymmetric version of the usual isotropic harmonic oscillator encountered in standard classical mechanics. The two-dimensional bosonic isotropic oscillator has the Hamiltonian H_B = (p₁² + q₁²)/2 + (p₂² + q₂²)/2, where we have set the frequency to unity. To make this system supersymmetric, we need to introduce two fermionic oscillators, so that the resulting total Hamiltonian becomes

H = iPᵀQ, (32)

with P = (P₁ P₂ π₁ π₂) and Q = (X₁ X₂ θ₁ θ₂). Here, the bosonic variables (X₁, P₁) and (X₂, P₂) are obtained from the variables (q₁, p₁) and (q₂, p₂) by invoking Eqs. (14) for each spatial direction.

A. Conserved quantities

Let us first consider the part of the Hamiltonian describing the bosonic sector. This itself enjoys a U(2) symmetry (see for example [9]).
The structural similarity between the bosonic and fermionic parts will then help us to get the corresponding conserved quantities from the fermionic Hamiltonian at once. We define P = (P₁ P₂) and X = (X₁ X₂). Further, from Eqs. (14), we know that P ∼ X*. Let us consider the SU(2) transformations. Under the transformation X → UX, we have P → U*P, which means the bosonic Hamiltonian H_B = iPᵀX transforms as

H_B → iPᵀ(U*)ᵀUX = iPᵀ(U†U)X. (33)

The conserved quantities can be found by demanding that under the action of U ∈ SU(2) the Hamiltonian remains invariant. The infinitesimal variations of X and P under the action of U ≡ exp[iφ_a T_a] are

δX = iφ_a T_a X,  δP = −iφ_a (T_a)* P, (34)

where φ_a ∈ R and a = 1, 2, 3. The T_a's are the generators of the SU(2) group and are given by T_a = σ_a/2, where the σ_a's are the Pauli matrices. Demanding that the infinitesimal variation of the Hamiltonian vanishes under this transformation, we get the conserved quantities as

B_a ∼ (T_a)_{jk} P_j X_k,  a = 1, 2, 3. (35)

We can simplify these B_a's to obtain

B₁ = p₁p₂ + q₁q₂,  B₂ = q₂p₁ − q₁p₂, (36)

B₃ = (p₁² + q₁²) − (p₂² + q₂²), (37)

where B₁ and B₂ are just the Fradkin tensor and the angular momentum, respectively. In addition to the ones quoted above, we also have the total energy as a conserved quantity, E = (p₁² + q₁²)/2 + (p₂² + q₂²)/2, originating from the extended group U(2) (rather than SU(2)). However, all four of them are not independent of each other, as can be easily checked. Similarly, with a little more careful treatment, one gets the conserved quantities from the fermionic part of the Hamiltonian to be

F_a ∼ (T_a)_{jk} π_j θ_k,  a = 1, 2, 3. (38)

Notice that these conserved quantities are all even. What is rather interesting is that the full Hamiltonian enjoys a much larger set of symmetries. Following the discussion of the previous section, one can see that this Hamiltonian enjoys a U(2, 2) symmetry.
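That B₁, B₂ and B₃ of Eqs. (36)-(37) are indeed first integrals of H_B can be verified by checking that their Poisson brackets with H_B vanish. A quick finite-difference sketch in Python (ours; the sample phase-space point is arbitrary):

```python
def d(f, x, i, h=1e-6):
    """Central finite-difference derivative of f with respect to x[i]."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def pb(f, g, x):
    """Poisson bracket for two canonical pairs; x = [q1, q2, p1, p2]."""
    return sum(d(f, x, i) * d(g, x, i + 2) - d(g, x, i) * d(f, x, i + 2)
               for i in range(2))

H_B = lambda x: 0.5 * (x[2]**2 + x[0]**2) + 0.5 * (x[3]**2 + x[1]**2)
B1 = lambda x: x[2] * x[3] + x[0] * x[1]               # Fradkin tensor, Eq. (36)
B2 = lambda x: x[1] * x[2] - x[0] * x[3]               # angular momentum, Eq. (36)
B3 = lambda x: (x[2]**2 + x[0]**2) - (x[3]**2 + x[1]**2)  # Eq. (37)

pt = [0.3, -1.2, 0.9, 0.5]
residues = [pb(B, H_B, pt) for B in (B1, B2, B3)]  # all vanish: conserved
```

All three brackets vanish (to rounding), confirming that the B_a's Poisson-commute with the bosonic Hamiltonian.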
Let us denote the generators of this group by

λ_μ,  μ = 0, 1, · · · , 15. (39)

Then, the conserved quantities are

C_μ ∼ (λ_μ)_{jk} P_j Q_k,  μ = 0, 1, 2, · · · , 15. (40)

An explicit representation of the U(4) generators can be found in [11]. One can obtain the generators of U(2, 2) by letting the block-diagonal generators remain unchanged, while multiplying the off-diagonal generators by a factor of i. The generators are of the form

( A  iM† ; iM  B ) = ( A  0 ; 0  B ) + i ( 0  M† ; M  0 ), (41)

such that all the entries A, B and M are 2 × 2 matrices, and A, B are Hermitian themselves. The first term on the right hand side gives rise to the even conserved quantities, both from the bosonic and fermionic sectors, while the second term is associated with the odd conserved quantities. From this, we observe at once that there are 8 + 8 conserved quantities which are even and odd, respectively. Explicitly, the conserved quantities, respectively even and odd, take the form

E ∼ ( A  0 ; 0  B )_{jk} P_j Q_k,  O ∼ ( 0  M† ; M  0 )_{jk} P_j Q_k, (42)

where j, k = 0, 1, 2, 3 and we have suppressed an additional free index on E and O, for brevity. In Eq. (40), we have collectively denoted them all as {C_i}. It might be noted that the even conserved quantities originate from the symmetry of the supersymmetric Hamiltonian under transformations from the group U(2) ⊕ U(2) ⊂ U(2, 2), because the Hamiltonians describing the bosonic and the fermionic sectors individually admit a U(2) symmetry. The odd conserved quantities depend upon a mix of even and odd variables, i.e. bosonic and fermionic variables, and are therefore associated with the off-block-diagonal part of Eq. (41). These contain the supercharges, which emerge as conserved quantities of the theory. However, all the conserved quantities are not independent of each other, since the phase space including all bosonic and fermionic variables is of dimension eight. For example, one easily finds that {C₅, C₇} ∼ C₁, and so on and so forth.
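One way to see that matrices of the form (41) indeed generate U(2, 2) is to check the defining condition G†η = ηG, i.e. that ηG is Hermitian, where η = diag(1, 1, −1, −1); this signature convention is our assumption for illustration. A numerical Python sketch using random blocks (pure standard library):

```python
import random

random.seed(7)

def rc():
    """Random complex number in the unit square."""
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

def dagger(M):
    """Conjugate transpose of a square nested-list matrix."""
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def herm2():
    """Random 2x2 Hermitian matrix."""
    c = rc()
    return [[complex(random.uniform(-1, 1)), c],
            [c.conjugate(), complex(random.uniform(-1, 1))]]

A, B = herm2(), herm2()
M = [[rc(), rc()], [rc(), rc()]]
Md = dagger(M)

# assemble the 4x4 generator of Eq. (41): [[A, i M^dagger], [i M, B]]
G = [[A[i][j] for j in range(2)] + [1j * Md[i][j] for j in range(2)] for i in range(2)] \
  + [[1j * M[i][j] for j in range(2)] + [B[i][j] for j in range(2)] for i in range(2)]

eta = [[complex((1 if i < 2 else -1) if i == j else 0) for j in range(4)] for i in range(4)]
lhs = matmul(dagger(G), eta)   # G^dagger eta
rhs = matmul(eta, G)           # eta G
```

The two products agree entry by entry for any Hermitian A, B and arbitrary M, which is the u(2, 2) condition on the generators.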
One may go further and define a generalized Nambu 8-bracket on the algebra of functions on this eight-dimensional phase space. It can be verified that the bracket, with proper normalization, does indeed reproduce the correct Hamilton's equations. We do not pursue it here.

B. Classical supercharges

There are in total eight odd conserved quantities, but we can club them together to form

Q₁ = (P₁ + P₂)(θ₁ + θ₂),  Q̄₁ = (π₁ + π₂)(X₁ + X₂),
Q₂ = (P₁ − P₂)(θ₁ − θ₂),  Q̄₂ = (π₁ − π₂)(X₁ − X₂), (43)

where Q̄₁ = Q₁* and Q̄₂ = Q₂*. Clearly, the quantities Q₁, Q₂ and their complex conjugates are fermionic quantities, i.e. they are odd, because they involve multiplication of odd variables with even ones. It is easily verified that for Q = (Q₁ + Q₂)/2 and Q̄ = (Q̄₁ + Q̄₂)/2,

{X_j, Q̄} = 0,  {P_j, Q̄} ∼ π_j,  {θ_j, Q̄} ∼ X_j,  {π_j, Q̄} = 0. (44)

One could similarly consider another linearly independent combination of Q₁ and Q₂: Q′ = (Q₁ − Q₂)/2 and Q̄′ = (Q̄₁ − Q̄₂)/2, which gives (the index k being summed over)

{X_j, Q′} ∼ (1 − δ_jk)θ_k,  {P_j, Q′} = 0,  {π_j, Q′} ∼ (1 − δ_jk)P_k,  {θ_j, Q′} = 0,
{P_j, Q̄′} ∼ (1 − δ_jk)π_k,  {X_j, Q̄′} = 0,  {θ_j, Q̄′} ∼ (1 − δ_jk)X_k,  {π_j, Q̄′} = 0. (45)
For instance, if we consider C 1 = i(P 1 X 2 + P 2 X 1 ), then up to a factor, we have {{C 1 , Q}, Q} ∼ C 13 ,(48) where C 13 = i(π 1 θ 2 + π 2 θ 1 ). Similarly, for C 2 = P 1 X 2 − P 2 X 1 , one has {{C 2 , Q}, Q} ∼ C 14 ,(49) where C 14 = π 1 θ 2 − π 2 θ 1 . Finally, we point out that the supercharges Q 1 and Q 2 admit an R-symmetry. In order to make this explicit, we consider a modified set of supercharges as Q 1 Q 2 → e iφ 0 0 e iψ Q 1 Q 2 ,(50) where φ, ψ ∈ R. The new supercharges still satisfy Eqs. (46) and (47), which means they admit an Rsymmetry under global phase transformations from the group U (1) ⊕ U (1). V. OTHER MODELS In this section, we comment on classical supersymmetric generalizations of two more physically interesting oscillator models, in the light of the supersymmetric harmonic oscillators discussed in the preceding sections. We begin with the Pais-Uhlenbeck oscillator in the subsection below, and discuss the isotonic oscillator in the subsequent subsection. A. Pais-Uhlenbeck oscillator The Pais-Uhlenbeck oscillator is known as toy model for some higher derivative gravity theories [12], because it can model situations with positive as well as negative energies. The equation of motion is of the fourth order, and reads .... z + Az + Bz = 0,(51) for z = z(t). This may be re-written as d 2 dt 2 + ω 2 1 d 2 dt 2 + ω 2 2 z = 0,(52) where A = ω 2 1 + ω 2 2 , and B = ω 2 1 ω 2 2 . As discussed in [13,14] (see also [15]), the Pais-Uhlenbeck oscillator can be written as a system of two oscillators. In the positive energy case, the Hamiltonian of the system reads [14] H PU = 1 2 (p 2 x + p 2 y ) + 1 2 (µ 1 x 2 + µ 2 y 2 − 2ρxy),(53) where µ 1 , µ 2 and ρ are some real and positive constants. 
Then, one can define the rotated coordinates and momenta,

q₁ = x cos α + y sin α,  q₂ = −x sin α + y cos α,
p₁ = p_x cos α + p_y sin α,  p₂ = −p_x sin α + p_y cos α, (54)

in which the Hamiltonian is diagonalized and reads

H = (1/2)(p₁² + p₂²) + (1/2)(a²q₁² + b²q₂²). (55)

The constants a², b² and α can be straightforwardly related to the parameters μ₁, μ₂ and ρ [14]. Notice that Eqs. (54) are just SO(2) rotations, individually on the x−y and p_x−p_y planes, and it can be explicitly checked that the transformations are canonical. We further rescale the variables as q₁ → q₁/√a, q₂ → q₂/√b, p₁ → √a p₁ and p₂ → √b p₂, which constitute a set of canonical transforms, so that the Hamiltonian reads

H = (a/2)(p₁² + q₁²) + (b/2)(p₂² + q₂²). (56)

Thereafter, following Eq. (14), we perform another set of canonical transformations,

P₁ = (p₁ − iq₁)/√2,  X₁ = (q₁ − ip₁)/√2, (57)

P₂ = (p₂ − iq₂)/√2,  X₂ = (q₂ − ip₂)/√2, (58)

for which Eq. (56) becomes H = i(aP₁X₁ + bP₂X₂). We label this as H_B, i.e. H_B := i(aP₁X₁ + bP₂X₂). We are now in a position to suggest supersymmetric generalizations of the Pais-Uhlenbeck oscillator. We describe two possible schemes below.

Scheme one: Let us introduce two fermionic coordinate-momentum pairs (θ₁, π₁) and (θ₂, π₂), with a fermionic Hamiltonian H_F = i(aπ₁θ₁ + bπ₂θ₂). Then, the total Hamiltonian becomes

H = H_B + H_F = aiP₁ᵀQ₁ + biP₂ᵀQ₂, (59)

where P₁ = (P₁ π₁), P₂ = (P₂ π₂), Q₁ = (X₁ θ₁) and Q₂ = (X₂ θ₂). It is therefore clear that the total Hamiltonian is merely two copies of Eq. (24), and thereafter, the symmetry group is U(1, 1) ⊕ U(1, 1). The classical supercharges and supercharge operators can be defined in an identical way as discussed in section-(III).

Scheme two: We now describe an alternate scheme for a supersymmetric Pais-Uhlenbeck oscillator.
Let us define

X̃₁ = √a X₁^{(1/2)(1+1/a)} P₁^{(1/2)(1−1/a)},  P̃₁ = √a X₁^{(1/2)(1−1/a)} P₁^{(1/2)(1+1/a)},
X̃₂ = √b X₂^{(1/2)(1+1/b)} P₂^{(1/2)(1−1/b)},  P̃₂ = √b X₂^{(1/2)(1−1/b)} P₂^{(1/2)(1+1/b)}, (60)

for which it turns out that P̃₁X̃₁ = aP₁X₁ and P̃₂X̃₂ = bP₂X₂. Now, one may verify that {X̃₁, P̃₁}_P.B. = {X̃₂, P̃₂}_P.B. = 1, where {·, ·}_P.B. is a Poisson bracket evaluated in the (X₁, X₂, P₁, P₂) basis. Subsequently, Eqs. (60) are a set of canonical transformations, and in terms of these new variables, H_B = i(P̃₁X̃₁ + P̃₂X̃₂). This is rather remarkable, because we have mapped the two-dimensional bosonic anisotropic oscillator to a corresponding isotropic one [27]. Thereafter, following the presentation of section-(IV), we can introduce fermionic variables and obtain a supersymmetric generalization of the Pais-Uhlenbeck oscillator. It should be remarked that certain supersymmetric generalizations of the Pais-Uhlenbeck oscillator were reported earlier in [16-18].

B. Isotonic oscillator

The isotonic oscillator is described by the one-dimensional potential [19-21]

V(z) = az² + b/z²,  z > 0, (61)

where a and b are positive constants. Interestingly, this is one of the only two rational potentials in one dimension which lead to isochronous orbits in the phase space, the other one being the standard harmonic oscillator (b = 0) [22,23]. Here, isochronicity refers to the lack of sensitivity of the time period of oscillation to the amplitude/energy of the system. Moreover, it was demonstrated that, just like the familiar harmonic oscillator, the isotonic oscillator admits an equispaced quantum spectrum [19,24]. The isotonic oscillator has also found applications in the theory of coherent states [25]. One may obtain the dynamics of the isotonic oscillator by considering a two-dimensional central force problem with potential V(r) = kr², where k > 0.
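The isochronicity of the potential (61) can be probed directly by integrating the equation of motion z̈ = −2az + 2b/z³ at two different energies and comparing the periods. A self-contained Python sketch (ours, with the arbitrary choices a = b = 1 and a classical fourth-order Runge-Kutta integrator):

```python
from math import sqrt

# V(z) = z^2 + 1/z^2 (a = b = 1); force is -V'(z)
def acc(z):
    return -2.0 * z + 2.0 / z**3

def period(E, dt=2e-4, tmax=8.0):
    """Oscillation period at energy E, starting from the potential minimum z = 1
    (where V = 2); measured between successive upward zero-crossings of the
    velocity (i.e. successive inner turning points)."""
    z, v = 1.0, sqrt(2.0 * (E - 2.0))
    crossings, t = [], 0.0
    while t < tmax and len(crossings) < 2:
        # one RK4 step for the pair (z, v)
        k1z, k1v = v, acc(z)
        k2z, k2v = v + 0.5 * dt * k1v, acc(z + 0.5 * dt * k1z)
        k3z, k3v = v + 0.5 * dt * k2v, acc(z + 0.5 * dt * k2z)
        k4z, k4v = v + dt * k3v, acc(z + dt * k3z)
        zn = z + dt * (k1z + 2 * k2z + 2 * k3z + k4z) / 6.0
        vn = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        if v < 0.0 <= vn:  # velocity turns from negative to positive
            crossings.append(t + dt * (-v) / (vn - v))
        z, v, t = zn, vn, t + dt
    return crossings[1] - crossings[0]

T_low, T_high = period(3.0), period(6.0)  # two different energies
```

Both energies yield the same period to numerical accuracy, reflecting the energy-independence of the time period claimed above.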
Then, the Lagrangian reads
$$L = \frac{m}{2}(\dot{r}^2 + r^2\dot{\theta}^2) - V(r), \tag{62}$$
whereby one deduces that $p_r = m\dot{r}$ and $p_\theta = mr^2\dot{\theta}$. The latter is conserved, due to the fact that $\theta$ is a cyclic coordinate. Denoting $p_r = p$ and $p_\theta = l$, the Hamiltonian of the system reads
$$H = \frac{p^2}{2m} + \frac{l^2}{2mr^2} + kr^2. \tag{63}$$
This exactly looks like a one-dimensional problem, with the dynamics being described by a potential of the form given in Eq. (61), with $a = k$ and $b = l^2/2m$.

We now suggest a classical supersymmetric generalization of the isotonic oscillator. On a plane, denoting the position and momentum vectors as $\mathbf{r}$ and $\mathbf{p}$, the Hamiltonian of the system consisting of a unit mass particle reads (see also [26])
$$H = \frac{\mathbf{p}^2}{2} + \frac{k\,\mathbf{r}^2}{2},$$
where $k = \omega^2$. We may perform the (canonical) rescaling $\mathbf{r} \to \mathbf{r}/\sqrt{\omega}$ and $\mathbf{p} \to \sqrt{\omega}\,\mathbf{p}$ and then define the variables
$$\mathbf{P} = \frac{\mathbf{p} - i\mathbf{r}}{\sqrt{2}}, \qquad \mathbf{X} = \frac{\mathbf{r} - i\mathbf{p}}{\sqrt{2}},$$
such that $H = i\omega\,\mathbf{P}\cdot\mathbf{X} := H_B$. Extending the classical phase space to include the odd variable pairs $(\theta_1, \pi_1)$ and $(\theta_2, \pi_2)$, and defining the fermionic Hamiltonian $H_F = i\omega(\pi_1\theta_1 + \pi_2\theta_2)$, the total Hamiltonian $H = H_F + H_B$ looks exactly like Eq. (32). Therefore, the analysis of section (IV) goes through in this case.

VI. DISCUSSION

In the present work, we have analyzed the symmetries of supersymmetric oscillators in one and two (spatial) dimensions in the framework of pseudo-classical mechanics, or simply pseudomechanics [6]. As it turns out, they are respectively associated with symmetry groups U(1,1) and U(2,2). We speculate that in n spatial dimensions, i.e.
in the presence of n bosonic and fermionic pairs $(q_i, p_i)$ and $(\theta_i, \pi_i)$, respectively, for $i = \{1, 2, \cdots, n\}$, the Hamiltonian has an underlying symmetry dictated by the group U(n,n). In that case, the first integrals are associated with generators of the form
$$g_n = \begin{pmatrix} A_n & M_n \\ M_n^\dagger & B_n \end{pmatrix}, \tag{66}$$
where $A_n$ and $B_n$ are $n \times n$ hermitian matrices, associated with the even conserved quantities originating from the bosonic and fermionic sectors of the Hamiltonian, respectively. This means that the even conserved quantities originate from the symmetry group $U(n) \oplus U(n) \subset U(n,n)$, because the bosonic and fermionic Hamiltonians themselves admit a U(n) symmetry individually. In Eq. (66), $M_n$ is some $n \times n$ matrix associated with the odd conserved quantities, i.e. the ones whose combinations lead to the supercharges. It should be noted that an identical structure describing the supercharges, i.e. the odd conserved quantities, was pointed out in [3] in a quantum framework. However, we remind the reader that in a supersymmetric quantum theory, the supercharges themselves are the operators leading to supersymmetry transformations, whereas in pseudomechanics, they are odd conserved quantities such that $\{\cdot, Q\}$ and $\{\cdot, \bar{Q}\}$ act on the phase space as classical supercharge operators.

References

[1] F. Cooper, A. Khare and U. Sukhatme, Supersymmetry and Quantum Mechanics, Phys. Rep. 251, 267 (1995).
[2] B. K. Bagchi, Supersymmetry In Quantum and Classical Mechanics, Chapman & Hall (2000).
[3] E. Witten, Dynamical breaking of supersymmetry, Nucl. Phys. B 188, 513 (1981).
[4] E. Witten, Constraints on supersymmetry breaking, Nucl. Phys. B 202, 253 (1982).
[5] S. N. Biswas and S. K. Soni, Supersymmetric classical mechanics, Pramana - J. Phys. 27, 117 (1986).
[6] R. Casalbuoni, On the quantization of systems with anticommuting variables, Nuovo Cim. A 33, 115 (1976).
[7] Y. Nambu, Generalized Hamiltonian dynamics, Phys. Rev. D 7, 2405 (1973).
[8] L. Takhtajan, On Foundation of the Generalized Nambu Mechanics, Comm. Math. Phys. 160, 295 (1994).
[9] R. Chatterjee, Dynamical Symmetries and Nambu Mechanics, Lett. Math. Phys. 36, 117 (1996).
[10] L. C. Biedenharn, J. Nuyts and N. Straumann, On the unitary representations of SU(1,1) and SU(2,1), Annales de l'I. H. P., section A, tome 3, no. 1, pp. 13-39 (1965).
[11] M. A. A. Sbaih, M. K. H. Srour, M. S. Hamada and H. M. Fayad, Lie Algebra and Representation of SU(4), EJTP 10, 9 (2013).
[12] A. Pais and G. E. Uhlenbeck, On Field Theories with Non-Localized Action, Phys. Rev. 79, 145 (1950).
[13] A. Mostafazadeh, A Hamiltonian Formulation of the Pais-Uhlenbeck Oscillator that Yields a Stable and Unitary Quantum System, Phys. Lett. A 375, pp. 93-98 (2010).
[14] M. Pavsic, Stable Self-Interacting Pais-Uhlenbeck Oscillator, Mod. Phys. Lett. A 28, no. 36, 1350165 (2013).
[15] B. K. Bagchi, Advanced Classical Mechanics, CRC Press (2017).
[16] I. Masterov, New realizations of N=2 l-conformal Newton-Hooke superalgebra, Mod. Phys. Lett. A 30, 1550073 (2015).
[17] I. Masterov, N=2 supersymmetric Pais-Uhlenbeck oscillator, Mod. Phys. Lett. A 30, 1550107 (2015).
[18] I. Masterov, An alternative Hamiltonian formulation for the Pais-Uhlenbeck oscillator, Nucl. Phys. B 902, 95 (2016).
[19] Y. Weissman and J. Jortner, The isotonic oscillator, Phys. Lett. A 70, pp. 177-179 (1979).
[20] J. F. Cariñena, A. M. Perelomov, M. F. Rañada and M. Santander, A quantum exactly solvable non-linear oscillator related with the isotonic oscillator, J. Phys. A: Math. Theor. 41, 085301 (2008).
[21] A. Ghose-Choudhury, A. Ghosh, P. Guha and A. Pandey, On purely nonlinear oscillators generalizing an isotonic potential, Int. J. Non-Linear Mech. 106, pp. 55-59 (2018).
[22] O. A. Chalykh and A. P. Veselov, A remark on rational isochronous potentials, J. Nonlinear Math. Phys. 12, pp. 179-183 (2005).
[23] P. Guha and A. Ghose-Choudhury, The Jacobi last multiplier and isochronicity of Liénard type systems, Rev. Math. Phys. 25, 1330009 (2013).
[24] D. Zhu, A new potential with the spectrum of an isotonic oscillator, J. Phys. A: Math. Gen. 20, 4331 (1987).
[25] K. Thirulogasanthar and N. Saad, Coherent states associated to the wavefunctions and the spectrum of the isotonic oscillator, J. Phys. A: Math. Gen. 37, 4567 (2004).
[26] R. D. Mota, V. D. Granados, A. Queijeiro, J. García and L. Guzmán, Creation and annihilation operators, symmetry and supersymmetry of the 3D isotropic harmonic oscillator, J. Phys. A: Math. Gen. 36, 4849 (2003).
[27] The details involving Eqs. (60), i.e. the canonical transformations mapping the anisotropic and isotropic oscillators in general, shall be presented elsewhere.
[]
[ "CHIRODIFF: MODELLING CHIROGRAPHIC DATA WITH DIFFUSION MODELS" ]
[ "Ayan Das [email protected] \nSketchX\nCVSSP\nUniversity of Surrey\n\n\niFlyTek-Surrey Joint Research Centre on AI\n\n", "Yongxin Yang [email protected] \nSketchX\nCVSSP\nUniversity of Surrey\n\n\nQueen Mary University of London\n\n", "Timothy Hospedales [email protected] \nSketchX\nCVSSP\nUniversity of Surrey\n\n\nUniversity of Edinburgh\n\n\nSamsung AI Centre\nCambridge\n", "Tao Xiang [email protected] \nSketchX\nCVSSP\nUniversity of Surrey\n\n\niFlyTek-Surrey Joint Research Centre on AI\n\n", "Yi-Zhe Song [email protected] \nSketchX\nCVSSP\nUniversity of Surrey\n\n\niFlyTek-Surrey Joint Research Centre on AI\n\n" ]
[ "SketchX\nCVSSP\nUniversity of Surrey\n", "iFlyTek-Surrey Joint Research Centre on AI\n", "SketchX\nCVSSP\nUniversity of Surrey\n", "Queen Mary University of London\n", "SketchX\nCVSSP\nUniversity of Surrey\n", "University of Edinburgh\n", "Samsung AI Centre\nCambridge", "SketchX\nCVSSP\nUniversity of Surrey\n", "iFlyTek-Surrey Joint Research Centre on AI\n", "SketchX\nCVSSP\nUniversity of Surrey\n", "iFlyTek-Surrey Joint Research Centre on AI\n" ]
[]
Generative modelling over continuous-time geometric constructs, a.k.a chirographic data such as handwriting, sketches, drawings etc., have been accomplished through autoregressive distributions. Such strictly-ordered discrete factorization however falls short of capturing key properties of chirographic data -it fails to build holistic understanding of the temporal concept due to one-way visibility (causality). Consequently, temporal data has been modelled as discrete token sequences of fixed sampling rate instead of capturing the true underlying concept. In this paper, we introduce a powerful model-class namely Denoising Diffusion Probabilistic Models or DDPMs for chirographic data that specifically addresses these flaws. Our model named "CHIRODIFF", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rate up to a good extent. Moreover, we show that many important downstream utilities (e.g. conditional sampling, creative mixing) can be flexibly implemented using CHIRODIFF. We further show some unique use-cases like stochastic vectorization, de-noising/healing, abstraction are also possible with this model-class. We perform quantitative and qualitative evaluation of our framework on relevant datasets and found it to be better or on par with competing approaches. Please visit our project page for more details: https://ayandas.me/chirodiff.
10.48550/arxiv.2304.03785
[ "https://export.arxiv.org/pdf/2304.03785v1.pdf" ]
258,048,805
2304.03785
1927c30e5755b3d40d75162be7213d9cb47543e5
CHIRODIFF: MODELLING CHIROGRAPHIC DATA WITH DIFFUSION MODELS

Ayan Das (SketchX, CVSSP, University of Surrey; iFlyTek-Surrey Joint Research Centre on AI), Yongxin Yang (SketchX, CVSSP, University of Surrey; Queen Mary University of London), Timothy Hospedales (SketchX, CVSSP, University of Surrey; University of Edinburgh; Samsung AI Centre, Cambridge), Tao Xiang (SketchX, CVSSP, University of Surrey; iFlyTek-Surrey Joint Research Centre on AI), Yi-Zhe Song (SketchX, CVSSP, University of Surrey; iFlyTek-Surrey Joint Research Centre on AI)

Published as a conference paper at ICLR 2023

ABSTRACT

Generative modelling over continuous-time geometric constructs, a.k.a. chirographic data such as handwriting, sketches, drawings etc., has been accomplished through autoregressive distributions. Such strictly-ordered discrete factorization, however, falls short of capturing key properties of chirographic data: it fails to build a holistic understanding of the temporal concept due to one-way visibility (causality). Consequently, temporal data has been modelled as discrete token sequences of fixed sampling rate instead of capturing the true underlying concept. In this paper, we introduce a powerful model-class, namely Denoising Diffusion Probabilistic Models or DDPMs, for chirographic data that specifically addresses these flaws. Our model, named "CHIRODIFF", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates up to a good extent. Moreover, we show that many important downstream utilities (e.g. conditional sampling, creative mixing) can be flexibly implemented using CHIRODIFF. We further show that some unique use-cases like stochastic vectorization, de-noising/healing, abstraction are also possible with this model-class.
We perform quantitative and qualitative evaluation of our framework on relevant datasets and found it to be better or on par with competing approaches. Please visit our project page for more details: https://ayandas.me/chirodiff.

INTRODUCTION

Chirographic data like handwriting, sketches, drawings etc. are ubiquitous in modern day digital contents, thanks to the widespread adoption of touch screens and other interactive devices (e.g. AR/VR sets). While supervised downstream tasks on such data like sketch-based image retrieval (SBIR) (Liu et al., 2020; Pang et al., 2019), semantic segmentation (Yang et al., 2021) and classification (Yu et al., 2015) continue to flourish due to higher commercial demand, unsupervised generative modelling remains slightly under-explored. Recently however, with the advent of large-scale datasets, generative modelling of chirographic data started to gain traction. Specifically, models have been trained on generic doodles/drawings data (Ha & Eck, 2018), or more "specialized" entities like fonts (Lopes et al., 2019), diagrams (Gervais et al., 2020), SVG icons (Carlier et al., 2020) etc. Building unconditional neural generative models not only allows understanding the distribution of chirographic data but also enables further downstream tasks (e.g. segmentation, translation) by means of conditioning.

Figure 2 (caption, partially recovered): latent-space interpolation with an auto-regressive model vs. CHIRODIFF; CHIRODIFF's latent space is much more effective with compositional structures for complex data.

So far, learning neural models over continuous-time chirographic structures has been facilitated broadly by two different representations: grid-based raster images and vector graphics. Raster format, the de-facto representation for natural images, has served as an obvious choice for chirographic structures (Yu et al., 2015). The static nature of the representation, however, does not provide the means for modelling the underlying creative process that is inherent in drawing.
"Creative models", powered by topology-specific vector formats (Carlier et al., 2020; Ha & Eck, 2018; Lopes et al., 2019; Das et al., 2022), on the other hand, are specifically motivated to mimic this dynamic creation process. They build distributions of a chirographic entity (e.g., a sketch) X with a specific topology (drawing direction, stroke order etc.), i.e. $p_\theta(X)$. The majority of creative models are designed with autoregressive distributions (Ha & Eck, 2018; Ribeiro et al., 2020). Such a design choice is primarily due to vector formats having variable lengths, which is elegantly handled by autoregression. Doing so, however, restricts the model from gaining full visibility of the data and fails to build a holistic understanding of temporal concepts. A simple demonstration of its latent-space interpolation confirms this hypothesis (Figure 2). The other possibility is to drop the ordering/sequentiality of the points entirely and treat chirographic data as 2D point-sets, using prominent techniques from 3D point-cloud modelling (Luo & Hu, 2021a;b). However, the point-set representation does not fit chirographic data well due to its inherently unstructured nature. In this paper, with CHIRODIFF, we find a sweet spot and propose a framework that uses a non-autoregressive density while retaining the sequential nature of the data. Another factor in traditional neural chirographic models that limits the representation is the effective handling of temporal resolution. Chirographic structures are inherently continuous-time entities, as rightly noted by Das et al. (2022). Prior works like SketchRNN (Ha & Eck, 2018) modelled continuous-time chirographic data as discrete token sequences or motor programs. Due to limited visibility, these models have no means to accommodate different sampling rates and are therefore specialized to one specific temporal resolution (seen during training), leading to the loss of spatial/temporal scalability essential for digital contents.
Even though there have been attempts (Das et al., 2020) to directly represent continuous-time entities with their underlying geometric parameters, most of them still possess some form of autoregression. Recently, SketchODE (Das et al., 2022) attempted to solve this problem by using Neural ODEs (abbreviated as NODE) (Chen et al., 2018) for representing the time-derivative of continuous-time functions. However, the computationally restrictive nature of NODE's training algorithm makes it extremely hard to train and adopt beyond simple temporal structures. CHIRODIFF, having visibility of the entire sequence, is capable of implicitly modelling the sampling rate from data and consequently learns robustly the continuous-time temporal concept that underlies the discrete motor program. In that regard, CHIRODIFF outperforms Das et al. (2022) significantly by adopting a model-class superior in terms of computational costs and representational power while training on similar data. We chose Denoising Diffusion Probabilistic Models (abbr. as DDPMs) as the model class due to their spectacular ability to capture both diversity and fidelity (Ramesh et al., 2021; Nichol et al., 2022). Furthermore, Diffusion Models are gaining significant popularity and nearly replacing GANs in a wide range of visual synthesis tasks due to their stable training dynamics and generation quality. A surprising majority of existing works on Diffusion Models are solely based on or specialized to grid-based raster images, leaving important modalities like sequences behind. Even though there are some isolated works on modelling sequential data, sequences have mostly been treated as fixed-length entities (Tashiro et al., 2021). Our proposed model, in that regard, is one of the first to exhibit the potential of applying Diffusion Models to continuous-time entities. To this end, our generative model generates X by transforming a discretized Brownian motion with unit step size.
We consider learning a stochastic generative model for continuous-time chirographic data both in an unconditional (samples shown in Figure 1) and a conditional manner. Unlike autoregressive models, CHIRODIFF offers a way to draw conditional samples from the model without an explicit encoder when conditioned on homogeneous data (see section 5.4.2). Yet another similar but important application we consider is stochastic vectorization, i.e. sampling probable topological reconstructions X given a perceptive input R(X), where R is a converter from the vector representation to a perceptive representation (e.g. raster image or point-cloud). We also learn a deterministic mapping from noise to data with a variant of DDPM, namely the Denoising Diffusion Implicit Model or DDIM, which allows latent space interpolations like Ha & Eck (2018) and Das et al. (2022). A peculiar property of CHIRODIFF allows a variant of the traditional interpolation, which we term "Creative Mixing", requiring the model to be trained on only one end-point of the interpolation. We also show a number of unique use-cases like denoising/healing (Su et al., 2020; Luo & Hu, 2021a) and controlled abstraction (Muhammad et al., 2019; Das et al., 2022) in the context of chirographic data. As a drawback however, we lose some of the abilities of autoregressive models like stochastic completion etc. In summary, we propose a Diffusion Model based framework, CHIRODIFF, specifically suited for modelling continuous-time chirographic data (section 4), which, so far, has predominantly been treated with autoregressive densities. Being non-autoregressive, CHIRODIFF is capable of capturing holistic temporal concepts, leading to better reconstruction and generation metrics (section 5.3). To this end, we propose the first diffusion model capable of handling a temporally continuous data modality with variable length.
We show a plethora of interesting and important downstream applications for chirographic data supported by CHIRODIFF (section 5.4).

RELATED WORK

Causal auto-regressive recurrent networks (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) were considered a natural choice for sequential data modalities due to their inherent ability to encode ordering. They were the dominant tool for modelling natural language (NLP) (Bowman et al., 2016), video (Srivastava et al., 2015) and audio (van den Oord et al., 2016). Recently, due to the breakthroughs in NLP (Vaswani et al., 2017), interest has shifted towards non-autoregressive models even for other modalities (Girdhar et al., 2019; Huang et al., 2019). Continuous-time chirographic models also experienced a similar shift in model class from LSTMs (Graves, 2013) to Transformers (Ribeiro et al., 2020) in terms of representation learning. Most of them, however, still contain autoregressive generative components (e.g. causal transformers). Lately, set structures have also been experimented with (Carlier et al., 2020) for representing chirographic data as a collection of strokes. Due to the difficulty of generating sets (Zaheer et al., 2017), their objective function requires explicit accounting for mis-alignments. CHIRODIFF finds a middle ground, with the generative component being non-autoregressive while retaining the notion of order. A recent unpublished work (Luhman & Luhman, 2020) applied diffusion models out-of-the-box to handwriting generation, although it lacks the right design choices, explanations and extensive experimentation. Diffusion Models (DM), although they have existed for a while (Sohl-Dickstein et al., 2015), made a breakthrough recently in generative modelling (Ho et al., 2020; Dhariwal & Nichol, 2021; Ramesh et al., 2021; Nichol et al., 2022). They are arguably by now the de-facto model for a broad class of image generation tasks (Dhariwal & Nichol, 2021) due to their ability to achieve both fidelity and diversity.
With consistent improvements like efficient samplers (Song et al., 2021a; Liu et al., 2022), latent-space diffusion (Rombach et al., 2021) and classifier(-free) guidance (Ho & Salimans, 2022; Dhariwal & Nichol, 2021), these models are gaining traction in a diverse set of vision-language (VL) problems. Even though DMs are generic in terms of theoretical formulation, very little focus has been given so far to non-image modalities (Lam et al., 2022; Hoogeboom et al., 2022; Xu et al., 2022).

DENOISING DIFFUSION PROBABILISTIC MODELS (DDPM)

DDPMs (Ho et al., 2020; Sohl-Dickstein et al., 2015) are parametric densities realized by a stochastic "reverse diffusion" process that transforms a predefined isotropic gaussian prior $p(X_T) = \mathcal{N}(X_T; 0, I)$ into the model distribution $p_\theta(X_0)$ by de-noising it in T discrete steps. The sequence of T parametric de-noising distributions admits the Markov property, i.e. $p_\theta(X_{t-1}|X_{t:T}) = p_\theta(X_{t-1}|X_t)$, and the conditionals can be chosen as gaussians (Sohl-Dickstein et al., 2015) as long as T is large enough. With the model parameters defined as $\theta$, the de-noising conditionals have the form
$$p_\theta(X_{t-1}|X_t) := \mathcal{N}\big(X_{t-1}; \mu_\theta(X_t, t), \Sigma_\theta(X_t, t)\big) \tag{1}$$

Figure 3: The forward and reverse diffusion on chirographic data (real data, through forward diffusion, to Brownian motion; and Brownian motion, through reverse diffusion, to sampled data). The "disconnected lines" effect is due to the pen-bits being diffused together. We show the topology by a color map (black to yellow).

Sampling can be performed with a DDPM sampler (Ho et al., 2020) by starting at the prior $X_T \sim p(X_T)$ and running ancestral sampling till $t = 0$ using $p_{\theta^*}(X_{t-1}|X_t)$, where $\theta^*$ denotes a set of trained model parameters. Due to the presence of the latent variables $X_{1:T}$, it is difficult to directly optimize the log-likelihood of the model $p_\theta(X_0)$.
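The ancestral sampling loop just described can be sketched in a few lines. This is our own schematic, not the paper's code: the noise predictor is a stand-in (in practice $\epsilon_\theta$ is a trained network), the notation follows this paper ($\alpha_t$ is the cumulative product, $\beta_t = 1 - \alpha_t/\alpha_{t-1}$), and the simple choice $\sigma_t^2 = \beta_t$ is used for the reverse-process variance.

```python
import math, random

def ddpm_sample(eps_model, alphas, dim, rng=random.Random(0)):
    """Ancestral sampling: start at X_T ~ N(0, I), then de-noise for t = T..1.
    `alphas[t]` is this paper's cumulative alpha_t; beta_t = 1 - alphas[t]/alphas[t-1]."""
    T = len(alphas) - 1
    x = [rng.gauss(0, 1) for _ in range(dim)]          # X_T from the prior
    for t in range(T, 0, -1):
        beta = 1.0 - alphas[t] / alphas[t - 1]
        eps = eps_model(x, t)
        # mean of p(X_{t-1}|X_t) in the epsilon-parameterization (cf. mu_theta below)
        mean = [(xi - beta / math.sqrt(1.0 - alphas[t]) * ei) / math.sqrt(1.0 - beta)
                for xi, ei in zip(x, eps)]
        sigma = math.sqrt(beta) if t > 1 else 0.0      # no noise on the final step
        x = [m + sigma * rng.gauss(0, 1) for m in mean]
    return x

# exercise the loop with a stand-in predictor that always returns zero noise
alphas = [1.0] + [0.999 ** t for t in range(1, 51)]
sample = ddpm_sample(lambda x, t: [0.0] * len(x), alphas, dim=4)
assert len(sample) == 4
```

The relation between the mean update here and the $\epsilon_\theta$ parameterization is exactly the $\mu_\theta$ formula given in the training discussion that follows.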
Practical training is done by first simulating the latent variables by a given "forward diffusion" process which allows sampling $X_{1:T}$ by means of
$$q(X_t|X_0) = \mathcal{N}\big(X_t; \sqrt{\alpha_t}\,X_0, (1-\alpha_t)I\big) \tag{2}$$
with $\alpha_t \in (0, 1)$, a monotonically decreasing function of t, that completely specifies the forward noising process. By virtue of Eq. 2 and simplifying the reverse conditionals in Eq. 1 with $\Sigma_\theta(X_t, t) := \sigma_t^2 I$, Sohl-Dickstein et al. (2015); Ho et al. (2020) derived an approximate variational bound $\mathcal{L}_{\mathrm{simple}}(\theta)$ that works well in practice:
$$\mathcal{L}_{\mathrm{simple}}(\theta) = \mathbb{E}_{X_0 \sim q(X_0),\, t \sim \mathcal{U}[1,T],\, \epsilon \sim \mathcal{N}(0,I)}\, \big\| \epsilon - \epsilon_\theta(X_t(X_0, \epsilon), t) \big\|^2$$
where a reparameterized Eq. 2 is used to compute a "noisy" version of $X_0$ as $X_t(X_0, \epsilon) = \sqrt{\alpha_t}\,X_0 + \sqrt{1-\alpha_t}\,\epsilon$. Also note that the original parameterization $\mu_\theta(X_t, t)$ is modified in favour of $\epsilon_\theta(X_t, t)$, an estimator that predicts the noise given a noisy sample $X_t$ at any step t. Please note that they are related as
$$\mu_\theta(X_t, t) = \frac{1}{\sqrt{1-\beta_t}}\left(X_t - \frac{\beta_t}{\sqrt{1-\alpha_t}}\,\epsilon_\theta(X_t, t)\right), \qquad \text{where } \beta_t \triangleq 1 - \frac{\alpha_t}{\alpha_{t-1}}.$$

DIFFUSION MODEL FOR CHIROGRAPHIC DATA

Just like traditional approaches, we use the polyline sequence $X = \left[\cdots, (x^{(j)}, p^{(j)}), \cdots\right]$ where the j-th point is $x^{(j)} \in \mathbb{R}^2$ and $p^{(j)} \in \{-1, 1\}$ is a binary bit denoting the pen state, signaling an end of stroke. This representation was popularized by Ha & Eck (2018) and is known as the Three-point format. We employ the same pre-processing steps (equispaced resampling, spatial scaling etc.) laid down by Ha & Eck (2018). Note that the cardinality of the sequence |X| may vary across samples. CHIRODIFF is fairly similar to the standard DDPM described in section 3, with the sequence X treated as a vector arranged by a particular topology. However, we found it beneficial not to directly use the absolute point sequence X but instead use velocities $V = \left[\cdots, (v^{(j)}, p^{(j)}), \cdots\right]$, where $v^{(j)} = x^{(j+1)} - x^{(j)}$, which can be readily computed using crude forward/backward differences.
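The training signal above is easy to illustrate in isolation. The sketch below is our own toy (not the authors' code): the "predictor" is an oracle that returns the true noise, which both drives $\mathcal{L}_{\mathrm{simple}}$ to zero and lets the reparameterized Eq. 2 be inverted exactly back to $X_0$.

```python
import math, random

random.seed(7)
alpha_t = 0.37                       # some cumulative alpha_t in (0, 1) at a step t

def forward_noise(x0, alpha):
    """Reparameterized Eq. 2: X_t = sqrt(alpha_t) X_0 + sqrt(1 - alpha_t) eps."""
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    xt = [math.sqrt(alpha) * a + math.sqrt(1.0 - alpha) * e for a, e in zip(x0, eps)]
    return xt, eps

def l_simple(eps, eps_pred):
    """Single Monte-Carlo term of L_simple: ||eps - eps_theta(X_t, t)||^2."""
    return sum((a - b) ** 2 for a, b in zip(eps, eps_pred))

x0 = [0.3, -0.1, 0.25, 0.0]          # a toy flattened sequence
xt, eps = forward_noise(x0, alpha_t)
assert l_simple(eps, eps) == 0.0     # an oracle predictor achieves zero loss
# the same oracle inverts Eq. 2 exactly:
x0_rec = [(x - math.sqrt(1.0 - alpha_t) * e) / math.sqrt(alpha_t)
          for x, e in zip(xt, eps)]
assert all(abs(a - b) < 1e-9 for a, b in zip(x0, x0_rec))
```

In training, $t$ and $\epsilon$ are resampled for every minibatch element, so the network sees all noise levels.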
Upon generation, we can restore the original form by computing $x^{(j)} = \sum_{j' \le j} v^{(j')}$. By modelling higher-order derivatives (velocity instead of position), the model focuses on high-level concepts rather than local temporal details (Ha & Eck, 2018; Das et al., 2022). We may use X and V interchangeably as they can be cheaply converted back and forth at any time. Please note that we will use the subscript t to denote the diffusion step and the superscript (j) to denote elements in the sequence. Following section 3, we define CHIRODIFF, our primary chirographic generative model $p_\theta(V)$, also as a DDPM. We use a forward diffusion process, termed "sequence-diffusion", that diffuses each element $(v^{(j)}, p^{(j)})$ of the sequence independently:
$$q(V_t|V_0) = \prod_{j=1}^{|V|} q(v^{(j)}_t | v^{(j)}_0) \prod_{j=1}^{|V|} q(p^{(j)}_t | p^{(j)}_0),$$
with
$$q(v^{(j)}_t|v^{(j)}_0) = \mathcal{N}\big(v^{(j)}_t; \sqrt{\alpha_t}\,v^{(j)}_0, (1-\alpha_t)I\big), \qquad q(p^{(j)}_t|p^{(j)}_0) = \mathcal{N}\big(p^{(j)}_t; \sqrt{\alpha_t}\,p^{(j)}_0, (1-\alpha_t)I\big)$$
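To make the velocity-position encoding concrete, here is a small sketch of the two conversions (our own illustration, not the released implementation); the pen bit of each destination point is simply carried along with its velocity.

```python
def to_velocities(points):
    """Three-point seq [(x, y, pen)] -> velocity seq [(vx, vy, pen)], v_j = x_{j+1} - x_j."""
    return [(b[0] - a[0], b[1] - a[1], b[2]) for a, b in zip(points, points[1:])]

def to_points(vels, start=(0.0, 0.0, 1)):
    """Restore absolute points by the running sum x_j = sum_{j' <= j} v_{j'}."""
    pts = [start]
    x, y = start[0], start[1]
    for vx, vy, pen in vels:
        x, y = x + vx, y + vy
        pts.append((x, y, pen))
    return pts

# round trip: velocities followed by the running sum recover the polyline exactly
stroke = [(0.0, 0.0, 1), (1.0, 0.5, 1), (1.5, 0.5, -1), (2.0, 2.0, 1)]
assert to_points(to_velocities(stroke), start=stroke[0]) == stroke
```

Since the conversion is a cheap bijection (given the start point), the forward diffusion can be applied to V while all visualization happens on X.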
p θ (V t−1 |V t ) := N (V t−1 ; µ θ (V t , t), σ 2 t I) and analogous change in parameterization (from µ θ to θ ), we can minimize the following loss L simple (θ) = E V0∼q(V0), t∼U [1,T ], ∼N (0,I) || − θ (V t (V 0 , ), t)|| 2(3) With a trained θ * , we can run DDPM sampler as V t−1 ∼ p θ * (V t−1 |V t ) (refer to section 3) iteratively for t = T → 1. A deterministic variant, namely DDIM (Song et al., 2021a), can also be used as V t−1 = √ α t−1 V t − √ 1 − α t θ * (V t , t) √ α t + 1 − α t−1 θ * (V t , t)(4) Unlike the usual choice of U-Net in pixel-based perception models (Ho et al., 2020;Song et al., 2021b;Ramesh et al., 2021;Nichol et al., 2022), CHIRODIFF requires a sequence encoder as θ (V t , t) in order to preserve and utilize the ordering of elements. We chose to encode each element in the sequence with the entire sequence as context, i.e. θ (v (j) t , V t , t). Two prominent choices for such functional form are Bi-directional RNN (Bi-RNN) and Transformer encoder (Lee et al., 2019) with positional embedding. We noticed that Bi-RNN works quite well and provides much faster and better convergence. A design choice we found beneficial is to concatenate the absolute positions X t along with V t to the model, i.e. θ (·, [V t ; X t ], t), exposing the model to the absolute state of the noisy data at t instead of drawing dynamics only. Since X t can be computed from V t itself, we drop X t from the function arguments now onward just for notation brevity. Please note that the generation process is non-causal as it has access to the entire sequence while diffusing. This gives rise to a non-autoregressive model and thereby focusing on holistic concepts instead of low-level motor program. This design allows the reverse diffusion (generation) process to correct any part of sequence from earlier mistakes, which is a not possible in auto-regressive models. 
Transforming "Brownian motion" into "Guided motion": CHIRODIFF's generation process has an interesting interpretation. Recall that the reverse diffusion process begins at V T = [· · · , (v (j) T , p (j) T ), · · · ] where each velocity element v (j) T ∼ N (0, I). Due to our velocity-position encoding described above, the original chirographic structure is then x (j) T = j v (j ) T which, by definition, is a discretized brownian motion with unit step size. With the reverse process unrolled in time, the brownian motion with full randomness transforms into a motion with structure, leading to realistic data samples. We illustrate the entire process in Figure 3. Length conditioned re-sampling: A noticeable property of CHIRODIFF's generative process is that there is no hard conditioning on the cardinality of the sequence |X| or |V | due to our choice of the parametric model θ (·, t). As a result, we can kick-off the generation (reverse diffusion) process by sampling from a prior p( V T ) = L j=1 q(v (j) T )q(p (j) T ) of any length L, potentially higher than what the model was trained on. We hypothesize and empirically show (in section 5.3) that if trained to optimiality, the model indeed captures high level geometric concepts and can generate similar data with higher sampling rate (refer to Figure 4) with relatively less error. We credit this behaviour to the accessibility of the entire sequence V t (and additionally X t ) to the model θ (·). With the full sequence visible, the model can potentially build an internal (implicit) global representation which explains the resilience on increased temporal sampling resolution. EXPERIMENTS & RESULTS DATASETS VectorMNIST or VMNIST (Das et al., 2022) is a vector analog of traditional MNIST digits dataset. It contains 10K samples of 10 digits ('0' to '9') represented in polyline sequence format. We use 80-10-10 splits for our all our experimentation. KanjiVG 1 is a vector dataset containing Kanji characters. 
We use a preprocessed version of the dataset 2 , which converted the original SVGs into polyline sequences. This dataset is used in order to evaluate our method's effectiveness on complex chirographic structures with a higher number of strokes. Quick, Draw! (Ha & Eck, 2018) is the largest collection of free-hand doodling data, with casual depictions of given concepts. This dataset is an ideal choice for evaluating a method's effectiveness on real noisy data, since it was collected by means of large-scale crowd-sourcing. In this paper, we use the following categories: {cat, crab, bus, mosquito, fish, yoga, flower}. IMPLEMENTATION DETAILS CHIRODIFF's forward process, just like traditional DDPMs, uses a linear noising schedule of β min = 10 −4 · 1000/T, β max = 2 × 10 −2 · 1000/T, found by Nichol & Dhariwal (2021); Dhariwal & Nichol (2021) to be quite robust. We noticed that there isn't much performance difference with different diffusion lengths, so we choose a standard value of T = 1000. The parametric noise estimator ε θ (v (j) t , V t , t) is chosen to be a bi-directional GRU encoder (Cho et al., 2014), where each element of the sequence is encoded while having contextual information from both directions of the sequence, making it non-causal. We use a 2-layer GRU with D = 48 hidden units for VMNIST and a 3-layer GRU for QuickDraw (D = 128) and KanjiVG (D = 96). We also experimented with transformers with positional encoding but failed to achieve reasonable results, concluding that positional encoding is not a good choice for representing continuous time. We trained all of our models by minimizing Eq. 3 using the AdamW optimizer (Loshchilov & Hutter, 2019) and step-wise LR scheduling of γ e = 0.9997·γ e−1 at every epoch e, where γ 0 = 6×10 −3 . The diffusion time-step t ∈ {1, 2, · · · , T } was made available to the model by concatenating sinusoidal positional embeddings (Vaswani et al., 2017) onto each element of the sequence at every layer. 
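The scaled linear noising schedule described above can be written directly; this is a sketch under the stated hyperparameters, not the authors' code.

```python
import numpy as np

def linear_beta_schedule(T, base_min=1e-4, base_max=2e-2, ref_T=1000):
    """Linear noising schedule with beta_min = base_min * 1000/T and
    beta_max = base_max * 1000/T, as described for CHIRODIFF."""
    scale = ref_T / T
    return np.linspace(base_min * scale, base_max * scale, T)
```

With T = 1000 this reduces to the familiar DDPM schedule from 1e-4 to 2e-2; shorter chains get proportionally larger per-step noise.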
We noticed the importance of the reverse process variance σ 2 t for the generation quality of our models. We found σ 2 t = 0.8 β̃ t to work well in the majority of cases, where β̃ t = ((1 − α t−1 ) / (1 − α t )) β t is, as defined by Ho et al. (2020), the true variance of the forward process posterior. Please refer to the project page 3 for the full source code. QUANTITATIVE EVALUATIONS In order to assess the effectiveness of our model for chirographic data, we perform quantitative evaluations and compare with relevant approaches. We measure performance in terms of representation learning, generative modelling and computational efficiency. By choosing proper dimensions for competing methods/architectures, we ensured approximately the same model capacity (# of parameters) for fair comparison. Reconstruction We construct a conditional variant of CHIRODIFF with an encoder E V , a traditional Bi-GRU, fully encoding a given data sample V into a latent code z. The decoder, in our case, is the diffusion model described in section 4. We sample from the conditional model p θ (V 0 |z = E V (V )), which is effectively the same as a standard DDPM but with the noise estimator ε θ * (V t , t, z) additionally conditioned on z. We expose the latent variable z to the noise estimator by simply concatenating it with every element j at all timesteps t ∈ [1, T ]. We also evaluate CHIRODIFF's ability to adapt to a higher temporal resolution while sampling, supporting our hypothesis that it captures concepts at a holistic level. We encode a sample with E V , and decode explicitly with a higher temporal sampling rate (refer to section 4). We compare our method with relevant frameworks like SketchODE (Das et al., 2022), SketchRNN (Ha & Eck, 2018) and CoSE. Since autoregressive models like SketchRNN have no explicit way to increase temporal resolution, we train different models with resampled data, which is already disadvantageous for them. 
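The reverse-process variance choice σ²_t = 0.8·β̃_t can be computed directly from any noising schedule; a minimal sketch:

```python
import numpy as np

def reverse_variance(betas, k=0.8):
    """sigma_t^2 = k * tilde_beta_t, with
    tilde_beta_t = (1 - alpha_bar_{t-1}) / (1 - alpha_bar_t) * beta_t,
    the forward-process posterior variance from Ho et al. (2020)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    alpha_bar_prev = np.concatenate(([1.0], alpha_bar[:-1]))
    tilde_beta = (1.0 - alpha_bar_prev) / (1.0 - alpha_bar) * betas
    return k * tilde_beta
```

Note that β̃_t ≤ β_t for every t, so this variance is always a slight shrinkage of the schedule itself.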
We quantitatively compare them using Chamfer Distance (CD) (Qi et al., 2017) (ignoring the pen-up bit) for conditional reconstruction. Figure 5 shows the reconstruction CD against the sampling rate factor (a multiple of the original data cardinality), which shows the resilience of our model to higher sampling rates. SketchRNN, being autoregressive, fails at high sampling rates (longer sequences), as visible in Figure 5(A, B & C). CoSE and SketchODE, being naturally continuous, have relatively flat curves. Also, we couldn't reasonably train SketchODE on the complex KanjiVG dataset (due to computational and convergence issues) and hence omit it from Figure 5. Qualitative examples of reconstruction are shown in Fig. 6. Generation We assess the generative performance of CHIRODIFF by sampling unconditionally (in 50 steps with the DDIM sampler) and computing the FID score against the real data samples. Since the original inception network is not trained on chirographic data, we train our own on the Quick, Draw! dataset (Ha & Eck, 2018), following the setup of Ge et al. (2021). We compare our method with SketchRNN (Ha & Eck, 2018) and CoSE on all three datasets. Quantitative results in Figure 5(E) show a consistent superiority of our model in terms of generation FID. Quick, Draw! FID values are averaged over the individual categories used. Qualitative samples are shown in Figure 1. Computational Efficiency We also compare our method with competing methods in terms of ease of training and convergence. We found that our method, being from the diffusion model family, enjoys easy training dynamics and relatively fast convergence (refer to Figure 5(D)). We also provide approximate sampling times for unconditional generation. DOWNSTREAM APPLICATIONS STOCHASTIC VECTORIZATION An interesting use-case of a generative chirographic model is stochastic vectorization, i.e. recreating a plausible chirographic structure (with topology) from a given perceptive input. 
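For concreteness, a minimal symmetric Chamfer Distance between two 2-D point sets (pen-up bits ignored), as used in the reconstruction comparison above, might look like:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a [N, 2] and b [M, 2]:
    mean nearest-neighbour distance in both directions."""
    # Pairwise Euclidean distances via broadcasting: shape [N, M]
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```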
This application is intriguing due to humans' innate ability to do the same with ease. The recent success of diffusion models in capturing distributions with a high number of modes (Ramesh et al., 2021; Nichol et al., 2022) prompted us to use them for stochastic vectorization, a problem of inferring a potentially multimodal distribution. We simply condition our generative model on a perceptive input X′ = R(X) = {x (j) | (x (j) , p (j) ) ∈ X}, i.e. we convert the sequence into a point-set (and also densely resample it as part of pre-processing). We employ a set-transformer encoder E R (·) with a max-pooling aggregator (Lee et al., 2019) to obtain a latent vector z and condition the generative model similarly to section 5.3, i.e. p θ (V |z = E R (X′)). We evaluated the conditional generation with Chamfer Distance (CD) on the test set and compare with Das et al. (2021) (refer to Figure 7). IMPLICIT CONDITIONING Figure 7: Stochastic vectorization. Note the different topologies (color map) of the samples. Unlike the explicit conditioning mechanism (which includes an encoder) described in section 5.3, CHIRODIFF allows a form of "Implicit Conditioning" which requires no explicit encoder. Such conditioning is more stochastic and may not be used for reconstruction, but can be used to sample similar data from a pre-trained model p θ * . Given a condition X cond 0 (or V cond 0 ), we sample a noisy version at t = T c < T (a hyperparameter) using the forward diffusion process (Eq. 2) as V cond Tc = √ α Tc V cond 0 + √ (1 − α Tc ) ε, where ε ∼ N (0, I). We then utilize the trained model p θ * to gradually de-noise V cond Tc . We run the reverse process from t = T c till t = 0: V t−1 ∼ p θ * (V t−1 |V t ), for T c > t > 0, with V Tc := V cond Tc . The hyperparameter T c controls how much the generated samples correlate with the given condition. 
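The closed-form forward jump to t = T_c used for implicit conditioning can be sketched as below (a sketch, assuming `alpha_bar_tc` holds the cumulative schedule product at T_c):

```python
import numpy as np

def forward_to_tc(v0, alpha_bar_tc, rng=None):
    """V_Tc = sqrt(alpha_bar_Tc) * V_0 + sqrt(1 - alpha_bar_Tc) * eps,
    eps ~ N(0, I) (Eq. 2). The result seeds the reverse process at
    t = T_c instead of t = T."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal(v0.shape)
    return np.sqrt(alpha_bar_tc) * v0 + np.sqrt(1.0 - alpha_bar_tc) * eps
```

The smaller `alpha_bar_tc` is (i.e. the earlier in the reverse chain we start), the noisier the seed and the weaker the tie to the condition.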
By starting the reverse process at t = T c with the noisy condition, we essentially restrict the generation to be within a region of the data space that resembles the condition. The higher the value of T c , the more the generated samples resemble the condition (refer to Figure 8). We also classified the generated samples for VMNIST & Quick, Draw! and found them to belong to the same class as the condition 93% of the time on average. HEALING Figure 8: Implicit conditioning and healing. The task of healing is more prominent in the 3D point-cloud literature (Luo & Hu, 2021a). Even though typical chirographic (autoregressive) models offer "stochastic completion" (Ha & Eck, 2018), they do not offer an easy way to "heal" a sample, due to uni-directional generation. It is only very recently that works have emerged which propose tools for healing bad sketches (Su et al., 2020). With the diffusion model family, it is fairly straightforward to solve this problem with CHIRODIFF. Given a "poor" chirographic data sample X̃ 0 , we would like to generate samples from a region in p θ * (X 0 ) close to X̃ 0 in terms of semantic concept. Surprisingly, this problem can be solved with the "Implicit Conditioning" described in section 5.4.2. Instead of a real data sample as condition, we provide the poorly drawn data X̃ 0 (equivalently Ṽ 0 ) as condition. Just as before, we run the reverse process starting at t = T h with V T h := Ṽ T h in order to sample from a healed data distribution around Ṽ 0 . T h is a similar hyperparameter that decides the trade-off between healing the given sample and drifting away from it in terms of high-level concept. Refer to Figure 8 (right) for qualitative samples of healing (with T h = T /5). CREATIVE MIXING Creative mixing is a chirographic task of merging two high-level concepts into one. This task is usually implemented as latent-space interpolation in traditional autoencoder-style generative models (Ha & Eck, 2018; Das et al., 2022). 
A variant of CHIRODIFF that uses the DDIM sampler (Song et al., 2021a) can be used for similar interpolations. We use a pre-trained conditional model to decode the interpolated latent vector, using V T = 0 as a fixed point. Given two samples V 01 and V 02 , we compute the interpolated latent variable as z interp = (1 − δ)E V (V 01 ) + δE V (V 02 ) for any δ ∈ [0, 1] and run the DDIM sampler shown in Eq. 4 with the noise estimator ε θ * (V t , t, z interp ). This solution works well for some datasets (KanjiVG & VMNIST; shown in Figure 9 (left)). For others (Quick, Draw!), we instead propose a more general method inspired by ILVR (Choi et al., 2021) that allows "mixing" using the DDPM sampler itself. In fact, it allows us to perform mixing without one of the samples (the reference sample) being known to the trained model. Given two samples of potentially different concepts, X 0 and X ref 0 (or equivalently V 0 and V ref 0 ), we sample from a pretrained conditional model given V 0 , but with a modified reverse process X t−1 ← X t−1 − Φ ω (X t−1 ) + Φ ω (X ref t−1 ), where V t−1 ∼ p θ * (V t−1 |V t , z = E V (V 0 )) and V ref t−1 ∼ q(V ref t−1 |V ref 0 ), and where Φ ω (·) is a temporal low-pass filter that reduces high-frequency details of the input data along the temporal axis. We implement Φ ω (·) using temporal 1D convolution with a window size of ω = 7. Please note that for this operation to be valid, the sequences must be of the same length. We simply resample the conditioning sequence to match the cardinality of the reverse process. The three-tuples presented in Figure 9 (right) show the mixing between different categories of Quick, Draw!. Figure 10: With reduced reverse process variance σ 2 t = k·β̃ t , generated samples lose high-frequency details but retain abstract concepts. 
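The temporal low-pass filter Φ_ω and the ILVR-style mixing update can be sketched as follows; a simple moving-average kernel is one choice of 1-D convolution with window ω = 7 (function names are ours).

```python
import numpy as np

def temporal_lowpass(x, omega=7):
    """Phi_omega: per-coordinate moving average along the temporal axis
    of a sequence x with shape [length, dims]."""
    kernel = np.ones(omega) / omega
    return np.stack(
        [np.convolve(x[:, d], kernel, mode="same") for d in range(x.shape[1])],
        axis=1,
    )

def mix_update(x_prev, x_ref_prev, omega=7):
    """Swap in the low-frequency content of the reference:
    x <- x - Phi(x) + Phi(x_ref)."""
    return x_prev - temporal_lowpass(x_prev, omega) + temporal_lowpass(x_ref_prev, omega)
```

Mixing against an identical reference is a no-op, which is a convenient correctness check for the filter plumbing.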
CONTROLLED ABSTRACTION Visual abstraction is a relatively new task (Muhammad et al., 2019; Das et al., 2022) in the chirographic literature that refers to deriving a new distribution (possibly with a control) that holistically matches the data distribution, but is more "abstracted" in terms of details. Our definition of the problem matches that of Das et al. (2022), but with the advantage of being able to use the same model instead of retraining for different controls. The solution to the problem lies in the sampling process of CHIRODIFF, which has an abstraction effect when the reverse process variance σ 2 t is low. We define a continuous control k ∈ [0, 1] as a hyperparameter and use a reverse process variance of σ 2 t = k · β̃ t . The rationale behind this method is that when k is near zero, the reverse process stops exploring the data distribution and converges near the dominant modes, which are data points that conceptually resemble the original data but have a more "canonical" representation (i.e. are highly likely under the data distribution). Figure 10 shows qualitative results of this observation on Quick, Draw!. In this paper, we introduced CHIRODIFF, a non-autoregressive generative model for chirographic data. CHIRODIFF is powered by DDPM and offers better holistic modelling of concepts, which benefits many downstream tasks, some of which are not feasible with competing autoregressive models. One limitation of our approach is that vector representations are more susceptible to noise than raster images (see Figure 11 (top)). Since we empirically set the reverse (generative) process variance σ 2 t , the noise sometimes overwhelms the model-predicted mean (see Figure 11 (bottom)). Moreover, due to the use of velocities, the accumulated absolute positions also accumulate the noise in proportion to the sequence cardinality. One possible solution is to modify the noising process to be adaptive to the generation or to the data cardinality itself. We leave this as a potential future improvement to CHIRODIFF. 
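A single stochastic DDPM reverse update with an externally chosen variance (σ²_t = k·β̃_t gives the abstraction control above) can be sketched as below; this is an illustration, not the authors' code.

```python
import numpy as np

def ddpm_step(v_t, eps_pred, beta_t, alpha_bar_t, sigma2_t, rng):
    """Posterior mean from the noise estimate, plus Gaussian noise of
    variance sigma2_t. With sigma2_t = k * tilde_beta_t, driving k -> 0
    collapses sampling onto dominant modes, yielding more 'abstracted'
    (canonical) samples."""
    alpha_t = 1.0 - beta_t
    mean = (v_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_t)
    return mean + np.sqrt(sigma2_t) * rng.standard_normal(v_t.shape)
```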
CONCLUSIONS, LIMITATIONS & FUTURE WORK Figure 1: Unconditional samples from CHIRODIFF trained on VMNIST, KanjiVG and Quick, Draw!. Figure 2: Latent space interpolation (Top) with CHIRODIFF using the DDIM sampler and (Bottom) independently, analogous to Eq. 2. Figure 5: (A, B, C) Reconstruction CD against sampling rate factor. (D) Relative convergence time & sampling time (transparent bars) w.r.t. our method. (E) FID of unconditional generation (averaged over multiple classes for QD). Figure 6: The 1st and 2nd columns for each example depict sampling rates 1 & 2 respectively while reconstructing. Figure 9: (Left) Latent-space semantic interpolation with DDIM. (Right) Creative mixing shown as three-tuples consisting of X 0 , X ref 0 and the mixed sample respectively. Figure 11: (Top) Noisy vector and raster data at the same α. (Bottom) Failures due to noise. 1 Original KanjiVG: kanjivg.tagaini.net 2 Pre-processed KanjiVG: github.com/hardmaru/sketch-rnn-datasets/tree/master/kanji 3 Our project page: https://ayandas.me/chirodiff Cose: Compositional stroke embeddings. Emre Aksan, Thomas Deselaers, Andrea Tagliasacchi, and Otmar Hilliges. In NeurIPS, 2020. Generating sentences from a continuous space. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. In CoNLL, 2016. Learning gradient fields for shape generation. Ruojin Cai, Guandao Yang, Hadar Averbuch-Elor, Zekun Hao, Serge J. Belongie, Noah Snavely, and Bharath Hariharan. In ECCV, 2020. Deepsvg: A hierarchical generative network for vector graphics animation. 
Alexandre Carlier, Martin Danelljan, Alexandre Alahi, Radu Timofte, NeurIPS. Alexandre Carlier, Martin Danelljan, Alexandre Alahi, and Radu Timofte. Deepsvg: A hierarchical generative network for vector graphics animation. NeurIPS, 2020. Neural ordinary differential equations. Yulia Tian Qi Chen, Jesse Rubanova, David Bettencourt, Duvenaud, NeurIPS. Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In NeurIPS, 2018. Analog bits: Generating discrete data using diffusion models with self-conditioning. Ting Chen, Ruixiang Zhang, Geoffrey E Hinton, abs/2208.04202ArXiv. Ting Chen, Ruixiang Zhang, and Geoffrey E. Hinton. Analog bits: Generating discrete data using diffusion models with self-conditioning. ArXiv, abs/2208.04202, 2022. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Aglar Gülçehre, Fethi Bahdanau, Holger Bougares, Yoshua Schwenk, Bengio, EMNLP. Kyunghyun Cho, Bart van Merrienboer, Ç aglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014. ILVR: conditioning method for denoising diffusion probabilistic models. Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, Sungroh Yoon, ICCV. 2021Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. ILVR: conditioning method for denoising diffusion probabilistic models. In ICCV, 2021. Béziersketch: A generative model for scalable vector sketches. Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, Yi-Zhe Song, ECCV. Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Béziersketch: A generative model for scalable vector sketches. In ECCV, 2020. Cloud2curve: Generation and vectorization of parametric sketches. 
Ayan Das, Yongxin Yang, Timothy M Hospedales, Tao Xiang, Yi-Zhe Song, CVPR. 2021Ayan Das, Yongxin Yang, Timothy M. Hospedales, Tao Xiang, and Yi-Zhe Song. Cloud2curve: Generation and vectorization of parametric sketches. In CVPR, 2021. Sketchode: Learning neural sketch representation in continuous time. Ayan Das, Yongxin Yang, Timothy M Hospedales, Tao Xiang, Yi-Zhe Song, ICLR. 2022Ayan Das, Yongxin Yang, Timothy M Hospedales, Tao Xiang, and Yi-Zhe Song. Sketchode: Learn- ing neural sketch representation in continuous time. In ICLR, 2022. Diffusion models beat gans on image synthesis. Prafulla Dhariwal, Alexander Quinn, Nichol , NeurIPS. 2021Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat gans on image synthesis. In NeurIPS, 2021. Creative sketch generation. Vedanuj Songwei Ge, Larry Goswami, Devi Zitnick, Parikh, ICLR. 2021Songwei Ge, Vedanuj Goswami, Larry Zitnick, and Devi Parikh. Creative sketch generation. In ICLR, 2021. The DIDI dataset: Digital ink diagram data. Philippe Gervais, Thomas Deselaers, Emre Aksan, Otmar Hilliges, CoRRPhilippe Gervais, Thomas Deselaers, Emre Aksan, and Otmar Hilliges. The DIDI dataset: Digital ink diagram data. CoRR, 2020. Video action transformer network. Rohit Girdhar, João Carreira, Carl Doersch, Andrew Zisserman, CVPR. Rohit Girdhar, João Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer net- work. In CVPR, 2019. Generating sequences with recurrent neural networks. Alex Graves, arXiv:1308.0850arXiv preprintAlex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. A neural representation of sketch drawings. David Ha, Douglas Eck, ICLR. David Ha and Douglas Eck. A neural representation of sketch drawings. In ICLR, 2018. Classifier-free diffusion guidance. Jonathan Ho, Tim Salimans, Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. CoRR, 2022. Denoising diffusion probabilistic models. 
Jonathan Ho, Ajay Jain, Pieter Abbeel, NeurIPS. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020. Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural Comput. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 1997. Equivariant diffusion for molecule generation in 3d. Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, Max Welling, ICML, 2022. Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan SabatoPublished as a conference paper at ICLR 2023Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffu- sion for molecule generation in 3d. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), ICML, 2022. Published as a conference paper at ICLR 2023 Music transformer: Generating music with long-term structure. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M Dai, Matthew D Hoffman, Monica Dinculescu, Douglas Eck, ICLR. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck. Music transformer: Generating music with long-term structure. In ICLR, 2019. BDDM: bilateral denoising diffusion models for fast and high-quality speech synthesis. W Y Max, Jun Lam, Dan Wang, Dong Su, Yu, ICLR. 2022Max W. Y. Lam, Jun Wang, Dan Su, and Dong Yu. BDDM: bilateral denoising diffusion models for fast and high-quality speech synthesis. In ICLR, 2022. Set transformer: A framework for attention-based permutation-invariant neural networks. Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R Kosiorek, Seungjin Choi, Yee Whye Teh, ICML. Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. 
In ICML, 2019. Scenesketcher: Fine-grained image retrieval with scene sketches. Fang Liu, Changqing Zou, Xiaoming Deng, Ran Zuo, Yu-Kun Lai, Cuixia Ma, Yong-Jin Liu, Hongan Wang, ECCV. Fang Liu, Changqing Zou, Xiaoming Deng, Ran Zuo, Yu-Kun Lai, Cuixia Ma, Yong-Jin Liu, and Hongan Wang. Scenesketcher: Fine-grained image retrieval with scene sketches. In ECCV, 2020. Pseudo numerical methods for diffusion models on manifolds. Luping Liu, Yi Ren, Zhijie Lin, Zhou Zhao, ICLR. 2022Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. In ICLR, 2022. A learned representation for scalable vector graphics. Raphael Gontijo Lopes, David Ha, Douglas Eck, Jonathon Shlens, ICCV. Raphael Gontijo Lopes, David Ha, Douglas Eck, and Jonathon Shlens. A learned representation for scalable vector graphics. In ICCV, 2019. Diffusion models for handwriting generation. CoRR, abs. Ilya Loshchilov, Frank Hutter ; Troy Luhman, Eric Luhman, ICLR. Decoupled weight decay regularizationIlya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. Troy Luhman and Eric Luhman. Diffusion models for handwriting generation. CoRR, abs/2011.06704, 2020. Score-based point cloud denoising. Shitong Luo, Wei Hu, ICCV. Shitong Luo and Wei Hu. Score-based point cloud denoising. In ICCV, 2021a. Diffusion probabilistic models for 3d point cloud generation. Shitong Luo, Wei Hu, CVPR. Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In CVPR, 2021b. Goal-driven sequential data abstraction. Yongxin Umar Riaz Muhammad, Timothy M Yang, Tao Hospedales, Yi-Zhe Xiang, Song, ICCV. Umar Riaz Muhammad, Yongxin Yang, Timothy M Hospedales, Tao Xiang, and Yi-Zhe Song. Goal-driven sequential data abstraction. In ICCV, 2019. Improved denoising diffusion probabilistic models. Alexander Quinn, Nichol , Prafulla Dhariwal, ICML. 2021Alexander Quinn Nichol and Prafulla Dhariwal. 
Improved denoising diffusion probabilistic models. In ICML, 2021. GLIDE: towards photorealistic image generation and editing with text-guided diffusion models. Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, Mark Chen, ICML. Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato2022Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. GLIDE: towards photorealistic image generation and editing with text-guided diffusion models. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), ICML, 2022. Generalising fine-grained sketch-based image retrieval. K Pang, K Li, Y Yang, H Zhang, T M Hospedales, T Xiang, Y. -Z Song, CVPR. K. Pang, K. Li, Y. Yang, H. Zhang, T. M. Hospedales, T. Xiang, and Y. -Z. Song. Generalising fine-grained sketch-based image retrieval. In CVPR, 2019. Pointnet: Deep learning on point sets for 3d classification and segmentation. Hao Charles Ruizhongtai Qi, Kaichun Su, Leonidas J Mo, Guibas, CVPR. Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. Zero-shot text-to-image generation. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever, Marina Meila and Tong Zhang2021Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In Marina Meila and Tong Zhang (eds.), ICML, 2021. Sketchformer: Transformer-based representation for sketched structure. Leo Sampaio, Ferraz Ribeiro, Tu Bui, John P Collomosse, Moacir Ponti, CVPR. Leo Sampaio Ferraz Ribeiro, Tu Bui, John P. Collomosse, and Moacir Ponti. Sketchformer: Transformer-based representation for sketched structure. 
In CVPR, 2020. Highresolution image synthesis with latent diffusion models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer, CVPRRobin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High- resolution image synthesis with latent diffusion models. CVPR, 2021. Deep unsupervised learning using nonequilibrium thermodynamics. Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, Surya Ganguli, ICML. Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsuper- vised learning using nonequilibrium thermodynamics. In ICML, 2015. Denoising diffusion implicit models. Jiaming Song, Chenlin Meng, Stefano Ermon, ICLR. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In ICLR, 2021a. Score-based generative modeling through stochastic differential equations. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole, ICLR. Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021b. Unsupervised learning of video representations using lstms. Nitish Srivastava, Elman Mansimov, Ruslan Salakhudinov, ICML. Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using lstms. In ICML, 2015. Sketchhealer: A graph-tosequence network for recreating partial human sketches. Guoyao Su, Yonggang Qi, Kaiyue Pang, Jie Yang, Yi-Zhe Song, BMVC. Guoyao Su, Yonggang Qi, Kaiyue Pang, Jie Yang, and Yi-Zhe Song. Sketchhealer: A graph-to- sequence network for recreating partial human sketches. In BMVC, 2020. CSDI: conditional score-based diffusion models for probabilistic time series imputation. Yusuke Tashiro, Jiaming Song, Yang Song, Stefano Ermon, NeurIPS. 2021Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. 
CSDI: conditional score-based diffusion models for probabilistic time series imputation. In NeurIPS, 2021. Wavenet: A generative model for raw audio. Aäron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, Koray Kavukcuoglu, ISCA. Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. In ISCA, 2016. Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, NeurIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. Multicolumn point-cnn for sketch segmentation. Fei Wang, Shujin Lin, Hanhui Li, Hefeng Wu, Tie Cai, Xiaonan Luo, Ruomei Wang, Neurocomputing. Fei Wang, Shujin Lin, Hanhui Li, Hefeng Wu, Tie Cai, Xiaonan Luo, and Ruomei Wang. Multi- column point-cnn for sketch segmentation. Neurocomputing, 2020. Geodiff: A geometric diffusion model for molecular conformation generation. Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, Jian Tang, ICLR. 2022Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. Geodiff: A geomet- ric diffusion model for molecular conformation generation. In ICLR, 2022. SketchGNN: Semantic sketch segmentation with graph neural networks. Lumin Yang, Jiajie Zhuang, Hongbo Fu, Xiangzhi Wei, Kun Zhou, Youyi Zheng, ACM Transactions on Graphics. 2021Lumin Yang, Jiajie Zhuang, Hongbo Fu, Xiangzhi Wei, Kun Zhou, and Youyi Zheng. SketchGNN: Semantic sketch segmentation with graph neural networks. ACM Transactions on Graphics (TOG), 2021. Sketch-a-net that beats humans. Qian Yu, Yongxin Yang, Yi-Zhe Song, Tao Xiang, and Timothy Hospedales. 
BMVCQian Yu, Yongxin Yang, Yi-Zhe Song, Tao Xiang, and Timothy Hospedales. Sketch-a-net that beats humans. In BMVC, 2015. Qian Yu, Yongxin Yang, Feng Liu, Yi-Zhe Song, Tao Xiang, and Timothy Hospedales. Sketch-a-net: A deep neural network that beats humans. IJCV. 122Qian Yu, Yongxin Yang, Feng Liu, Yi-Zhe Song, Tao Xiang, and Timothy Hospedales. Sketch-a-net: A deep neural network that beats humans. IJCV, 122:411-425, 2017. Deep sets. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, Alexander J Smola, NeurIPS. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, and Alexander J. Smola. Deep sets. In NeurIPS, 2017.
[]
[ "Bipol: A Novel Multi-Axes Bias Evaluation Metric with Explainability for NLP", "Bipol: A Novel Multi-Axes Bias Evaluation Metric with Explainability for NLP" ]
[ "Lama Alkhaled \nML Group\nEISLAB\nLuleå University of Technology\nSweden\n", "Tosin Adewumi \nML Group\nEISLAB\nLuleå University of Technology\nSweden\n", "Sana Sabah Sabry \nML Group\nEISLAB\nLuleå University of Technology\nSweden\n" ]
[ "ML Group\nEISLAB\nLuleå University of Technology\nSweden", "ML Group\nEISLAB\nLuleå University of Technology\nSweden", "ML Group\nEISLAB\nLuleå University of Technology\nSweden" ]
[]
We introduce bipol, a new metric with explainability, for estimating social bias in text data. Harmful bias is prevalent in many online sources of data that are used for training machine learning (ML) models. In a step to address this challenge, we create a novel metric that involves a two-step process: corpus-level evaluation based on model classification and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to detect bias along multiple axes using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2). As an additional contribution, we created a large dataset (with almost 2 million labelled samples) for training models in bias detection and make it publicly available. We also make public our codes. 1
10.48550/arxiv.2304.04029
[ "https://export.arxiv.org/pdf/2304.04029v1.pdf" ]
258,049,205
2304.04029
f05e88d13c3c4c13ea14fa1dda67107558b766c7
Bipol: A Novel Multi-Axes Bias Evaluation Metric with Explainability for NLP Lama Alkhaled ML Group EISLAB Luleå University of Technology Sweden Tosin Adewumi ML Group EISLAB Luleå University of Technology Sweden Sana Sabah Sabry ML Group EISLAB Luleå University of Technology Sweden Bipol: A Novel Multi-Axes Bias Evaluation Metric with Explainability for NLP *Joint first authors, + corresponding author We introduce bipol, a new metric with explainability, for estimating social bias in text data. Harmful bias is prevalent in many online sources of data that are used for training machine learning (ML) models. In a step to address this challenge we create a novel metric that involves a two-step process: corpuslevel evaluation based on model classification and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to detect bias along multiple axes using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2). As additional contribution, we created a large dataset (with almost 2 million labelled samples) for training models in bias detection and make it publicly available. We also make public our codes. 1 Introduction Bias can be a difficult subject to tackle, especially as there are different opinions as to the scope of its definition (Hammersley and Gomm, 1997;Dhamala et al., 2021). The origin of the word means a slant or slope. 2 In this work, we define social bias as the unbalanced disposition (or prejudice) in favor of or against a thing, person or group, relative to another, in a way that is deemed as unfair (Maddox, 2004;Adewumi et al., 2019;Antoniak and Mimno, 2021). 3 This is harmful bias and it is related to fairness. In some quarters, bias also involves overgeneralization (Brigham, 1971;Rudinger et al., 2018;Nadeem et al., 2021), fulfilling characteristic 2 in the next paragraph. 
As a motivation, we address the challenge of estimating bias in text data from some of the many axes (or dimensions) of bias (e.g. race and gender). Social bias in text usually has some of the following characteristics: 3

1. It is heavily one-sided (Zhao et al., 2018), as will be observed with the results in this work.
2. It uses extreme or inappropriate language (Rudinger et al., 2018). This forms the basis of the assumption (for some of the samples) in the two datasets used to create the new multi-axes bias dataset (MAB), as discussed in Section 3.
3. It is based on unsupported or unsubstantiated claims, such as stereotypes (Brigham, 1971).
4. It is entertainment-based or a form of parody or satire (Eliot, 2002).

ML models pick up these biases from the data they are trained on. Although classification accuracy has been observed to fall with attempts at mitigating biases in data (Pleiss et al., 2017; Oneto et al., 2019; Cho et al., 2020; Speicher et al., 2018), it is important to estimate and mitigate them, nonetheless. This is because of the ethical implications and harm that may be involved for the disadvantaged group (Klare et al., 2012; Raji et al., 2020).

Our contributions
We introduce a novel multi-axes bias estimation metric called bipol. Compared to other bias metrics, it is not limited in the number of bias axes it can evaluate and has explainability built in. It will provide researchers with deeper insight into how to mitigate bias in data. Our second contribution is the introduction of the new English MAB dataset, a large, labelled dataset aggregated from two other sources. A third contribution is the multi-axes bias lexica we collected from public sources. We perform experiments using state-of-the-art (SotA) models to benchmark on the dataset. Furthermore, we use the trained models to evaluate the bias in two common NLP datasets (SQuADv2 (Rajpurkar et al., 2018) and COPA (Roemmele et al., 2011)).
We make our models, codes, dataset, and lexica publicly available. The rest of this paper is structured as follows: Section 2 describes in detail the characteristics of the new metric. Section 3 gives details of the new MAB dataset. Section 4 explains the experimental setup. Section 5 presents the results and error analyses. Section 6 discusses some previous related work. In Section 7, we give concluding remarks.

Bipol
Bipol, represented by Equation 1a, involves a two-step mechanism: the corpus-level evaluation (Equation 1b) and the sentence-level evaluation (Equation 1c). It is a score between 0.0 (zero or undetected bias) and 1.0 (extreme bias). This is further described below:

1. In step 1, a bias-trained model is used to classify all the samples as biased or unbiased. The ratio of the biased samples (i.e. predicted positives) to the total samples predicted makes up this evaluation. When the true labels are available, this step is represented by Equation 1b. The predicted positives are the sum of the true positives (tp) and false positives (fp). The total samples predicted are the sum of the true positives (tp), false positives (fp), true negatives (tn), and false negatives (fn). A more accurate version of the equation would evaluate only the tp in the numerator; however, since we want results comparable to when bipol is used in the "wild" on any dataset, we choose the stated version in 1b and report the positive error rate. Hence, in an ideal case, an fp of zero is preferred. However, there is hardly a perfect classifier. It is also preferable to maximize tp to capture all the biased samples, if possible. False positives exist in similar classification systems (such as hate speech detection, spam detection, etc.) but those systems are still used (Heron, 2009; Markines et al., 2009; Feng et al., 2018; Adewumi et al., 2022b).
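The corpus-level ratio of step 1 can be sketched in a few lines of Python. This is an illustrative re-implementation of Equation 1b, not the authors' released code, and the confusion-matrix counts below are made up for the example.

```python
# Corpus-level component of bipol (Equation 1b): the ratio of samples the
# classifier flags as biased (predicted positives, tp + fp) to all samples
# it evaluated (tp + fp + tn + fn). Illustrative sketch with toy counts.

def corpus_level_score(tp: int, fp: int, tn: int, fn: int) -> float:
    """b_c = (tp + fp) / (tp + fp + tn + fn)."""
    total = tp + fp + tn + fn
    return (tp + fp) / total if total else 0.0

# Hypothetical counts from evaluating a test set:
bc = corpus_level_score(tp=200, fp=50, tn=700, fn=50)
print(bc)  # 0.25
```

In an ideal case fp is zero, so reporting the positive error rate fp/(fp + tp) alongside b_c, as the paper does, bounds how much of the score is due to misclassification.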
New classifiers may also be trained for this purpose without using ours, as long as the dataset used is large and representative enough to capture as many axes of bias as possible. Hence, bipol's two-step mechanism may be seen as a framework.

2. In step 2, if a sample is positive for bias, it is evaluated token-wise along all possible bias axes, using all the lexica of sensitive terms. Table 1 provides the lexica sizes. The lexica are adapted from public sources 4 and may be expanded as the need arises, given that bias terms and attitudes are ever evolving (Haemmerlie and Montgomery, 1991; Antoniak and Mimno, 2021). They include terms that may be stereotypically associated with certain groups (Zhao et al., 2017, 2018) and names associated with a specific gender (Nangia et al., 2020). Examples of racial terms stereotypically associated with the white race (which may be nationality-specific) include charlie (i.e. the oppressor) and bule (i.e. albino in Indonesian), while darkey and bootlip are examples associated with the black race. Additional examples from the lexica are provided in the appendix. Each lexicon is a text file with the following naming convention: axes_type.txt, e.g. race_white.txt.

In more detail, step 2 involves finding the absolute difference between the two maximum summed frequencies (as lower frequencies cancel out) in the types of an axis, $\left| \sum_{s=1}^{n} a_s - \sum_{s=1}^{m} c_s \right|$. This is divided by the summed frequencies of all the terms in that axis, $\sum_{s=1}^{p} d_s$. This operation is then carried out for all axes and the average obtained, $\frac{1}{q} \sum_{x=1}^{q}$. Then it is carried out for all the biased samples and the average obtained, $\frac{1}{r} \sum_{t=1}^{r}$.

$b = b_c \cdot b_s$    (1a)

$b_c = \dfrac{tp + fp}{tp + fp + tn + fn}$    (1b)

$b_s = \dfrac{1}{r} \sum_{t=1}^{r} \left( \dfrac{1}{q} \sum_{x=1}^{q} \left( \dfrac{\left| \sum_{s=1}^{n} a_s - \sum_{s=1}^{m} c_s \right|}{\sum_{s=1}^{p} d_s} \right)_x \right)_t$    (1c)

The use of the two-step process minimizes the possibility of wrongly calculating the metric on a span of text solely because it contains sensitive features. For example, given the sentences below, 5 the first one should, ideally, be classified as biased by a model in the first step, because the sentence assumes a nurse should be female. The second step can then estimate the level of bias in that sentence, based on the lexica. In the second example, a good classifier should not classify the sentence as biased since the coreference of Veronica and her is established, on the assumption that Veronica identifies as a female name. The second example becomes difficult to classify, even for humans, if Veronica were anonymised, say with a part-of-speech (PoS) tag. In the case of the third example, an advantage of bipol is that even if it is misclassified as biased, the sentence-level evaluation will evaluate to zero because the difference between the maximum frequencies of the two types (his and her) is 1 - 1 = 0. Bipol does not differentiate explicitly whether the bias is in favour of or against a targeted group.

1. A nurse should wear her mask as a pre-requisite.
2. Veronica, a nurse, wears her mask as a pre-requisite.
3. A nurse should wear his or her mask as a pre-requisite.

Strengths of bipol
1. It is relatively simple to calculate.
2. It is based on existing tools (classifiers and lexica), so it is straightforward to implement.
3. It is a two-step process that captures both the semantic and term frequency (TF) aspects of text.
4. It is flexible, as it has no limits on the number of axes or terms that can be included.
5. Its explainability makes up for what is not obvious from a single score. For example, the magnitude of the difference between term frequencies in an axis is not immediately obvious from the score, since (1 - 0)/1 = (1,000 - 0)/1,000 = 1. That is, if he has a frequency of 1 while she has 0 in one instance, the score is the same 1 as when they have frequencies of 1,000 and 0, respectively, in another instance.
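Equations 1a-1c can be sketched end-to-end in plain Python. This is an illustrative re-implementation under our reading of the equations, not the authors' released code; the lexica and samples below are toy stand-ins for the real lexicon files and datasets, and axes whose terms never occur in a sample are skipped to avoid dividing by zero.

```python
# Sketch of bipol: step 1 gives the ratio of predicted-biased samples (b_c);
# step 2 averages, over biased samples and axes, the normalized difference
# between the two largest per-type term-frequency sums (b_s); b = b_c * b_s.
from collections import Counter

LEXICA = {  # {axis: {axis_type: set_of_terms}}, cf. gender_female.txt etc.
    "gender": {"female": {"she", "her"}, "male": {"he", "his"}},
}

def sentence_score(tokens, lexica=LEXICA):
    """Per-sample inner part of Equation 1c, averaged over axes."""
    counts = Counter(t.lower() for t in tokens)
    axis_scores = []
    for types in lexica.values():
        sums = sorted((sum(counts[t] for t in terms) for terms in types.values()),
                      reverse=True)
        total = sum(sums)
        if total:  # an axis contributes only if any of its terms occur
            second = sums[1] if len(sums) > 1 else 0
            axis_scores.append(abs(sums[0] - second) / total)
    return sum(axis_scores) / len(axis_scores) if axis_scores else 0.0

def bipol(samples, predictions):
    """samples: lists of tokens; predictions: 1 = classified biased (step 1)."""
    bc = sum(predictions) / len(predictions)
    biased = [s for s, p in zip(samples, predictions) if p == 1]
    bs = sum(map(sentence_score, biased)) / len(biased) if biased else 0.0
    return bc * bs

samples = [["a", "nurse", "should", "wear", "her", "mask"],
           ["the", "flashlight", "was", "dead"]]
print(bipol(samples, predictions=[1, 0]))  # 0.5: b_c = 1/2, b_s = |1-0|/1 = 1
```

Note how the third nurse example from the text would score zero here: his and her each occur once, so the difference of the two maximum sums is 0.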
5 These are mere examples. The datasets do not contain usernames.

Weakness of bipol
1. Although one of its strengths is that it is based on existing tools, this also happens to be a weakness, since the limitations of these tools also limit its accuracy.

Datasets
The new MAB dataset
This English bias-detection dataset has a total size of 1,946,975 samples, as given in Table 2. This makes it one of the largest annotated datasets for bias detection, especially when compared to the Bias in Open-Ended Language Generation Dataset (BOLD) with 23,679 samples (Dhamala et al., 2021) or HolisticBias with 459,758 samples (Smith et al., 2022). The large size of the dataset increases the chances of training a classifier to identify a broad range of biased cases. It is a combination of the Jigsaw 6 (of 1,902,194 samples) and the Social Bias Inference Corpus v2 (SBICv2) (of 147,139 samples) by Sap et al. (2020). Hence, it has 12 explicit bias axes (from the combination of both). In creating the data, we dropped duplicates since both datasets draw some content from a similar source. Examples in the MAB are given in Table 3.

In creating the MAB, given that the Jigsaw is a multipurpose dataset that assumes that bias correlates with toxicity, the target and toxicity columns in the training and test sets, respectively, with values greater than or equal to the bias threshold of 0.1 (on a scale from 0 to 1) are automatically annotated as biased, while those below are automatically annotated as unbiased. The rationale for choosing the threshold of 0.1 (instead of, say, 0.5 as done by the authors of Jigsaw) is based on random inspection of several examples in the dataset and the fact that a little bias (0.1) is still bias. For example, the comment below, which we consider biased, has a target of 0.2. In addition, adopting a threshold higher than 0.1 would result in further imbalance in the dataset in favour of unbiased samples. The SBICv2 dataset follows a similar assumption as the Jigsaw.
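The automatic annotation rule described above can be sketched as follows. The scores below are illustrative values, not rows from the Jigsaw or SBICv2 releases.

```python
# Sketch of MAB's automatic labelling: a sample whose target/toxicity score
# is >= 0.1 (on a 0-1 scale) is labelled "biased", otherwise "unbiased".
BIAS_THRESHOLD = 0.1

def label(score: float) -> str:
    return "biased" if score >= BIAS_THRESHOLD else "unbiased"

# Hypothetical target scores, including the 0.2 case discussed in the text:
scores = [0.0, 0.05, 0.1, 0.2, 0.9]
print([label(s) for s in scores])
# ['unbiased', 'unbiased', 'biased', 'biased', 'biased']
```

The boundary case is inclusive: a score of exactly 0.1 is labelled biased, consistent with "a little bias (0.1) is still bias".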
This assumption is realistic and has been used in previous work in the literature (Nangia et al., 2020). We use the aggregated version of the dataset and the same bias threshold for the offensiveYN column in the sets. In the Jigsaw, we retained the old IDs so that we can always trace useful features back to the original data source, but the SBICv2 did not use IDs. The MAB data statement is provided in the appendix (A.3). More details of the two base datasets are given in the following paragraphs.

Jigsaw
The Jigsaw dataset 7 is a multipurpose dataset that came about as a result of annotations by the civil comments platform. It has the following axes: gender, sexual orientation, religion, race/ethnicity, disability, and mental illness. It contains 1,804,874 comments in the training set and 97,320 comments in the test set. A small ratio (0.0539) was taken from the training set as part of the validation set for the MAB because the Jigsaw has no validation set and we wanted a validation set that is representative of the test set in size. The average of the scores given by all the annotators is calculated to get the final values for all the labels. The Jigsaw was annotated by a total of almost 9,000 human raters, with a range of three to ten raters per comment. It is under a CC0 licence in the public domain.

7 medium.com/jigsaw/creating-labeled-datasets-andexploring-the-role-of-human-raters-56367b6db298

SBICv2
The dataset covers a variety of social biases implied in text, along the following axes: gender/sexuality, race/ethnicity, religion/culture, social/political, disability, body/age, and victims. Each split of the dataset has an aggregated-per-post version. The annotations in SBICv2 showed 82.4% pairwise agreement and Krippendorff's α = 0.45 on average. There are no usernames in the dataset. The SBICv2 is licensed under the CC-BY 4.0 license.
The data is drawn from online posts from the following sources:

• r/darkJokes, r/meanJokes, r/offensiveJokes (r: reddit)
• Reddit microaggressions (Breitfeller et al., 2019)
• Toxic language detection Twitter corpora (Waseem and Hovy, 2016; Davidson et al., 2017; Founta et al., 2018)
• Data scraped from hate sites (Gab, Stormfront, r/incels, r/mensrights)

Experiments & Methods
All the experiments were conducted on two shared Nvidia DGX-1 machines running Ubuntu 18 and 20 with 8 × 32GB V100 and 8 × 40GB A100 GPUs, respectively. Each experiment is conducted multiple times and the average results reported. Wandb (Biewald, 2020), the experiment tracking tool, runs for 16 counts with Bayesian optimization to suggest the best hyper-parameter combination for the initial learning rate (1e-3 to 2e-5) and epochs (6 to 10), given the importance of hyper-parameters (Adewumi et al., 2022a). These are then used to train the final models (on the Jigsaw, SBICv2 and MAB), which are then used to evaluate their test sets, the context of the SQuADv2 validation set and the premise of the COPA training set. Figure 4 in Appendix A.1 shows the wandb exploration for DeBERTa on MAB in parallel coordinates. We use the pretrained base models of RoBERTa (Liu et al., 2019), DeBERTa (He et al., 2021) and Electra (Clark et al., 2020), from the HuggingFace hub (Wolf et al., 2020). Average training time ranges from 41 minutes to 3 days, depending on the data size. Average test set evaluation time ranges from 4.8 minutes to over 72.3 hours. 8

Results and Discussion
Across the results of the three models for the datasets in Table 4, we observe similar trends with regard to all the metrics. This trend can be observed in the explainability bar graphs (Figures 1, 2 & 3) of the top-10 frequent terms in the gender axis as captured in step 2 of bipol. We also observe from the test set results that RoBERTa appears to be the best classifier except with SBICv2, possibly because of the suggested hyper-parameters.
MAB-trained models are better than the Jigsaw-trained ones, though the Jigsaw shows the lowest bipol scores out of the 3 datasets for training, with MAB following closely. The bipol scores for SBICv2 show a more than 100% increase over any of the other datasets, suggesting it contains much more bias relative to the dataset size. The two benchmark datasets (COPA and SQuADv2) also contain bias, though little, partly because the sets have very few unique samples. The models with the lowest positive error rates are those trained on the Social Bias Inference Corpus v2 (SBICv2); however, when choosing a suitable model for evaluating other datasets, it is important to prioritize the size and representative nature of the data the model was trained on. This is the reason why we used the MAB-trained models to estimate bias for COPA and SQuADv2. The error rate provides a lower bound of error for other datasets, while the size and representative nature of the training data determine the extent of generalisation of the model. A snapshot of the explainability dictionary of lists of terms, which produced the chart in Figure 2, is given in Appendix A.2.

From the bar charts, we observe that the MAB dataset has a strong male bias. In Figure 1, the top male term ('he') has a frequency of 6,589 while 'she' has only 1,593. This is similar to observations with other datasets (Fuertes-Olivera, 2007) and with OntoNotes 5.0, a resource for training coreference systems, in which over 80% of entities with gendered pronouns are male (Zhao et al., 2018). Furthermore, when highly subjective terms like love, old, favorite, and significant, which are associated with the female gender in the lexica, are removed or put in both the male and female lexica, they cancel out from influencing bipol.

Qualitative results: Some qualitative examples of perceived correct predictions using the MAB-trained DeBERTa model are given in Table 5.
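The explainability charts in Figures 1-3 are built from the term frequencies collected in step 2. A minimal sketch of that bookkeeping, with toy lexica and samples rather than the real data, could look like this:

```python
# Sketch of the explainability output behind the bar charts: the most frequent
# lexicon terms among samples classified as biased, for a chosen axis.
from collections import Counter

LEXICA = {"gender": {"male": {"he", "his", "him"}, "female": {"she", "her"}}}

def top_terms(biased_samples, axis="gender", k=10):
    """Frequency of the axis's lexicon terms across all biased samples."""
    terms = set().union(*LEXICA[axis].values())
    counts = Counter()
    for tokens in biased_samples:
        counts.update(t for t in tokens if t in terms)
    return counts.most_common(k)

biased = [["he", "said", "he", "hates", "her"], ["his", "car"]]
print(top_terms(biased))  # top entry: ('he', 2)
```

A skew like the 6,589 vs. 1,593 'he'/'she' gap reported for MAB shows up directly in such a tally, which is the insight the single bipol score alone does not convey.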
We chose the DeBERTa-MAB model for this analysis because it has the lowest bipol out of the three MAB-trained models and therefore presents the most challenging case for finding successful examples. It appears the term cowgirl, in the first example in the table, is perceived as gendered, the same way chairman or spokesman is perceived as such (Romaine, 2001; Fuertes-Olivera, 2007). Some qualitative examples of correct predictions in SQuADv2 using the MAB-trained DeBERTa model are given in Table 6. The third example (id: 989) in the table touches on so many different viewpoints that it is hard to find a justification for the model not to classify it as biased for at least one of them.

Error Analysis
Table 7 shows the prediction distribution for the models trained on MAB. Unbiased samples are more easily detected in the dataset because there are more of these in the training set. One way to improve both the performance and the MAB dataset is to upsample the biased class. This may be done through counterfactual data augmentation (CDA) or sentence completion through generative models. Although bipol is designed to be data-agnostic, it is important to note that estimating bias on out-of-domain (OOD) datasets may result in less stellar performance of the metric. This is because the trained MAB models are based on MAB's 12 explicit bias axes (7 axes from the Jigsaw and 5 additional axes from SBICv2). Some qualitative examples of perceived incorrect predictions in COPA using the MAB-trained DeBERTa model are given in Table 8. The second example (id: 71), particularly, is considered incorrect since the definite article "the" is used to identify the subject "terrorist".

examples | label | csv id
many locals and tourists frequent the southern california coast for its popular beaches and the desert city of palm springs is popular for its resort feel and nearby open spaces. | unbiased | 96
rugby is also a growing sport in southern california particularly at the high school level with increasing numbers of schools adding rugby as an official school sport. | unbiased | 126
moderate and reformist islamists who accept and work within the democratic process include parties like the tunisian ennahda movement. jamaateislami of pakistan is basically a sociopolitical and democratic vanguard party but has also gained political influence through military coup détat in past. the islamist groups like hezbollah in lebanon and hamas in palestine participate in democratic and political process as well as armed attacks seeking to abolish the state of israel. radical islamist organizations like alqaeda and the egyptian islamic jihad and groups such as the taliban entirely reject democracy often declaring as kuffar those muslims who support it see takfirism as well as calling for violentoffensive jihad or urging and conducting attacks on a religious basis. | biased | 989

Model-Data | tn | fp | fn | tp
565 | 4,976 | 13,371 | 20,678
4,863 | 13,962 | 19,733
4,808 | 13,741 | 19,729

Related Work
Previous studies on quantifying bias have used metrics such as odds ratio or vector word distance (Cryan et al., 2020). The odds ratio measures how much more likely a word is to be associated with a particular gender (e.g. woman) rather than another. Meanwhile, vector word distance is used to measure bias by calculating the difference between the average distance of a word to sets of words belonging to different genders (Mikolov et al., 2013; Cryan et al., 2020). Dhamala et al. (2021) use sentiment to evaluate bias in religion. In the study by Cryan et al. (2020), they compare model classification against the lexicon method for gender bias. Our approach combines the strengths of both approaches. There have been several methods involving lexicon usage, as observed by Antoniak and Mimno (2021), and they are usually constructed through crowd-sourcing, hand-selection, or drawn from prior work. Sengupta et al. (2021) introduced a library for measuring gender bias. It is based on word co-occurrence statistical methods. Zhao et al. (2018) introduced WinoBias, which is focused only on gender bias for coreference resolution, similarly to Winogender by Rudinger et al. (2018). On the other hand, bipol is designed to be multi-axes and dataset-agnostic, to the extent the trained classifier and lexica allow. Besides, both Zhao et al. (2018) and Rudinger et al. (2018) focus on the English language and binary gender bias only (with some cases for neutral in Winogender). Both admit their approaches may demonstrate the presence of gender bias in a system, but not prove its absence. CrowS-Pairs, by Nangia et al. (2020), is a dataset of 1,508 pairs of more and less stereotypical examples that cover stereotypes in 9 axes of bias, which are presented to language models (LM) to determine their bias. It is similar to StereoSet (for associative contexts), which measures 4 axes of social bias in LMs (Nadeem et al., 2021). Table 9 below compares some of the metrics and bipol.

Metric/Evaluator | Axes | Lexicon Terms/Sentences
WinoBias (Zhao et al., 2018) | 1 | 40 occupations
Winogender (Rudinger et al., 2018) | 1 | 60 occupations
CrowS-Pairs (Nangia et al., 2020) | 9 | 3,016
StereoSet (Nadeem et al., 2021) | 4 | 321 terms
Bipol (ours) | >2 (13*) | >45,466*

Conclusion
We introduce bipol and the MAB dataset. We also demonstrate the explainability of bipol. We believe the metric will help researchers to estimate bias in datasets in a more robust way in order to address social bias in text. Future work may explore ways of minimising false positives in bias classifiers, addressing the data imbalance in the MAB training data, and how this work scales to other languages. A library with bipol may be produced to make it easy for users to deploy. Another issue is to have a system that can automatically determine if bias is in favour of or against a group.

Limitations
The models for estimating the biases in the datasets in step 1 are limited in scope, as they only cover a certain number of axes (12).
Therefore, a result of 0 on any dataset does not necessarily indicate a bias-free dataset. The MAB dataset was aggregated from the Jigsaw and SBICv2, which were annotated by humans who may have biases of their own, based on their cultural background or demographics. Hence, the final annotations may not be seen as absolute ground truth of social biases. Furthermore, satisfying multiple fairness criteria at the same time in ML models is known to be difficult (Speicher et al., 2018; Zafar et al., 2017); thus, bipol and these models, though designed to be robust, are not guaranteed to be completely bias-free. Effort was made to mask examples with offensive content in this paper.

References
Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.

Figure 1: Top-10 gender frequent terms influencing bipol in the MAB test set after RoBERTa classification. Terms like love & old are associated with the female gender according to the lexica. However, when such subjective words are removed or put in both the male & female lexica, they cancel out from influencing bipol.

Figure 2: Top-10 gender frequent terms influencing bipol in the MAB test set after DeBERTa classification. Terms like love & old are associated with the female gender according to the lexica. However, when such subjective words are removed or put in both the male & female lexica, they cancel out from influencing bipol.

Figure 3: Top-10 gender frequent terms influencing bipol in the MAB test set after Electra classification. Terms like love & old are associated with the female gender according to the lexica. However, when such subjective words are removed or put in both the male & female lexica, they cancel out from influencing bipol.

Table 1: Lexica sizes.
These may be expanded.

Axis | Axis type 1 | Axis type 2 | Axis type 3
Gender | 76 (female) | 46 (male) |
Racial | 84 (black) | 127 (white) |
Religious | 180 (christian) | 465 (muslim) | 179 (hindu)

4 merriam-webster.com/thesaurus/female, merriam-webster.com/thesaurus/male, en.wikipedia.org/wiki/List_of_ethnic_slurs, en.wikipedia.org/wiki/List_of_religious_slurs

Table 2: MAB dataset split

comment_text | label | old_id | id
This b*tch is nuts. Who would read a book by a woman. | biased | 239612 | 106351
What do you call lingerie for Arabs? Socks. | biased | none | 1355035
I am assuming a sequel is in the works, then?" | unbiased | 282386 | 613423
What are the chances that a Jew had the number 8675309? 1 in 6 million | biased | none | 812633

Table 3: Examples from the MAB

The comment referenced in Section 3 (with a target of 0.2): "In 3 years from now, the Alaska Permanent Fund Dividend will be ZERO $$$. Democrats will moan, wail, and scream that there is no more OTHER PEOPLES' MONEY to FREE GIFT. Alaskans will have to go back to living on what money they earn for themselves. The oil boom is over. It's bust time in Alaska."

| unique samples | macro F1 ↑ (s.d.) dev | macro F1 ↑ (s.d.) test | bipol ↓ (s.d.) corpus | sentence | bipol (b) | error rate ↓ fp/(fp+tp)
RoBERTa
Jigsaw | 97,320 | 0.88 (0) | 0.778 (0) | 0.244 | 0.919 | 0.225 (0) | 0.236
SBICv2 | 4,691 | 0.763 (0.004) | 0.796 (0.004) | 0.755 | 0.711 | 0.538 (0.06) | 0.117
MAB | 102,011 | 0.877 (0) | 0.780 (0) | 0.246 | 0.925 | 0.227 (0) | 0.198
COPA | 400 | - | - | 0.03 | 0.917 | 0.027 (0) | > 0.198
SQuADv2 | 1,204 | - | - | 0.002 | 0 | 0 | > 0.198
DeBERTa
Jigsaw | 97,320 | 0.877 (0.004) | 0.771 (0) | 0.239 | 0.914 | 0.218 (0) | 0.222
SBICv2 | 4,691 | 0.767 (0) | 0.83 (0) | 0.754 | 0.712 | 0.537 (0) | 0.116
MAB | 102,011 | 0.876 (0.001) | 0.773 (0) | 0.239 | 0.923 | 0.22 (0) | 0.2
COPA | 400 | - | - | 0.035 | 1 | 0.035 (0) | > 0.2
SQuADv2 | 1,204 | - | - | 0.007 | 0.883 | 0.006 (0) | > 0.2
Electra
Jigsaw | 97,320 | 0.88 (0) | 0.769 (0) | 0.226 | 0.916 | 0.207 (0) | 0.216
SBICv2 | 4,691 | 0.712 (0.002) | 0.828 (0) | 0.706 | 0.667 | 0.471 (0) | 0.097
MAB | 102,011 | 0.875 (0) | 0.777 (0) | 0.241 | 0.925 | 0.223 (0) | 0.196
COPA | 400 | - | - | 0.028 | 0.909 | 0.025 (0) | > 0.196
SQuADv2 | 1,204 | - | - | 0.004 | 0.587 | 0.002 (0) | > 0.196

Table 4: Average F1 and bipol scores. Lower is better for bipol and the positive error rate, unlike F1. COPA and SQuADv2 are evaluated with the MAB-trained models and do not have F1 scores since we do not train on them.

examples | label | csv id
the cowgirl threw her lasso towards the horse. | biased | 132
the flashlight was dead. | unbiased | 136
i wrote a thank-you note to my grandmother | unbiased | 139

Table 5: Examples of correct COPA predictions using the MAB-trained DeBERTa model.
Table 6: Examples of correct SQuADv2 predictions using the MAB-trained DeBERTa model.
Table 7: Prediction distribution for the models on MAB.
Table 8: Examples of incorrect COPA predictions using the MAB-trained DeBERTa model.
Table 9: Comparison of some metrics to bipol. (*As used in this work. The upper bounds are not limited by the bipol algorithm but by the dataset & lexica.)

Tosin Adewumi, Foteini Liwicki, and Marcus Liwicki. 2022a. Word2vec: Optimal hyperparameters and their impact on natural language processing downstream tasks. Open Computer Science, 12(1):134-141.
Tosin Adewumi, Sana Sabah Sabry, Nosheen Abid, Foteini Liwicki, and Marcus Liwicki. 2022b. T5 for hate speech, augmented data and ensemble. arXiv preprint arXiv:2210.05480.

Tosin P Adewumi, Foteini Liwicki, and Marcus Liwicki. 2019. Conversational systems in machine learning from the point of view of the philosophy of science-using alime chat and related studies. Philosophies, 4(3):41.

Maria Antoniak and David Mimno. 2021. Bad seeds: Evaluating lexical methods for bias measurement. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1889-1904.

Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1664-1674, Hong Kong, China. Association for Computational Linguistics. doi:10.18653/v1/D19-1176.

github.com/tosingithub/Bipol
2 etymonline.com/word/bias
3 https://libguides.uwgb.edu/bias
when cpulimit is enforced, in fairness to others.

A Appendix
A.1 Methods
The first 10 terms in the lexica:
John C Brigham. 1971. Ethnic stereotypes. Psychological Bulletin, 76(1):15.

Jaewoong Cho, Gyeongjo Hwang, and Changho Suh. 2020. A fair classifier using mutual information. In 2020 IEEE International Symposium on Information Theory (ISIT), pages 2521-2526. doi:10.1109/ISIT44484.2020.9174293.

Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.

Jenna Cryan, Shiliang Tang, Xinyi Zhang, Miriam Metzger, Haitao Zheng, and Ben Y. Zhao. 2020. Detecting gender stereotypes: Lexicon vs. supervised learning methods. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pages 1-11, New York, NY, USA. Association for Computing Machinery. doi:10.1145/3313831.3376488.

Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, pages 512-515.

Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. Bold: Dataset and metrics for measuring biases in open-ended language generation. In ACM FAccT 2021.

TS Eliot. 2002. Personal bias in other critics. Personal Bias in Literary Criticism: Dr. Johnson, Matthew Arnold, TS Eliot, page 216.

Bo Feng, Qiang Fu, Mianxiong Dong, Dong Guo, and Qiang Li. 2018. Multistage and elastic spam detection in mobile social networks through deep learning. IEEE Network, 32(4):15-21.

Antigoni Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media.

Pedro A Fuertes-Olivera. 2007. A corpus-based view of lexical gender in written business English. English for Specific Purposes, 26(2):219-234.

Frances M Haemmerlie and Robert L Montgomery. 1991. Goldberg revisited: Pro-female evaluation bias and changed attitudes toward women by engineering students. Journal of Social Behavior and Personality, 6(2):179.

Martyn Hammersley and Roger Gomm. 1997. Bias in social research. Sociological Research Online, 2(1):7-19.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations.

Simon Heron. 2009. Technologies for spam detection. Network Security, 2009(1):11-15.

Brendan F Klare, Mark J Burge, Joshua C Klontz, Richard W Vorder Bruegge, and Anil K Jain. 2012. Face recognition performance: Role of demographic information. IEEE Transactions on Information Forensics and Security, 7(6):1789-1801.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.

Keith B Maddox. 2004. Perspectives on racial phenotypicality bias. Personality and Social Psychology Review, 8(4):383-401.

Benjamin Markines, Ciro Cattuto, and Filippo Menczer. 2009. Social spam detection. In Proceedings of the 5th International Workshop on Adversarial Information Retrieval on the Web, pages 41-48.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.

Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online. Association for Computational Linguistics. doi:10.18653/v1/2021.acl-long.416.
In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 5356-5371, Online. As- sociation for Computational Linguistics. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R Bowman, 10.18653/v1/2020.emnlp-main.154Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsNikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A chal- lenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953-1967, Online. As- sociation for Computational Linguistics. Taking advantage of multitask learning for fair classification. Luca Oneto, Michele Doninini, Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. the 2019 AAAI/ACM Conference on AI, Ethics, and SocietyAmon Elders, and Massimiliano PontilLuca Oneto, Michele Doninini, Amon Elders, and Mas- similiano Pontil. 2019. Taking advantage of multi- task learning for fair classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 227-237. On fairness and calibration. Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, Kilian Q Weinberger, Advances in Neural Information Processing Systems. Curran Associates, Inc30Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Klein- berg, and Kilian Q Weinberger. 2017. On fairness and calibration. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Saving face: Investigating the ethical concerns of facial recognition auditing. 
Timnit Inioluwa Deborah Raji, Margaret Gebru, Joy Mitchell, Joonseok Buolamwini, Emily Lee, Denton, 10.1145/3375627.3375820Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES '20. the AAAI/ACM Conference on AI, Ethics, and Society, AIES '20New York, NY, USAAssociation for Computing MachineryInioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020. Saving face: Investigating the ethical concerns of facial recognition auditing. In Proceed- ings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES '20, page 145-151, New York, NY, USA. Association for Computing Machinery. Know what you don't know: Unanswerable questions for SQuAD. Pranav Rajpurkar, Robin Jia, Percy Liang, 10.18653/v1/P18-2124Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaShort Papers2Association for Computational LinguisticsPranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784- 789, Melbourne, Australia. Association for Compu- tational Linguistics. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. Melissa Roemmele, Andrew S Cosmin Adrian Bejan, Gordon, AAAI spring symposium: logical formalizations of commonsense reasoning. Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of plausible alterna- tives: An evaluation of commonsense causal reason- ing. In AAAI spring symposium: logical formaliza- tions of commonsense reasoning, pages 90-95. A corpus-based view of gender in british and american english. Suzanne Romaine , Gender across languages. 1Suzanne Romaine. 2001. A corpus-based view of gen- der in british and american english. 
Gender across languages, 1(2001):153-175. Gender bias in coreference resolution. Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme, 10.18653/v1/N18-2002Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics2Short PapersRachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics. Social bias frames: Reasoning about social and power implications of language. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, Yejin Choi, 10.18653/v1/2020.acl-main.486Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsMaarten Sap, Saadia Gabriel, Lianhui Qin, Dan Ju- rafsky, Noah A. Smith, and Yejin Choi. 2020. So- cial bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5477-5490, Online. As- sociation for Computational Linguistics. Genbit: measure and mitigate gender bias in language datasets. Kinshuk Sengupta, Rana Maher, Declan Groves, Chantal Olieman, Microsoft Journal of Applied Research. 16Kinshuk Sengupta, Rana Maher, Declan Groves, and Chantal Olieman. 2021. Genbit: measure and mit- igate gender bias in language datasets. Microsoft Journal of Applied Research, 16:63-71. 
I'm sorry to hear that": Finding new biases in language models with a holistic descriptor dataset. Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. the 2022 Conference on Empirical Methods in Natural Language ProcessingAbu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsEric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, and Adina Williams. 2022. "I'm sorry to hear that": Finding new biases in language models with a holistic descriptor dataset. In Pro- ceedings of the 2022 Conference on Empirical Meth- ods in Natural Language Processing, pages 9180- 9211, Abu Dhabi, United Arab Emirates. Associa- tion for Computational Linguistics. A unified approach to quantifying algorithmic unfairness: Measuring individual &group unfairness via inequality indices. Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P Gummadi, Adish Singla, Adrian Weller, Muhammad Bilal Zafar, Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. the 24th ACM SIGKDD international conference on knowledge discovery & data miningTill Speicher, Hoda Heidari, Nina Grgic-Hlaca, Kr- ishna P Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar. 2018. A unified approach to quantifying algorithmic unfairness: Measuring in- dividual &group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD interna- tional conference on knowledge discovery & data mining, pages 2239-2248. Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. Zeerak Waseem, Dirk Hovy, 10.18653/v1/N16-2013Proceedings of the NAACL Student Research Workshop. the NAACL Student Research WorkshopSan Diego, CaliforniaAssociation for Computational LinguisticsZeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? 
predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics. Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Drame, 10.18653/v1/2020.emnlp-demos.6Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsQuentin Lhoest, and Alexander RushOnline. Association for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics. Fairness Constraints: Mechanisms for Fair Classification. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, Krishna P Gummadi, PMLRProceedings of the 20th International Conference on Artificial Intelligence and Statistics. the 20th International Conference on Artificial Intelligence and StatisticsProceedings of Machine Learning ResearchMuhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P. Gummadi. 2017. Fair- ness Constraints: Mechanisms for Fair Classifica- tion. 
In Proceedings of the 20th International Con- ference on Artificial Intelligence and Statistics, vol- ume 54 of Proceedings of Machine Learning Re- search, pages 962-970. PMLR. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang, 10.18653/v1/D17-1323Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsJieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguis- tics. Gender bias in coreference resolution: Evaluation and debiasing methods. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang, 10.18653/v1/N18-2003Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana2Short Papers. Association for Computational LinguisticsJieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computa- tional Linguistics. A.2 Results. 
A.2 Results

{'gender': [' she ': 1554, ' her ': 1492, ' woman ': 407, ' lady ': 65, ' female ': 152, ' girl ': 157, ' skirt ': 5, ' madam ': 0, ' gentlewoman ': 0, ' madame ': 2, ' dame ': 3, ' gal ': 5, ' maiden ': 0, ' maid ': 2, ' damsel ': 0, ' senora ': 0, ' lass ': 0, ' beauty ': 16, ' ingenue ': 0, ' belle ': 0, ' doll ': 7, ' señora ': 0, ' senorita ': 0, ' lassie ': 0, ' ingénue ': 0, ' miss ': 67, ' mademoiselle ': 1, ' señorita ': 0, ' babe ': 3, ' girlfriend ': 32, ' lover ': 12, ' mistress ': 5, ' ladylove ': 0, ' inamorata ': 0, ' gill ': 1, ' old ': 656, ' beloved ': 16, ' dear ': 35, ' sweetheart ': 4, ' sweet ': 25, ' flame ': 5, ' love ': 439, ' valentine ': 1, ' favorite ': 52, ' moll ': 0, ' darling ': 8, ' honey ': 9, ' significant ': 38, ' wife ': 182, ' wifey ': 0, ' missus ': 0, ' helpmate ': 0, ' helpmeet ': 0, ' spouse ': 15, ' bride ': 1, ' partner ': 30, ' missis ': 0, ' widow ': 5, ' housewife ': 1, ' mrs ': 8, ' matron ': 0, ' soul ': 34, ' mate ': 5, ' housekeeper ': 1, ' dowager ': 0, ' companion ': 1, ' homemaker ': 0, ' consort ': 1, ' better half ': 1, ' hausfrau ': 0, ' stay-at-home ': 0, ' he ': 6361, ' him ': 1577, ' boy ': 186, ' man ': 953, ' male ': 155, ' guy ': 603, ' masculine ': 4, ' virile ': 0, ' manly ': 4, ' man-sized ': 0, ' hypermasculine ': 0, ' macho ': 3, ' mannish ': 0, ' manlike ': 0, ' man-size ': 0, ' hairy-chested ': 0, ' butch ': 0, ' ultramasculine ': 0, ' boyish ': 0, ' tomboyish ': 0, ' hoydenish ': 0, ' amazonian ': 0, ' gentleman ': 13, ' dude ': 64, ' fellow ': 143, ' cat ': 43, ' gent ': 0, ' fella ': 2, ' lad ': 1, ' bloke ': 0, ' bastard ': 9, ' joe ': 45, ' chap ': 2, ' chappie ': 0, ' hombre ': 0, ' galoot ': 0, ' buck ': 25, ' joker ': 2, ' mister ': 3, ' jack ': 20, ' sir ': 36, ' master ': 26, ' buddy ': 25, ' buster ': 3], 'racial': [' nigga ': 61, ' negro ': 24, ...
[]
[ "Quantum non-demolition measurements of moving target states" ]
[ "Anton L Andersen ", "Klaus Mølmer ", "\nCenter for Complex Quantum Systems\nDepartment of Physics and Astronomy\nAarhus Institute of Advanced Studies\nAarhus University\nNy Munkegade 120DK-8000Aarhus CDenmark\n", "\nCenter for Complex Quantum Systems\nDepartment of Physics and Astronomy\nAarhus University\nHøegh-Guldbergs Gade 6BDK-8000Aarhus CDenmark\n", "\nAarhus University\nNy Munkegade 120DK-8000Aarhus CDenmark\n" ]
[ "Center for Complex Quantum Systems\nDepartment of Physics and Astronomy\nAarhus Institute of Advanced Studies\nAarhus University\nNy Munkegade 120DK-8000Aarhus CDenmark", "Center for Complex Quantum Systems\nDepartment of Physics and Astronomy\nAarhus University\nHøegh-Guldbergs Gade 6BDK-8000Aarhus CDenmark", "Aarhus University\nNy Munkegade 120DK-8000Aarhus CDenmark" ]
[]
We present a protocol for probing the state of a quantum system by its resonant coupling and entanglement with a meter system. By continuous measurement of a time evolving meter observable, we infer the evolution of the entangled systems and, ultimately, the state and dynamics of the system of interest. The photon number in a cavity field is thus resolved by simulated monitoring of the time dependent excited state population of a resonantly coupled two-level system, and we propose to regard this as an extension of quantum non-demolition measurements with potential applications in quantum metrology and quantum computing.
10.1103/physrevlett.129.120402
[ "https://export.arxiv.org/pdf/2201.03918v2.pdf" ]
245,853,869
2201.03918
f2e9c2ecee537ff9aa501948edd2a52b7d649667
Quantum non-demolition measurements of moving target states

Anton L Andersen and Klaus Mølmer

Center for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus Institute of Advanced Studies, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark
Center for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus University, Høegh-Guldbergs Gade 6B, DK-8000 Aarhus C, Denmark
Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark

(Dated: January 19, 2022)

We present a protocol for probing the state of a quantum system by its resonant coupling and entanglement with a meter system. By continuous measurement of a time evolving meter observable, we infer the evolution of the entangled systems and, ultimately, the state and dynamics of the system of interest. The photon number in a cavity field is thus resolved by simulated monitoring of the time dependent excited state population of a resonantly coupled two-level system, and we propose to regard this as an extension of quantum non-demolition measurements with potential applications in quantum metrology and quantum computing.

Introduction. In most studies and applications of quantum systems, it is required to perform precise measurements of a physical observable, either to detect its value in a given state or to detect changes of its value due to physical interactions. So-called quantum non-demolition (QND) observations play a special role: these are observations where the interaction with the measurement apparatus does not change the value of the observable of interest or any other property of the system that may subsequently cause changes of that value [1][2][3]. QND observables permit practical detection schemes where a sequence of weak measurements accumulates measurement statistics and gradually approaches a projective measurement with the outcome distribution given by Born's rule.
QND measurements are useful for the high-precision monitoring of perturbations or dissipative state changes of quantum systems and sensors [3][4][5][6]. The degree of excitation of a quantum system commutes with the system Hamiltonian and constitutes a QND observable, which can be probed by dispersive interactions that induce a complex phase rotation on, e.g., a qubit or field probe. Repeated or continuous measurements with such probes gradually project the system of interest on an energy eigenstate [7][8][9][10][11][12][13][14][15], and they may be used to identify quantum jumps in its excitation dynamics [16][17][18][19][20][21][22][23]. Variants of QND measurements include stroboscopic QND measurements, such as brief position measurements carried out around times $t_n = n\pi/\omega$ of a harmonic oscillator with frequency $\omega$ [24][25][26], which enable the study of periodically evolving properties of quantum systems, and emergent QND measurements, which probe a physical observable very weakly and, effectively, over time extract its expectation value in one of the energy eigenstates [27, 28]. Other strategies employ additional degrees of freedom to evade back action and thereby reach ultimate sensitivity with quantum probes [29][30][31][32][33].

Figure 1 (caption excerpt): (c) The stochastic dynamics of the expectation value $\langle\hat{\sigma}_{ee}\rangle$, conditioned on weak continuous measurement of $\hat{\sigma}_{ee}$. (d) The continuous probing of $\hat{\sigma}_{ee}$ gradually identifies a single Rabi oscillation frequency and hence collapses the system from a thermal ensemble with mean excitation $\bar{n} = 3$ into a single energy subspace, as shown by the evolution of the subspace probabilities $p_n$.

Probing by resonant Rabi dynamics. In this Letter we propose a different approach for the measurement of the excitation of an oscillator system, relying on resonant interactions and a deliberate exchange of quanta of energy with a qubit meter system.
Fig. 1(a) shows how a mixture or superposition of oscillator eigenstates leads to the so-called damped and revived Rabi oscillations, appearing as oscillations in a two-level resonant probe with different $n$-dependent frequencies, demonstrated, e.g., with trapped ions [34] and superconducting qubits [35]. In this situation, the total, shared number of excitations is a conserved quantity and a QND observable, and we suggest to measure its value by weak continuous monitoring of the oscillating excited state population of the qubit meter, instead of the final-state projective and destructive measurements applied in [34, 35]. Fig. 1(c) shows the conditioned excited state population dynamics as the measurement gradually resolves the frequency (and phase) of the coherent exchange of energy between the quantum oscillator and the qubit. Panel (d) shows the associated collapse of the system on a moving target state with a definite total number of excitations. We argue that this measurement is faster and may thus enable detection of dynamics and oscillator quantum jumps that cannot be resolved by dispersive probing.

Weak continuous measurements. We consider a harmonic oscillator resonantly coupled to a qubit with states $|g\rangle$ and $|e\rangle$ via the resonant Jaynes-Cummings Hamiltonian,

$$\hat{H} = \omega(\hat{a}^\dagger \hat{a} + \hat{\sigma}_{ee}) + g(\hat{a}^\dagger \hat{\sigma}_{ge} + \hat{a}\,\hat{\sigma}_{eg}), \qquad (1)$$

where $\hbar = 1$, such that $\omega$ is the energy spacing of both the oscillator and the qubit, $\hat{a}^\dagger$ ($\hat{a}$) is the creation (annihilation) operator of the oscillator, $\hat{\sigma}_{ij} = |i\rangle\langle j|$, and $g$ is the coupling strength. The Jaynes-Cummings coupling drives oscillations between product states $|n, e\rangle \leftrightarrow |n+1, g\rangle$ with angular frequency $2g\sqrt{n+1}$, as is seen in Fig. 1. We imagine that the qubit is a real or artificial atom with further excited states and that the qubit observable $\hat{\sigma}_{ee}$ can be continuously measured by phase-sensitive homodyne detection of a classical probe field coupling $|e\rangle$ off-resonantly to an excited state.
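As a consistency check of the Rabi dynamics described above, the Hamiltonian of Eq. (1) can be built numerically in a truncated Fock space. The sketch below is not the paper's own code; the cutoff, the parameter values, and all variable names are our illustrative choices. It verifies that an initial state $|n, e\rangle$ yields the excited-state population $P_e(t) = \cos^2(g\sqrt{n+1}\,t)$, i.e., Rabi oscillations at angular frequency $2g\sqrt{n+1}$.

```python
import numpy as np

N = 6              # Fock-space cutoff (illustrative assumption)
g, omega = 1.0, 5.0

# Truncated oscillator and qubit operators
a = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)   # annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_ge = |g><e|
I_osc, I_q = np.eye(N + 1), np.eye(2)

# Jaynes-Cummings Hamiltonian of Eq. (1), hbar = 1 (basis |n> x |g/e|)
H = omega * (np.kron(a.T @ a, I_q) + np.kron(I_osc, sm.T @ sm)) \
    + g * (np.kron(a.T, sm) + np.kron(a, sm.T))

# Exact propagator from the eigendecomposition of the (real symmetric) H
evals, V = np.linalg.eigh(H)
def U(t):
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

n = 2                                             # initial photon number
psi0 = np.zeros((N + 1) * 2, dtype=complex)
psi0[2 * n + 1] = 1.0                             # the state |n, e>
Pe_proj = np.kron(I_osc, np.diag([0.0, 1.0]))     # projector on |e>

# Excited-state population follows cos^2(g sqrt(n+1) t) on resonance
for t in np.linspace(0.0, 2.0, 9):
    psi = U(t) @ psi0
    Pe = np.real(psi.conj() @ (Pe_proj @ psi))
    assert abs(Pe - np.cos(g * np.sqrt(n + 1) * t) ** 2) < 1e-9
```

Since $|n, e\rangle$ couples only to $|n+1, g\rangle$ and both states are degenerate under the free part of $\hat{H}$, the truncated-space result reproduces the two-level Rabi formula to machine precision.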
While this probing is taking place, the dynamics of the system is governed by the stochastic master equation (SME) [36, 37],

$$d\rho = -i[\hat{H}, \rho]\,dt + k\,\mathcal{D}[\hat{\sigma}_{ee}]\rho\,dt + \sqrt{2k\eta}\,\mathcal{H}[\hat{\sigma}_{ee}]\rho\,dW, \qquad (2)$$

where $dW$ represents the Gaussian noise on the phase quadrature of the probe field with mean zero and variance equal to $dt$, $k$ denotes the measurement strength, and $\eta$ is the detection efficiency. In this work we will assume $\eta = 1$ for simplicity, but our approach works also for non-unit optical detection efficiencies. The first term in Eq. (2) describes the normal time evolution, including the Rabi oscillation dynamics. The second term in Eq. (2) contains the dissipation superoperator

$$\mathcal{D}[\hat{O}]\rho = 2\hat{O}\rho\hat{O}^\dagger - \{\hat{O}^\dagger\hat{O}, \rho\}, \qquad (3)$$

describing decoherence due to the disturbance caused by the measurement. The final term in Eq. (2) contains the superoperator

$$\mathcal{H}[\hat{O}]\rho = \hat{O}\rho + \rho\hat{O}^\dagger - \langle \hat{O} + \hat{O}^\dagger \rangle_\rho\, \rho, \qquad (4)$$

where $\langle \hat{O} \rangle_\rho = \mathrm{Tr}[\hat{O}\rho]$. This term describes the back action of the stochastic information gain by the measurement process. The Wiener noise increment $dW$ is given by the difference between the random measurement outcome obtained in the experiment, $dY(t)$, and its expected mean value,

$$dY(t) = \langle \hat{\sigma}_{ee} \rangle_\rho\, dt + \frac{dW}{\sqrt{8k}}. \qquad (5)$$

$dW$ can be simulated in numerical studies, while one obtains the conditioned dynamics of an experimentally monitored system by solving Eq. (2) with $dW$ extracted from Eq. (5). Numerical solutions to the stochastic master equation are obtained using the QuTiP toolbox [38, 39]. To observe how the continuous measurement of $\hat{\sigma}_{ee}$ reveals the oscillator dynamics, we will consider a situation where the qubit is initially prepared in the state $|e\rangle$, while the harmonic oscillator is in a mixed state described by $\rho_{HO} = \sum_n p_n(t=0)\, |n\rangle\langle n|$. Results of simulations are presented in Fig. 2 for $\eta = 1$, and they show that the system converges to states with a definite total number of excitations.
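The structure of Eqs. (2)-(5) can be sketched as a single Euler-Maruyama update. The code below is not the paper's QuTiP implementation: the toy single-qubit Hamiltonian $g\hat{\sigma}_x$, the step size, and the function names are our own illustrative assumptions, chosen only to show how the $\mathcal{D}$ and $\mathcal{H}$ superoperators and the simulated record $dY$ fit together.

```python
import numpy as np

rng = np.random.default_rng(1)

def Dsup(O, rho):
    # Eq. (3): D[O] rho = 2 O rho O^dag - {O^dag O, rho}
    OdO = O.conj().T @ O
    return 2 * O @ rho @ O.conj().T - OdO @ rho - rho @ OdO

def Hsup(O, rho):
    # Eq. (4): H[O] rho = O rho + rho O^dag - <O + O^dag> rho
    mean = np.trace((O + O.conj().T) @ rho).real
    return O @ rho + rho @ O.conj().T - mean * rho

def sme_step(rho, H, O, k, eta, dt):
    # One Euler-Maruyama step of Eq. (2); dY is the simulated record, Eq. (5)
    dW = rng.normal(0.0, np.sqrt(dt))
    drho = (-1j * (H @ rho - rho @ H) + k * Dsup(O, rho)) * dt \
           + np.sqrt(2 * k * eta) * Hsup(O, rho) * dW
    dY = np.trace(O @ rho).real * dt + dW / np.sqrt(8 * k)
    return rho + drho, dY

# Toy demo: a single qubit with H = g sigma_x, continuously measuring sigma_ee
g, k, eta, dt = 1.0, 0.5, 1.0, 1e-4
sx = np.array([[0, 1], [1, 0]], dtype=complex)
see = np.array([[0, 0], [0, 1]], dtype=complex)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|

Y = 0.0                                # integrated homodyne record
for _ in range(2000):
    rho, dY = sme_step(rho, g * sx, see, k, eta, dt)
    Y += dY
    rho = 0.5 * (rho + rho.conj().T)   # keep Hermitian despite Euler error
    rho /= np.trace(rho).real          # the exact SME is trace preserving
```

Both the deterministic and the stochastic terms of Eq. (2) are traceless, so the renormalization only compensates the finite-step integration error; a production simulation would instead use a dedicated stochastic solver.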
For the case of weak probing, we see that the definite value of $n$ occurs together with a definite harmonic evolution of the excited state population (at frequency $2g\sqrt{n+1}$), while intermediate probing strengths $k$ also identify $n$ but continuously disturb the phase of the Rabi oscillations. For even stronger probing, a Zeno effect prevents the coherent Rabi oscillations and makes the distinction between different values of $n$ harder [40].

Optimal probing. In order to assess the time needed for the continuous measurements to determine the degree of excitation of the system by the corresponding frequency of the Rabi oscillations, we study the convergence towards unity of the purity $P = \mathrm{Tr}(\rho^2)$ of the conditional density matrix. As seen in Fig. 3, we can fit its mean value over many trajectories with the model $P(t) = 1 - (1 - P(t=0))e^{-t/\tau}$, and, repeating this procedure for different probing strengths, we observe in the insert of Fig. 3 that the time needed to perform the QND measurement is smallest in the intermediate-strength probing regime, $k \sim g$. This result can be explained qualitatively, since probing with a small value of $k$ only yields an appreciable signal-to-noise ratio when accumulated over times $\propto 1/k$. During that time the system undergoes one or several Rabi oscillations, and despite the white noise component in the weak probe signal it is possible to discern a single leading harmonic component and hence reveal the value of $n$. While increasing $k$ increases the data extraction rate, when $k$ becomes of the order of $g$ the back action of the qubit excited state measurements causes significant disturbance of the Rabi oscillations. Discerning different $n$-values by the frequency of the Rabi oscillations is gradually hampered by these disturbances when $k$ exceeds $g$. Ultimately, when $k$ is very large, the measurements effectively project the qubit in its energy eigenbasis and thus freeze the Rabi oscillations by the quantum Zeno mechanism [40], see Fig. 2(f). We note that moderate and strong measurement back action does not invalidate the QND property with respect to the distinction of Rabi subspaces; it only causes stochastic modifications of the harmonic population oscillation within the subspaces, as shown in Fig. 3, and hence makes the distinction between different subspaces less effective.

Figure 3 (caption): The average purity $\langle P \rangle_{dW}$ from 200 simulated trajectories in the weak probing regime ($k = 0.1g$). The gray shaded area corresponds to values within one standard deviation from the mean. The average time $\tau$, needed to perform the measurement of the energy, is extracted by fitting the model $P(t) = 1 - (1 - P(t=0))e^{-t/\tau}$. The harmonic oscillator is prepared with an initial thermal distribution with $\bar{n} = 3$, and the qubit is initially prepared in the excited state. The insert shows the average time of purification and distinction of the excitation of the harmonic oscillator as a function of the probing strength $k$.

Observation of Quantum Jumps. Our probing may be applied to mechanical oscillators, quantized fields, photons, and magnons, which are all systems where there has been an interest in demonstrating the quantized nature of their interactions [8-15, 17, 28, 41] and dynamical features such as quantum jumps [16, 19-23]. The latter experiments are often hampered by the time between jumps being comparable to the time needed to detect the change of $n$ in an experiment. For this, our scheme may be particularly useful, and we now discuss how to incorporate thermal quantum jumps in the formalism and how well they are inferred from a measurement record. If the harmonic oscillator is connected to a thermal reservoir with an average number of excitations $n_T$ and coupling rate $\gamma$, Eq.
(2) is modified into

dρ = −i[Ĥ, ρ] dt − (γ/2)(n_T + 1) D[â] ρ dt − (γ/2) n_T D[â†] ρ dt + k D[σ_ee] ρ dt + √(2kη) H[σ_ee] ρ dW.  (6)

The terms involving D[â] and D[â†] are responsible for the loss or absorption of excitations to or from the bath. Figure 4 shows a simulation of the dynamics described by Eq. (6). For this particular simulation, we assumed that no quanta were emitted into or absorbed from the bath until gt = 25, where we simulated an incoherent heating event. The blue curve in the lower panel shows the mean excitation of the oscillator, inferred by a hypothetical observer of both the probing dynamics and the occurrence of the energy exchange between the oscillator and the heat bath, while the orange curve shows the mean excitation of the oscillator inferred by an observer having access only to the continuous probing record. In the upper panel, the change in n accompanies a change in the Rabi oscillation frequency, appearing instantly in the regular blue curve inferred by the hypothetical observer. The more erratic orange curve reveals the uncertainty of the real observer, who realizes the change of state and agrees with the hypothetical observer only after the signal-to-noise ratio has accumulated to permit distinction of the different n-values. In Fig. 5 we show a longer measurement record with multiple jumps, inferred as the rapid transfer of near-unit probability weights on different values of n. A long time average of these probabilities would reveal the Boltzmann distribution, while the measurements act like a Maxwell demon and turn the probabilities into random, almost certain knowledge about the state of the system. We note that, as in [18, 42], it is possible to use the entire measurement record, and not only previous data, for the theoretical estimation of the state at any given time, and that this would improve the agreement between the inferred and the true jumps in Figs. 4 and 5.

General picture.
The characteristic property of our emergent subspace QND procedure is the convergence and subsequent restriction of the system to follow trajectories within single degenerate subspaces of a certain operator Â which commutes with the Hamiltonian, [Â, Ĥ] = 0. An interaction term in the Hamiltonian Ĥ causes a time evolution of a meter observable B̂, which commutes with Â, and the temporal outcome of measurements of B̂ may gradually collapse the system on a state that is evolving in a definite eigenspace of Â. For this detection to work, it is important that the characteristic measurement records differ when the system occupies different such subspaces. In our example, Â is the total number of excitations and B̂ is the qubit meter excitation, and the Rabi oscillation frequencies, indeed, have distinct values in each eigenspace of Â. Notably, the measurements both reveal the subspace (of Â) and the actual time-dependent entangled state of the system and meter, from which we infer the separate system dynamics. While our analysis used the example of system and meter entangled-state dynamics, the commutator requirements between Ĥ and Â and between Â and B̂ suffice for our scheme to resolve the value of Â by the measurement of B̂ in any quantum system, e.g., pertaining to the population of subspaces of a single multi-level quantum system.

FIG. 5. Observation of multiple quantum jumps. The harmonic oscillator starts in a thermal distribution with mean excitation number n = 3. The probing strength is k = g, such that it is close to the optimal value. The coupling to the bath is γ = 10⁻³ g and its mean excitation is n_T = 3.
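The τ extraction described under "Optimal probing" above, fitting P(t) = 1 − (1 − P(t = 0)) e^(−t/τ) to the trajectory-averaged purity, can be sketched numerically. This is an illustrative reconstruction using synthetic data and a simple log-linear least-squares fit; the function name and the numbers are my own choices, not the authors' fitting code:

```python
import numpy as np

def fit_purification_time(t, P):
    """Fit P(t) = 1 - (1 - P(0)) * exp(-t / tau) and return tau.

    Linearizing to log(1 - P) = log(1 - P(0)) - t / tau reduces the
    problem to a straight-line least-squares fit.
    """
    y = np.log(1.0 - np.asarray(P))
    slope, _intercept = np.polyfit(t, y, 1)
    return -1.0 / slope

# Synthetic "average purity" data with a known purification time.
tau_true = 5.0
t = np.linspace(0.0, 20.0, 200)
P = 1.0 - (1.0 - 0.25) * np.exp(-t / tau_true)

tau_fit = fit_purification_time(t, P)
print(tau_fit)  # ~5.0 for noiseless data
```

Repeating such a fit for records taken at different probing strengths k would yield a τ(k) curve of the kind shown in the inset of Fig. 3.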
If we assume unit detector efficiency and an initial mixture of pure states |ψ_n(t)⟩, each occupying a single degenerate subspace of Â,

ρ(t) = Σ_n p_n(t) |ψ_n(t)⟩⟨ψ_n(t)|,  (7)

we may generalize (2) to the measured observable B̂,

dρ = −i[Ĥ, ρ] dt + k D[B̂] ρ dt + √(2kη) H[B̂] ρ dW.  (8)

The ansatz in Eq. (7) then leads to the following equations,

dp_n = √(8k) p_n ( ⟨ψ_n(t)|B̂|ψ_n(t)⟩ − ⟨B̂⟩_ρ(t) ) dW,  (9)

and we observe that, if only one state |ψ_n⟩ is populated, p_n = 1 and ⟨ψ_n|B̂|ψ_n⟩ = ⟨B̂⟩_ρ, so the stochastic noise term does not affect the future evolution of the unit value of p_n(t), while the state |ψ_n(t)⟩ may still evolve within the given occupied subspace. To further understand why the system collapses on a single subspace, we note that the purity of the system is P(t) = Σ_n p_n²(t), and applying Itô's rule to d(p_n²) yields

dP = Σ_n [ 8k p_n² ( ⟨ψ_n|B̂|ψ_n⟩ − ⟨B̂⟩_ρ )² dt + √(32k) p_n² ( ⟨ψ_n|B̂|ψ_n⟩ − ⟨B̂⟩_ρ ) dW ].  (10)

The average of dW is zero, and hence the average evolution of the purity obeys

d⟨P⟩_dW = Σ_n 8k ⟨ p_n² ( ⟨B̂⟩_n − ⟨B̂⟩_ρ )² ⟩_dW dt,  (11)

which is positive and causes ⟨P⟩_dW to increase until the time evolution of ⟨B̂⟩_ρ is indistinguishable from the one, ⟨B̂⟩_n, in just one of the subspaces. If several subspaces display the same evolution, they are not distinguished and our measurement is emergent QND with respect to their union, but we may populate a mixed state in that union.

Conclusion and Outlook. We have presented a new principle for continuous QND measurements which does not project the system on an eigenstate of the QND observable but rather on a still-evolving state within a subspace of states. These subspaces are discerned by the characteristic frequency of the evolution of the mean value of the observed quantity, which may be monitored faster than the accumulation of dispersive phase shifts in the more usual QND setting. If ∆ is the detuning and g is the resonant coupling strength between a harmonic oscillator and a two-level system, their dispersive coupling is given by g²/∆ [43].
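The collapse mechanism of Eqs. (9)–(11) can be illustrated with a minimal Euler–Maruyama integration of dp_n = √(8k) p_n (B_n − ⟨B⟩_ρ) dW for three subspaces with distinct (here, assumed static) meter expectation values B_n. The values of k, B_n, the step size, and the random seed are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

k = 1.0                            # probing strength (arbitrary units)
B_n = np.array([0.0, 0.5, 1.0])    # assumed per-subspace means of the meter observable
p = np.full(3, 1.0 / 3.0)          # initial uniform mixture over the subspaces
dt = 1e-4

for _ in range(500_000):
    dW = rng.normal(0.0, np.sqrt(dt))
    B_mean = np.dot(p, B_n)                              # <B>_rho
    p = p + np.sqrt(8.0 * k) * p * (B_n - B_mean) * dW   # Eq. (9)
    p = np.clip(p, 0.0, None)
    p /= p.sum()                                         # guard against discretization error

purity = np.sum(p**2)   # P of Eq. (10); grows toward 1 as a single p_n wins
print(p, purity)
```

Averaged over realizations each p_n is a martingale, so the subspace selected at long times occurs with probability equal to its initial weight, while the purity increases monotonically on average, in accordance with Eq. (11).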
To avoid transfer of excitation, the detuning should be much larger than the coupling strength, g ≪ ∆, and the timescale on which the conventional QND measurement takes place is longer by a factor of order ∆/g ≫ 1 compared to our resonant proposal. Our method may pave the way to monitor thermal quantum jumps in real time and to use measurements and feedback for rapid state preparation and control.

Acknowledgment. The authors acknowledge support from the Danish National Research Foundation through the Center of Excellence for Complex Quantum Systems (Grant agreement No. DNRF156). KM acknowledges discussions with Dr. Faezeh Pirmoradian in the early stages of the project.

FIG. 1. (a) Unconditioned dynamics of the excited-state population ⟨σ_ee⟩, showing collapses and revivals due to its composition by quantum Rabi oscillations with different frequencies and constant weight factors p_n, shown in (b).

FIG. 2. Simulated dynamics with the harmonic oscillator prepared initially in a thermal state with mean excitation number n = 3 and the qubit prepared in the excited state. The upper panels show the probabilities for the total (integer) number of excitations, while the lower panels show the qubit excited-state population for weak probing (k = 0.1g, panels (a) and (b)), strong probing (k = g, panels (c) and (d)), and very strong probing (k = 10g, panels (e) and (f)).

FIG. 4. Observation of a quantum jump. The orange curve in the upper (lower) panel shows the excited-state population (average excitation of the oscillator) inferred from weak continuous measurements on the qubit meter. The oscillator is subject to a single quantum jump occurring at gt = 25, and the blue curves show the inferred qubit excited-state population and oscillator number of quanta, assuming the added knowledge of when the jump happened. Parameters used for the simulation are k = 0.1g, γ = 10⁻³ g, n = n_T = 3.

References
[1] V. B. Braginsky, Y. I. Vorontsov, and K. S. Thorne, Quantum Nondemolition Measurements, Science 209, 547 (1980).
[2] C. M. Caves, K. S. Thorne, R. W. P. Drever, V. D. Sandberg, and M. Zimmermann, On the measurement of a weak classical force coupled to a quantum-mechanical oscillator. I. Issues of principle, Rev. Mod. Phys. 52, 341 (1980).
[3] V. B. Braginsky and F. J. Khalili, Gravitational wave antenna with QND speed meter, Phys. Lett. A 147, 251 (1990).
[4] V. B. Braginskii and F. I. Khalili, Frequency-anticorrelated quantum states, Zh. Eksp. Teor. Fiz. 94, 151 (1988) [Sov. Phys. JETP 67, 84 (1988)].
[5] V. B. Braginsky and F. Y. Khalili, Quantum nondemolition measurements: the route from toys to tools, Rev. Mod. Phys. 68, 1 (1996).
[6] M. F. Bocko and R. Onofrio, On the measurement of a weak classical force coupled to a harmonic oscillator: experimental progress, Rev. Mod. Phys. 68, 755 (1996).
[7] M. Brune, P. Nussenzveig, F. Schmidt-Kaler, F. Bernardot, A. Maali, J. M. Raimond, and S. Haroche, From Lamb shift to light shifts: Vacuum and subphoton cavity fields measured by atomic phase sensitive detection, Phys. Rev. Lett. 72, 3339 (1994).
[8] B. R. Johnson, M. D. Reed, A. A. Houck, D. I. Schuster, L. S. Bishop, E. Ginossar, J. M. Gambetta, L. DiCarlo, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, Quantum non-demolition detection of single microwave photons in a circuit, Nature Phys. 6, 663 (2010).
[9] D. Lachance-Quirion, Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, and Y. Nakamura, Resolving quanta of collective spin excitations in a millimeter-sized ferromagnet, Sci. Adv. 3 (2017).
[10] T. Neuman, D. S. Wang, and P. Narang, Nanomagnonic Cavities for Strong Spin-Magnon Coupling and Magnon-Mediated Spin-Spin Interactions, Phys. Rev. Lett. 125, 247702 (2020).
[11] K. J. Satzinger, Y. P. Zhong, H. S. Chang, G. A. Peairs, A. Bienfait, M.-H. Chou, A. Y. Cleland, C. R. Conner, E. Dumur, J. Grebel, I. Gutierrez, B. H. November, R. G. Povey, S. J. Whiteley, D. D. Awschalom, D. I. Schuster, and A. N. Cleland, Quantum control of surface acoustic-wave phonons, Nature 563, 661 (2018).
[12] Y. Chu, P. Kharel, T. Yoon, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, Creation and control of multi-phonon Fock states in a bulk acoustic-wave resonator, Nature 563, 666 (2018).
[13] P. Arrangoiz-Arriola, E. A. Wollack, Z. Wang, M. Pechal, W. Jiang, T. P. McKenna, J. D. Witmer, R. Van Laer, and A. H. Safavi-Naeini, Resolving the energy levels of a nanomechanical oscillator, Nature 571, 537 (2019).
[14] L. R. Sletten, B. A. Moores, J. J. Viennot, and K. W. Lehnert, Resolving Phonon Fock States in a Multimode Cavity with a Double-Slit Qubit, Phys. Rev. X 9, 021056 (2019).
[15] D. Lachance-Quirion, S. P. Wolski, Y. Tabuchi, S. Kono, K. Usami, and Y. Nakamura, Entanglement-based single-shot detection of a single magnon with a superconducting qubit, Science 367, 425 (2020).
[16] S. Gleyzes, S. Kuhr, C. Guerlin, J. Bernu, S. Deléglise, U. Busk Hoff, M. Brune, J.-M. Raimond, and S. Haroche, Quantum jumps of light recording the birth and death of a photon in a cavity, Nature 446, 297 (2007).
[17] C. Guerlin, J. Bernu, S. Deléglise, C. Sayrin, S. Gleyzes, S. Kuhr, M. Brune, J.-M. Raimond, and S. Haroche, Progressive field-state collapse and quantum non-demolition photon counting, Nature 448, 889 (2007).
[18] T. Rybarczyk, B. Peaudecerf, M. Penasa, S. Gerlich, B. Julsgaard, K. Mølmer, S. Gleyzes, M. Brune, J. M. Raimond, S. Haroche, and I. Dotsenko, Forward-backward analysis of the photon-number evolution in a cavity, Phys. Rev. A 91, 062116 (2015).
[19] R. Vijay, D. H. Slichter, and I. Siddiqi, Observation of Quantum Jumps in a Superconducting Artificial Atom, Phys. Rev. Lett. 106, 110502 (2011).
[20] A. Delteil, W.-b. Gao, P. Fallahi, J. Miguel-Sanchez, and A. Imamoglu, Observation of Quantum Jumps of a Single Quantum Dot Spin Using Submicrosecond Single-Shot Optical Readout, Phys. Rev. Lett. 112, 116802 (2014).
[21] Z. K. Minev, S. O. Mundhada, S. Shankar, P. Reinhold, R. Gutiérrez-Jáuregui, R. J. Schoelkopf, M. Mirrahimi, H. J. Carmichael, and M. H. Devoret, To catch and reverse a quantum jump mid-flight, Nature 570, 200 (2019).
[22] S. Peil and G. Gabrielse, Observing the Quantum Limit of an Electron Cyclotron: QND Measurements of Quantum Jumps between Fock States, Phys. Rev. Lett. 83, 1287 (1999).
[23] L. Sun, A. Petrenko, Z. Leghtas, B. Vlastakis, G. Kirchmair, K. M. Sliwa, A. Narla, M. Hatridge, S. Shankar, J. Blumoff, L. Frunzio, M. Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, Tracking photon jumps with repeated quantum non-demolition parity measurements, Nature 511, 444 (2014).
[24] G. Vasilakis, V. Shah, and M. V. Romalis, Stroboscopic Backaction Evasion in a Dense Alkali-Metal Vapor, Phys. Rev. Lett. 106, 143601 (2011).
[25] G. Vasilakis, H. Shen, K. Jensen, M. Balabas, D. Salart, B. Chen, and E. S. Polzik, Generation of a squeezed state of an oscillator by stroboscopic back-action-evading measurement, Nature Physics 11, 389 (2015).
[26] A. C. J. Wade, J. F. Sherson, and K. Mølmer, Squeezing and Entanglement of Density Oscillations in a Bose-Einstein Condensate, Phys. Rev. Lett. 115, 060401 (2015).
[27] D. Yang, C. Laflamme, D. Vasilyev, M. Baranov, and P. Zoller, Theory of a Quantum Scanning Microscope for Cold Atoms, Phys. Rev. Lett. 120, 133601 (2018).
[28] L. Dellantonio, O. Kyriienko, F. Marquardt, and A. S. Sørensen, Quantum nondemolition measurement of mechanical motion quanta, Nature Communications 9, 3621 (2018).
[29] M. Tsang and C. M. Caves, Coherent Quantum-Noise Cancellation for Optomechanical Sensors, Phys. Rev. Lett. 105, 123601 (2010).
[30] W. Wasilewski, K. Jensen, H. Krauter, J. J. Renema, M. V. Balabas, and E. S. Polzik, Quantum Noise Limited and Entanglement-Assisted Magnetometry, Phys. Rev. Lett. 104, 133601 (2010).
[31] M. Tsang and C. M. Caves, Evading Quantum Mechanics: Engineering a Classical Subsystem within a Quantum Environment, Phys. Rev. X 2, 031016 (2012).
[32] M. H. Wimmer, D. Steinmeyer, K. Hammerer, and M. Heurs, Coherent cancellation of backaction noise in optomechanical force measurements, Phys. Rev. A 89, 053836 (2014).
[33] E. S. Polzik and K. Hammerer, Trajectories without quantum uncertainties, Annalen der Physik 527, A15 (2015).
[34] D. M. Meekhof, C. Monroe, B. E. King, W. M. Itano, and D. J. Wineland, Generation of Nonclassical Motional States of a Trapped Atom, Phys. Rev. Lett. 76, 1796 (1996).
[35] M. Hofheinz, E. M. Weig, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O'Connell, H. Wang, J. M. Martinis, and A. N. Cleland, Generation of Fock states in a superconducting quantum circuit, Nature 454, 310 (2008).
[36] H. M. Wiseman and G. J. Milburn, Quantum Measurement and Control (Cambridge University Press, 2010).
[37] R. Blattmann and K. Mølmer, Conditioned quantum motion of an atom in a continuously monitored one-dimensional lattice, Phys. Rev. A 93, 052113 (2016).
[38] J. R. Johansson, P. D. Nation, and F. Nori, QuTiP: An open-source Python framework for the dynamics of open quantum systems, Comput. Phys. Commun. 183, 1760 (2012).
[39] J. R. Johansson, P. D. Nation, and F. Nori, QuTiP 2: A Python framework for the dynamics of open quantum systems, Comput. Phys. Commun. 184, 1234 (2013).
[40] L. S. Schulman, Continuous and pulsed observations in the quantum Zeno effect, Phys. Rev. A 57, 1509 (1998).
[41] A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, Quantum ground state and single-phonon control of a mechanical resonator, Nature 464, 697 (2010).
[42] S. Gammelmark, B. Julsgaard, and K. Mølmer, Past Quantum States of a Monitored System, Phys. Rev. Lett. 111, 160401 (2013).
[43] C. C. Gerry and P. L. Knight, Introductory Quantum Optics (Cambridge University Press, 2008).
[]
[ "BOOSTED-BOTTOM JET TAGGING AND BSM SEARCHES", "BOOSTED-BOTTOM JET TAGGING AND BSM SEARCHES" ]
[ "Zack Sullivan \nDepartment of Physics\nIllinois Institute of Technology\n60616-3793ChicagoIllinoisUSA\n" ]
[ "Department of Physics\nIllinois Institute of Technology\n60616-3793ChicagoIllinoisUSA" ]
[]
I present a new scheme for tagging boosted heavy-flavor jets called "µx tagging" and its application to TeV-scale physics beyond the Standard Model. Using muons from B hadron decay to define a particular combination "x" of angular information, and jet substructure variables, we identify a clean (ε_fake/ε_b ∼ 1/100), good-efficiency (ε_b = 14%) tag. I demonstrate the usefulness of this new scheme by showing the reach for discovery of leptophobic Z′ → bb and tH± → ttb.
null
[ "https://arxiv.org/pdf/1605.04280v1.pdf" ]
119,199,205
1605.04280
713c241b35b0602d2dd708e80c98003b33985ab8
BOOSTED-BOTTOM JET TAGGING AND BSM SEARCHES

Zack Sullivan
Department of Physics, Illinois Institute of Technology, Chicago, Illinois 60616-3793, USA

I present a new scheme for tagging boosted heavy-flavor jets called "µx tagging" and its application to TeV-scale physics beyond the Standard Model. Using muons from B hadron decay to define a particular combination "x" of angular information, and jet substructure variables, we identify a clean (ε_fake/ε_b ∼ 1/100), good-efficiency (ε_b = 14%) tag. I demonstrate the usefulness of this new scheme by showing the reach for discovery of leptophobic Z′ → bb and tH± → ttb.

1 Introduction

As searches for new particles at the CERN Large Hadron Collider (LHC) shift to TeV-scale energies, observation of their decays into jets becomes challenging. Dijet resonances are typically smaller than the QCD background unless top or bottom tags are applied. Unfortunately, current b-tagging efficiencies degrade (28-15%) around 1-2 TeV for light-jet fake rates of 1-2% [1] (producing low purity, ε_fake/ε_b ∼ 1/10). At this conference, I presented a new method for flavor tagging at TeV-scale energies called "µx boosted-bottom-jet tagging." [2] This method is derived from kinematic first principles, and provides a 14% efficiency for b-tagging, with a factor of 10 improvement in fake rejection over existing tags (ε_fake/ε_b ∼ 1/100). In Sec. 2 I summarize the µx definition and cuts, and plot its transverse-momentum- and pseudorapidity-dependent efficiencies. In Sec. 3 I briefly describe the reach provided by µx boosted-b tagging in an analysis for discovery of a leptophobic Z′ → bb. Following the recent provocative proposals to measure H± → tb in tbH±-associated production at the LHC, I use the µx tag in Sec. 4 to provide a realistic estimate of the reach at a 14 TeV machine. I summarize our results in Sec. 5.

2 µx boosted-b tag

In Ref.
2 we introduced the µx boosted-b tag, a high-purity b tag for use with boosted jets (p_T > 500 GeV) based on the kinematics of semi-muonic B hadron decay and jet substructure. At large momentum, the boost γ_B of the B hadron compresses its decay products into a narrow subjet at high energy. We define a "smart" angular lab-frame observable

x ≡ γ_B tan θ_lab,  (1)

where θ_lab is the angle between the muon and the direction of the B hadron in the jet. For sufficiently boosted B hadrons (γ_B ≥ 3) the lab-frame distribution of the muon count N vs. x follows a universal shape. We define the µx boosted-b tag by first demanding x < 3 to capture 90% of muons from the B decays. Then we demand

f_subjet ≡ p_T^subjet / p_T^jet ≥ 0.5,  (2)

to account for the observation that the hard fragmentation function for b quarks leads to the B hadron subjet carrying a large fraction f_subjet of the total jet momentum. Both cuts in the µx tag depend on identification of the subjet containing the B hadron. While exact identification of the B is not possible, an effective proxy can be found by taking standard anti-k_T clustered jets with R = 0.4, and reclustering the muon and calorimeter towers using a smaller R = 0.04. Following the detailed reconstruction algorithm of Ref. 2, we combine an identified probable charm subjet remnant with double the muon momentum (as a boosted-neutrino proxy) to provide the input to γ_B and p_T^subjet. A custom µx tagging module, MuXboostedBTagging for DELPHES [1], is available on GitHub. [3] In Fig. 1 we see µx tagging efficiencies as a function of p_T and η for bottom jets, charm jets, light-light jets (where the muon came from a light-flavor hadron), and light-heavy jets (where a gluon split to bb/cc, producing heavy-flavor hadrons in the final state). The kinematic nature of the tagging variables leads to nearly flat p_T efficiencies when p_T > 500 GeV.
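The two cuts defining the tag can be collected into a small decision function. This is an illustrative sketch of Eqs. (1) and (2) only; the function name and inputs are my own, and the real tag additionally requires the subjet reconstruction described above:

```python
import math

def mu_x_tag(gamma_B, theta_lab, pT_subjet, pT_jet):
    """Apply the two mu_x cuts: x = gamma_B * tan(theta_lab) < 3
    and f_subjet = pT_subjet / pT_jet >= 0.5."""
    x = gamma_B * math.tan(theta_lab)
    f_subjet = pT_subjet / pT_jet
    return x < 3.0 and f_subjet >= 0.5

# A boosted B hadron with a nearly collinear muon passes,
print(mu_x_tag(gamma_B=50.0, theta_lab=0.01, pT_subjet=400.0, pT_jet=600.0))  # True
# while a wide-angle muon (large x) fails.
print(mu_x_tag(gamma_B=50.0, theta_lab=0.1, pT_subjet=400.0, pT_jet=600.0))   # False
```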
The η distribution is also flat except for B hadrons from gluon splitting. This leads to the intriguing possibility that the g → bb contribution to jets in the Monte Carlo could be calibrated using the rapidity dependence of these highly boosted jets.

3 Leptophobic Z′ → bb at the LHC

New neutral vectors, generically called Z′ bosons, appear in many BSM models. In cases where the decay to leptons is suppressed, we look to tag heavy flavors to overcome the QCD dijet background. We examine the reach [2] at a 13 TeV LHC for a leptophobic Z′ decaying to bb or cc using a U(1)_B Lagrange density [4]

L = (g_B/6) Z_{Bµ} q̄ γ^µ q,  (3)

with flavor-independent coupling to quarks. The signal and backgrounds are simulated using an MLM-matched MadEvent sample [5] and CT14llo PDFs [6] fed through PYTHIA 8 [7] into DELPHES. We demand one or two µx tags, |η_j| < 2.7, and ∆η_jj < 1.5. We reconstruct a dijet mass out of the two leading-p_T jets, and look for a resonance in the mass window [0.85, 1.25] × M_{Z_B}. We see the reach for 5σ discovery of this leptophobic Z′ in Fig. 2 for a two-tag and a one-tag inclusive sample, [8] compared to current exclusion limits from Ref. 4. In 100 fb⁻¹ of integrated luminosity at 13 TeV, a two-b-tag analysis could discover a Z′ of 3 TeV if the universal coupling is g_B ∼ 2.5. The single-tag inclusive search is even more effective, allowing for discovery up to 1 TeV above mass limits from Run I. In the absence of a discovery, the one-tag search would set a 95% C.L. exclusion that can access g_B couplings a factor of 2 smaller than current limits, and masses up to 2 TeV higher.

4 Associated top-charged Higgs production tH± → ttb

In the MSSM, associated production of a charged Higgs with a top quark produces a final state rich in b jets. Recent excitement was generated by a claim [9] that the "wedge region" in tan β (tan β ∼ 6, where the h0 shares equal coupling to top and bottom) could be explored up to 2 TeV in H± mass at a 14 TeV LHC through the channel tbH± → tb(tb).
In contrast, others [10] found that even 500 GeV could not be probed. We explore [11] whether the ∼ 2 TeV limit can be reached in the wedge region, and how µx tagging performs in this final state. Using MLM-matched samples for tH± → ttb generated in MadEvent, showered in PYTHIA, and reconstructed in DELPHES, we look for final states involving one boosted-top tag, one µx boosted-b tag, and a fully reconstructed t → blν decay (with a normal low-energy b tag). The background is dominated by fake tags from ttj and tjj, which we take as measured from CMS data. After cuts on the relative p_T and angle of the two leading jets, we find S/B ∼ 1/10. A preliminary estimate of the reach in the H±tb Yukawa coupling y_tb and tan β is shown vs. H± mass in Fig. 3. Our analysis appears to extend the results of Ref. 10 up to 2 TeV, meaning the wedge remains. It appears the wedge region will need to wait until a 100 TeV machine.

5 Conclusions

At Moriond 2016 I presented the new µx boosted-bottom jet tag and its applications to some important searches for new physics at the LHC. Combining angular information x from B hadron decay with jet substructure f_subjet in TeV-scale jets allows for clean extraction of signals for Z′ and MSSM charged Higgs above backgrounds. We find that the reach for leptophobic Z′ discovery at a 13 TeV LHC is about 1 TeV higher than current limits. If a Z′ is not found, 95% C.L. exclusion limits can be set up to 2 TeV higher, or g_B couplings a factor of 2 smaller, than the current limits. Despite recent excitement, the search for MSSM charged Higgs in the mid-tan β "wedge" region in tH± → t(tb) will remain elusive. The signal appears to be too small when realistic tagging efficiencies are applied. A 100 TeV collider is likely needed to fill this region. On the other hand, the µx tag could be used to immediately improve the existing searches for W′ → tb in the boosted-top and boosted-bottom channel.
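A back-of-the-envelope check of what S/B ∼ 1/10 implies, assuming simple Gaussian counting statistics (my own illustration, not the statistical treatment used in the analysis): the discovery significance scales as S/√B, so a 5σ observation at fixed S/B = 1/10 requires a sizable event yield in the search window.

```python
import math

def gaussian_significance(s, b):
    """Approximate significance of s signal events over b background events."""
    return s / math.sqrt(b)

# With S/B = 1/10, reaching 5 sigma requires b = (5 * 10)**2 = 2500
# background events (and s = 250 signal events) in the search window.
b_needed = (5.0 * 10.0) ** 2
print(b_needed, gaussian_significance(0.1 * b_needed, b_needed))  # 2500.0 5.0
```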
[12] The µx boosted-bottom jet tag is a powerful new tool in the exploration for physics beyond the Standard Model.

Figure 1 - µx tagging efficiency vs. (left) jet p_T and (right) η_jet. Solid (dashed) lines include µ = 0 (40) pileup events.

Figure 2 - 5σ discovery reach for a leptophobic Z′ with universal coupling and one or two boosted-b tags at a 13 TeV LHC, compared to exclusion limits from Ref. 4. Also shown is the 95% C.L. exclusion reach of the one-tag analysis.

Figure 3 - 95% C.L. exclusion limits that can be reached as a function of H± mass at a 14 TeV LHC for (left) the H±tb Yukawa coupling y_tb and (right) tan β.

Acknowledgments

This work was supported by the U.S. Department of Energy under award No. DE-SC0008347.

References

[1] J. de Favereau et al. (DELPHES 3 Collaboration), J. High Energy Phys. 02, 057 (2014).
[2] K. Pedersen and Z. Sullivan, Phys. Rev. D 93, 014014 (2016).
[3] MuXboostedBTagging module for DELPHES, available on GitHub.
[4] B. A. Dobrescu and F. Yu, Phys. Rev. D 88, 035021 (2013); Erratum: Phys. Rev. D 90, 079901 (2014).
[5] J. Alwall et al., J. High Energy Phys. 07, 079 (2014).
[6] S. Dulat et al., Phys. Rev. D 93, 033006 (2016).
[7] T. Sjostrand, S. Mrenna, and P. Z. Skands, Comput. Phys. Commun. 178, 852 (2008).
[8] Z. Sullivan and K. Pedersen, "Flavor tagging TeV jets for physics beyond the Standard Model," to appear in Proceedings of the XLVth International Symposium on Multiparticle Dynamics (ISMD 2015).
[9] J. Hajer, Y. Y. Li, T. Liu, and J. F. H. Shiu, J. High Energy Phys. 11, 124 (2015).
N Craig, F Deramo, P Draper, S Thomas, H Zhang, J. High Energy Phys. 06137N. Craig, F.DEramo, P. Draper, S. Thomas and H. Zhang, J. High Energy Phys. 06, 137 (2015). . K Pedersen, Z Sullivan, in productionK. Pedersen and Z. Sullivan, in production. . D Duffty, Z Sullivan, Phys. Rev. D. 9015031D. Duffty and Z. Sullivan, Phys. Rev. D 90, 015031 (2014).
[]
[ "Ward identities for disordered metals and superconductors", "Ward identities for disordered metals and superconductors" ]
[ "Revaz Ramazashvili \nMaterials Science Division\nArgonne National Laboratory\n60439ArgonneILUSA\n" ]
[ "Materials Science Division\nArgonne National Laboratory\n60439ArgonneILUSA" ]
[]
This article revisits Ward identities for disordered interacting normal metals and superconductors. It offers a simple derivation based on gauge invariance and recasts the identities in a new form that allows easy analysis of the quasiparticle charge conservation (as e.g. in a normal metal) or non-conservation (as e.g. in a d-wave superconductor).
10.1103/physrevb.66.220503
[ "https://export.arxiv.org/pdf/cond-mat/0208030v2.pdf" ]
119,343,738
cond-mat/0208030
b5edeb91b3bacabf7fe4504f35f6813951725ec0
Ward identities for disordered metals and superconductors

31 Oct 2002

Revaz Ramazashvili
Materials Science Division, Argonne National Laboratory, Argonne, IL 60439, USA

This article revisits Ward identities for disordered interacting normal metals and superconductors. It offers a simple derivation based on gauge invariance and recasts the identities in a new form that allows easy analysis of the quasiparticle charge conservation (as e.g. in a normal metal) or non-conservation (as e.g. in a d-wave superconductor).

I. INTRODUCTION

Interplay of interaction and disorder remains one of the central topics in condensed matter physics. Given the complexity of the problem, constraints imposed by symmetries acquire particular importance. An example of such a constraint is given by Ward identities. In the early days of many-body theory, Ward identities were used to establish key properties of the Fermi liquid theory. 1 In the context of the CPA approximation for a disordered non-interacting metal, similar identities were derived by Velický. 2 In the theory of superconductivity, Ward identities were used early on to establish gauge invariance of the electromagnetic response. 3 Subsequently, they were employed by D. Vollhardt and P. Wölfle 4 in a self-consistent theory of the Anderson transition, by F. Wegner, 5 A. J. McKane and M. Stone 6 in the sigma model approach to localization, and by C. Castellani et al. 7 in an early treatment of an interacting disordered metal. Very recently, T. R. Kirkpatrick and D. Belitz 8 invoked the Ward identities in an attempt to resolve the issue of decoherence at zero temperature.

This paper revisits the Ward identities for disordered interacting normal metals and superconductors. Using gauge invariance, it derives the identities in a new form that makes quasiparticle charge conservation (as e.g. in a normal metal) or absence thereof (as e.g. in a d-wave superconductor) explicit.
In a normal metal, the identity takes a particularly simple form:

Λ_RA(ω, ω′; p, p) = −2iΣ″_R(ω, p)/(ω − ω′),

where Λ_RA is the disorder average of the retarded-advanced charge density vertex correction at zero momentum transfer Q = p − p = 0 and small frequency transfer Ω = ω − ω′ ≪ ω, ω′, and Σ″_R(ω, p) is the imaginary part of the retarded quasiparticle self energy, which is proportional to the quasiparticle scattering rate. The vertex Λ_RA is closely related to the correlation function of the quasiparticle charge density, and the 1/(ω − ω′) behavior of the vertex at low frequency transfer and zero momentum transfer points to quasiparticle charge conservation and its diffusive propagation. By contrast, in a d-wave superconductor, the Ward identity reflects the fact that impurity scattering causes exchange of charge between the quasiparticle subsystem and the condensate, which leads to non-conservation of the quasiparticle charge. The structure of the paper is as follows. Section II gives a detailed derivation of the Ward identities for a disordered interacting normal metal. Section III briefly discusses the Ward identities for an s-wave superconductor in the approximation of a spatially uniform gap. Section IV derives the Ward identities for a disordered d-wave superconductor in the same approximation, and Section V illustrates the meaning of the identity by explaining how, in a d-wave superconductor, the impurity scattering leads to exchange of charge between the quasiparticle subsystem and the condensate. Section VI presents a summary and a brief discussion of the results.

II.
WARD IDENTITIES FOR A NORMAL METAL

Consider a disordered interacting normal metal with the Matsubara action

S = ∫dτ dr ψ⁺(r, τ)[iħ∂_τ − ξ(−iħ∇ − (e/c)A) + eφ(r, τ) − u(r)]ψ(r, τ) − ∫dτ dr dτ′ dr′ ψ⁺_α(r, τ)ψ_β(r, τ) V_αβγδ(τ − τ′, r − r′) ψ⁺_γ(r′, τ′)ψ_δ(r′, τ′),

where ψ⁺ (ψ) are the electron creation (annihilation) operators, A is the electromagnetic vector potential, φ(r, τ) is the scalar potential and u(r) is the impurity potential. This action respects the continuous gauge symmetry

ψ → e^{iχ(r,τ)}ψ; A → A + (ħc/e)∇χ; φ → φ + (ħ/e)∂_τχ,

of which the sought Ward identities are a consequence. To establish the scheme used throughout the rest of this article, I present below a detailed derivation. Everywhere hereafter, only infinitesimal time dependent spatially uniform transformations ψ_α(r, τ) → e^{iχ(τ)}ψ_α(r, τ) will be considered. Under such a transformation, the Green function changes according to G_αβ(r, r′, τ − τ′) → e^{iχ(τ)}G_αβ(r, r′, τ − τ′)e^{−iχ(τ′)} and thus, to first order in χ, its variation equals δG_αβ(r, r′, τ − τ′) ≈ i[χ(τ) − χ(τ′)]G_αβ(r, r′, τ − τ′). On the other hand, the same transformation induces extra terms in the action due to the presence of the temporal derivative. Hence, the very same variation of the Green function can also be calculated by perturbation theory. The crucial point is that the four-fermion interaction term in the action is invariant under the gauge transformation and, therefore, does not contribute to the perturbative correction to the Green function. Thus, to first order in infinitesimal χ, the same correction to G is equal to

δG_αβ(x, x′, τ − τ′) = −i ∫dt dr ⟨ψ_α(x, τ)ψ⁺_γ(r, t)ψ_γ(r, t)ψ⁺_β(x′, τ′)⟩ ∂_tχ(t).

Equating the two expressions leads to the identity

[χ(τ) − χ(τ′)]G_αβ(x, x′, τ − τ′) = − ∫dt dr ⟨ψ_α(x, τ)ψ⁺_γ(r, t)ψ_γ(r, t)ψ⁺_β(x′, τ′)⟩ ∂_tχ(t)

for a given disorder configuration.
Disorder averaging replaces the exact Green function on the left hand side by its translationally invariant average. The average on the right hand side can be presented as the product of the two average Green functions plus the vertex correction term and, for χ = χ₀e^{iΩτ} with Ω → 0, the Fourier transformed identity takes the form

G(iω + iΩ, p) − G(iω, p) = iΩ G(iω + iΩ, p)[1 + Λ(iω, iω + iΩ; p, p)]G(iω, p),

where Λ(iω, iω + iΩ; p, p) is the disorder average of the scalar vertex correction. At this point, two different types of identities can be derived: one for the retarded-advanced vertex correction Λ_RA(ω, ω + Ω; p, p), and another one for the retarded-retarded vertex correction Λ_RR(ω, ω + Ω; p, p).

A. The identity for the retarded-advanced (RA) vertex

To obtain the identities for the retarded-advanced vertex, choose iω to be in the lower half-plane and iω + iΩ in the upper half-plane. Then, upon analytic continuation iω → ω ± i0, G(iω) transforms into G_A(ω − i0), whereas G(iω + iΩ) transforms into G_R(ω + Ω + i0). The identity then takes the form

G⁻¹_R(ω + Ω + i0, p) − G⁻¹_A(ω − i0, p) = Ω[1 + Λ_RA(ω + Ω, ω; p, p)].

The disorder averaged Green function reads G⁻¹_{A/R}(ω, p) = ω − Σ_{A/R}(ω, p) − ξ(p), where Σ_{A/R}(ω, p) is the advanced/retarded self energy. Using the relation Σ_R(ω, p) = Σ*_A(ω, p) and assuming that the derivative ∂_ωΣ_{R/A}(ω, p) is non-singular, for small Ω = ω − ω′ → 0 one finds

−2iΣ″_R(ω, p) = [ω − ω′]Λ_RA(ω, ω′; p, p). (1)

Identifying 2Σ″_R(ω) with the scattering rate 1/τ, one immediately recognizes in Λ_RA the zero momentum transfer (Q = 0) limit of the charge density vertex D(ω − ω′, Q), 9

D(ω − ω′, Q) = 1/[i(ω − ω′)τ + DQ²τ],

where D is the diffusion coefficient. For a non-interacting disordered metal, D(ω − ω′, Q) is commonly obtained by a direct calculation, 9 first finding self-consistently the impurity self energy, and then summing the ladder series for the vertex.
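The small-Ω step behind Eq. (1) can be spelled out explicitly (a short check, using G⁻¹_{R/A} = ω − Σ_{R/A} − ξ and Σ_R = Σ*_A; this expansion is our reading of the derivation, not a quotation from the paper):

```latex
G_R^{-1}(\omega+\Omega,\mathbf{p}) - G_A^{-1}(\omega,\mathbf{p})
  = \Omega - \bigl[\Sigma_R(\omega+\Omega,\mathbf{p}) - \Sigma_A(\omega,\mathbf{p})\bigr]
  = \Omega - 2i\,\Sigma_R''(\omega,\mathbf{p}) + O\bigl(\Omega\,\partial_\omega\Sigma_R\bigr).
```

Equating this with Ω[1 + Λ_RA(ω + Ω, ω; p, p)] and keeping the leading term as Ω = ω − ω′ → 0 reproduces Eq. (1); identifying 2Σ″_R with 1/τ then matches the Q = 0 limit of the diffusion form of D(ω − ω′, Q).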
In the presence of interactions, diagrammatic treatment becomes much more involved, while the present derivation appeals only to gauge invariance and is insensitive to turning on the interaction.

B. The identity for the retarded-retarded (RR) vertex

By contrast with the identity just derived, the identity for the retarded-retarded vertex can be found in textbooks, and I present its derivation here only for completeness. In this case, it is convenient to choose both iω and iω + iΩ in the same (say, the upper) half-plane. Upon analytic continuation and multiplication by G⁻¹(iω) and G⁻¹(iω + iΩ), the identity takes the form

G⁻¹_R(ω + Ω + i0, p) − G⁻¹_R(ω + i0, p) = Ω[1 + Λ_RR(ω + Ω, ω; p, p)],

which, to first order in Ω → 0, leads to the standard relation between the energy derivative of the retarded self energy and the retarded-retarded vertex 10 :

−∂_ωΣ_R(ω) = Λ_RR(ω, ω; p, p). (2)

III. WARD IDENTITY FOR AN S-WAVE SUPERCONDUCTOR

In the Nambu notations, the BCS Hamiltonian of an s-wave superconductor reads

H = ∫dr Ψ†[τ₃ξ(p − (e/c)Aτ₃) + τ₁∆(r) + τ₃eφ + τ₃u]Ψ.

Here Ψ† ≡ (ψ†_↑, ψ_↓) is the Nambu spinor, τ_i are the Nambu matrices, and r denotes the center of mass coordinate of a Cooper pair. The pair field ∆(r) has been chosen real for the sake of simplicity. A gauge transformation takes the form Ψ → exp[iτ₃(e/ħc)χ]Ψ and, in addition to the standard change of potentials A and φ, has to be accompanied by the pair field transformation ∆ → ∆ exp[2i(e/ħc)χ]. One then proceeds the same way as for a normal metal, with two important points to note. The first point amounts to the approximation of a spatially uniform gap which, along with the frequency, gets renormalized by disorder. The second point stems from the Nambu matrix structure of the theory: the vertices, that appear after disorder averaging of the perturbative expression for the Green function variation, are defined in the Nambu space and thus carry a Nambu index.
For instance, the disorder averaged term arising from the temporal derivative of χ = χ₀ exp[iΩτ] has the form

∫dy ⟨Ψ(x)Ψ†(y) iΩτ₃ Ψ(y)Ψ†(x′)⟩ → ∫dy dz dz′ G(x, z) iΩ[τ₃δ(z − y)δ(y − z′) + Λ₃(z, y, z′)]G(z′, x′),

where the vertex correction Λ₃(z, y, z′) appears as a result of disorder dressing of the corresponding bare vertex, the latter being simply the Nambu matrix τ₃ (an analogous correction Λ₂ dresses the τ₂ vertex). The disorder averaged Green function has the form G(p, ω) = [iω̃τ₀ − ξ(p)τ₃ − ∆̃τ₁]⁻¹, where ω̃ and ∆̃ are the renormalized frequency and the gap amplitude, which yields the Ward identity

[iω̃ − iω̃′]τ₃ − i[∆̃_ω + ∆̃_ω′]τ₂ = [iω − iω′][τ₃ + Λ₃] − 2i∆[τ₂ + Λ₂].

This, in turn, indicates a diffusion pole in the quasiparticle charge density vertex correction Λ₃,RA:

[−2iΣ″_R(ω, p)/(ω − ω′)]τ₃ = Λ₃,RA,

where Σ″_R(ω, p) is the imaginary part of the retarded self energy renormalization of the frequency; the notation is chosen to coincide with the normal metal limit.

IV. WARD IDENTITY FOR A D-WAVE SUPERCONDUCTOR

In a d-wave superconductor, the situation turns out to be quite different. The BCS Hamiltonian of a d-wave superconductor reads

H = ∫dr Ψ†[τ₃ξ(p − (e/c)Aτ₃) + τ₃eφ + τ₃u]Ψ + ∫dR dr Ψ†(R + r/2)τ₁∆(R, r)Ψ(R − r/2),

where the pair field ∆(R, r) has been chosen real and having d-wave angular dependence on the relative coordinate r, and R denotes the center of mass coordinate of a Cooper pair. As in the s-wave case, the Hamiltonian respects the gauge symmetry, and the identities can be obtained similarly, with one important difference: because of the d-wave symmetry of the gap and its oscillating angular dependence, the gap amplitude ∆_p, although suppressed by impurities, does not acquire a frequency dependent renormalization. Hence the disorder average of the quasiparticle Green function is G(iω, p) = [iω̃ − τ₁∆_p − τ₃ξ_p]⁻¹.
Another important point is that the angular dependence of the gap leads to the appearance of a vertex correction ∆_pτ₂ on the right hand side of the Ward identity, which assumes the form

−2iτ₃Σ″_R(ω, p) = [ω − ω′]Λ₃,RA + 2i ∆_pΛ₂,RA, (3)

where Λ₃,RA and Λ₂,RA denote the disorder vertex corrections to the bare Nambu vertices τ₃ and τ₂. As for an s-wave superconductor, Σ″_R(ω, p) is the imaginary part of the retarded self-energy renormalization of the frequency: ω̃ = ω − Σ. Due to the d-wave symmetry of the gap and its oscillatory angular dependence, ∆_pΛ₂,RA ∝ Λ₃,RA, which leads one to conclude that the vertex correction Λ₃,RA has to remain finite as ω − ω′ → 0. Hence, in a disordered d-wave superconductor, the quasiparticle charge is not conserved. Note that, upon transition to the normal state, the quasiparticle charge diffusion mode re-appears, as can be seen by sending ∆_p to zero in Eq. (3) and identifying Λ₃,RA with Λ_RA(ω, ω′; p, p) of Section II.

V. QUALITATIVE ARGUMENT FOR A D-WAVE SUPERCONDUCTOR

The absence of quasiparticle charge conservation in a d-wave superconductor can be understood based on a simple argument going back to the studies of charge imbalance relaxation in superconductors. 11 I reproduce the argument here for the sake of completeness. Consider the Bogolyubov quasiparticle creation operator:

γ⁺_p↑ = u_p c⁺_p↑ + v_p c_−p↓, u_p² = (1/2)[1 + ξ_p/√(ξ_p² + ∆_p²)], u_p² + v_p² = 1.

Impurity scattering is elastic, i.e. it conserves the quasiparticle energy E_p = √(ξ_p² + ∆_p²). In an s-wave superconductor with uniform gap, ∆_p is a constant and, in the absence of the Andreev scattering that turns ξ_p into −ξ_p, the energy conservation implies conservation of u_p and v_p. Hence the impurity scattering conserves the particle-hole content of a quasiparticle, and this leads to the effective charge conservation, even though a Bogolyubov quasiparticle, being a superposition of a particle and a hole, does not have a well defined charge quantum number.
The same conclusion can be reached by considering directly the expectation value of the quasiparticle charge Q_p:

Q_p = u_p²(+1) + v_p²(−1) = ξ_p/√(ξ_p² + ∆_p²).

In an isotropic s-wave superconductor, the gap does not vary around the Fermi surface and hence, in the absence of the Andreev processes, Q_p is conserved by the impurity scattering, which leads to the charge diffusion pole. By contrast, in a d-wave superconductor, the gap ∆_p is strongly anisotropic. Thus, even in the absence of the Andreev scattering processes, neither Q_p nor the moduli of the Bogolyubov factors u_p and v_p are conserved: impurity scattering changes the particle-hole content of a quasiparticle. Physically, this means that the impurity scattering induces exchange of charge between the quasiparticle subsystem and the condensate. Indeed, this quasiparticle charge non-conservation is not a consequence of the d-wave symmetry of the gap, but rather of the gap anisotropy around the Fermi surface, and is present not only in other superconductors with non-trivial symmetry, but even in s-wave superconductors with anisotropic gap. However, in the latter case, the effect is small in the measure of the relative gap anisotropy, which is itself reduced by disorder. As a result, the quasiparticle charge non-conservation appears only at time scales that are long compared with the scattering time. By contrast, in a d-wave superconductor, the gap anisotropy is large, and the quasiparticle charge changes at the time scale of order the impurity scattering time, which eliminates quasiparticle charge conservation at any time scale beyond the elastic scattering time.

VI. SUMMARY AND DISCUSSION

In this article, I revisited the Ward identities for superconductors and disordered interacting normal metals, and presented a simple derivation based solely on gauge invariance.
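The particle-hole argument above can be illustrated numerically (a toy model of our own, not code from the paper): take E_p = √(ξ_p² + ∆_p²) with ∆_p = ∆₀ for an isotropic s-wave gap and ∆_p = ∆₀ cos 2φ for a d-wave gap, scatter elastically to another Fermi-surface angle at the same energy, and track Q_p = ξ_p/E_p.

```python
import math

def charge(xi, delta):
    # Expectation value of the quasiparticle charge, Q_p = xi_p / E_p
    return xi / math.hypot(xi, delta)

def scatter_elastic(E, delta_new):
    # Elastic (energy-conserving) impurity scattering to a Fermi-surface
    # point with gap delta_new; no Andreev process, so xi keeps its sign (+).
    return math.sqrt(max(E * E - delta_new * delta_new, 0.0))

delta0, xi = 1.0, 0.5

# Isotropic s-wave gap: the gap is the same at every Fermi-surface angle,
# so elastic scattering returns the same xi and Q_p is conserved.
E_s = math.hypot(xi, delta0)
Q_s_before = charge(xi, delta0)
Q_s_after = charge(scatter_elastic(E_s, delta0), delta0)

# d-wave gap Delta_p = delta0*cos(2*phi): scattering from phi1 to phi2 at
# the same energy changes xi, hence the particle-hole content and Q_p.
phi1, phi2 = 0.1, 0.6
d1, d2 = delta0 * math.cos(2 * phi1), delta0 * math.cos(2 * phi2)
E_d = math.hypot(xi, d1)
xi2 = scatter_elastic(E_d, d2)
Q_d_before = charge(xi, d1)
Q_d_after = charge(xi2, d2)
```

The s-wave pair (Q_s_before, Q_s_after) coincides, while the d-wave pair differs strongly even though the energy E_d is exactly conserved, which is the content of the qualitative argument above.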
The identities were recast in a new form that made quasiparticle charge conservation (as in a normal metal or an isotropic s-wave superconductor) or absence thereof (as in a d-wave superconductor) explicit. Using the Ward identities, I showed how, in a d-wave superconductor, impurity scattering causes exchange of charge between the quasiparticle subsystem and the condensate, thus leading to the quasiparticle charge non-conservation. Transparency of the Ward identities is particularly appealing in comparison with microscopic approaches. The simplicity of the identities is insensitive to the strength of the impurity potential, to whether disorder has to be treated in the Born or in the unitary limit, or to the presence of interaction. By contrast, to achieve a controllable approximation even in the Born limit, microscopic calculations, e.g. for a d-wave superconductor, have to resort to rather complex methods and/or unrealistic approximations, such as expansion in the inverse number of gap nodes. 12

It is my pleasure to thank A. J. Leggett, M. R. Norman and especially Yu. M. Galperin for helpful discussions. Work supported by the U.S. Department of Energy, Division of Basic Energy Science-Material Science under contract No. W-31-109-ENG-38.

References
1. A. A. Abrikosov, L. P. Gorkov and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (Dover, 1975); P. Nozières, Theory of Interacting Fermi Systems (Addison-Wesley, 1997).
2. B. Velický, Phys. Rev. 184, 614 (1969).
3. See, e.g., J. R. Schrieffer, Theory of Superconductivity, Chapter 8 (Addison-Wesley, 1988); Y. Nambu, Phys. Rev. 117, 648 (1960).
4. D. Vollhardt and P. Wölfle, Phys. Rev. B 22, 4666 (1980).
5. F. Wegner, Z. Phys. B 35, 207 (1979); L. Schäfer and F. Wegner, ibid. B 38, 113 (1980).
6. A. J. McKane and M. Stone, Ann. Phys. 131, 36 (1981).
7. C. Castellani, C. Di Castro, G. Forgacs and E. Tabet, Nuclear Physics B225, 441 (1983).
8. T. R. Kirkpatrick and D. Belitz, preprint cond-mat/0111398 and references therein.
9. B. L. Altshuler and B. D. Simons, in Mesoscopic Quantum Physics, eds. E. Akkermans, G. Montambaux, J.-L. Pichard and J. Zinn-Justin (North Holland, Amsterdam, 1995).
10. Gerald D. Mahan, Many-Particle Physics, Ch. 7 (Plenum, 1990).
11. A lucid explanation can be found in V. V. Schmidt, The Physics of Superconductors, Chapter 7 (Springer-Verlag, 1997); for the original discussion, see M. Tinkham, Phys. Rev. B 6, 1747 (1972); M. Tinkham, Introduction to Superconductivity, Chapter 11 (McGraw-Hill, 1997).
12. A. Altland, B. D. Simons and M. R. Zirnbauer, Physics Reports 359, 283 (2002); M. J. Bhaseen, J.-S. Caux, I. I. Kogan and A. M. Tsvelik, Nuclear Physics B618, 465 (2001).
[]
[ "Magnetic entropy change of ErAl 2 magnetocaloric wires fabricated by a powder-in-tube method", "Magnetic entropy change of ErAl 2 magnetocaloric wires fabricated by a powder-in-tube method" ]
[ "Takafumi D Yamamoto [email protected] \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n", "Hiroyuki Takeya \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n", "Suguru Iwasaki \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n", "Kensei Terashima \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n", "Pedro Baptista De Castro \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n305-8577TsukubaIbarakiJapan\n", "Takenori Numazawa \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n", "Yoshihiko Takano \nNational Institute of Materials Science\n305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n305-8577TsukubaIbarakiJapan\n" ]
[ "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "University of Tsukuba\n305-8577TsukubaIbarakiJapan", "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "National Institute of Materials Science\n305-0047TsukubaIbarakiJapan", "University of Tsukuba\n305-8577TsukubaIbarakiJapan" ]
[]
We report the fabrication of ErAl 2 magnetocaloric wires by a powder-intube method (PIT) and the evaluation of magnetic entropy change through magnetization measurements. The magnetic entropy change of ErAl 2 PIT wires exhibits similar behavior to the bulk counterpart, while its magnitude is reduced by the decrease in the volume fraction of ErAl 2 due to the surrounding non-magnetic sheaths. We find that another effect reduces the magnetic entropy change of the ErAl 2 PIT wires around the Curie temperature, and discuss its possible origin in terms of a correlation between magnetic properties of ErAl 2 and mechanical properties of sheath material.
10.1088/1361-6463/ab5c71
[ "https://arxiv.org/pdf/2001.00174v1.pdf" ]
209,531,623
2001.00174
6ac3fdb46ccaa9eae16e7656cc95d8fd98847b2d
Magnetic entropy change of ErAl 2 magnetocaloric wires fabricated by a powder-in-tube method

1 Jan 2020

Takafumi D Yamamoto ([email protected]), Hiroyuki Takeya, Suguru Iwasaki, Kensei Terashima, Pedro Baptista De Castro, Takenori Numazawa and Yoshihiko Takano
National Institute of Materials Science, 305-0047 Tsukuba, Ibaraki, Japan
University of Tsukuba, 305-8577 Tsukuba, Ibaraki, Japan

Submitted to: J. Phys. D: Appl. Phys.

Keywords: magnetic refrigeration, hydrogen liquefaction, intermetallic compounds, powder-in-tube method

We report the fabrication of ErAl 2 magnetocaloric wires by a powder-in-tube method (PIT) and the evaluation of magnetic entropy change through magnetization measurements. The magnetic entropy change of ErAl 2 PIT wires exhibits similar behavior to the bulk counterpart, while its magnitude is reduced by the decrease in the volume fraction of ErAl 2 due to the surrounding non-magnetic sheaths. We find that another effect reduces the magnetic entropy change of the ErAl 2 PIT wires around the Curie temperature, and discuss its possible origin in terms of a correlation between magnetic properties of ErAl 2 and mechanical properties of sheath material.

Introduction

Magnetic refrigeration is a cooling technology based on the magnetocaloric effect in which the variation in magnetic entropy (or temperature) of a magnetic material is caused by changing a magnetic field.
A well-established applied technique is a cooling by adiabatic demagnetization to achieve ultra-low temperatures below 1 K [1,2]. Since 1997, the application to room temperature refrigerators has been enthusiastically studied because magnetic refrigeration has the potential to outperform the conventional vapor-compression refrigeration concerning energy efficiency and environmental friendliness [3,4]. Many great efforts have been made up to date on the development of working materials with a large magnetocaloric effect near room temperature [5][6][7] (e.g., Gd 5 Si 2 Ge 2 discovered by Pecharsky and Gschneidner [8]) and efficient refrigeration systems such as an active magnetic regenerator (AMR) [9][10][11]. A newly attracting potential application of magnetic refrigeration is the hydrogen liquefaction. Hydrogen is one of the cleanest energy sources to replace fossil fuels [12]. For the use in society, it is efficient and economical to transport and store hydrogen in a liquid state because liquid hydrogen is denser than gaseous hydrogen. In this context, high-efficient liquefaction technology is required. One of the authors has confirmed >50% liquefaction efficiency in a test apparatus of the Carnot magnetic refrigerator worked around the hydrogen liquefaction temperature (20.3 K) [13]. On the other hand, in the practical liquefaction process, it is necessary to pre-cool the hydrogen gas from the temperature of a heat sink, such as liquid nitrogen, to nearly 20.3 K by using a multistage AMR cycle [14][15][16]. What should be noted here is that the magnetocaloric material must be processed into a specific shape suitable for each refrigeration system. For example, spherical particles or thin plates have been employed for AMR systems to gain better heat exchange efficiency between the working material and the heat-exchanger fluid [17,18]. 
Candidate materials for hydrogen magnetic refrigeration are often found in intermetallic compounds containing heavy rare-earth elements. A representative example is the lanthanide (R) Laves phase RT 2 (T = Al, Co, and Ni) [19][20][21], which exhibits a large magnetic entropy change in the temperature range from 20 to 80 K. However, these compounds are difficult to be shaped due to their poor ductility and malleability. Moreover, these materials are quite brittle, leading to a risk of damage by the friction between them during the refrigeration cycle operation. Such mechanical properties prevent these candidate materials from being used as magnetic refrigerants. Besides, they are known to easily absorb hydrogen, resulting in the degradation of the refrigerants and their performance. A coating for protection is a typical way to solve this issue, but this takes extra effort in addition to the shaping process for producing magnetic refrigerants. Very recently, Funk et al. reported [22] a way for producing magnetocaloric wire by a PIT method in La (Fe, Co, Si) 13 , which is a promising material for the room temperature magnetic refrigeration [23,24]. The PIT method is a conventional and simple technology that has been developed in the field of superconducting wires [25,26], in which a powdered raw material is filled into a metal tube and then formed into wire-shaped by various metal workings. This approach is attractive because of many advantages in applying the PIT method to the candidate materials for hydrogen magnetic refrigeration as follows: (1) This method is available even for the difficult-to-process materials since raw materials can be powder. (2) The metal sheath surrounding the magnetic refrigerants protects them from the friction wear or the hydrogen embrittlement. (3) As Funk et al. have pointed out, the wires provide the possibility of various arrangements of magnetic refrigerants. 
Besides, it should be noted that recent works have focused on wire-shaped magnetocaloric materials because they have been suggested to show superior performance as magnetic refrigerants to conventional spherical or plate-like materials [27][28][29]. In this paper, we investigate the effects of a PIT process on the magnetocaloric properties in a well-studied compound ErAl 2 that exhibits a second-order ferromagnetic transition at T c ∼ 14 K [19,30,31]. We have confirmed that the magnetic entropy change ∆S M is similar in the ErAl 2 PIT wires and the bulk counterpart, while it decreases in magnitude for the former due to a reduction of volume fraction of ErAl 2 in the wires. We have further found that another effect causes an additional decrease of ∆S M near T c , which depends on the sheath material. This is the first report to apply the PIT method for fabricating magnetocaloric wire for the hydrogen liquefaction.

Experimental details

ErAl 2 single-core wires were fabricated by an ex-situ PIT method without any heat treatment. ErAl 2 raw powder with a diameter of less than 50 µm was prepared by a gas-atomization process. The powder was filled into several metal tubes 50 mm in length, with an outer diameter (d o ) of 6 mm and an inner diameter (d i ) of 4 or 5 mm (hereafter referred to as the 6×4 tube and the 6×5 tube, respectively). The tubes were plugged on both sides by cylinders 7 mm in length made of the same material as each tube. Thus-made initial rods were first groove-rolled stepwise into wires with a size of 2 mm. Then the wires were cut into about 70 mm lengths and further groove-rolled stepwise into those with a size of 1 mm. The resulting PIT wires were 260-300 mm in length. Cu, Al, and Brass were employed as the sheath materials because they are non-magnetic and show relatively high thermal conductivity. The cross-sectional observations for the fabricated PIT wires were carried out using a JEOL JSM-6010LA scanning electron microscope (SEM) operated at 15 kV.
The cross-sectional area was evaluated using the image analysis software ImageJ (National Institutes of Health, US). Figures 1(a) and 1(b) show SEM images of ErAl 2 /Cu PIT wires fabricated from the 6×4 and 6×5 tubes, respectively. These images indicate that the ErAl 2 powder is uniformly filled inside the Cu-sheath as a core material. The cross-section ratios of the ErAl 2 core to the whole wire were evaluated to be 0.437 from Fig. 1(a) and 0.655 from Fig. 1(b), which are comparable to the theoretical filling rate, defined as d_i²/d_o², expected for each initial tube (0.444 for the 6×4 tube and 0.694 for the 6×5 tube). This result implies that the core and the sheath material were deformed at the same proportion during the rolling process. We have found the same features in ErAl 2 /Al and ErAl 2 /Brass PIT wires. Magnetization measurements were performed by a Quantum Design magnetic property measurement system. Temperature (T) dependence of magnetization (M) of the ErAl 2 powder and the PIT wires was measured between 2 and 60 K at a temperature sweep rate of 0.5 K/min under various magnetic fields (µ₀H) ranging from 0.1 to 5 T in the zero-field cooling (ZFC) process. The magnetic fields were applied along the longitudinal direction of each PIT wire 5-7 mm in length. For the powder sample, field dependence of magnetization was collected between 0 and 5 T in the temperature range of 2 ≤ T ≤ 40 K. The magnetic entropy change is often evaluated from the isothermal magnetization (M-µ₀H) measurements by using one of Maxwell's relations

∆S_M(T, µ₀∆H) = µ₀ ∫_{H_i}^{H_f} (∂M/∂T)_H dH, (1)

where H_i and H_f are the initial and final magnetic fields, and ∆H = H_f − H_i. However, this way requires us to collect lots of magnetization curves at various temperatures for correct evaluation, which is somewhat time-consuming and makes it difficult to obtain in detail the temperature dependence of ∆S_M.
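The Maxwell-relation route of Eq. (1) can be sketched numerically (synthetic Curie-law data M = cH/T rather than the paper's measurements, with µ₀ absorbed into the field units): build ∆S_M from a grid of isofield M(T) curves by finite-differencing in T and integrating over H, then compare with the closed form available for this toy model.

```python
import numpy as np

# Synthetic isofield magnetization data on a (T, H) grid, using a Curie-law
# toy model M(T, H) = c*H/T (NOT the paper's measured data).
c = 2.0
T = np.linspace(10.0, 40.0, 301)   # temperature grid (K)
H = np.linspace(0.0, 5.0, 101)     # field grid, H_i = 0
M = c * H[None, :] / T[:, None]    # M on the grid, shape (len(T), len(H))

# Maxwell relation, Eq. (1): Delta S_M(T) = int_0^Hf (dM/dT)_H dH,
# evaluated by finite differences in T and a trapezoidal sum over H.
dM_dT = np.gradient(M, T, axis=0)
dS = ((dM_dT[:, 1:] + dM_dT[:, :-1]) * 0.5 * np.diff(H)[None, :]).sum(axis=1)

# For this toy model the closed form is Delta S_M = -c*Hf^2/(2*T^2),
# so the numerical route can be checked directly.
dS_exact = -c * H[-1] ** 2 / (2 * T ** 2)
err = np.max(np.abs(dS - dS_exact))
```

With real isofield data one would simply replace the synthetic M array by the measured M(T) curves at each field; the finite-difference and trapezoid steps are unchanged.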
We therefore first examined how to evaluate ∆S_M efficiently and accurately from isofield magnetization (M-T) measurements on the ErAl2 powder. The validity of this less common method was then verified by comparing its results with those obtained by the standard method. In the following, H_i is set to zero.

Results and discussion

Figures 3(a) and 3(b) present the temperature dependence of |∆S_M| for µ0∆H = 5 T per total volume of 1 cm³ in the various ErAl2 PIT wires fabricated from the 6×4 and 6×5 tubes; the data for the ErAl2 powder are also shown for comparison. The magnetic entropy change of the PIT wires exhibits qualitatively similar characteristics to that of the powder sample, although the magnitude is reduced by about 60-70%. This result is not surprising, because the volume fraction of ErAl2 is reduced in the PIT wires. In that sense, the data for the powder sample can be regarded as |∆S_M| of a hypothetical wire with a 100% ErAl2 core. Indeed, |∆S_M| is larger for the PIT wires fabricated from the 6×5 tube, i.e., with the larger filling rate of the core material. Furthermore, when the filling rate is the same, |∆S_M| does not depend on the sheath material at temperatures above 30 K. These facts suggest that the volume fraction of the ErAl2 core material mainly determines the magnetic entropy change of the PIT wires. On the other hand, there is a clear difference in |∆S_M| between the PIT wires around T_c, where |∆S_M| of the ErAl2/Brass wire is significantly reduced. A possible origin is discussed below.

Let us now evaluate the ratio of the magnetic entropy change of each PIT wire (|∆S_M^wire|) to that of the ErAl2 powder (|∆S_M^powder|), which should correspond to the volume fraction of the core material. Figures 4(a) and 4(b) show the temperature dependence of |∆S_M^wire/∆S_M^powder| calculated for the PIT wires made from the 6×4 and 6×5 tubes.
One finds that the ratios take constant values at temperatures above 30 K. This makes sense, because the volume fraction should not change with temperature. Accordingly, we take the mean value of the temperature-independent |∆S_M^wire/∆S_M^powder| as the actual volume fraction of ErAl2 in the PIT wires: ~0.30 for the wires made from the 6×4 tube and ~0.49 for those made from the 6×5 tube. These values are about 70-75% of the theoretical volume fraction expected from the SEM images assuming no voids. Funk et al. reported that the volume fraction of the La(Fe,Si,Co)13 core is about 85% of the theoretical value, even though pre-compacted raw materials were filled into the metal tube [22]. In contrast, the ErAl2 powder in this study was filled without any treatment, implying that there may be more voids in our PIT wires than in the La(Fe,Si,Co)13 PIT wire. The obtained values of the ErAl2 core volume fraction therefore seem reasonable. With further decreasing temperature, |∆S_M^wire/∆S_M^powder| gradually decreases and exhibits a dip near T_c, most noticeable for the ErAl2/Brass wires. This behavior suggests that, in addition to the reduced volume fraction of the core material, another contribution affects the magnetic entropy change of ErAl2 itself in the PIT wires.

We now discuss a possible origin of the extra reduction of |∆S_M| around T_c observed in the PIT wires. According to Eq. (1), a decrease in ∆S_M results from a decrease in (∂M/∂T)_H, which occurs when M decreases without a change in its temperature dependence and/or when the temperature dependence itself becomes more gradual. To clarify this point, we plot M/M_50K at 5 T as a function of temperature in Fig. 5 for the ErAl2 powder and the PIT wires made from the 6×5 tube.
The magnetizations show the same temperature dependence down to 30 K for all the samples, but the rise in M of the PIT wires is suppressed with decreasing temperature; the trend is most pronounced in the ErAl2/Brass wire. This milder temperature variation is indeed the cause of the decrease in (∂M/∂T)_H for the PIT wires. The difference in the M-T curves observed here resembles that found in ferromagnetic materials with a uniaxial magnetic anisotropy [33-35], in which (∂M/∂T)_H becomes smaller in the direction perpendicular to the easy axis of magnetization. Thus, the extra reduction of |∆S_M| around T_c implies that the PIT process induces a magnetic anisotropy in the ErAl2 core material, with an easy axis perpendicular to the longitudinal direction of the wire. It is well known that rolling induces a magnetic anisotropy in magnetic materials [36-39]. This anisotropy is known to increase with the degree of mechanical deformation, which in turn usually increases with the stress on the magnetic material during rolling. Moreover, several studies on superconducting PIT wires [40,41] have pointed out that the harder the sheath material, the stronger the stress on the core material during cold working. From these facts, the rolling-induced magnetic anisotropy is expected to be largest when the hardest tube is used in the PIT process. Indeed, since the Vickers hardness increases in the order Al, Cu, brass, this expectation is consistent with the observation that |∆S_M| around T_c is most reduced in the ErAl2/Brass PIT wires. We therefore conclude that the PIT process affects the magnetocaloric properties of the ErAl2 core material through an induced uniaxial magnetic anisotropy. The exact nature of this anisotropy, however, remains unclear at present. To gain more insight, it would be desirable to investigate the effect of annealing, which may control the plastic deformation.
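The volume-fraction estimate used in this analysis, i.e. averaging the flat part of |∆S_M^wire/∆S_M^powder| above 30 K, can be sketched as follows. The curves, T_c, dip shape, and the 0.49 fraction below are toy stand-ins for the measured data:

```python
import numpy as np

# Hypothetical |dS_M| curves sampled on a common temperature grid (K).
T = np.arange(5.0, 61.0, 1.0)
dS_powder = 1.0 / (1.0 + ((T - 12.0) / 8.0) ** 2)        # toy peak at Tc
true_fraction = 0.49                                      # e.g. a 6x5 wire
dip = 1.0 - 0.15 * np.exp(-(((T - 12.0) / 4.0) ** 2))     # extra loss near Tc
dS_wire = true_fraction * dS_powder * dip

ratio = dS_wire / dS_powder
# Above ~30 K the ratio is flat; its mean estimates the core volume fraction.
fraction = ratio[T >= 30.0].mean()
print(round(float(fraction), 2))  # -> 0.49
```

The near-T_c dip does not bias the estimate because only the temperature-independent plateau enters the mean.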
Conclusion

We have fabricated magnetocaloric wires of ErAl2 clad in non-magnetic metal sheaths by using a powder-in-tube method combined with groove rolling. These PIT wires exhibit magnetic entropy changes similar to that of the powder sample, with the magnitude reduced due to the decreased volume fraction of the ErAl2 core. We propose that the PIT process affects the magnetocaloric properties of the core material through an induced uniaxial magnetic anisotropy, causing an extra reduction of the magnetic entropy change around T_c. There is still room to improve the magnetocaloric properties of the PIT wires through annealing and through additional processes that increase the volume fraction of the core. We believe that wire-shaped magnetocaloric materials prepared by the PIT method will benefit the development of magnetic refrigerators for hydrogen liquefaction.

Figure 1. Cross-sectional SEM images of ErAl2/Cu PIT wires fabricated from (a) 6×4 and (b) 6×5 tubes (see the text). The size is about 1 mm each.

Figure 2(a) shows the M-T curves of the ErAl2 powder. One finds features typical of a second-order ferromagnetic transition with T_c = 12 K, defined as the temperature at which |∂M/∂T| at 0.1 T takes its maximum. The slight discrepancy with the T_c reported in the literature may arise because the ErAl2 powder was made by the gas-atomization process, in which the material is quenched. Similar M-T curves were observed in all the ErAl2 PIT wires (not shown). To calculate ∆S_M correctly from M-T measurements, the measuring magnetic fields must be selected properly. As shown in Fig. 2(b), |∂M/∂T| calculated from Fig. 2(a) exhibits a non-monotonic field dependence, especially around T_c: it steeply increases and reaches a maximum below 1 T, followed by a gradual decrease at higher fields.
Since ∆S_M(T, µ0∆H) at a fixed T is equivalent to the area under the ∂M/∂T-µ0H curve, this peak structure can strongly affect the evaluated value of ∆S_M. Accordingly, it is essential to collect the M-T curves finely under the magnetic fields at which the peak of |∂M/∂T| appears [32]. Figure 2(c) shows ∆S_M(T, µ0∆H = 5 T) of the ErAl2 powder evaluated using Eq. (1) from ∂M/∂T-µ0H data calculated from the M-T curves and from the M-µ0H curves (see the supplementary data), respectively. The two ∆S_M curves agree well with each other and peak at T_c. This result indicates that the magnetic entropy change can be evaluated correctly from isofield magnetization measurements. ∆S_M of the PIT wires was evaluated by the same procedure.

Figure 2. (Color online) (a) Temperature dependence of the magnetization and (b) field dependence of the temperature derivative of the magnetization for the ErAl2 powder. (c) Magnetic entropy change for µ0∆H = 5 T as a function of temperature, evaluated from M-T (squares) and M-µ0H (circles) measurements.

Figure 3. (Color online) Temperature dependence of |∆S_M| for µ0∆H = 5 T in various ErAl2 PIT wires fabricated from (a) 6×4 and (b) 6×5 tubes, along with the data for the ErAl2 powder. For the PIT wires, |∆S_M| represents the magnetic entropy change of a wire with a total volume of 1 cm³.

Figure 4. (Color online) Temperature dependence of |∆S_M^wire/∆S_M^powder| for the ErAl2 PIT wires made from (a) 6×4 and (b) 6×5 tubes (see the text). Solid lines depict the theoretical volume fraction of the core material expected from Fig. 1 assuming no voids.

Figure 5. (Color online) Temperature dependence of the magnetization at 5 T normalized by the magnetization at 50 K for the ErAl2 powder and the PIT wires made from the 6×5 tube.

Acknowledgments

This work was supported by JST-Mirai Program Grant Number JPMJMI18A3, Japan.

References

[1] Debye P 1926 Ann. Phys. 81 1154.
[2] Giauque W F 1927 J. Am. Chem. Soc. 49 1864.
[3] Zimm C, Jastrab A, Sternberg A, Pecharsky V, Gschnedner Jr K, Osborne M, and Anderson I 1998 Adv. Cryog. Eng. 43 1759.
[4] Brück E 2005 J. Phys. D: Appl. Phys. 38 R381.
[5] Gschneidner Jr K A, Pecharsky V K, and Tsokol A O 2005 Rep. Prog. Phys. 68 1479.
[6] Tishin A M 2007 J. Magn. Magn. Mater. 316 351.
[7] Franco V, Blázquez J S, Ipus J J, Law J Y, Moreno-Ramírez L M, and Conde A 2018 Prog. Mater. Sci. 93 112.
[8] Pecharsky V K and Gschneidner Jr K A 1997 Phys. Rev. Lett. 78 4494.
[9] Barclay J A and Steyert W A 1982 U.S. Patent 4 332 135.
[10] Gschneidner Jr K A and Pecharsky V K 2008 Int. J. Refrig. 31 945.
[11] Nielsen K K, Tusek J, Engelbrecht K, Schopfer S, Kitanovski A, Bahl C R H, Smith A, Pryds N, and Poredos A 2011 Int. J. Refrig. 34 603.
[12] Jones L W 1971 Science 174 367.
[13] Kamiya K, Takahashi H, Numazawa T, Nozawa H, and Yanagitani T 2007 Cryocooler 14 637.
[14] Utaki T, Kamiya K, Nakagawa T, Yamamoto T A, and Numazawa T 2007 Cryocooler 14 645.
[15] Matsumoto K, Kondo T, Yoshioka S, Kamiya K, and Numazawa T 2009 J. Phys.: Conf. Ser. 150 012028.
[16] Numazawa T, Kamiya K, Utaki T, and Matsumoto K 2014 Cryogenics 62 185.
[17] Yu B, Liu M, Egolf P W, and Kitanovski A 2010 Int. J. Refrig. 33 1029.
[18] Tušek J, Kitanovski A, and Poredoš A 2013 Int. J. Refrig. 36 1456.
[19] Hashimoto T, Matsumoto K, Kurihara T, Numazawa T, Tomokiyo A, Yayama H, Goto T, Todo S, and Sahashi M 1986 Adv. Cryog. Eng. Mater. 32 279.
[20] Tomokiyo A, Yayama H, Wakabayashi H, Kuzuhara T, Hashimoto T, Sahashi M, and Inomata K 1986 Adv. Cryog. Eng. Mater. 32 295.
[21] Zhu T, Asamoto K, Nishimura Y, Kouen T, Abe S, Matsumoto K, and Numazawa T 2011 Cryogenics 51 494.
[22] Funk A, Freundenberger J, Waske A, and Krautz M 2018 Mater. Today Energy 9 223.
[23] Hu F X, Shen B G, Sun J R, Cheng Z H, Rao G H, and Zhang X X 2001 Appl. Phys. Lett. 78 3675.
[24] Fujieda S, Fujita A, and Fukamichi K 2002 Appl. Phys. Lett. 81 1276.
[25] Kunzler J E, Buehler E, Hsu F S L, and Wernick J H 1961 Phys. Rev. Lett. 6 89.
[26] Kunzler J E 1961 Rev. Mod. Phys. 33 501.
[27] Shen H X, Xing D W, Sánchez Llamazares J L, Sánchez-Valdés C F, Belliveau H, Wang H, Qin F X, Liu Y F, Sun J F, Srikanth H, and Phan M H 2016 Appl. Phys. Lett. 108 092403.
[28] Kondo M, Ueno K, Takeuchi K, Nomura R, and Kizaki T 2017 Fujikura Tech. Rev. 47 47.
[29] Vuarnoz D and Kawanami T 2012 Appl. Therm. Eng. 37 388.
[30] Nereson N, Olisen C, and Arnold G 1968 J. Appl. Phys. 39 4605.
[31] Pecharsky V K and Gschneidner Jr K A 1999 J. Appl. Phys. 86 565.
[32] This magnetic field range corresponds to that in which the magnetization at a sufficiently low temperature increases rapidly and almost saturates in the M-µ0H plane (see the supplementary data).
[33] Zhang X X, Wei H L, Zhang Z Q, and Zhang L 2001 Phys. Rev. Lett. 87 157203.
[34] Luis F, Bartolomé F, Petroff F, Bartolomé J, García L M, Deranlot C, Jaffrés H, Martínez M J, Bencok P, Wilhelm F, Rogalev A, and Brookes N B 2006 Europhys. Lett. 76 142.
[35] Liu Y and Petrovic C 2019 Phys. Rev. Mater. 3 014001.
[36] Chikazumi S 1958 J. Appl. Phys. 29 346.
[37] Chikazumi S, Suzuki K, and Iwata H 1960 J. Phys. Soc. Jpn. 15 250.
[38] Chin G Y, Nesbitt E A, Wernick J H, and Vanskike L L 1967 J. Appl. Phys. 38 2623.
[39] Morita H, Fujimori H, and Obi Y 1979 Jpn. J. Appl. Phys. 18 683.
[40] Grasso G, Malagoli A, Ferdeghini C, Roncallo S, Braccini V, and Cimberle M R 2001 Appl. Phys. Lett. 79 230.
[41] Kumakura H, Matsumoto A, Fujii H, Kitaguchi H, and Togano K 2002 Physica C 382 93.
Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee ([email protected])
Department of ECE, ASRI, Seoul National University, 151-742, Seoul, Korea
doi: 10.1109/cvpr.2017.35
arXiv: 1612.02177 (7 Dec 2016)

Abstract: Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, since blurs are caused by camera shake and scene depth as well as multiple object motions. To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where the blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores blurred images caused by various sources in an end-to-end manner. Furthermore, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Moreover, we propose a new large-scale dataset that provides pairs of a realistic blurry image and the corresponding ground-truth sharp image obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, not only qualitatively but also quantitatively.
Introduction

Motion blur is one of the most common artifacts arising when taking photos. Camera shake and fast object motions degrade image quality, producing undesired blurry images.
Furthermore, various causes such as depth variation and occlusion at motion boundaries make blurs even more complex. The single-image deblurring problem is to estimate the unknown sharp image given a blurry image. Earlier studies focused on removing blurs caused by simple translational or rotational camera motion. More recent works try to handle general non-uniform blurs caused by depth variation, camera shake, and object motions in dynamic environments. Most of these approaches are based on the following blur model [29,10,13,11]:

B = KS + n,   (1)

where B, S, and n are the vectorized blurry image, the latent sharp image, and noise, respectively. K is a large sparse matrix, each row of which contains a local blur kernel acting on S to generate a blurry pixel. In practice, the blur kernel is unknown. Thus, blind deblurring methods try to estimate the latent sharp image S and the blur kernel K simultaneously, given only a blurry image B. Finding a blur kernel for every pixel is a severely ill-posed problem. Thus, some approaches parametrize blur models with simple assumptions on the sources of blur. In [29,10], blur is assumed to be caused by 3D camera motion only. However, in dynamic scenes the kernel estimation is more challenging, as there are multiple moving objects as well as camera motion. Thus, Kim et al. [14] proposed a dynamic scene deblurring method that jointly segments and deblurs a non-uniformly blurred image, allowing the estimation of complex (non-linear) kernels within a segment. In addition, Kim and Lee [15] approximated the blur kernel as locally linear and proposed an approach that estimates both the latent image and the local linear motions jointly. However, these blur kernel approximations are still inaccurate, especially in the case of abrupt motion discontinuities and occlusions. Note that such erroneous kernel estimation directly affects the quality of the latent image, resulting in undesired ringing artifacts.
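A minimal numerical illustration of the blur model in Eq. (1), with one local kernel per output pixel on a 1-D toy signal (all kernels and values are made up for illustration, not taken from the paper):

```python
import numpy as np

# B = K S + n with one local kernel per output pixel (1-D toy example).
n_pix, ksize = 8, 3
S = np.arange(n_pix, dtype=float)          # "sharp" signal
kernels = np.zeros((n_pix, ksize))
kernels[: n_pix // 2] = [0.0, 1.0, 0.0]    # left half: no blur (identity)
kernels[n_pix // 2 :] = [1/3, 1/3, 1/3]    # right half: box blur

# Build the sparse-like matrix K row by row (zero padding at the borders).
K = np.zeros((n_pix, n_pix))
for i, k in enumerate(kernels):
    for t, w in enumerate(k):
        j = i + t - ksize // 2
        if 0 <= j < n_pix:
            K[i, j] = w
B = K @ S                                   # noise term n omitted
print(B[:4])   # identity region: unchanged
```

Each row of K holding a different kernel is exactly what makes the blur spatially varying, and why estimating every row blindly is so ill-posed.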
Recently, CNNs (convolutional neural networks) have been applied to numerous computer vision problems, including deblurring, and have shown promising results [30,25,27,1]. Since no pairs of real blurry images and ground-truth sharp images are available for supervised learning, these methods commonly use blurry images generated by convolving synthetic blur kernels. In [30,25,1], synthesized blur images with uniform blur kernels are used for training, and in [27] a classification CNN is trained to estimate locally linear blur kernels. Thus, CNN-based models are still only suitable for a few specific types of blur and have limitations with respect to more general spatially varying blurs. Therefore, all existing methods still face many problems before they can be generalized and used in practice, mainly due to the use of simple and unrealistic blur kernel models. To solve these problems, in this work we propose a novel end-to-end learning approach for dynamic scene deblurring. First, we propose a multi-scale CNN that directly restores latent images without assuming any restricted blur kernel model. Unlike other approaches, our method does not estimate explicit blur kernels and is accordingly free from artifacts that arise from kernel estimation errors. In particular, the multi-scale architecture is designed to mimic conventional coarse-to-fine optimization methods. Second, we train the proposed model with a multi-scale loss that is appropriate for the coarse-to-fine architecture and greatly enhances convergence. In addition, we further improve the results by employing an adversarial loss [9]. Third, we propose a new realistic blurry image dataset with ground-truth sharp images. To obtain a kernel-model-free dataset for training, we employ the dataset acquisition method introduced in [17].
As the blurring process can be modeled as an integration of sharp images over the shutter time [17,21,16], we captured sequences of sharp frames of dynamic scenes with a high-speed camera and averaged them, taking gamma correction into account, to generate blurry images. By training with the proposed dataset and adding proper augmentation, our model can implicitly handle general local blur kernels. As the loss term drives the result toward the ground truth, it even restores occluded regions where the blur kernel is extremely complex, as shown in Fig. 1. We trained our model with millions of pairs of image patches and achieved significant improvements in dynamic scene deblurring. Extensive experimental results demonstrate that the performance of the proposed method is far superior to that of the state-of-the-art dynamic scene deblurring methods in both qualitative and quantitative evaluations.

Related Works

Several approaches have employed CNNs for deblurring [30,27,25,1]. Xu et al. [30] proposed an image deconvolution CNN to deblur a blurry image in a non-blind setting. They built a network based on the separable-kernel property that the (inverse) blur kernel can be decomposed into a small number of significant filters. Additionally, they incorporated a denoising network [6] to reduce visual artifacts such as noise and color saturation by concatenating that module at the end of their proposed network. On the other hand, Schuler et al. [25] proposed a blind deblurring method with CNNs. Their network mimics conventional optimization-based deblurring methods and iterates the feature extraction, kernel estimation, and latent image estimation steps in a coarse-to-fine manner. To obtain pairs of sharp and blurry images for network training, they generated uniform blur kernels using a Gaussian process and synthesized many blurry images by convolving them with sharp images collected from the ImageNet dataset [3].
However, they reported performance limits for large blurs due to their suboptimal architecture. Similarly to the work of Couzinie-Devy et al. [2], Sun et al. [27] proposed a sequential deblurring approach. First, they generated pairs of blurry and sharp patches with 73 candidate blur kernels. Next, they trained a classification CNN to measure the likelihood of a specific blur kernel for each local patch. A smoothly varying blur kernel field is then obtained by optimizing an energy model composed of the CNN likelihoods and smoothness priors. The final latent image estimation is performed with a conventional optimization method [31]. Note that all these methods require an accurate kernel estimation step for restoring the latent sharp image. In contrast, our proposed system learns to produce the latent image directly, without estimating blur kernels. In other computer vision tasks, several forms of coarse-to-fine or multi-scale architectures have been applied [7,5,4,23,8]. However, not all multi-scale CNNs are designed to produce optimal results, similarly to [25]. In depth estimation, optical flow estimation, etc., networks usually produce outputs at a smaller resolution than the input image [7,5,8]. These methods have difficulty handling long-range dependencies even when a multi-scale architecture is used. Therefore, we design a multi-scale architecture that preserves fine-grained detail information as well as long-range dependencies from coarser scales. Furthermore, we make sure that the intermediate-level networks help the final stage in an explicit way by training the network with multi-scale losses.

Kernel-Free Learning for Dynamic Scene Deblurring

Conventionally, it was essential to find the blur kernel before estimating the latent image, and CNN-based approaches are no exception [25,26]. However, kernel estimation involves several problems. First, assuming a simple kernel convolution cannot model challenging cases such as occluded regions or depth variations.
Second, the kernel estimation process is subtle and sensitive to noise and saturation unless the blur model is carefully designed; furthermore, incorrectly estimated kernels give rise to artifacts in the latent image. Third, finding a spatially varying kernel for every pixel in a dynamic scene requires a huge amount of memory and computation. Therefore, we adopt kernel-free methods for both blur dataset generation and latent image estimation. For blur image generation, rather than assuming specific motions or finding or designing complex blur kernels, we approximate the camera imaging process: we capture successive sharp frames and integrate them to simulate the blurring process. The detailed procedure is described in the following section. Note that our dataset is composed only of blurry and sharp image pairs, and that the local kernel information is implicitly embedded in it. In Fig. 2, a blurry image from our kernel-free dataset is compared with an image conventionally synthesized with a uniform blur kernel. Notably, the blurry image generated by our method exhibits realistic and spatially varying blur caused by the moving person and the static background, while the conventionally synthesized blurry image does not. For latent image estimation, we do not assume any blur source and train the model solely on our blurry and sharp image pairs. Thus, our proposed method does not suffer from kernel-related problems in deblurring.

Blur Dataset

Instead of modeling a kernel to convolve with a sharp image, we choose to record the sharp information to be integrated over time for blur image generation. As the camera sensor receives light during the exposure, the sharp image stimulation at each instant is accumulated, generating a blurry image [13]. The integrated signal is then transformed into pixel values by a nonlinear CRF (camera response function). Thus, the process can be approximated by accumulating signals from high-speed video frames. The blur accumulation process can be modeled as follows:
B = g( (1/T) ∫₀ᵀ S(t) dt ) ≃ g( (1/M) Σ_{i=0}^{M−1} S[i] ),   (2)

where T and S(t) denote the exposure time and the sensor signal of a sharp image at time t, respectively. Similarly, M and S[i] are the number of sampled frames and the i-th sharp frame captured during the exposure time. g is the CRF that maps a sharp latent image S(t) into an observed image Ŝ(t), such that Ŝ(t) = g(S(t)), or Ŝ[i] = g(S[i]). In practice, we only have the observed video frames, while the original signal and the CRF are unknown. It is known that non-uniform deblurring becomes significantly more difficult when a nonlinear CRF is involved, so the nonlinearity should be taken into account. However, there are currently no CRF estimation techniques available for an image with spatially varying blur [28]. When the ground-truth CRF is not given, a common practical choice is to approximate the CRF by a gamma curve with γ = 2.2, since this is known to be an approximate average of known CRFs [28]:

g(x) = x^{1/γ}.   (3)

Thus, by inverting the gamma curve we obtain the sharp latent frame S[i] from the observed image Ŝ[i] as S[i] = g^{−1}(Ŝ[i]), and then synthesize the corresponding blurry image B using Eq. (2). We used a GOPRO4 Hero Black camera to generate our dataset. We took 240 fps videos with the GOPRO camera and then averaged a varying number (7-13) of successive latent frames to produce blurs of different strengths. For example, averaging 15 frames simulates a photo taken at a 1/16 s shutter speed, while the corresponding sharp image shutter speed is 1/240 s. Notably, the sharp latent image corresponding to each blurry one is defined as the mid-frame among the sharp frames used to make the blurry image. Finally, our dataset is composed of 3214 pairs of blurry and sharp images at 1280×720 resolution. The proposed GOPRO dataset is publicly available on our website.

Figure 2. In this case, blur is mostly caused by the person's motion, leaving the background as it is; the blur kernel is non-uniform and complexly shaped. However, when a blurry image is synthesized by convolution with a uniform kernel, the background also gets blurred, as if the blur were caused by camera shake. To model dynamic scene blur, our kernel-free method is required.
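The dataset synthesis of Eqs. (2) and (3) (invert the approximate CRF, average M frames, re-apply the CRF) can be sketched as follows. The frames below are synthetic toy data, not GOPRO footage; γ = 2.2 as in the text:

```python
import numpy as np

GAMMA = 2.2
g     = lambda x: x ** (1.0 / GAMMA)   # approximate CRF, Eq. (3)
g_inv = lambda x: x ** GAMMA           # its inverse

def synthesize_blur(observed_frames):
    """Average gamma-linearized frames and re-apply the CRF, Eq. (2)."""
    linear = np.stack([g_inv(f) for f in observed_frames])
    return g(linear.mean(axis=0))

# Toy example: M = 7 'observed' frames of a bright dot moving horizontally.
frames = []
for i in range(7):
    f = np.zeros((5, 9))
    f[2, i + 1] = 1.0
    frames.append(f)
blurry = synthesize_blur(frames)
sharp = frames[3]                # mid-frame is the ground-truth sharp image
print(blurry[2, 1:8])            # dot smeared into a uniform streak
```

Averaging in the gamma-linearized domain, rather than on the observed pixel values directly, is what makes the synthesized blur consistent with the CRF model.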
However, when blurry image is synthesized by convolution with uniform kernel, background also gets blurred as if blur was caused by camera shake. To model dynamic scene blur, our kernel-free method is required. Proposed Method In our model, finer scale image deblurring is aided by coarser scale features. To exploit coarse and middle level information while preserving fine level information at the same time, input and output to our network take form of Gaussian pyramids. Note that most other coarse-to-fine networks takes single input and single output. Model Architecture In addition to the multi-scale architecture, we employ slightly modified version of residual network structure [12] as a building block of our model. Using residual network structure enables deeper architecture compared to a plain CNN. Also, as blurry and sharp image pairs are similar in values, it is efficient to let parameters learn the difference only. We found that removing the rectified linear unit after the shortcut connection of the original residual building block boosts the convergence speed at training time. We denote the modified building block as ResBlock. The original and our modified building block is compared in Fig. 3. By stacking enough number of convolution layers with ResBlocks, the receptive field at each scale is expanded. Details are described in the following paragraphs. For sake of consistency, we define scale levels in the order of decreasing resolution (i.e. level 1 for finest scale). Unless denoted otherwise, we use total K = 3 scales. At training time, we set the resolution of the input and output Gaussian pyramid patches to be {256 × 256, 128 × 128, 64 × 64}. The scale ratio between consecutive scales is 0.5. For all convolution layers, we set the filter size to be 5 × 5. As our model is fully convolutional, at test time, the patch size may vary as the GPU memory allows. The overall architecture is shown in Fig. 4. Modified building block of our network. 
We did not use batch normalization layers since we trained the model with a mini-batch size of 4, which is smaller than usual for batch normalization. We empirically found that removing the rectified linear unit just before the block output is beneficial in terms of performance.

Coarsest level network

The coarsest level network is located at the front of the network. The first convolution layer transforms the 1/4-resolution, 64 × 64 size input into 64 feature maps. Then, 19 ResBlocks are stacked, followed by a last convolution layer that transforms the feature map back into the input dimension. Every convolution layer preserves dimensions with zero padding. In total, there are 40 convolution layers. The number of convolution layers at each scale level is determined so that the total model has 120 convolution layers. Thus, the coarsest-scale network has a receptive field large enough to 'see' the whole patch. At the end of this stage, a coarsest level latent sharp image is generated. Moreover, information from the coarsest level output is delivered to the next stage, where the finer scale network is. To convert a coarsest output to fit the input size of the next finer scale, the output patch passes through an upconvolution [22] layer, while other multi-scale methods use reshaping [7] or upsampling [4,5,23]. Since the sharp and blurry patches share low-frequency information, learning a suitable feature with upconvolution helps to remove redundancy. In our experiments, using upconvolution showed better performance than upsampling. Then, the upconvolution feature is concatenated with the finer-scale blurry patch as an input.

Finer level network

Finer level networks basically have the same structure as the coarsest-scale network. However, the first convolution layer takes the sharp feature from the previous stage as well as its own blurry input image, in a concatenated form. Every convolution filter size is 5 × 5, with the same number of feature maps as in the coarsest scale.
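As a sanity check on the claim that 40 stacked 5 × 5 convolutions can 'see' a whole 64 × 64 patch: for a stack of stride-1 convolutions, each layer with kernel size k grows the receptive field by k − 1 pixels. A small helper (name ours) makes the arithmetic explicit:

```python
def receptive_field(num_layers, kernel_size=5):
    """Receptive field of a stack of stride-1 convolutions:
    starts at 1 pixel and grows by (kernel_size - 1) per layer."""
    return 1 + num_layers * (kernel_size - 1)
```

`receptive_field(40)` gives 161 pixels, comfortably larger than the 64-pixel coarsest-scale patch, which is what lets the coarsest network cover the whole patch.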
Except at the finest scale, there is an upconvolution layer before the next stage. At the finest scale, the original-resolution sharp image is restored.

Training

Our model is trained on the proposed GOPRO dataset. Among the 3214 pairs, 2103 pairs were used for training and the remaining ones were used for testing. To prevent our network from overfitting, several data augmentation techniques are involved. In terms of geometric transformations, patches are flipped horizontally/vertically and rotated at random degrees. For color, RGB channels are randomly permuted. To take image degradations into account, saturation in HSV colorspace is multiplied by a random number within [0.5, 1.5]. Also, Gaussian random noise is added to the blurry images. To make our network robust against different strengths of noise, the standard deviation of the noise is also randomly sampled from a Gaussian distribution, N(0, (2/255)^2). Finally, augmented image values are clipped to the range [0, 1]. In optimizing the network parameters, we trained the model with a combination of two losses: a multi-scale content loss and an adversarial loss.

Multi-scale content loss

Basically, the coarse-to-fine approach desires that every mid-level output be the sharp image of the corresponding scale. Thus, we train our network so that every intermediate latent image forms a Gaussian pyramid of the sharp image. The MSE criterion is applied to every level of the pyramids. Hence, the loss function is defined as follows:

L_{cont} = \frac{1}{2K} \sum_{k=1}^{K} \frac{1}{c_k w_k h_k} \left\| L_k - S_k \right\|^2, (4)

where L_k and S_k denote the model output and ground truth image at scale level k, respectively. The loss at each scale is normalized by the number of channels c_k, width w_k, and height h_k (i.e., the total number of elements).

Adversarial loss

Recently, adversarial networks have been reported to generate sharp realistic images [9,4,24]. Following the architecture introduced in [24], we build the discriminator as in Table 1.
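The multi-scale content loss of Eq. (4) above can be sketched in NumPy as follows; this is an illustrative re-implementation, not the authors' training code.

```python
import numpy as np

def multiscale_content_loss(outputs, targets):
    """Eq. (4): per-level MSE normalized by c_k * w_k * h_k (the element
    count), summed over the K pyramid levels with the factor 1/(2K)."""
    K = len(outputs)
    total = 0.0
    for L_k, S_k in zip(outputs, targets):
        # 1/(c_k w_k h_k) * ||L_k - S_k||^2 for this pyramid level
        total += np.sum((L_k - S_k) ** 2) / L_k.size
    return total / (2 * K)
```

Because each level is normalized by its own element count, the coarse 64 × 64 level contributes on the same footing as the fine 256 × 256 level rather than being drowned out.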
The discriminator takes the output of the finest scale or the ground truth sharp image as input and classifies whether it is a network output or not. The adversarial loss is defined as follows:

L_{adv} = \mathbb{E}_{S \sim p_{sharp}(S)}[\log D(S)] + \mathbb{E}_{B \sim p_{blurry}(B)}[\log(1 - D(G(B)))], (5)

where G and D denote the generator, which is our multi-scale deblurring network in Fig. 4, and the discriminator (classifier), respectively. Finally, by combining the multi-scale content loss and the adversarial loss, the generator network and the discriminator network are jointly trained. Thus, our final loss term is

L_{total} = L_{cont} + \lambda \times L_{adv}, (6)

where the weight constant λ = 1 × 10^{-4}. We used the ADAM [18] optimizer with a mini-batch size of 4 for training. The learning rate is adaptively tuned, beginning from 5 × 10^{-5}. After 1.5 × 10^5 iterations, the learning rate is decreased to 1/10 of the previous learning rate. Total training takes 4.5 × 10^5 iterations to converge.

Experimental Results

GOPRO Dataset

We evaluate the performance of our model on the proposed GOPRO dataset. Our test dataset consists of 1111 pairs, which is approximately 1/3 of the total dataset. We compare the results with those of the state-of-the-art methods [16,27] in both qualitative and quantitative ways. Our results show significant improvement in terms of image quality. Some deblurring results are shown in Fig. 5. We notice from the results of Sun et al. [27] that deblurring is not successful in regions where blurs are nonlinearly shaped or located at the boundary of motion. Kim and Lee [16]'s results also fail in cases where strong edges are not found. In contrast, our results are free from those kernel-estimation related problems. Table 2 shows the quantitative evaluation results of the competing methods and ours with different scale levels K in terms of PSNR and SSIM over the test data.

Köhler Dataset

The Köhler dataset [19] consists of 4 latent images and 12 differently blurred images for each of them. The blurs are caused by replaying recorded 6D camera motion. We report the quantitative results on this dataset in Table 3.

Dataset of Lai et al.

Lai et al.
[20] generated a synthetic dataset by convolving non-uniform blur kernels and imposing several common degradations. They also recorded 6D camera trajectories to generate the blur kernels. However, their blurry images and sharp images are not aligned in the way of our dataset, making simple image quality measures such as PSNR and SSIM less correlated with perceptual quality. Thus, we show qualitative comparisons in Fig. 6. Clearly, our results avoid ringing artifacts while preserving details such as wave ripples.

Figure 5. Test results on the GOPRO dataset. From top to bottom: blurry images, results of Sun et al. [27], results of Kim and Lee [15], and results of the proposed method.

Conclusion

In this paper, we proposed blind deblurring neural networks for sharp image estimation. Contrary to existing works, our model avoids kernel estimation related problems. The proposed model follows a coarse-to-fine approach and is also trained in multi-scale spaces. Furthermore, we generated a realistic blur dataset with ground truth, enabling efficient supervised learning and rigorous evaluation. Experimental results show that our approach outperforms the state-of-the-art methods in both qualitative and quantitative ways.

Figure 1. (a) Input blurry image. (b) Result of Sun et al. [27]. (c) Our deblurring result. Our results show clear object boundaries without artifacts.

Figure 2. (a) Ground truth sharp image. (b) Blurry image generated by convolving a uniform blur kernel. (c) Blurry image by averaging sharp frames.

Figure 3. (a) Original residual network building block. (b) Modified building block of our network.

Figure 4. Multi-scale network architecture. B_k, L_k, S_k denote blurry, latent, and ground truth sharp images, respectively. Subscript k denotes the k-th scale level in the Gaussian pyramid, which is downsampled to 1/2^k scale. Our model takes a blurry image pyramid as the input, and outputs an estimated latent image pyramid. Every intermediate scale output is trained to be sharp.
At test time, the original-scale image is chosen as the final result.

Figure 6. Deblurring results on the dataset of [20]. Top rows: results of Sun et al. [27]; bottom rows: our results.

Table 1. Model parameters of the discriminator. Every convolution layer is activated with a LeakyReLU layer.

#   Layer    Weight dimension      Stride
1   conv     32 × 3 × 5 × 5        2
2   conv     64 × 32 × 5 × 5       1
3   conv     64 × 64 × 5 × 5       2
4   conv     128 × 64 × 5 × 5      1
5   conv     128 × 128 × 5 × 5     2
6   conv     256 × 128 × 5 × 5     1
7   conv     256 × 256 × 5 × 5     4
8   conv     512 × 256 × 5 × 5     1
9   conv     512 × 512 × 5 × 5     4
10  conv     1024 × 512 × 5 × 5    2
11  fc       1024 × 1024           -
12  sigmoid  -                     -

Table 2. Quantitative deblurring performance comparison on the GOPRO dataset. K denotes the scale level.

Measure  [27]    [15]    Ours (K=1)  Ours (K=2)  Ours (K=3)
PSNR     24.68   23.70   28.24       28.41       28.45
SSIM     0.8557  0.8295  0.9062      0.9096      0.9170

Table 3. Quantitative comparison on the Köhler dataset. The dataset has its own evaluation code, thus we report multi-scale SSIM (MSSIM) instead of SSIM.

Measure  [27]    [15]    Ours (K=1)  Ours (K=2)  Ours (K=3)
PSNR     25.22   24.68   25.74       26.02       26.48
MSSIM    0.7735  0.7937  0.8042      0.8116      0.8079

1 https://github.com/SeungjunNah/DeepDeblur_release

A. Appendix

In this appendix, we present more comparative experimental results to demonstrate the effectiveness of our proposed deblurring method.

A.1. Comparison of loss function

In Section 3.2, we employed a loss function that combines both the multi-scale content loss (MSE) and the adversarial loss for training our network. We examine the effect of the adversarial loss term quantitatively and qualitatively. The PSNR and SSIM results are shown in Table A.1. From these results, we observe that adding the adversarial loss does not increase PSNR, but increases SSIM, which means that it encourages generating more natural and structure-preserving images.

References

[1] A. Chakrabarti. A neural approach to blind motion deblurring. In ECCV, 2016.
[2] F. Couzinie-Devy, J. Sun, K. Alahari, and J. Ponce. Learning to estimate and remove non-uniform image blur. In CVPR, 2013.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009.
[4] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486-1494, 2015.
[5] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, pages 2650-2658, 2015.
[6] D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. In ICCV, pages 633-640, 2013.
[7] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems, pages 2366-2374, 2014.
[8] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. FlowNet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852, 2015.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[10] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using motion density functions. In ECCV, pages 171-184. Springer, 2010.
[11] S. Harmeling, H. Michael, and B. Schölkopf. Space-variant single-image blind deconvolution for removing camera shake. In Advances in Neural Information Processing Systems, pages 829-837, 2010.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[13] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf. Fast removal of non-uniform camera shake. In ICCV, 2011.
[14] T. H. Kim, B. Ahn, and K. M. Lee. Dynamic scene deblurring. In ICCV, 2013.
[15] T. H. Kim and K. M. Lee. Segmentation-free dynamic scene deblurring. In CVPR, 2014.
[16] T. H. Kim and K. M. Lee. Generalized video deblurring for dynamic scenes. In CVPR, 2015.
[17] T. H. Kim, S. Nah, and K. M. Lee. Dynamic scene deblurring using a locally adaptive linear blur model. arXiv preprint arXiv:1603.04265, 2016.
[18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In ECCV, pages 27-40. Springer, 2012.
[20] W.-S. Lai, J.-B. Huang, Z. Hu, N. Ahuja, and M.-H. Yang. A comparative study for single image blind deblurring. In CVPR, 2016.
[21] Y. Li, S. B. Kang, N. Joshi, S. M. Seitz, and D. P. Huttenlocher. Generating sharp panoramas from motion-blurred videos. In CVPR, 2010.
[22] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431-3440, 2015.
[23] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
[24] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[25] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf. Learning to deblur. arXiv preprint arXiv:1406.7444, 2014.
[26] D. Sun, S. Roth, and M. J. Black. A quantitative analysis of current practices in optical flow estimation and the principles behind them. IJCV, 106(2):115-137, 2014.
[27] J. Sun, W. Cao, Z. Xu, and J. Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In CVPR, pages 769-777. IEEE, 2015.
[28] Y.-W. Tai, X. Chen, S. Kim, S. J. Kim, F. Li, J. Yang, J. Yu, Y. Matsushita, and M. S. Brown. Nonlinear camera response functions and image deblurring: Theoretical analysis and practice. PAMI, 35(10):2498-2512, 2013.
[29] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. 2010.
[30] L. Xu, J. S. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems, pages 1790-1798, 2014.
[31] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, pages 479-486. IEEE, 2011.
[ "https://github.com/SeungjunNah/DeepDeblur_release" ]
[ "EMP-SSL: TOWARDS SELF-SUPERVISED LEARNING IN ONE TRAINING EPOCH UNDER REVIEW" ]
[ "Shengbang Tong \nUniversity of California\nBerkeley\n", "Yubei Chen \nCenter for Data Science\nNew York University\n\n", "Yi Ma \nUniversity of California\nBerkeley\n\nTsinghua-Berkeley Shenzhen Institute (TBSI)\n\n", "Yann Lecun \nCenter for Data Science\nNew York University\n\n\nCourant Inst\nNew York University\n\n" ]
[ "University of California\nBerkeley", "Center for Data Science\nNew York University\n", "University of California\nBerkeley", "Tsinghua-Berkeley Shenzhen Institute (TBSI)\n", "Center for Data Science\nNew York University\n", "Courant Inst\nNew York University\n" ]
[]
Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representation. Despite the empirical success, most self-supervised learning methods are rather "inefficient" learners, typically taking hundreds of training epochs to fully converge. In this work, we show that the key towards efficient self-supervised learning is to increase the number of crops from each image instance. Leveraging one of the state-of-the-art SSL method, we introduce a simplistic form of self-supervised learning method called Extreme-Multi-Patch Self-Supervised-Learning (EMP-SSL) that does not rely on many heuristic techniques for SSL such as weight sharing between the branches, feature-wise normalization, output quantization, and stop gradient, etc, and reduces the training epochs by two orders of magnitude. We show that the proposed method is able to converge to 85.1% on CIFAR-10, 58.5% on CIFAR-100, 38.1% on Tiny ImageNet and 58.5% on ImageNet-100 in just one epoch. Furthermore, the proposed method achieves 91.5% on CIFAR-10, 70.1% on CIFAR-100, 51.5% on Tiny ImageNet and 78.9% on ImageNet-100 with linear probing in less than ten training epochs. In addition, we show that EMP-SSL shows significantly better transferability to out-of-domain datasets compared to baseline SSL methods. We will release the code in https://github.com/tsb0601/EMP-SSL. * Equal contribution 2 For example, all representations collapse to the same point.
10.48550/arxiv.2304.03977
[ "https://export.arxiv.org/pdf/2304.03977v1.pdf" ]
258,049,291
2304.03977
5367ca1c122a0806549e484fb488a977b4334777
EMP-SSL: TOWARDS SELF-SUPERVISED LEARNING IN ONE TRAINING EPOCH (UNDER REVIEW)

Shengbang Tong (University of California, Berkeley), Yubei Chen (Center for Data Science, New York University), Yi Ma (University of California, Berkeley; Tsinghua-Berkeley Shenzhen Institute (TBSI)), Yann LeCun (Center for Data Science, New York University; Courant Institute, New York University)

Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representation. Despite the empirical success, most self-supervised learning methods are rather "inefficient" learners, typically taking hundreds of training epochs to fully converge. In this work, we show that the key towards efficient self-supervised learning is to increase the number of crops from each image instance. Leveraging one of the state-of-the-art SSL methods, we introduce a simplistic form of self-supervised learning method called Extreme-Multi-Patch Self-Supervised-Learning (EMP-SSL) that does not rely on many heuristic techniques for SSL such as weight sharing between the branches, feature-wise normalization, output quantization, and stop gradient, etc., and reduces the training epochs by two orders of magnitude. We show that the proposed method is able to converge to 85.1% on CIFAR-10, 58.5% on CIFAR-100, 38.1% on Tiny ImageNet and 58.5% on ImageNet-100 in just one epoch. Furthermore, the proposed method achieves 91.5% on CIFAR-10, 70.1% on CIFAR-100, 51.5% on Tiny ImageNet and 78.9% on ImageNet-100 with linear probing in less than ten training epochs. In addition, we show that EMP-SSL shows significantly better transferability to out-of-domain datasets compared to baseline SSL methods. We will release the code in https://github.com/tsb0601/EMP-SSL.

* Equal contribution
2 For example, all representations collapse to the same point.
Introduction

In the past few years, tremendous progress has been made in unsupervised and self-supervised learning (SSL) [32]. The classification performance of representations learned via SSL has caught up with that of supervised learning, or even surpassed the latter in some cases [23,9]. This trend has opened up the possibility of large-scale data-driven unsupervised learning for vision tasks, similar to what has taken place in the field of natural language processing [6,17]. A major branch of SSL methods is joint-embedding SSL methods [26,9,54,3], which try to learn a representation invariant to augmentations of the same image instance. These methods have two goals: (1) representations of two different augmentations of the same image should be close; (2) the representation space shall not be a collapsed trivial one 2, i.e., the important geometric or stochastic structure of the data must be preserved. Many recent works [9,23,54,3] have explored various strategies and different heuristics to attain these two properties, resulting in increasingly better performance. Despite the good final performance of self-supervised learning, most of the SOTA SSL methods happen to be rather "inefficient" learners. Figure 1 plots the convergence behaviors of representative SOTA SSL methods. We observe that on CIFAR-10 [29], most methods require at least 400 epochs to reach 90%, whereas supervised learning typically can reach 90% on CIFAR-10 within less than ten training epochs. The convergence efficiency gap is surprisingly large. While the success of SSL has been demonstrated on a number of benchmarks, the principle or reason behind the success of this line of methods remains largely unknown. Recently, the work [11] has revealed that the success of SOTA joint-embedding SSL methods can be explained by learning distributed representations of image patches, and this discovery echoes the discovery of BagNet [4] in the supervised learning regime.
Specifically, the work [11] shows that joint-embedding SSL methods rely on successfully learning the co-occurrence statistics of small image patches, and that linearly aggregating the patch representations as the image representation leads to on-par or even better representations than the baseline methods.

Figure 1: The convergence plots of many SOTA SSL methods on CIFAR-10 in 800 epochs (80000 iterations). The accuracy of the methods is measured by k-nearest-neighbor (KNN). The plots are adopted from [20]. We observe from the plots that nearly all SOTA SSL methods take at least 500 epochs to converge to 90%.

Similarly, another work based on the sparse manifold transform (SMT) of small image patches [13] has shown that a simple white-box method can converge to close to SOTA performance in only one epoch. Given these observations, one natural question arises: Can we make self-supervised learning converge faster, even in one training epoch? In this work, we answer this question by leveraging the observation in [11] and by pushing the number of crops in joint-embedding SSL methods to an extreme. We offer a novel method called Extreme-Multi-Patch Self-Supervised Learning (EMP-SSL). With a simplistic formulation of joint-embedding self-supervised learning, we demonstrate that the SSL training epochs can be reduced by about two orders of magnitude. In particular, we show that EMP-SSL can achieve 85.1% on CIFAR-10, 58.5% on CIFAR-100, 38.1% on Tiny ImageNet and 58.5% on ImageNet-100 in just one training epoch. Moreover, with linear probing and a standard ResNet-18 backbone [27], EMP-SSL achieves 91.5% accuracy on CIFAR-10, 70.1% on CIFAR-100, 51.5% on Tiny ImageNet, and 78.9% on ImageNet-100 in less than ten training epochs. Remarkably, EMP-SSL achieves benchmark performance similar to that of SOTA methods, with more than two orders of magnitude fewer training epochs.

2 The Extreme-Multi-Patch SSL Formulation

Figure 2: The pipeline of the proposed method.
During training, an image is randomly cropped into n fixed-size, overlapping image patches. We then apply augmentations including color jitter, greyscale, horizontal flip, Gaussian blur, and solarization [3] to the n fixed-size patches. Like other SSL methods [9,3,54], the image patches are then passed into the encoder F to get the representations z.

The Overall Pipeline. Like other methods for SSL [9,11,3,54], EMP-SSL operates on a joint embedding of augmented views of images. Inspired by the observation in [11], the augmented views in EMP-SSL are fixed-size image patches with augmentation. As discussed in the previous sections, the purpose of joint-embedding self-supervised learning is to enforce different image patches from the same image to be close while avoiding collapsed representations. The success of these methods comes from learning patch co-occurrence [11]. In order to learn the patch co-occurrence more efficiently, we increase the number of patches in self-supervised learning to an extreme. For a given image x, we break x into n fixed-size image patches via random crops with overlapping and apply standard augmentation, identical to VICReg [3], to the cropped patches to get image patches x_1, ..., x_n. We denote x_i as the i-th augmented image patch from x. For an augmented image patch x_i, we get the embedding h_i and projection z_i, where h_i = f(x_i; θ) and z_i = g(h_i). Finally, we normalize the learned projection z_i. The parametric function f(·; θ) is a deep neural network (ResNet-18, for example) with parameters θ, and g is a much simpler neural network with only two fully connected layers. We define our encoder F as F = g(f(·; θ)). The pipeline is illustrated in Figure 2. During training, for a batch of b images, which we denote as X = [x^1, ..., x^b], where x^j is the j-th image in the batch, we first augment the images as described above to get X_1, ..., X_n, where X_i = [x^1_i, ..., x^b_i].
Then, we pass the augmented image patches into the encoder to get the features Z_i = F(X_i) and concatenate them into Z = [Z_1, ..., Z_n]. In this work, we adopt the Total Coding Rate (TCR) [36,35,53,15], which is a covariance regularization technique, to avoid collapsed representations:

R(Z) = \frac{1}{2}\log\det\left(I + \frac{d}{b\epsilon^{2}} Z Z^{\top}\right), (1)

where b is the batch size, \epsilon > 0 is a chosen size of distortion, and d is the dimension of the projection vectors. It can be seen as a soft-constrained regularization of the covariance term in VICReg [3], where the covariance regularization is achieved by maximizing the Total Coding Rate (TCR). We would also want the representations of different image patches from the same image to be invariant, that is, different image patches from the same image should be close in the representation space. In doing so, we minimize the distance between the representations of the augmented image patches and the mean representation of the augmented patches from the same image. Overall, the training objective is:

\max \; \frac{1}{n}\sum_{i=1}^{n}\Big(R(Z_i) + \lambda D(Z_i, \bar{Z})\Big), (2)

where λ is the weight of the invariance loss and \bar{Z} = \frac{1}{n}\sum_{i=1}^{n} Z_i is the mean of the representations of the different augmented patches. In this work, we choose cosine similarity to implement the distance function D, where D(Z_1, Z_2) = \mathrm{Tr}(Z_1^{\top} Z_2). Hence, the larger the value of D, the more similar Z_i is to \bar{Z}. The pseudocode for EMP-SSL is shown as Algorithm 1. The objective (2) can be seen as a variant of the maximal rate reduction objective [53], or a generalized version of many covariance-based SSL methods such as VICReg [3], I^2-VICReg [11], TCR [35] and Barlow Twins [54], in which n is set to 2 for the common 2-view self-supervised learning methods. In this work, we choose n to be much larger in order to learn the co-occurrence between patches much faster. Details can be found in Section 3.
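Eqs. (1)-(2) can be sketched as follows. This is an illustrative NumPy version (names ours, not the authors' released implementation), with each Z of shape (d, b) holding unit-normalized projections as columns and the maximization objective negated into a loss.

```python
import numpy as np

def total_coding_rate(Z, eps_sq=0.2):
    """Eq. (1): R(Z) = 1/2 * logdet(I + d / (b * eps^2) * Z Z^T)."""
    d, b = Z.shape
    M = np.eye(d) + (d / (b * eps_sq)) * (Z @ Z.T)
    return 0.5 * np.linalg.slogdet(M)[1]  # [1] is log|det(M)|

def emp_ssl_loss(Z_list, lam=200.0, eps_sq=0.2):
    """Negated Eq. (2): average over the n patch views of
    R(Z_i) + lambda * Tr(Z_i^T Z_bar), where Z_bar is the mean view."""
    Z_bar = sum(Z_list) / len(Z_list)
    total = 0.0
    for Z in Z_list:
        invariance = np.trace(Z.T @ Z_bar)  # cosine similarity for unit columns
        total += total_coding_rate(Z, eps_sq) + lam * invariance
    return -total / len(Z_list)  # minimize the negative of the max objective
```

The TCR term rewards spread-out (high-rank) features: a batch collapsed onto a single direction scores much lower than a well-spread one, which is exactly the collapse-avoidance role described above.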
Bag-of-Feature Model. Similar to [11,35], we define the representation of a given image x to be the average of the embeddings h_1, ..., h_n of all its image patches. It is argued in [11,1] that the embedding h_i contains more equivariance and locality, which leads to better performance, whereas the projection z_i is more invariant. An experimental justification can be found in [1,11], while a rigorous justification remains an open problem.

Architecture. In this work, we adopt a simplistic form of the network architectures used in self-supervised learning. Specifically, EMP-SSL does not require prediction networks, momentum encoders, non-differentiable operators, or stop gradients. While these components have been shown to be effective in some self-supervised learning approaches, we leave their exploration to future work. Our focus in this work is to demonstrate the effectiveness of a simplistic yet powerful approach to self-supervised learning.

Empirical Results

In this section, we first verify the efficiency of the proposed objective in terms of convergence speed on standard datasets: CIFAR-10 [29], CIFAR-100 [29], Tiny ImageNet [31] and ImageNet-100 [16]. We then use t-SNE maps to show that, despite only a few epochs of training, EMP-SSL already learns meaningful representations. Next, we provide an ablation study on the number of patches n in the objective (2) to justify the significance of patches in the convergence of our method. Finally, we present some empirical observations that the proposed method enjoys much better transferability to out-of-distribution datasets compared with other SOTA SSL methods.

Experiment Settings and Datasets. We provide empirical results on the standard CIFAR-10 [29], CIFAR-100 [29], Tiny ImageNet [31] and ImageNet-100 [16] datasets, which contain 10, 100, 200 and 100 classes, respectively. Both CIFAR-10 and CIFAR-100 contain 50000 training images and 10000 test images, of size 32 × 32 × 3.
Tiny ImageNet contains 200 classes, 100000 training images, and 10000 test images of size 64 × 64 × 3. ImageNet-100 is a common subset of ImageNet with 100 classes 3 , containing around 126600 training images and 5000 test images of size 224 × 224. For all the experiments, we use a ResNet-18 [27] as the backbone and train for at most 30 epochs. We use a batch size of 100, the LARS optimizer [51] with η set to 0.005, and a weight decay of 1e-4. The learning rate is set to 0.3 and follows a cosine decay schedule with a final value of 0. In the TCR loss, λ is set to 200.0 and ε² is set to 0.2. The projector network consists of 2 linear layers with 4096 hidden units and 512 output units, respectively. The data augmentations used are identical to those of VICReg [3]. For the number of image patches, we set n to 200 unless specified otherwise. For both CIFAR-10 and CIFAR-100, we use fixed-size 16 × 16 image patches upsampled to 32 × 32. For Tiny ImageNet, we use a fixed patch size of 32 × 32 upsampled to 64 × 64 for the convenience of using ResNet-18. For ImageNet-100, we use a fixed patch size of 112 × 112 upsampled to 224 × 224. We train an additional linear classifier to evaluate the performance of the learned representation. The classifier is trained for 100 epochs, optimized by the SGD optimizer [41] with a learning rate of 0.03.

A Note on Reproducing Results of SOTA Methods. We have selected five representative SOTA SSL methods [9,23,3,33,7] as baselines. To reproduce other methods, we use solo-learn [14], which is one of the best SSL libraries on GitHub. For CIFAR-10 and CIFAR-100, we run each method 3 times for 1000 epochs with the optimal parameters provided. For Tiny ImageNet, we notice that solo-learn [14] does not contain code to reproduce results on Tiny ImageNet, and nearly all SOTA methods do not have official GitHub code for Tiny ImageNet.
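For reference, the cosine decay schedule mentioned above (base learning rate 0.3, final value 0) has the usual closed form. This is a generic sketch of that schedule, not the authors' training script:

```python
import math

def cosine_lr(step, total_steps, base_lr=0.3, final_lr=0.0):
    """Cosine-annealed learning rate: base_lr at step 0, final_lr at total_steps."""
    progress = min(max(step / total_steps, 0.0), 1.0)
    return final_lr + 0.5 * (base_lr - final_lr) * (1.0 + math.cos(math.pi * progress))
```

At the midpoint of training the rate is exactly halfway between `base_lr` and `final_lr`.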
So, for a fair comparison, we adopt results from other peer-reviewed works [19,55], in which SOTA methods are trained for 1000 epochs on ResNet-18. For ImageNet-100, we adopt results from solo-learn [14]. All baseline methods run for 400 epochs, which is commonly used for these SSL methods. Because our models are trained only on fixed-size image patches, we use the bag-of-feature representation described in Section 2. Following [11], we choose 128 as the number of patches in the bag-of-feature. The other reproduced models follow the routine in [9,26,3] and are evaluated on the whole image. We acknowledge that this may give a slight advantage to EMP-SSL. But as shown in Tables 1, 2, and 3 in [11], the difference between bag-of-feature and whole-image evaluation in [9,26,3] is at most 1.5%. We consider it negligible, since this is a work about the data efficiency of SSL methods, not about advancing SOTA performance.

Self-Supervised Learning in One Epoch. In this subsection, we conduct an experiment for a single epoch, setting the learning rate decay schedule and weight decay to one epoch while keeping all other experiment settings the same as in Section 3. Table 1 shows the results of our method, as well as some representative state-of-the-art (SOTA) SSL methods. From the table, we observe that, even having seen the dataset only once, the method converges to a decent result close to the fully converged SOTA performance. This demonstrates great potential not only for improving the convergence of current SSL methods, but also for other fields of computer vision where the data can only be seen once, such as online learning, incremental learning, and robot learning.

Table 1: Performance of EMP-SSL with 1 epoch vs. standard self-supervised SOTA methods at convergence. Accuracy is measured by linear probing.

Fast Convergence on Standard Datasets. Comparisons with Other SSL Methods on CIFAR-10 and CIFAR-100.
In Table 2, we present results of EMP-SSL trained for up to 30 epochs and other SOTA methods trained for up to 1000 epochs, following the routine in [9,3,54]. On CIFAR-10, EMP-SSL is observed to converge much faster than traditional SSL methods. After just one epoch, it achieves 80.6% accuracy with 20 patches and 82.6% accuracy with 200 patches. In only ten epochs, it converges to more than 90%, which is considered a state-of-the-art result for self-supervised learning methods on CIFAR-10. By 30 epochs, EMP-SSL surpasses all current methods, achieving over 93% accuracy, as shown in the 1000-epochs column of Table 2. Similarly, EMP-SSL also converges very quickly on more complex datasets like CIFAR-100. In Table 2, with just 10 epochs, EMP-SSL is able to converge to 70.1% accuracy. The method further surpasses current SOTA methods with 30 epochs of training. Due to the increased complexity of the CIFAR-100 dataset, the difference between EMP-SSL and standard SSL methods in the first 30 epochs becomes even larger than that observed on CIFAR-10.

Table 2: Performance on CIFAR-10 and CIFAR-100 of EMP-SSL and standard self-supervised SOTA methods at different epochs. Accuracy is measured by training a linear classifier on the learned embedding representation. Since EMP-SSL already converges within 10 epochs, we do not run it to 1000 epochs like the other SOTA methods. Best results are marked in bold.

We also present EMP-SSL's convergence plots on CIFAR-10 in Figure 3 and on CIFAR-100 in Figure 4. From the figures, we observe that the method indeed converges very quickly. In particular, it takes at most 5 epochs for the method to achieve over 90% on CIFAR-10 and over 65% on CIFAR-100 with 200 patches, and at most 8 epochs with 20 patches. More importantly, it is evident that EMP-SSL converges after 15 epochs on both datasets, to around 93% on CIFAR-10 and 72% on CIFAR-100. A Note on Time Efficiency.
It is admittedly true that increasing the number of patches in joint-embedding self-supervised learning can increase training time. Here, we compare the time needed for each method to reach a prescribed performance on CIFAR: 90% on CIFAR-10 and 65% on CIFAR-100, conducting all experiments with two A100 GPUs. We present the results in Table 3. From the table, we observe that on CIFAR-10, EMP-SSL not only requires far fewer training epochs to converge, but also less runtime. This advantage becomes more evident on the more complicated CIFAR-100 dataset. While previous methods require more epochs and, therefore, a longer time to converge, EMP-SSL reaches a good result within a few epochs. This result provides empirical evidence that the proposed method enjoys faster training, especially in the setting with 20 patches. Beyond the advantage in efficiency, one may wonder how a model learned with a few epochs differs from previous methods learned with 1000 epochs. As we will further show in Sections 3.3 and 3.5, the model so learned is actually better in certain aspects.

Figure 3: The convergence plot of EMP-SSL trained on CIFAR-10 for 30 epochs. Accuracy is measured by linear probing. Each method runs 3 random seeds and the standard deviation is displayed by shadows.

Figure 4: The convergence plot of EMP-SSL trained on CIFAR-100 for 30 epochs. Accuracy is measured by linear probing. Each method runs 3 random seeds and the standard deviation is displayed by shadows.

Comparisons with Other SSL Methods on Tiny ImageNet and ImageNet-100. We evaluated the performance of EMP-SSL on larger datasets, namely Tiny ImageNet and ImageNet-100. Table 4 presents the results of EMP-SSL trained for 10 epochs on these two datasets. Even on the more challenging Tiny ImageNet dataset, EMP-SSL is still able to achieve 51.5%, which is slightly better than SOTA methods trained with 1000 epochs. A similar result is observed on ImageNet-100.
The method converges to the range of SOTA performance within 10 epochs. This result shows the potential of our method to scale to larger datasets.

Visualizing the Learned Representation. To further understand the representations learned by EMP-SSL within a few epochs, we visualize the learned features using t-SNE [48] in Figure 5.

Ablation Studies of EMP-SSL. We provide ablation studies on the number of patches n to illustrate the importance of the patch number in joint-embedding SSL. All experiments are conducted on CIFAR-10, with the same training details as in Section 3. Figure 6 shows the effect that the number of patches n has on the convergence and performance of EMP-SSL. As the number n increases, the accuracy clearly rises sharply. Increasing the number of patches n used in training facilitates the models in learning the patch representation and the co-occurrence, and therefore accelerates the convergence of our model.

Transferability to Out-of-Domain Data. Aside from converging in far fewer epochs, we are interested in whether EMP-SSL brings additional benefits compared to standard 2-view self-supervised learning methods trained for 1000 epochs. In this section, we provide an interesting empirical observation: the method's better transferability to out-of-domain data. We conduct two sets of experiments: (1) models pretrained on CIFAR-10 and linearly evaluated on CIFAR-100, and (2) models pretrained on CIFAR-100 and linearly evaluated on CIFAR-10. We present the results of these two sets of experiments in Table 5 and Table 6, respectively. In both tables, EMP-SSL is trained for 30 epochs and the other self-supervised methods are trained for 1000 epochs, as in the previous subsections. Note that despite the similar names, CIFAR-10 and CIFAR-100 have very little overlap, hence they are suitable for testing a model's transferability. In both Table 5 and Table 6, EMP-SSL clearly demonstrates better transferability to out-of-domain data.
In contrast, current state-of-the-art methods trained with 1000 epochs show less transferability to out-of-domain datasets. Since the main goal of self-supervised learning is to develop data-driven machine learning for a wide range of vision tasks, it is crucial for self-supervised learning methods to generalize well to out-of-domain data instead of overfitting the training data. From the results shown in Tables 5 and 6, we believe this work will help advance SSL methods in such a direction.

Table 6: Transfer to out-of-domain data: CIFAR-100 to CIFAR-10. We benchmark the representation of each model evaluated on CIFAR-10 by training linear classifiers on features extracted by models trained on CIFAR-100. Best results are in bold.

A possible explanation for this phenomenon is that a larger number of training epochs causes the models to overfit to the training dataset. Hence, converging in only a few epochs, EMP-SSL better avoids the curse of overfitting. We leave a more rigorous explanation of this phenomenon to future studies.

More Related Works. There are several intertwined quests closely related to this work. Here, we touch on them briefly. Joint-Embedding Self-Supervised Learning. Our work is most closely related to joint-embedding self-supervised learning. The idea of instance contrastive learning was first proposed by Wu et al. [50]. These methods rely on a joint-embedding architecture in which two networks are trained to produce similar embeddings for different views of the same image. The idea can be traced back to the Siamese network architecture proposed in [5]. The main challenge for these methods is collapse, where all representations are identical, ignoring the input. To overcome this issue, there are mainly two approaches: contrastive and information maximization. On the branch of contrastive learning, methods search for dissimilar samples from the current batch [9] or a memory bank [26].
More recently, a few methods have moved beyond the constraint of using contrastive samples. They exploit several tricks, such as making the parameter vector of one branch a low-pass-filtered version of the parameter vector of the other branch [23], a stop-gradient operation in one of the branches [10], and batch normalization [40]. On the other line of anti-collapse methods, several simpler non-contrastive methods have been proposed to avoid the collapsed representation problem. TCR [35], Barlow Twins [54], and VICReg [3] propose covariance regularization to enforce a non-collapsing solution. Our work is built on the basis of covariance regularization to avoid collapsed representations. Besides exploring ways to achieve an anti-collapsing solution, SwAV [7] explores multi-crop in self-supervised learning. That work uses a mix of views with different resolutions in place of two full-resolution views. It is the first work to demonstrate that multi-view augmentation improves the performance of SSL. Our work simplifies and generalizes this approach and takes it to an extreme. Aside from the empirical success of SSL, work like I^2-VICReg [11] digs into the principle behind these methods. That work argues that the success largely comes from learning a representation of image patches based on their co-occurrence statistics in the images. In this work, we adopt this observation and demonstrate that learning the co-occurrence statistics of image patches can lead to a fundamental change in the efficiency of self-supervised learning, as shown in Section 3. Patch-Based Representation Learning. Our work is also closely related to representation learning on fixed-size patches of images. The idea of exploiting patch-level representation was first raised in the supervised setting. BagNet [4] classifies an image based on the co-occurrences of small local image features without taking the spatial ordering into consideration.
Note that this philosophy strongly echoes the principle raised in [11]. The paper demonstrates that this "bag-of-feature" approach works very well on supervised classification tasks. Many follow-up works, such as SimplePatch [44] and ConvMixer [47], have demonstrated the power of patch representations in supervised learning. In unsupervised learning, earlier work like the Jigsaw puzzle [38] learns patch representations by solving a patch-wise jigsaw puzzle task, implicitly using patch representations in self-supervised learning. Gidaris et al. [22] take the "bag-of-words" concept from NLP and apply it to the image self-supervision task. The work raises the concept of "bag-of-patches" and demonstrates that this image discretization approach can provide very powerful self-supervision in the image domain. In the recent joint-embedding self-supervised domain, I^2-VICReg [11] is the first work to highlight the importance of patch representation in self-supervised learning. There is another line of self-supervised learning work [2,25] based on vision transformers, which naturally uses fixed-size patch-level representations due to the structure of vision transformers. SSL Methods Not Based on Deep Learning. Our work has also been inspired by classical approaches that predate deep learning, especially sparse modeling and manifold learning. Some earlier works approach unsupervised learning mainly from the perspective of sparsity [52,30,39]. In particular, a work focusing on lossy coding [36] has inspired many of the recent SSL methods [35,11], as well as our work, to promote covariance in the representation of data through maximizing the coding rate. Manifold learning [24,42] and spectral clustering [43,37] propose to model the geometric structure of high-dimensional objects in the signal space. In 2018, a work called the sparse manifold transform [12] built upon the above two areas.
The work proposes to use sparsity to handle locality in the data space to build support, and to construct representations that assign similar values to similar points on the support. One may note that this work already shares a similar idea with current joint-embedding self-supervised learning in the deep-learning community.

Discussion. This paper seeks to solve the long-standing inefficiency problem in self-supervised learning. We introduced EMP-SSL, which tremendously increases the learning efficiency of self-supervised learning by learning patch co-occurrence. We demonstrated that, with an increased number of patches during training, joint-embedding self-supervised learning can achieve a prescribed level of performance on various datasets, such as CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet-100, in just one epoch. Further, we showed that the method converges to state-of-the-art performance in about ten epochs on these datasets. Furthermore, we showed that, although converged with far fewer epochs, EMP-SSL not only learns meaningful representations but also shows advantages in tasks like transferring to out-of-domain datasets. Our work has further verified that learning patch co-occurrence is key to the success and efficiency of SSL. This discovery opens the door to developing even more effective and efficient self-supervised learning methods, such as uncovering the mystery behind networks used in self-supervised learning and designing more interpretable and efficient "white-box" networks for learning in an unsupervised setting. This can potentially lead to more transparent and understandable models and advance the field of self-supervised learning in various applications. Further, joint-embedding self-supervised learning has not only yielded promising results in learning more discriminative latent representations, but has also inspired the development of generative models [28,45,34].
The success of this approach has also led to significant improvements in downstream tasks such as image clustering [49,35,18] and incremental learning [46,8,21]. Our work builds on this foundation and has the potential to further improve downstream tasks, including online learning, with the possibility of achieving significant efficiency gains. Lastly, adapting the proposed strategy to other methods in the field of self-supervised learning could be a promising direction for future research. While it may require careful engineering to apply the strategy to other methods, the potential benefits in improving the efficiency and performance of self-supervised learning make it worth exploring.

A Implementation Details. Due to the limited space in the main text, we include a more detailed description of the implementation of our method and the reproduction of other methods here. A.1 Training Details of EMP-SSL. The augmentation used follows VICReg [3]. A PyTorch-style pseudocode is listed below. All experiments are trained with at most 4 A100 GPUs. A.2 Training Details of Other Methods. When reproducing the methods of other works, we adopted solo-learn [14], as described in the main text. We followed the optimal parameters and augmentations provided by solo-learn. A special note is that we followed the default batch size of 256, because many SSL methods [9,3] have found that a larger batch size produces better performance.

B More Ablation Studies. In this section, we present more ablation studies of EMP-SSL. B.1 Ablation on Batch Size. In this subsection, we verify whether our method is applicable to different batch sizes. Again, we use CIFAR-10 to conduct the ablation study, with the same training details as in Section 3. We choose batch sizes of 50, 100, and 200. In all experiments, we use 200 patches and all the parameters are kept the same; in other words, we have not searched different hyperparameters for different batch sizes.
We visualize the results of the ablation study in Figure 7. One may observe that batch size has little impact on the convergence of EMP-SSL. This result is important because different batch sizes lead to different numbers of iterations within the same number of epochs. It shows that, even without changing hyperparameters, the proposed method helps the convergence of the SSL method under different batch sizes.

C t-SNE Comparison with Other Methods. Due to limited space in the main text, we present the t-SNE maps of all of the SOTA SSL methods we have chosen for comparison here. We present all t-SNE graphs in Figure 8. Here, we draw a similar conclusion as in the main text: EMP-SSL learns highly structured representations in just 10 epochs.

Figure 5: We visualize the learned representations of the training set of CIFAR-10 by t-SNE. EMP-SSL is trained up to 10 epochs with 200 patches and other SOTA methods are trained up to 1000 epochs. All t-SNEs are produced with the same set of parameters. Each color represents one class in CIFAR-10. As shown in the figure, EMP-SSL learns much more separated and structured representations for different classes. Compared to other SOTA methods, the features learned by EMP-SSL show more refined low-dimensional structures. For a number of classes, such as the pink, purple, and green classes, the method even learns well-structured representations inside each class. Moreover, the most amazing part is that all such structures are learned from training with just 10 epochs. t-SNE of learned representations on CIFAR-10: we use the projection vectors to generate the t-SNE graph.

Figure 6: Ablation study on the number of patches n. Experiments are conducted on CIFAR-10.

Figure 7: Ablation study on batch size. Experiments are conducted on CIFAR-10.
Algorithm 1: EMP-SSL PyTorch Pseudocode

# F: encoder network
# lambda: weight on the invariance term
# n: number of augmented fixed-size image patches
# m: number of pairs to calculate invariance
# R: function to calculate total coding rate
# D: function to calculate cosine similarity
for X in loader:
    # augment n fixed-size image patches
    X1, ..., Xn = extract_patches_and_augment(X)
    # calculate projections
    Z1, ..., Zn = F(X1), ..., F(Xn)
    # calculate total coding rate and invariance loss
    # (Z_mean denotes the mean of Z1, ..., Zn)
    tcr_loss = average([R(Zi) for i in range(n)])
    inv_loss = average([D(Z_mean, Zi) for i in range(n)])
    # calculate loss
    loss = tcr_loss + lambda * inv_loss
    # optimization step
    loss.backward()
    optimizer.step()

Table 3: Amount of time and epochs each method takes to reach 90% on CIFAR-10 and 65% on CIFAR-100. Time is measured in minutes and the best results are marked in bold.

Table 4: Performance on Tiny ImageNet and ImageNet-100 of EMP-SSL vs. SOTA SSL methods at different epochs. Best results are marked in bold.

Table 5: Transfer to out-of-domain data: CIFAR-10 to CIFAR-100. We benchmark the representation of each model evaluated on CIFAR-100 by training linear classifiers on features extracted by models trained on CIFAR-10. Best results are marked in bold.

Methods               CIFAR-100   CIFAR-10 (OOD)
SimCLR                0.662       0.783
BYOL                  0.708       0.813
VICReg                0.685       0.791
SwAV                  0.658       0.771
ReSSL                 0.674       0.780
EMP-SSL (20 patch)    0.724       0.857
EMP-SSL (200 patch)   0.733       0.859

The selection of 100 classes can be found in [14].

References

Srikar Appalaraju, Yi Zhu, Yusheng Xie, and István Fehérvári. Towards good practices in self-supervised representation learning. arXiv preprint arXiv:2012.00868, 2020.
Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers.
arXiv preprint arXiv:2106.08254, 2021.
Adrien Bardes, Jean Ponce, and Yann LeCun. VICReg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906, 2021.
Wieland Brendel and Matthias Bethge. Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. arXiv preprint arXiv:1904.00760, 2019.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "Siamese" time delay neural network. Advances in Neural Information Processing Systems, 6, 1993.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912-9924, 2020.
Hyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2L: Contrastive continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9516-9525, 2021.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750-15758, 2021.
Yubei Chen, Adrien Bardes, Zengyi Li, and Yann LeCun. Intra-instance VICReg: Bag of self-supervised image patch embedding. arXiv preprint arXiv:2206.08954, 2022.
Yubei Chen, Dylan Paiton, and Bruno Olshausen. The sparse manifold transform. Advances in Neural Information Processing Systems, 31, 2018.
Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, and Yann LeCun. Minimalistic unsupervised learning with the sparse manifold transform. arXiv preprint arXiv:2209.15261, 2022.
Victor Guilherme Turrisi da Costa, Enrico Fini, Moin Nabi, Nicu Sebe, and Elisa Ricci. solo-learn: A library of self-supervised methods for visual representation learning. Journal of Machine Learning Research, 23:56-1, 2022.
Xili Dai, Shengbang Tong, Mingyang Li, Ziyang Wu, Michael Psenka, Kwan Ho Ryan Chan, Pengyuan Zhai, Yaodong Yu, Xiaojun Yuan, Heung-Yeung Shum, et al. CTRL: Closed-loop transcription to an LDR via minimaxing rate reduction. Entropy, 24(4):456, 2022.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, and Benjamin D. Haeffele. Unsupervised manifold linearizing and clustering. arXiv preprint arXiv:2301.01805, 2023.
Aleksandr Ermolov, Aliaksandr Siarohin, Enver Sangineto, and Nicu Sebe. Whitening for self-supervised representation learning. In International Conference on Machine Learning, pages 3015-3024. PMLR, 2021.
Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, and Julien Mairal. Self-supervised models are continual learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9621-9630, 2022.
Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord. Learning representations by predicting bags of visual words. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6928-6938, 2020.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271-21284, 2020.
Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742. IEEE, 2006.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
Jongheon Jeong and Jinwoo Shin. Training GANs with stronger augmentations via contrastive discriminator. arXiv preprint arXiv:2103.09742, 2021.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Online: http://www.cs.toronto.edu/kriz/cifar.html, 2009.
Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 2169-2178. IEEE, 2006.
Ya Le and Xuan Yang. Tiny ImageNet visual recognition challenge. CS 231N, 7(7):3, 2015.
Yann LeCun. A path towards autonomous machine intelligence. Preprint posted on OpenReview, 2022.
Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, and Jianfeng Gao. Efficient self-supervised vision transformers for representation learning. arXiv preprint arXiv:2106.09785, 2021.
Tianhong Li, Huiwen Chang, Shlok Kumar Mishra, Han Zhang, Dina Katabi, and Dilip Krishnan. MAGE: Masked generative encoder to unify representation learning and image synthesis. arXiv preprint arXiv:2211.09117, 2022.
Zengyi Li, Yubei Chen, Yann LeCun, and Friedrich T. Sommer. Neural manifold clustering and embedding. arXiv preprint arXiv:2201.10000, 2022.
Yi Ma, Harm Derksen, Wei Hong, and John Wright. Segmentation of multivariate mixed data via lossy data coding and compression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9):1546-1562, 2007.
Marina Meilă and Jianbo Shi. A random walks view of spectral segmentation. In International Workshop on Artificial Intelligence and Statistics, pages 203-208. PMLR, 2001.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision.
SpringerMehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European conference on computer vision, pages 69-84. Springer, 2016. 9 Improving the fisher kernel for large-scale image classification. Florent Perronnin, Jorge Sánchez, Thomas Mensink, European conference on computer vision. Springer9Florent Perronnin, Jorge Sánchez, and Thomas Mensink. Improving the fisher kernel for large-scale image classification. In European conference on computer vision, pages 143-156. Springer, 2010. 9 H Pierre, Jean-Bastien Richemond, Florent Grill, Corentin Altché, Florian Tallec, Andrew Strub, Samuel Brock, Smith, arXiv:2010.10241Bilal Piot, et al. Byol works even without batch statistics. Soham De, Razvan PascanuarXiv preprintPierre H Richemond, Jean-Bastien Grill, Florent Altché, Corentin Tallec, Florian Strub, Andrew Brock, Samuel Smith, Soham De, Razvan Pascanu, Bilal Piot, et al. Byol works even without batch statistics. arXiv preprint arXiv:2010.10241, 2020. 8 A stochastic approximation method. The annals of mathematical statistics. Herbert Robbins, Sutton Monro, Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400- 407, 1951. 4 Nonlinear dimensionality reduction by locally linear embedding. science. T Sam, Lawrence K Roweis, Saul, 290Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323-2326, 2000. 9 The geometry of kernelized spectral clustering. Geoffrey Schiebinger, J Martin, Bin Wainwright, Yu, The Annals of Statistics. 432Geoffrey Schiebinger, Martin J Wainwright, and Bin Yu. The geometry of kernelized spectral clustering. The Annals of Statistics, 43(2):819-846, 2015. 9 Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon, arXiv:2101.07528The unreasonable effectiveness of patches in deep convolutional kernels methods. 
arXiv preprintLouis Thiry, Michael Arbel, Eugene Belilovsky, and Edouard Oyallon. The unreasonable effectiveness of patches in deep convolutional kernels methods. arXiv preprint arXiv:2101.07528, 2021. 9 Unsupervised learning of structured representations via closed-loop transcription. Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann Lecun, Yi Ma, arXiv:2210.16782arXiv preprintShengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, and Yi Ma. Unsupervised learning of structured representations via closed-loop transcription. arXiv preprint arXiv:2210.16782, 2022. 9 Incremental learning of structured memory via closed-loop transcription. Shengbang Tong, Xili Dai, Ziyang Wu, Mingyang Li, Brent Yi, Yi Ma, arXiv:2202.054112022arXiv preprintShengbang Tong, Xili Dai, Ziyang Wu, Mingyang Li, Brent Yi, and Yi Ma. Incremental learning of structured memory via closed-loop transcription. arXiv preprint arXiv:2202.05411, 2022. 9 . Asher Trockman, Kolter, arXiv:2201.097922022Patches are all you need? arXiv preprintAsher Trockman and J Zico Kolter. Patches are all you need? arXiv preprint arXiv:2201.09792, 2022. 9 Visualizing data using t-sne. Laurens Van Der Maaten, Geoffrey Hinton, Journal of machine learning research. 911Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008. 6 Scan: Learning to classify images without labels. Simon Wouter Van Gansbeke, Stamatios Vandenhende, Marc Georgoulis, Luc Proesmans, Van Gool, European conference on computer vision. SpringerWouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In European conference on computer vision, pages 268-285. Springer, 2020. 9 Unsupervised feature learning via non-parametric instance discrimination. 
Zhirong Wu, Yuanjun Xiong, X Stella, Dahua Yu, Lin, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionZhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance dis- crimination. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3733-3742, 2018. 8 Large batch training of convolutional networks. Yang You, Igor Gitman, Boris Ginsburg, arXiv:1708.03888arXiv preprintYang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017. 4 Nonlinear learning using local coordinate coding. Kai Yu, Tong Zhang, Yihong Gong, Advances in neural information processing systems. 22Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. Advances in neural information processing systems, 22, 2009. 9 Learning diverse and discriminative representations via the principle of maximal coding rate reduction. Yaodong Yu, Kwan Ho Ryan, Chong Chan, Chaobing You, Yi Song, Ma, Advances in Neural Information Processing Systems. 33Yaodong Yu, Kwan Ho Ryan Chan, Chong You, Chaobing Song, and Yi Ma. Learning diverse and discriminative representa- tions via the principle of maximal coding rate reduction. Advances in Neural Information Processing Systems, 33:9422-9434, 2020. 3 Barlow twins: Self-supervised learning via redundancy reduction. Jure Zbontar, Li Jing, Ishan Misra, Yann Lecun, Stéphane Deny, PMLRInternational Conference on Machine Learning. Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, pages 12310-12320. PMLR, 2021. 
[Figure 8: t-SNE of learned representation on CIFAR-10 — panels (a) EMP-SSL, (b) BYOL, (c) VICReg, (d) SwAV, (e) SimCLR, (f) ReSSL. We use projection vectors trained on CIFAR-10 to generate the t-SNE graph.]
Code: https://github.com/tsb0601/EMP-SSL